What Termites Teach Us About Robot Cooperation
At a glance, a single worker of the genus Macrotermes is not a very complex creature—less than half an inch long, eyeless, wingless, with an abdomen so transparent you can spot the dead grass it ate for lunch. Put it in a group, though, and it may pile up pinhead-sized balls of mud, one after another, until a complex mound takes shape. By the time that mound is 17 feet tall, it will be, relative to the size of its builders, roughly what the Burj Khalifa is to us. In its basement sits a symbiotic fungus, which digests grass for the nest and requires continuous care from the workers.
Although termites build without the benefit of architects or engineers, their mounds are ingeniously constructed, using cues known only to the bugs. In fields in Namibia, the structures angle gently north, tracking the sun at this latitude. They are not so much invertebrate apartment buildings as solar diagrams, written in dirt, with termites as the calculating agents.
Since 2011, a team of roboticists from Harvard’s Wyss Institute, led by Radhika Nagpal, has been making regular visits to Namibia in hopes of uncovering how such local signals as humidity, pheromones, and termite behavior contribute to the global reality of the mounds. In 2012, I watched them video the insects in elaborate little sets constructed of soil, plaster, and plexiglass. The team’s goal, essentially, was to find the machine in the bug. Nagpal and her colleagues assumed that the termites could be modeled as stochastic automata—memoryless mini robots whose actions were driven by probability rather than intent. Extracting data on the animals’ behavior, they believed, would help them design algorithms for the autonomous construction robots they planned to develop back home.
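To make the modeling assumption concrete: a "memoryless" stochastic automaton picks its next action from a probability table keyed only to what it senses right now, with no memory of what it did before. The sketch below is illustrative, not the Wyss team's actual model; the cues, actions, and probabilities are invented for the example.

```python
import random

# Hypothetical action set for a termite-like agent.
ACTIONS = ["pick_up_dirt", "drop_dirt", "wander"]

def act(local_cue: str, rng: random.Random) -> str:
    """Choose an action from a probability table keyed by the local cue.
    Memoryless: the choice depends only on the current cue, never on history.
    The weights here are made up for illustration, not measured values."""
    table = {
        "bare_ground":   [0.6, 0.1, 0.3],   # likely to pick up dirt
        "existing_pile": [0.1, 0.6, 0.3],   # likely to add to the pile
    }
    weights = table.get(local_cue, [0.2, 0.2, 0.6])  # unknown cue: mostly wander
    return rng.choices(ACTIONS, weights=weights, k=1)[0]

rng = random.Random(42)
history = [act("existing_pile", rng) for _ in range(5)]
print(history)
```

An agent like this is attractive to roboticists precisely because it is cheap: if mound-building could be reduced to such a table, the same table could be loaded into simple construction robots.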
After two weeks of tests, the researchers had failed to gather the information they were looking for. The bugs were ciphers: Sometimes they did nothing under the video cameras; sometimes they formed a whirling ball of termites; sometimes—but often when the cameras were off—they built furiously. Kirstin Petersen, a PhD candidate at the time, set about determining why. At the lab in Cambridge, Massachusetts, she devised a tracker that could follow and analyze individuals in a group—something off-the-shelf trackers could not do. (Scientists often paint ants in order to track them, but termites groom the paint off.)
Petersen was surprised to observe that the termites were not like robots at all. They were individuals, each one a quirky character. Some were leaders who appeared to “trigger” others to make little piles of dirt balls, a few were workaholics, and many were the insect version of Bill Murray characters—slackers, really—who did little more than take an occasional trot around the petri dish.
Looking back, Petersen said, the team’s initial approach was “laughable.” “When I build two robots, I know the two are not the same,” she told me. “Even if termites were perfect robots, there would be fluctuations.” Petersen, who is now at Cornell’s Collective Embodied Intelligence Lab, said that her termite insights made her interested in creating crowds of social robots, in which a relatively dumb horde follows a few perceptually gifted leaders.
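The leader-follower idea Petersen describes can be sketched in a few lines: a handful of "perceptually gifted" leaders know where the goal is, while the rest of the swarm only tracks its nearest leader. This is a minimal sketch under invented assumptions (point agents, equal speeds, perfect sensing), not a description of any lab's actual controller.

```python
import math

def step_swarm(leaders, followers, goal, speed=0.1):
    """One simulation step: leaders move toward the goal they can perceive;
    each follower moves toward its nearest leader. Positions are (x, y) tuples."""
    def toward(p, target):
        dx, dy = target[0] - p[0], target[1] - p[1]
        d = math.hypot(dx, dy) or 1.0  # avoid division by zero at the target
        return (p[0] + speed * dx / d, p[1] + speed * dy / d)

    new_leaders = [toward(p, goal) for p in leaders]
    new_followers = [
        toward(p, min(new_leaders,
                      key=lambda l: math.hypot(l[0] - p[0], l[1] - p[1])))
        for p in followers
    ]
    return new_leaders, new_followers

leaders = [(0.0, 0.0)]
followers = [(1.0, 1.0), (-1.0, 0.5)]
for _ in range(50):
    leaders, followers = step_swarm(leaders, followers, goal=(5.0, 5.0))
```

The appeal of the design is economic: only the leaders need expensive perception, while the "dumb horde" needs nothing more than the ability to find and chase a neighbor.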
The sociality of robots could upend our expectations of machines in the same way that the very idea of slacker termites has complicated our simplistic ideas about nature. We imagine the future of autonomous swarms as one of machinelike perfection and greater control, but moments of unpredictable, Three Stooges–like chaos are also likely to emerge. In the mess, there is meaning that termites—but not yet humans—can comprehend.