Living with Robots, by Paul Dumouchel and Luisa Damiano (Harvard University Press), £23.95/$29.95
In Living with Robots, Dumouchel and Damiano investigate ways that social robotics can become enmeshed in the human social world. Social robots are designed to engage human social responses, using techniques such as human emotion detection, robotic displays of “facial” expressions, gestures, a wide variety of anthropomorphic features, and voice modulation to fulfill social roles, from companionship to therapy to caregiving. The authors draw on cognitive science, philosophy of mind, psychology, and computer science to better understand this emerging social realm consisting of a synthesis of human and robot activity.
Their argument is anchored in Descartes’ philosophy of mind, but in an unusual way. They caution against the binary framework of mind-versus-matter often attributed to Descartes and common to Western philosophy of mind. But they also argue that Descartes actually had a more sophisticated theory of cognition that can help us understand robot-human interactions.
Descartes distinguished between human beings and nonhuman animals by saying only human beings have minds and that animal behavior is explainable in mechanistic terms. But the authors argue that even on Descartes’ terms, this does not mean animals are incapable of cognition. Instead, both humans and nonhuman animals are capable of mechanistic cognition. Mental activity is a human-exclusive addition to a common basis: we are compounds of animal cognition and distinctively human mental activity. This means that the mechanistic cognition of social robots can interface with human cognition, without robots having to be just like humans or humans having to be mere organic robots.
The authors introduce readers to the field of artificial ethology, in which robots model and test theories of animal cognition. Robot models can help researchers understand how lobsters find mates via their sense of smell, or how crickets distinguish the calls of conspecifics. These robots do not reproduce complete artificial organisms. Instead they are designed to test the effectiveness of specific mechanisms hypothesized to be instrumental in these animals’ abilities in their species-typical environments. If a model succeeds and fails in the same ways and under the same circumstances as the living organisms, this suggests that the same mechanism may be at work in both organic and artificial entities. Rather than think of cognition as something strictly “between the ears” or as computational in nature, these mechanistic models, they argue, are useful for revealing processes of cognition as interactions between an embodied organism and its environment.
This interactionist picture matters because human cognition also involves interactions with environments, including social environments. They draw our attention to ways that human emotions involve feedback loops between social agents, loops into which artificial agents can be inserted. Rather than think of, say, fear as a purely internal subjective experience, we can see it as a response of the organism to features of its environment, a response that triggers further responses in turn and that can be undermined as well as supported by environmental features, such as whether other agents pick up on and mirror the fear, offer reassurance, or react with amusement.
This brings us back to robots, because social robotics, by definition, involves engaging human social responses. In a detailed survey of current social robots, they show that social roboticists pursue emotional engagement by building “external” features, designed to stimulate human emotional responses, or “internal” features, intended to model emotion’s role in motivating the entity. But this inside/outside distinction breaks down in practice. The sustained ability to stimulate human emotional responsiveness requires that robots be moved by human emotional expressions and emote appropriately in turn, which requires not just that (for example) they smile or frown convincingly, but that they enter into emotional feedback loops and thus take human beings as features of their social landscapes, while also becoming features of human social landscapes.
Mature social robots must, they argue, be capable of “substituting” for human agents in various roles. This requires both responding to human cues and initiating interactions, resulting in novel interaction trajectories. It is not enough to function like automatic door-openers, reacting predictably when triggered. Rather, they will need to be like conversation partners, capable of both taking the lead and following the conversation where it leads. This, in turn, opens the door to “synthetic ethics”, an emerging ethics that arises from these interactions between humans and anticipated social robots.
If there is a weakness in the authors’ approach, it is in their continued reliance on Descartes. The book opens with a compelling comparison of different cultural engagements with robots, noting that European and American responses seem distinctively informed by a misreading of Descartes. It would be helpful to hear more about why this misreading took root so readily in Western traditions rather than others, and what we might learn from the philosophical frameworks of other cultural traditions, like those informing the Japanese conceptions of robots that they refer to at several points. And, of course, while they argue that Descartes is actually a pluralist about cognition, the age-old mind-body problem still rears its head after all: if human beings are mixes of mechanistic and nonmaterial thinking, then how does the interaction between the two occur? But overall, their approach is both highly original and grounded in the details of both human psychology and robotic enterprises. They engage with a variety of fascinating empirical work and detailed examples, keeping the often-speculative realm of robot ethics and robot-human cognition in touch with often surprising real-world developments.