Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, edited by Patrick Lin, Keith Abney, and Ryan Jenkins (Oxford), £30/$39.95
It might be that the rise of the robots is bad news for the world, but it’s pretty good for philosophers. Emerging technologies like artificial intelligence not only bring old ethical issues into steely relief, they also raise questions we never before thought to ask. This volume will be a welcome answer to those who question whether philosophy or ethics could ever be relevant. Business and engineering decisions should always be made in the light of their potential ethical repercussions, but it’s particularly daunting to face the possibility of constructing moral agents from whole cloth, without whatever psychological, biological or spiritual inheritances we naturally pass to our offspring as a result of aeons of evolution. This work needs to be done carefully and deliberately, and it cannot abide the “move fast and break things” mentality that tends to dominate Silicon Valley.
The importance and timeliness of Robot Ethics 2.0 can be brought out by a thought experiment inspired by Stuart Russell. Imagine the human race is on the verge of meeting a superintelligent alien civilisation, but we get to establish the code by which it will live and interact with us. The aliens will faithfully follow the code we outline, but we need to be specific – we can’t just stipulate that they be good to us, since what being good to us entails is precisely what is at issue. If this were our situation, we would make developing and discussing this code a top priority – ahead of immigration, budget reform or almost anything else. The editors of this book and the chapter authors recognise that this is, in effect, the situation we’re in. They start with the well-publicised dilemmas facing driverless cars – should a car crash into the child in the street, or save the child by crashing into a pole and killing the passenger? – and proceed to consider a possible future in which robots are smarter than we are, leaving us unable to discern whether they are moral agents, or even whether they are worthy of moral treatment.
Robot Ethics 2.0 is divided into four sections: responsibility and autonomous vehicles (AVs), human-robot interactions, robotics in sex and war, and the future of robot ethics. The first section asks who is responsible for choosing the ethical principles AVs should abide by – the driver, the manufacturer or the government? Whoever determines them, how should these “ethics settings” be designed in light of moral uncertainty about difficult cases? And who should be held liable when AVs cause harm?
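To make the idea of “ethics settings” concrete, here is a minimal sketch in Python of how such settings might parameterise the crash dilemma above. Everything in it – the setting names, the weights, the harm figures – is my own invention for illustration, not anything proposed in the book:

    from dataclasses import dataclass

    # Hypothetical "ethics settings" for an AV. The names and default
    # weights are illustrative assumptions, not a proposal from the book.
    @dataclass
    class EthicsSettings:
        passenger_weight: float = 1.0   # weight given to harm to passengers
        pedestrian_weight: float = 1.0  # weight given to harm to pedestrians

    @dataclass
    class Outcome:
        label: str
        passenger_harm: float   # expected harm on a 0-1 scale
        pedestrian_harm: float

    def choose(outcomes: list[Outcome], s: EthicsSettings) -> Outcome:
        """Pick the outcome with the lowest weighted expected harm."""
        return min(
            outcomes,
            key=lambda o: s.passenger_weight * o.passenger_harm
            + s.pedestrian_weight * o.pedestrian_harm,
        )

    # The dilemma from the review: hit the child, or hit the pole.
    dilemma = [
        Outcome("swerve into pole", passenger_harm=0.9, pedestrian_harm=0.0),
        Outcome("continue ahead", passenger_harm=0.0, pedestrian_harm=0.9),
    ]
    print(choose(dilemma, EthicsSettings(pedestrian_weight=2.0)).label)

The point of the sketch is not the arithmetic but what it leaves open: every morally contested question in the section reappears as the question of who gets to set those weights, and to what values.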
The second section begins by examining the role of robots in caring for the elderly and providing social enrichment for autistic children. These applications can seem like clear, though slightly creepy, positives, but there is a real possibility that such relationships are based upon implicit deception. Is that permissible? Even if it is, there is the question of whether our inevitable trust will be misguided or our dependence on robots crippling. In her admirable contribution to this section, Kate Darling recognises that answers to these questions will require empirical work that helps us understand what factors influence human-robot interactions and how. It’s not likely that there’s a one-size-fits-all answer to general questions like “should we encourage anthropomorphic framing of robots?” If personalising robots leads soldiers to risk their lives to save their android buddies, or if it leads to an increase in conflict, that’s probably bad. But if it leads patients to be more comfortable and forgiving with their robotic caregivers, that might be for the best.
The third section attends to the areas where the use of robots is all but inevitable: sex and war. While the topics of robot prostitutes and android armies have been well mined by Hollywood, the articles in this section articulate the issues clearly. If robots are not conscious, how should we think of a robot ethics? In a philosophically rich piece, for example, Talbot, Jenkins and Purves argue that, given that robots are not really moral agents, a robot ethics ought to be consequentialist. But, they argue, that doesn’t mean it would be permissible for humans, who are moral agents, to create such robots. In general, one of the most interesting issues raised in this section is the challenge of theorising about agents that lack consciousness. Of course it’s possible that robots will be conscious, but part of the ethical conundrum here is determining the right moral approach when we aren’t sure.
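It may help to see how spare a consequentialist rule can be, and so why it suits an agent without genuine moral agency: compute the expected value of each available action and pick the best. A toy sketch follows; the actions, probabilities and values are my own invention, not the authors’:

    # A toy expected-value chooser: the kind of consequentialist rule a
    # non-agent robot could follow mechanically. The actions and their
    # (probability, value) outcomes are invented for illustration.
    actions = {
        "assist": [(0.8, 10.0), (0.2, -5.0)],
        "stand down": [(1.0, 0.0)],
    }

    def expected_value(lottery: list[tuple[float, float]]) -> float:
        return sum(p * v for p, v in lottery)

    best = max(actions, key=lambda a: expected_value(actions[a]))
    print(best)  # "assist": expected value 7.0 beats 0.0

Nothing in that procedure requires consciousness, which is why the further question – whether humans may permissibly build machines that run it – is left entirely untouched by the mechanics.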
The fourth section considers the future of robot ethics, in particular the moral issues raised by future beings that might be our intellectual and moral equals. When do machines develop moral status? Could we even tell when they reach that point? Vallor and Bekey draw attention to an important fact: given the nature of deep learning, the technology responsible for many of the recent jumps in AI, we might not be able to reliably identify the rationale behind an AI’s decisions. We have reason to believe, for example, that despite its superiority a superintelligent AI will at times be wrong in ways a child could spot. But how do we know when that is the case, given the open possibility that the AI is seeing something we simply cannot?
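The opacity worry is easy to state in code. Even a single learned layer makes its “decision” by arithmetic over weights, and nothing in those numbers corresponds to an articulable reason. A deliberately tiny sketch, with random weights standing in for learned ones:

    import random

    # A minimal "network": one linear layer scoring two options. In a real
    # deep-learning system the weights are learned, not random, but the
    # point stands: the decision is arithmetic, not articulable reasons.
    random.seed(0)
    weights = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

    def decide(features: list[float]) -> int:
        scores = [sum(w * x for w, x in zip(row, features)) for row in weights]
        return scores.index(max(scores))  # the only "rationale" on offer

    print(decide([0.3, 0.9, 0.1, 0.5]))

Scale that up to millions of weights and dozens of layers and the problem Vallor and Bekey describe comes into view: the system can answer, but it cannot explain, and neither can we.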
Robot ethics is a burgeoning field, and this collection shows just how interesting and important its questions are. Its authors are concerned but not alarmist. Common to them all is the sense that, given what’s coming, we need to be sure that artificial intelligence augments rather than limits human freedom and autonomy. As Galliott notes, “[p]hilosophy is, of course, well suited to guiding us in this pursuit”.