AI ethics is very much alive. But is it alive and well? The recent progress in AI has given rise to warnings by the likes of Stephen Hawking, Elon Musk, Nick Bostrom, and others, of the dangers that AI might pose to humanity. These prophecies of doom helped encourage the attempt to find solutions to the ethical issues that AI, in all its varied forms, might present. In 2016, I helped to organise a seminar on ethics in AI at a major international AI conference in New York, the International Joint Conferences on Artificial Intelligence (IJCAI), which drew a keen but relatively modest turnout. Eighteen months later, at the annual conference of the Association for the Advancement of Artificial Intelligence in New Orleans, a two-day parallel conference on ethical issues in AI packed a very large hall, with standing room only. This burst of interest shows no sign of abating. The last time I looked, which was several months ago, there were 53 sets of guidelines or regulations worldwide on aspects of the ethics of artificial intelligence. This interest in the ethics of AI crosses disciplinary boundaries. So, what might be the role for philosophy? And what might philosophers do better?
But first, what exactly are the ethical questions that AI poses for us? It’s important to note that the precise definition of AI is somewhat disputed. It’s commonly noted that a certain aspect of computing might be heralded as AI, but when it arrives, we quickly get used to it and say, “Oh, that’s not really AI”. After all, AI is a tool, and using and making tools is just something that humans do. So why the fuss? Are there indeed shiny new, and robustly alarming, ethical questions? As a way of trying to capture part of what it is that exercises us about some of the central ethical questions in AI, my rough hunch is that we tend to think of something as AI (as opposed to merely really clever computing) when it seems, as it were, to be trespassing on something that we’d considered to be pretty special about us humans.
AI is not just a tool for extending our physical capacities (we are used to the idea that we are, as apes, relatively puny); nor just for doing rote calculations spectacularly fast (these are useful, but a bit mundane, and we are pleased, rather than agitated, that machines can do them). AI stands accused of trespassing on our view of ourselves as, roughly speaking, especially brainy, purposive agents, with some special licence to meddle with the universe, for better or for worse.
We are most troubled by forms of AI when they seem to threaten to grow beyond being a mere tool that we can control: something that has intelligence, of a sort, and then may start to control us. The capacities of AI to make decisions, to act, and to learn without specific guidance seem to amount to autonomy; moreover, there are aspects of machine intelligence which we humans, its makers, may not fully understand. Norms of ethics for professionals using the tools of their trade rest upon an ideal of full understanding and full control over these tools. It’s true that in many areas we may not always have as much control as we had hoped, which is perhaps particularly apparent right now in medicine. But with AI, these issues of control may look more and more problematic. If we use AI to perform some characteristically human tasks for us, will our own humanity be enhanced, or eroded?
And the control problem gets worse. For who controls the dubiously controllable? Technological power is concentrated in a very few hands, and we are arguably already living in a partial plutocracy. The coronavirus lockdown only makes this more apparent. I write this in isolation; apart from the view from my window, virtually all the information I can get about the world right now is mediated by software and platforms produced and managed by an alarmingly small number of people. Those worrying that, in the future, a superintelligence might arise and take control are worrying in the wrong place: we’ve already lost much control. Vast technological power, vast wealth, and control over information are not a very promising mix.
These issues are closely linked to the question of transparency – of how far we can actually understand the AI that has been produced. In some instances, AI’s “black box” nature means that it may be hard to know exactly how the AI reached its conclusions (there are technical arguments about how far this is the case for different forms of AI). But even if we could write out in full the steps some AI took, this might be so complicated that it would not really help us understand. Yet, when our agency has impacts upon others, we need to be accountable, which involves explaining what we did, and why. And when we claim knowledge about something, we also need to understand the basis for this claim; since our moral judgements are of necessity based upon our grasp of the world, this epistemological issue is at once an ethical issue.
Our technological world also seizes our attention. There is a deliberate attempt to grab our attention built into the design of many applications, often cynically made as addictive as possible; and even this aside, the algorithms that produce and manage the information presented to us pull our attention in certain directions. This will directly impact upon how we spot ethical questions, as well as how we address them, especially given the fast pace of change, and the embeddedness of much of this technology in how we relate to the world, to others, and indeed to how we understand ourselves. There’s a danger that the more we use such technology, the more every problem will look like a technological problem needing a technological solution, and the more the values embedded in technology – such as speed, efficiency, and scale – will be conflated with human values of more general application.
For example, AI4People’s Ethical Framework for a Good AI Society states, “AI is providing a growing reservoir of ‘smart agency’. Put at the service of human intelligence, such a resource can hugely enhance human agency. We can do more, better, and faster, thanks to the support provided by AI.” It goes on to say, “The larger the number of people who will enjoy the opportunities and benefits of such a reservoir of smart agency ‘on tap’, the better our societies will be.” But for many central human endeavours, it’s debatable whether “more” and “faster” are necessarily or always better. This needs to be pointed out explicitly.
So, plenty of work for philosophy in AI ethics then. But how’s it going? How well is philosophy communicating with other disciplines here?
Despite, or perhaps because of, the plethora of work in this area, there’s an unfortunate stumbling block for work in applied ethics: in many circles, the whole idea of “ethics” per se has gained a poor reputation for irrelevance, and for prohibitions and pronouncements that misunderstand other subject areas. Years ago, at a conference on social, legal, and ethical aspects of genetics, the audience took to guffawing with disdainful laughter each time anyone mentioned the word “bioethics”. We philosophers must not let this happen in AI. Sometimes ethics is explicitly scorned, even by those who are in the very act of discussing ethics. For instance, Virginia Dignum’s recent and well-received book, Responsible Artificial Intelligence (Springer, 2019), discusses ethics at length, but considers that what is needed is not ethics but responsibility, for “…while with Ethics, it suffices to observe what happens, Responsible AI demands action.” There are many more infelicities regarding ethics from the computer scientist Dignum, despite her leading and often valuable work in this field. I feel it must be at least partly the fault of philosophy, and a failure of interdisciplinary understanding, that such statements get made.
There’s a discernible current of thought that we’ve now sorted out the main ethical questions in AI – a common rough-and-ready list might be agency, safety, privacy, transparency, diversity, wellbeing, accountability – so now we just have to implement these technologically. Of course, technological implementation is critical – what use is it to talk about transparency if we can’t achieve something at least approaching it in practice, or at least work out its limits – but it’s vital that philosophical analysis of, and dialogue about, the underlying values and concepts continues.
I suspect that some of the problems with sustaining a good interdisciplinary dialogue are the fault of how philosophy has sometimes come to be applied in practical ethics. There is a certain irony. The ethics of AI is seen to present new moral problems. But work often proceeds by helping itself to some tired old frameworks that were found left in the back of the van, often borrowed from bioethics: for example, citing the values of autonomy, beneficence, non-maleficence, and justice, bolting on upgrades as needed. But my own view is that it will take deeper philosophical work to address the questions that AI presents to us.
Take autonomy. It raises enough conceptual difficulties in biomedicine; in AI, it’s one of the central questions. If we lose full control over AI, if we can’t explain the decisions we make using AI, and so on, our autonomy is in question. Does our use of AI enhance our agency, or reduce it? And how might the very relationships we have with machines alter our own autonomy, and how might this matter, for good or for ill? Naturally, we want discussions of autonomy and AI to connect to previous work on autonomy from other areas. But it is unwise to prejudge how radically deep our explorations of this topic might go.
Another well-trodden strategy, brought out and dusted off for the occasion, is to explain that ethical theories can be divided into three main groups – consequentialist, deontological, and virtue ethics approaches – and to proceed from there. This strategy is catastrophically hopeless when it comes to AI, at least when it is used simplistically, as it so often is. I must of necessity be extremely brief, and I know many will resist my complaints, but there’s no space here for anything other than an indication of the many issues.
Take one problem. In essence, consequentialism cares not how something is brought about, or by whom, only that the best outcome occurs. But one of the main questions about our use of AI is precisely the question: does it matter if a human does it, or if a machine does it? Does it matter if we can’t explain how a result occurred, so long as it’s the optimal result? Don’t look to a dyed-in-the-wool consequentialist for an answer. They are just interested in overall wellbeing.
But when it comes to a future disrupted by AI, don’t hold your breath waiting for an answer to the question of wellbeing either. Working out whether we are better off with technology which may well be making major changes to how our brains develop, how we process information, how we perceive ourselves and our neighbours, and how we form and satisfy desires involves complex conceptual and empirical questions. Okay, hang onto much of consequentialism, if you really must, but it needs to be seriously supplemented.
Deep philosophical questions about the nature of ethics also abound in discussions of AI and ethics because the nature of much AI means that its reach is international. It’s pretty well assumed by those working in the field that ethics must apply universally, and all individuals are to be treated equally and without discrimination. But it’s often also stated that “everyone has their own values”, and “we must account for a plurality of values”. Hence an often weird amalgam arises: the attempt to produce a universal set of ethics which encompasses a plurality of values. Perhaps this can be done, but the discussion, the links between theory and practice, have to be better.
Ethical theories tend to make certain assumptions about our natures and the nature of our world – for instance, about competition for resources, the unpredictability of life, the nature of our desires, our social natures, our tendencies towards moral weakness, and so on – which together give rise to the shape of our ethical lives. But our use of AI may be altering these conditions – especially uses which are heralded as human enhancement, or projected futures where the need for toil and struggle which has characterised much of human life is radically altered. So it is such foundational questions in ethics that need to be examined head on, to remedy the blight of those badly attenuated accounts of consequentialism, deontology, and virtue ethics.
I blame philosophy itself for much of this. There is, in practice and in interdisciplinary dialogue, too little discussion and work on the hard and complex task of applying ethical theory to practice. If it’s there, it’s not working well enough. And a division between metaethics and normative ethics has been drawn in a way that has effectively split off from much practical discussion the underlying metaethical questions that are critical to understanding the ethics of AI. To answer whether it’s okay to use AI to help us make moral decisions, or to make such decisions for us, we need to drill down into questions such as the nature of moral agency. To consider how we even discover and frame the ethical questions that AI presents to us, especially given how much technology is already driving our attention, we need to ask questions of moral epistemology. Etc. Etc.
It also follows that the ethical questions of AI need to link to other areas of philosophy, such as philosophy of mind. What this all comes down to is that, in dialogue with other disciplines about the ethics of AI, we don’t need just ethics, we need philosophy. And we need philosophy that is not simply taught or presented in separate modules; we must find a way of encouraging understanding of philosophical approaches to ethics without selling an over-simplified, flat-pack, self-assembly kit version of what it is to “do” ethics.
I’ve noticed in a few quarters within AI ethics an assumption that now we can “get the ethics right”: we can use empirical means to find out what common ethical values there are, we can programme machines to think ethically, we can finally fix things. But even if that were true (which it isn’t, by the way), ethics is not just about getting the right answer – it demands that we are answerable to others, that we explain ourselves to them, that we listen to their response. It demands that we continue to question whether our ethical decisions are right. Likewise, to address concerns about transparency, control, and other issues with AI, a dialogue, an accountability to others, is essential. The answer to the question of whether or not AI might be detracting from human thought and agency, detracting from valuable relationships with fellow humans, is to exercise that thought and agency, to relate meaningfully to each other. The complex process of finding answers to the ethical questions that AI brings is, in itself, a major part of the solution to the fear that our use of AI might erode, rather than enhance, our humanity.