Artificial You: AI and the Future of Your Mind, by Susan Schneider (Princeton University Press), £22.00/$24.95
Susan Schneider’s Artificial You: AI and the Future of Your Mind opens in 2045 at the “Center for Mind Design”, where you can purchase brain enhancements such as “Human Calculator” (for extraordinary math abilities), “Zen Garden” (to experience deep meditative states), “Hive Mind” (to know the thoughts of others) and so on. While these options are likely to be enticing, Schneider wonders whether, at some stage, the result of such enhancements ceases to be you.
Next, at the “Android Shop” you have the option to choose highly intelligent humanoids with varying superhuman abilities to help around the house. But such androids – it’s worth considering – could one day become conscious. If so, forcing them to clean your house might be on par with slavery.
This fictional shopping spree sets the stage for Schneider’s accessible and intriguing discussion of two under-appreciated issues in the ethics of artificial intelligence: machine consciousness and the effects of enhancement on personal identity.
In chapter two, Schneider adapts David Chalmers’ “hard problem of consciousness” to artificial intelligence in the form of the “problem of AI consciousness” (i.e., will there be anything it’s like to be a sophisticated artificial intelligence?), then considers two opposing views: “biological naturalism” (most often associated with John Searle) and “techno-optimism” (often associated with computationalism). While the former claims that consciousness requires biological processes and substrates, the latter claims that the creation of artificial general intelligence (i.e., machines that can accomplish all of the same intellectual feats as humans) will yield AI consciousness. Unsatisfied with the support for either view, Schneider stakes out a “middle-of-the-road” position which she calls the “Wait and See Approach”. In short, while it might turn out that machine consciousness requires certain substrates or architectures, this can’t be determined a priori. We’ll need an empirical test for determining whether any particular machine is conscious. She proposes three such tests in chapter four.
Before doing so, she considers several scenarios both for and against machine consciousness in an attempt to motivate the “Wait and See Approach”. I found one such scenario, which she labels “outmoding consciousness”, particularly interesting, even if rather speculative. Because conscious attention is only required when learning new tasks, such as driving, one might wonder whether a superintelligent AI that already possesses mastery of all intellectual tasks would need any such capacity. Just as routine tasks become unconscious for humans, most if not all tasks could be routine for a superhuman AI. As a result, there may be no need for deliberate, conscious attention, and perhaps no need for phenomenal consciousness at all. If so, efficiency considerations might lead to the eventual “outmoding” of consciousness. And while Schneider repeatedly raises the connection between consciousness and being worthy of moral and legal rights, I wonder whether beings of such superior intelligence, despite having no flicker of consciousness, might still be worthy of some level of moral consideration.
In chapter four Schneider proposes her “AI Consciousness Test” (or ACT), which is analogous to the Turing test in that it’s based on responses to questions, but distinct in that it aims to discover an elusive “internal” property of the machine mind (i.e., whether there’s something it’s like to be that mind). The test itself involves an open-ended set of questions meant to determine whether the machine is truly conscious rather than merely responding correctly while remaining a philosophical zombie. For example, we might ask whether it contemplates an afterlife, how it would handle questions about “body swapping”, or whether it prefers certain events to others in its future. And while I question what, exactly, answers to such questions would reveal, there’s something to be said for taking on so imposing a task as devising questions to determine whether a machine is conscious. On the other hand, Schneider does claim that an AI that passes such a test “should be regarded as conscious and be extended appropriate legal protections”. While it’s not clear to me that such an AI should be regarded as conscious, I do agree that when it comes to ethical consideration we should err on the side of caution.
A second test – “the chip test” – involves a hypothetical scenario in which you develop a tumour in a part of your brain thought to be responsible for consciousness. A company, “iBrain”, has developed a microchip meant to be functionally isomorphic to that part of your brain and will replace the failing biological section with this microchip, all while you’re awake so that you can report any changes in your conscious experience. If you report relevant changes, that would indicate that the chip is either not made of the right stuff or isn’t functionally organised in the right ways. While individual failures would reveal little, failure across the range of possible substrates and functional organisations would, according to Schneider, suggest that AI can’t be conscious. On the other hand, success in any particular case could indicate that such a chip is, in fact, composed of the right stuff and that machines composed of such chips, arranged appropriately, could possibly be conscious.
At the conclusion of chapter four, Schneider concedes that even if we assume that we discover chips capable of replacing parts of the brain without the “lights” going out, there remain reasons to avoid attempting to enhance ourselves in radical ways. This brings us to the second major topic of the book: the thorny question of personal identity.
After a brief overview of the major positions on the issue, Schneider focuses on what she calls “patternism”: a combination of computationalism about the mind and the psychological continuity approach to personal identity over time. In short, what makes me the same “person” over time is the pattern given by the synaptic connections that underlie my personality and memories, as well as my attitudinal and emotional dispositions. This is also the view which, according to Schneider, is often favoured by transhumanists, or those who are optimistic about human enhancement and even our merging with AI.
Schneider raises “reduplication” worries of the sort common in the personal identity literature to call into question the likelihood that enhancements will be improvements of the self rather than the creation of a new, distinct individual. In particular, she considers a thought experiment in which a fatally ill person is given the opportunity to have their body duplicated and their brain uploaded with all its patternist details. Schneider claims that though a duplicate has been created, it will be distinct from the original person. And although a “modified patternism”, which adds a requirement of spatiotemporal continuity, yields a more realistic position, it inevitably fails to provide a principled line between uploading (which wouldn’t allow for survival) and gradual, minor enhancements that presumably would. It’s here that the issues of enhancement and personal identity run into some of the same worries.
Chapter 7 involves an apparent digression into extra-terrestrial intelligence. Here Schneider argues for the “postbiological cosmos” approach, which claims that “most intelligent alien civilisations will be superintelligent AIs”. The short version of the argument goes as follows: [a] we’ve seen with our own civilisation that the transition from pre-industrial to “postbiological” can occur within a few hundred years; [b] there are many planets older than Earth, and if intelligence evolved on even some of them, then it’s likely that there are cases of intelligence far older than ours; and finally, [c] such “older” intelligences are likely to have advanced far beyond ours and, just as appears to be happening on Earth, are unlikely to remain dominated by biological intelligence.
The remainder of the chapter speculates about the possible natures of extra-terrestrial superintelligent AI. And while the possibilities discussed are interesting, the fact that “superintelligent” beings are likely to be beyond our comprehension makes it difficult to see why these possibilities are any more likely than others. To her credit, Schneider concedes this point at the end of the chapter.
The final chapter returns to the “patternist” approach often favoured by transhumanists and examines the related view of the mind as software. She rejects the view on the grounds that software programs are abstract whereas minds cause actions in the spatiotemporal world. She then proposes a modified version of the view in the form of “mind as the instantiation of a program”. For Schneider, this is an improvement as instantiations of software are physical, causal entities. But she claims that this “mind as instantiation” approach amounts to the “modified patternist” view (patternism together with spatiotemporal continuity), which again raises the same problems regarding enhancement and survival mentioned earlier.
She concludes by urging a cautious approach to enhancement, calling on science, philosophy and society as a whole to adopt a position of “metaphysical humility” before hurtling headlong into enhancements that, though they might yield greater intelligence and improved capacities, might also be inconsistent with the survival of the individuals involved.
While Bostrom’s Superintelligence brought the existential risks of AI into focus and Kurzweil’s The Singularity is Near speculated about the wondrous future that technological progress portends, Schneider’s Artificial You is a significant contribution to our understanding of some often-overlooked ethical issues surrounding the continued development of artificial intelligence.