Superintelligence by Nick Bostrom (Oxford University Press, 2014)
Truly Human Enhancement: A Philosophical Defense of Limits by Nicholas Agar (The MIT Press, 2013)
Computers are scary, aren’t they? With their tiny little camera eyes, and their IEEE ports. Does anybody even know what an IEEE port does? Mind-reading, probably. Or death-rays. One of the two. Whatever the hell it is, I don’t like it. Down with Skynet, and all that.
But even worse than the HALs, and the Terminators, and the Red Queens, are the super-humans. Humans who are better than sad-sack norms like you and me. The Übermensch (that’s German for Übermensch). Once scientists start enhancing good ol’ natural human abilities, with laser eyes, and super-strength and whatnot, the rest of us are toast. And not toast in a good way, like the band. We’ll be rendered obsolete – if God, angered by our technological hubris, doesn’t do for us first.
But which is scarier? Super humans or super computers? Who would win in a fight? Who would have the better dance routine? These, truly, are the questions of our time.
We’re lucky, therefore, that the publishing gods have offered up two recent publications that deal with precisely these issues. And by “precisely,” I do of course mean, “almost precisely,” which is to say “roughly.” Which is to say, they don’t talk about dance routines. Or fights. But they do talk about super humans and super computers. Kind of.
The general gist of Nick Bostrom’s Superintelligence is that we’re right to be worried about AI – but for all the wrong reasons. The kind of super computers our descendants may create might not be evil, just really, really dedicated. As Tom Chivers puts it, in the Telegraph:
“Bostrom imagines creating a machine with a superficially harmless final goal: to calculate pi to as many digits as possible, for instance, or to make steel paper-clips. How can that be dangerous? But – without further instructions – the machine might end up turning the entire solar system and all the ones around it into computational material to allow it to calculate pi ever faster, or smashing the Earth to pieces and turning it all into paper-clips.”
Like I said: S-C-A-R-Y. Paperclips are scary.
You know what else is scary? The possibility that the robots have already taken over! That’ll put the wind up you and no mistake. Have you ever seen The Matrix?! That film is deep. What about Blade Runner? What if hyper-smart robots can already disguise themselves as humans – how would we be able to tell? What if you’re a robot? What if Nick Bostrom is a robot? That at least would explain the techno-jargon he uses in his book – which Clive Cookson (in The Financial Times) describes as “opaque,” and which Chivers describes as “a damn hard read.” Hard to read? Definitely a robot.
Reading the reviews of Bostrom’s book makes me wish there were a way for me to fight these super-computers. If only I were smarter, and faster, and stronger, and had laser eyes. If only I were Enhanced.
But hang on – apparently human enhancement is a bit iffy too. That, at least, seems to be Nicholas Agar’s view in his recent Truly Human Enhancement. The excellently named “James Storm,” in BioNews.org, says that Agar “has endeared himself to the enhancement debate by being a little bit afraid of … the real-life supermen who could very well use their powers for evil and enslave the rest of humanity.” Which, to be honest, seems a fully justified fear.
Agar isn’t totally against enhancement – as Steven Rose points out, in The Guardian: “He supports [it] – in moderation. That is, anything goes that lies within the ‘natural’ human range. Life to 120, sure, but to 500, no. Such radical enhancement would produce a race of ‘post-people’.” Rose helpfully adds that it’s not clear exactly how the “natural” range should be defined – but more importantly, he worries that Agar’s work extends too far into the merely speculative to be able to offer any convincing arguments. This is a concern also shared by Leonard M. Fleck, in the Notre Dame Philosophical Reviews:
“[Radical enhancement] is much too vague and speculative to provide solid footing for making moral judgments about whether or not society ought to permit research aimed at achieving any of the sorts of radical enhancement discussed by Agar … Perhaps moral philosophers should leave such speculation to science fiction writers …”
Perhaps. Or perhaps scientists should get to work building me some laser eyes. I’m not going to be caught with my trousers down when the super-computers come to turn me into paperclips.