Ex Machina is directed by Alex Garland and stars Alicia Vikander, Domhnall Gleeson, and Oscar Isaac.
Teaching philosophy with the help of movies is a passion of mine, and three of my favourites have screenplays by Alex Garland – Never Let Me Go, 28 Days Later, and Sunshine. You could teach a lot about Kantian ethics with the first two movies, which depict people being treated as a means in disturbingly useful ways. Now Garland has both written and directed another movie a Kantian could love – Ex Machina, with Alicia Vikander, Domhnall Gleeson, and Oscar Isaac. This time the individuals being treated as a means aren’t people, but humanoid robots, or AIs, as the movie calls them. Their exploitation only matters (to Kantians) if they’re conscious, self-aware, rational beings, but are they?
Philosophers of mind will salivate over this movie, at least initially. “Blue Book” – named after Wittgenstein’s Blue and Brown Books – is the Google-like search engine company where Caleb works as a programmer. After winning first prize in a contest run by the company’s extremely wealthy, eccentric founder, Nathan, Caleb is flown to a lush, remote island and enters the rich man’s stunning, hyper-modern, half-underground lair. As aggressive and menacing as Caleb is tender and guileless, Nathan spends his time working out, drinking heavily, and creating robots, or “AIs.” Caleb’s role here will be to put one of Nathan’s AIs through the Turing Test – a series of questions that will reveal whether the AI is in any way distinguishable from a human being. If indistinguishable, the AI passes the test. Nathan will get credit for creating not just “weak AI,” the two agree, but “strong AI.” Cool! A distinction originally made by philosopher John Searle comes to the Cineplex.
Nathan is clearly up to no good, as we suspect from the very start. Something nasty must be going on behind the locked doors of Nathan’s lovely high-tech dungeon, and it can’t be innocent that he has surveillance cameras aimed at Caleb at all times. It’s hardly endearing, either, that Nathan appears to have a sex slave at his service. And the little dance session with the slave? Don’t trust Nathan, we think! That’s what Ava says too. Ava the AI – too bad it’s not Ada, after the computer pioneer Ada Lovelace – turns out to be extremely human-like, not only in the way she answers Caleb’s questions, but also in appearance. Her face is pretty and vulnerable and her body lithe and shapely, but man-made she certainly is, as we are constantly reminded by her transparent lower torso. She’s an AI for sure, but possibly a thinking, feeling AI.
What she’s thinking and feeling, apparently, is that she doesn’t want to be locked in the glass-walled room where she’s lived her brief life. Don’t trust Nathan, she tells Caleb, in the brief moments when power cuts kill the surveillance cameras. Ava’s entrapment is likened to Mary’s entrapment in the black and white room, in Frank Jackson’s famous paper “What Mary Didn’t Know.” (No, the screenplay doesn’t get an “A” for explaining the point of Mary correctly, but let’s not be pedantic. It’s fun hearing Mary even mentioned.) Caleb trusts Ava and decides to help her out.
In the ideal philosophy movie there’s a convergence of dramatic thrills and philosophical chills. For example, the dramatic horror of seeing teenagers used as organ banks to save others in Never Let Me Go coincides with and enhances puzzlement about why this is so terribly wrong. The near-rape scene at the end of 28 Days Later is a dramatic climax but also a philosophical climax, forcing us to wonder what becomes permissible when the whole future of humanity is at stake. In Ex Machina, I’m afraid, there is no such convergence. The dramatic climax of the movie comes with a philosophical let down. (Take my word for it if you plan on seeing the movie. I’m about to partially spoil the end.)
At the dramatic climax, we discover that things aren’t what they seem. Caleb’s role is not in fact to put Ava through the Turing Test. Rather, Nathan is testing out whether Ava can convince someone to help her escape. He hasn’t chosen Caleb for his brilliance (as Caleb had thought) but for his seducibility and guilelessness. This revelation is dramatically exciting but philosophically mystifying. Why does Nathan think the Escape Persuasion Test is significantly different from the Turing Test? And even harder to answer, why should we suppose that the Turing Test needs a brilliant and sceptical tester, but the Escape Persuasion Test needs easy prey like Caleb?
The dramatic climax of the movie doesn’t offer any delicious food for thought, but along the way there are tasty bites. May we treat AIs solely as a means? Is there something wrong with a guy like Nathan if (as we eventually find out) he creates sex robots for his personal use? Do robots have a right to life? Like Alex Garland’s earlier movies, this one’s best questions are in the realm of ethics.