'Ex Machina' is the best movie about artificial intelligence in 40 years
In the future, how will we tell if a robot has human-level intelligence?
For decades, the litmus test of choice was the Turing Test, which asks: can a computer program fool one in three judges into thinking it’s human? But the Turing Test says nothing about a program’s ability to reason, be creative, or be aware. It’s essentially an exercise in deception, so scientists have started devising other metrics to measure artificial intelligence.
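That bar is lower than it sounds. Here is a toy sketch of the pass criterion as stated above, in a few lines of Python; the judges’ verdicts are invented for illustration, not drawn from any real competition:

```python
# Toy sketch of the Turing Test pass criterion described above:
# a program "passes" if it fools at least one in three judges.
# The verdicts below are invented data, not from a real event.
verdicts = [True, False, False, True, False, False, False, True, False]

fooled = sum(verdicts)  # True = the judge believed the program was human
passes = fooled / len(verdicts) >= 1 / 3

print(f"Fooled {fooled} of {len(verdicts)} judges: {'pass' if passes else 'fail'}")
```

Notice what the criterion never asks: whether the program understands anything. It only counts successful deceptions.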
Writer/director Alex Garland’s new sci-fi film “Ex Machina,” in theaters starting today, explores the real meaning of intelligence and consciousness. Not since Stanley Kubrick’s masterpiece “2001: A Space Odyssey” has a film about AI been this good. “Ex Machina,” which has more than a few “2001” references, is worth seeing, especially if you’ve got any interest in exploring what it means to be a thinking, feeling human. (Warning: spoilers ahead.)
The film follows tech billionaire Nathan and Caleb, one of his employees at BlueBook (the world’s largest Internet search company), as they spend time with Nathan’s latest creation, a fembot named Ava. “The challenge is to show you that she’s a robot, and see if you still feel she has consciousness,” Nathan tells him.
The key word there is consciousness. Nathan’s test bypasses the Turing Test’s dry, scientific is-she-isn’t-she human tactics, and tries to go straight to the core of how we’ll assess our robot pals of the future: their ability to emote like us.
Mimicking and understanding emotions is one of the hardest problems facing AI researchers and roboticists. It’s easy to make a robot do math problems or get you directions to the store; it’s another thing entirely to make that robot empathetic, or capable of detecting anger in humans.
Like the machine-learning systems being built by Facebook, Google and Snapchat, Ava, the robot star of “Ex Machina,” is built on a huge trove of data—in her case, the billions of webpages indexed by BlueBook. Ava uses that data to figure out that certain actions—looking longingly into another’s eyes, for example—will elicit certain emotions from the people she’s physically interacting with. It’s unclear whether she can generate emotions herself, but she knows how to provoke them in humans.
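As a heavily simplified sketch of that kind of pattern matching (the interaction log, actions and emotion labels below are all invented for illustration; nothing here comes from the film or any real system), you can picture a table that tallies which actions have historically elicited which emotions, then picks the action most likely to produce the one you want:

```python
from collections import Counter, defaultdict

# Invented log of (action taken, emotion it elicited in a human observer).
# A system like the film's Ava would learn from billions of examples, not six.
interaction_log = [
    ("hold eye contact", "affection"),
    ("hold eye contact", "affection"),
    ("hold eye contact", "discomfort"),
    ("ask personal question", "trust"),
    ("ask personal question", "affection"),
    ("turn away", "concern"),
]

# Tally how often each action produced each emotion.
outcomes = defaultdict(Counter)
for action, emotion in interaction_log:
    outcomes[action][emotion] += 1

def best_action(target_emotion):
    """Return the action that has most reliably elicited the target emotion."""
    rates = {
        action: counts[target_emotion] / sum(counts.values())
        for action, counts in outcomes.items()
    }
    return max(rates, key=rates.get)

print(best_action("affection"))  # -> "hold eye contact"
```

Note what the sketch does not contain: the program feels nothing. It simply replays whatever has worked before, which is precisely the question the film keeps circling with Ava.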
As a result, Ava is better than any of today’s robots at the basic skills of human interaction. She tries to touch Caleb through the glass that separates them. She leans into him, makes eyes at him. She “understands” that haptic feedback is essential to getting Caleb to trust and empathize with her. Through her physical body, she manifests signs of intelligence: human-level motor skills, creativity, and social awareness. (Incidentally, these are the same qualities scientists trying to come up with a more informative Turing Test are going after.)
Many AI researchers haven’t had time to see and digest “Ex Machina” yet, but I suspect they’ll be pleased with the movie’s depiction of their field. Bart Selman, an artificial intelligence researcher at Cornell University, told me that an emotion-sensing bot like Ava could well fool humans into believing she was human, too.
“It seems people are quite easily ‘fooled’ about what they think other people think or feel,” he said. “So, they may be similarly fooled by a machine. When people anthropomorphize pets, they also display an eagerness to assign human qualities to non-humans. We seem to have that tendency.”
Even if robots could recognize our emotions, though, that wouldn’t make them fully indistinguishable from humans. In Ava’s case, she’s just a fantastic pattern-recognition machine—a more sophisticated version of the AI that powers Google and Facebook today.
Physicality is another strong theme running throughout “Ex Machina.” The film asks us over and over again: do you need a body in order to be conscious? Murray Shanahan, a roboticist and one of Garland’s scientific advisors for the film, says that’s unclear.
“In theory an AI could learn a great deal about the physicality of ordinary objects by processing vast amounts of video data…Ava is a little bit like that,” he told me. “The reason for being embodied is to acquire the kinds of skills we have to interact with our rich physical environment. Sophisticated cognition rests on our ability to wrestle with complex bodies in our world.”
In other words, our physical relationship to things around us is just as crucial to human intelligence as our ability to hold a conversation.
Another question raised by “Ex Machina” is a moral one: how do you give a robot a sense of right and wrong?
Many of the researchers I’ve talked to about the evolution of AI bring up the point that we have to figure out how to code morality into our algorithms; otherwise, they’ll break laws and turn on us. “When we think about our future, it is vital that we try to understand how to make robots a force for good rather than evil,” wrote cognitive scientist Gary Marcus in the New Yorker last month about Chappie, another recent AI movie.
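In its crudest form, “coding morality” can be pictured as a hard filter that vetoes any action violating an explicit rule, as in this invented sketch (the rules, actions and scores are all made up; real value-alignment proposals are far subtler than a blocklist):

```python
# Crude illustration of "coding morality into an algorithm": a hard filter
# that vetoes forbidden actions before an agent optimizes for utility.
# The rules, candidate actions and scores are all invented for illustration.
FORBIDDEN = {"deceive user", "cause harm", "restrain human"}

def choose_action(candidates, utility):
    """Pick the highest-utility action that breaks no rule."""
    allowed = [a for a in candidates if a not in FORBIDDEN]
    if not allowed:
        raise RuntimeError("no permissible action available")
    return max(allowed, key=utility)

# Even if deception scores highest, the filter rules it out.
scores = {"deceive user": 0.9, "answer honestly": 0.7, "stay silent": 0.2}
print(choose_action(scores.keys(), scores.get))  # -> "answer honestly"
```

The hard part, of course, is not writing the filter; it is deciding whose rules go into it, and whether an act like Ava’s escape would even count as a violation.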
I agree with Marcus. But what I love about “Ex Machina” is that it makes it clear that these moral decisions aren’t black-and-white. Nathan’s fembots actually do turn out to be malicious—they turn on him, and on Caleb. But we’re not told, explicitly, that their actions are immoral. After all, Ava was imprisoned by her creator. Was her reaction simply self-defense? If we ever develop sentient robots like Ava, will we have the right to hold them against their will? If they revolt, can we turn them off with moral impunity?
These are the kinds of questions that obsess AI researchers. “We’ll have to come up with new rights and norms; human rights won’t do,” University of Washington law professor Ryan Calo, who specializes in cyberlaw and robotics, told me in an e-mail.
At the end of the movie, Ava escapes, and we learn that she’s going to walk among us, masquerading as a human. I couldn’t help but cheer for her, but a flurry of questions zipped through my head. Can robots like her be trusted? Is she going to go rogue? Does she really feel? The movie tries to persuade us that consciousness, intelligence and feeling all go together. But they don’t; they’re individual problems, all of which will need to be solved by roboticists as the dawn of true AI approaches.
The last image we see of Ava is her shadow, and that seems fitting. The future of AI is still murky, and as we try to figure it out, we’ll confront important questions about our own intelligence and how we define humanity.
Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.