How we should worry about artificial intelligence
In June 1972, the New York Times ran an article under the headline “Man and Computer: Uneasy Allies of 25 Years.” It told the story of 50 computer scientists gathered in Princeton for a 3-day symposium. They were there to celebrate the 25th anniversary of the birth of the modern computer. But they also took the opportunity (I hope between beers) to discuss how computers would impact the future of the human race. “Many of those who spoke displayed a fear that insofar as the computer simulates thinking, it threatens the primacy of man,” wrote Boyce Rensberger.
Fast forward to January 2015. A group of about 80 AI experts, tech entrepreneurs, economists, ethicists and lawyers gathered in Puerto Rico for a 3-day symposium of their own. They met to discuss the future of AI, its opportunities and challenges. Some expressed fears that AI would go rogue and threaten the primacy of man. Participants signed a letter pledging to invest in research that would help us understand the ways in which AI could malfunction and how we could forestall that bleak future. It’s got the ring of a sequel: “Man and Computer: Uneasy Allies, the Next 50 Years.”
As with any good sequel, the plot has thickened: The academics in 1972 were largely talking about applications in science and mathematics. Since then, AI has busted out of the ivy-lined ivory tower. It’s now in the hands of consumers, who interact with it every day. And so it behooves the companies building these systems to make consumers feel comfortable: to show that they’re thinking about how AI will affect our privacy, our safety, our jobs, our very existence.
There’s still another plot twist. Last year, DeepMind — a startup Google recently snatched up — became a geek-household name after it demoed a program that taught itself to play various Atari games better than humans do, sometimes in unpredictable ways. DeepMind’s system was a mixture of AI tools that can learn from experience. An engineer just codes the learning procedure, says Bart Selman, an artificial intelligence researcher at Cornell University. The program then scans lots of data and develops its own strategy to accomplish its task — in the case of the DeepMind program, becoming the best Atari player ever. The how isn’t entirely clear.
“These systems can learn new kinds of behaviors or new ways of doing things,” says Selman. “We’re building systems that have these abilities that go beyond what we can clearly understand.”
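To give a flavor of what “coding the learning procedure” looks like, here is a minimal sketch in Python of tabular Q-learning, one of the simplest reinforcement-learning methods, applied to a made-up corridor game. To be clear, this is not DeepMind’s system (which combined reinforcement learning with deep neural networks); the game, the rewards and every parameter below are invented purely for illustration.

```python
import random

# A tiny "game": the agent starts in the middle of a 7-cell corridor and
# earns a reward only by reaching the rightmost cell.
N_STATES = 7
ACTIONS = [-1, +1]   # step left or step right
GOAL = N_STATES - 1

# Q-table: the program's learned estimate of future reward for each
# (state, action) pair, filled in entirely from experience.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

ALPHA = 0.1    # learning rate: how fast estimates are updated
GAMMA = 0.9    # discount: how much future reward matters
EPSILON = 0.1  # exploration rate: how often to try a random move

def best_action(state):
    # Pick the highest-valued action, breaking ties randomly so the
    # untrained agent doesn't get stuck pacing in one direction.
    top = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == top])

for episode in range(500):
    state = N_STATES // 2
    while state != GOAL:
        # Explore occasionally; otherwise exploit what has been learned.
        action = random.choice(ACTIONS) if random.random() < EPSILON else best_action(state)
        next_state = max(0, min(GOAL, state + action))
        reward = 1.0 if next_state == GOAL else 0.0

        # The learning procedure itself: nudge the estimate toward the
        # reward plus the best value reachable from the next state.
        target = reward + GAMMA * max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

# The strategy the program worked out for itself: the best move in each
# cell. After training, every cell points right, toward the reward.
print([best_action(s) for s in range(N_STATES)])
```

Nowhere does the code say “walk right.” That strategy emerges from trial, error and reward, which is why even the engineer who wrote the learning rule can be surprised by the behavior that comes out.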
That uncertainty seems to be why people are taking to the streets (or at least to Twitter) to warn that ‘AI will doom the human race.’ In theory, these programs could learn other things, too: better-than-human stock-trading skills, driving maneuvers, or cold-blooded execution tactics that make Skynet look like child’s play.
So are mercenary robots—or their renegade, disembodied deep-minded cousins—what we should be worried about right now? Not quite. First off, the technology isn’t there yet. “We’re probably decades, if not longer, from any general-purpose intelligent system,” says Eric Horvitz, the managing director of Microsoft Research and the brains behind Stanford’s One Hundred Year Study on Artificial Intelligence (AI100), which focuses on some of the ethical issues surrounding AI. Because we’re a long way off from AI and robots that have common sense and are good at many tasks, scientists have time to develop methods for predicting when an AI system will start acting in ways that harm humans or break human laws. In fact, they’re already starting research into algorithms that would put the brakes on bad AI.
So if robo-killers aren’t going to blow us to smithereens any time soon, what should we be worried about now? Here are a few of the more probable scenarios:
Thanks for taking my job, robot.