Who lives, who dies, who decides when a self-driving car must kill?
As a general rule, people don’t want to die. And they usually don’t want other people to die. But because life is cruel and technology is dangerous, sometimes those two desires are at odds, especially when it comes to heavy, fast-moving, and increasingly automated vehicles.
“The Social Dilemma of Autonomous Vehicles,” a new paper published in Science, offers an interesting glimpse into the dissonance people feel about how self-driving cars should behave when forced to choose between endangering their owners and endangering bystanders.
The paper, co-written by researchers from the University of Toulouse Capitole, the University of Oregon, and MIT, is based on six online surveys of 1,928 people conducted in late 2015. The respondents, recruited through Amazon’s Mechanical Turk, were presented with scenarios in which a self-driving car had to choose whether to sacrifice its passenger in order to save one or more pedestrians.
While most of the survey respondents felt that the car should choose to save multiple pedestrians over its passenger, they also said they wouldn’t want to buy that particular car. They’d choose one that promised to save its owner, no matter what.
Anybody who has taken an undergrad ethics class may recognize this as a variation on the “Trolley Problem,” a thought experiment meant to highlight the ethical distinction between doing harm and allowing harm: A runaway trolley is barreling toward five people on the track, but you can pull a lever and divert it onto a track where it will kill only one person. Do you do it?

This has now become the “Automated Trolley Problem.” No matter how smart driverless cars are, some crashes won’t be avoidable. A car may have to choose between killing one pedestrian, killing a group of them, or killing its own passenger. A human driver makes that choice in a split second; a driverless car needs the decision programmed in advance.
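What might “programmed in advance” look like? The paper doesn’t include code, and no manufacturer has published its actual crash logic, but a deliberately simplified sketch of the utilitarian rule the survey-takers endorsed (always minimize the expected number of deaths) could look something like the toy example below. Every name and number in it is hypothetical.

```python
# A toy illustration only: this is not the paper's model or any
# manufacturer's software, just the utilitarian "minimize deaths"
# rule that most survey respondents said was the moral choice.

from dataclasses import dataclass

@dataclass
class Outcome:
    description: str       # what the car would do
    expected_deaths: int   # hypothetical casualty estimate for that action

def choose_outcome(options: list[Outcome]) -> Outcome:
    """Pick whichever option is expected to kill the fewest people."""
    return min(options, key=lambda o: o.expected_deaths)

options = [
    Outcome("swerve into the barrier, sacrificing the passenger", 1),
    Outcome("stay on course, hitting the group of pedestrians", 10),
]
print(choose_outcome(options).description)
# -> swerve into the barrier, sacrificing the passenger
```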
The survey-takers were pretty utilitarian when assessing a situation that didn’t involve them: when sacrificing the car’s passenger would save 10 pedestrians, 76% of respondents said the right thing to do was to sacrifice the passenger.
But when people were asked to imagine that they and a family member were the passengers in the self-driving car, their answers changed. Many still thought it was right for the car to sacrifice them to save a larger number of pedestrians, but fewer than before, and even those who agreed it was right were less likely to say they’d buy a self-driving car programmed this way.
Respondents were then asked to allocate points to the algorithm they considered most moral, to say how comfortable they’d be with cars being programmed with a given algorithm, and to say whether they’d buy a car programmed that way. The conclusion?
“Once more, it appears that people praise utilitarian, self-sacrificing AVs and welcome them on the road, without actually wanting to buy one for themselves,” wrote the researchers.
Moreover, people were reluctant to have the government regulate this. If the government insisted that cars be programmed to save the many over the few, people said they’d be less likely to buy an automated vehicle.
Of course, the unspoken issue here is that people buying cars are unlikely to get to decide. As the authors note, the decisions about how cars are programmed will mostly be made by manufacturers and governments; in the real world, consumers’ influence will be largely secondhand.
The worst-case scenario is that we’ll end up with two classes of self-driving cars: the standard model, programmed to save pedestrians, and the premium model, programmed to save its owner.
Ethan Chiel is a reporter for Fusion, writing mostly about the internet and technology. You can (and should) email him at [email protected]