When your driverless car decides who lives and who dies
Imagine you’re riding along in your autonomous car on a crowded two-lane street. You’re tweeting, checking Facebook, and Instagramming some cool buildings as you pass by. Basically, you’re enjoying the modern joy of a driverless world. Then you notice that, oh crap… Houston, we have a problem. A school bus full of happy-go-lucky third graders is about to ram right into you and your robo-Prius. There’s only one option: swerve right to spare the kiddos. But that means ramming into your granny instead.
You might choose to go for the bus and spare your loved one. But in a utilitarian world where robots supposedly know best, will they choose the path that harms the fewest people? That’s the question bioethicist Ameen Barghi posed today. (The question itself isn’t new. Ethicists have been asking variations of it for a while, but Barghi reimagines it in the context of driverless vehicles and artificial intelligence.)
The philosophical exercise does raise some potentially important legal questions. If the software controlling a robo-car chooses to spare your grandmother and take out the kids instead (after all, it has learned your preferences, likes, and dislikes), who’s liable? You, the software manufacturer, or the automaker? Are you or the machine on the hook for murder? What happens if the software malfunctions at a crucial moment? Right now, the law on this topic is murky at best.