Be honest, most of you would love to own a driverless car. Getting from A to B like Minority Report would be so much simpler, and the technology could feasibly save tens of thousands of lives. An autonomous car could be the best thing since sliced bread, or the washing machine, or cars that needed to be driven. However, there are a few issues to iron out with the idea of a driverless car, not the least of which is the morality of a car crash.
Automated automobiles would be designed to keep you safe at all times, but what happens when the car must swerve to avoid a hazard and every available path risks your life or the lives of those around you?
New York Magazine recently featured this interesting scenario to demonstrate how certain decisions made by driverless cars may be intentionally fatal. Imagine your driverless car turns a sharp bend and senses an accident. Three people are standing in the middle of the road, staring at their mangled car and a dead deer. How should your car react? Should it plow into the stalled car and possibly kill three people, or veer into a nearby guardrail and possibly kill you? Neither choice is right or wrong, but any decision feels impossible. It sounds sci-fi, but, at least theoretically, a driverless car with a pre-coded morality — given enough research and testing — could be the biggest life-saving implement on our roads since the seat belt. In other words, the impossible decision would be worth it.
This hypothetical crash scenario is a more advanced version of the Trolley Problem, a classic ethics experiment designed to isolate the moral principles in decision making. It usually goes like this: A runaway trolley is headed towards five people unconscious on the tracks, and you can pull a switch to divert the trolley to a different track. If you do so, the trolley will careen down a track with one unconscious person on it. There is no way to warn anyone in potential danger or to slow down the trolley — you must act fast and live with your decision. In this version of the problem, most people choose to pull the switch.
The other iteration of the Trolley Problem has the same five lives at stake, but this time you must push one person in front of the trolley to save the five. Even though the math is the same — losing one life versus five — in this example most people choose not to push the person.
The difference with driverless cars is that they wouldn’t have to live with the consequences of metaphorically pushing a person into the path of a trolley. They would simply rely on mathematical algorithms to make life-and-death decisions in the split seconds when they are required, and the passenger would have to trust the car to make them.
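To make that idea concrete, here is a minimal sketch of what a purely utilitarian rule might look like in code. Everything in it (the Maneuver class, the probabilities, the numbers) is a hypothetical illustration under simplified assumptions, not how any real autonomous vehicle is actually programmed.

```python
from dataclasses import dataclass

# Hypothetical sketch: a utilitarian decision rule that picks whichever
# maneuver minimizes expected casualties. All names and figures are
# illustrative assumptions, not a real autonomous-vehicle system.

@dataclass
class Maneuver:
    name: str
    crash_probability: float   # estimated chance this maneuver ends in a collision
    people_at_risk: int        # estimated number of lives at risk if it does

def expected_casualties(m: Maneuver) -> float:
    """Expected number of lives lost if this maneuver is chosen."""
    return m.crash_probability * m.people_at_risk

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Choose the option with the fewest expected casualties,
    giving no special weight to whoever is inside the car."""
    return min(options, key=expected_casualties)

# The magazine's scenario, reduced to two stark options:
options = [
    Maneuver("plow into the stalled car", crash_probability=0.9, people_at_risk=3),
    Maneuver("veer into the guardrail", crash_probability=0.9, people_at_risk=1),
]
print(choose_maneuver(options).name)  # -> "veer into the guardrail"
```

Under this rule the car sacrifices its passenger without hesitation, which is exactly the kind of pre-coded choice the research below asks people whether they would accept.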
To learn how people apply moral principles to driverless cars, Jean-Francois Bonnefon of the Toulouse School of Economics in France posed a similar dilemma to study participants through Amazon Mechanical Turk. The resulting paper, published on arXiv.org, concluded that “most participants wished others to cruise in autonomous vehicles more than they wanted to buy utilitarian autonomous vehicles themselves.” That is, as long as they weren’t the ones in the car making the impossible choice, people were fine with programming vehicles to sacrifice one person to save two.
The future of driving is certainly a fuzzy one, but there is a lot to get excited about with the possibility of driverless cars. Think of a car that can find parking in a big city for you, one that can’t be distracted by texting, one that never needs a designated driver... Even without an ethics program, these simpler benefits of autonomy could save thousands of lives. Maybe then the sacrifice would make sense.
—
IMAGE: Steve Jurvetson, courtesy of Creative Commons