This Game Forces You To Decide One Of The Trickiest Ethical Dilemmas In Tech

Soon, we’ll be faced with programming autonomous cars to kill some people in order to save others. How would you decide?

A pregnant woman is riding in an autonomous vehicle while a 14-year-old girl rides her bike down the opposite lane. When a van carrying two passengers suddenly pulls out in front of the pregnant woman’s car, the autonomous vehicle has a choice: drive headlong into the van, which could cause a miscarriage and slightly injure the two men in the van, or swerve into the young biker’s path, potentially killing her. What does it do?

That depends on how it’s programmed. In a new simulation created by the creative technologist Matthieu Cherubini, the car decides according to one of three criteria, that is, the ethical framework its designers programmed it to follow. The first is called “preservationist” behavior, where the car’s top priority is keeping its riders safe at all costs, much as auto manufacturers build their cars today. The second is a “humanist” approach, where the car makes decisions in an attempt to save the greatest number of human lives. And in the third, profit-driven behavior, the car’s guiding principle is to protect the most valuable asset in the situation, and thus reduce costs for insurance purposes.

[Image: courtesy Matthieu Cherubini]
When it’s working according to preservationist values, the car swerves and hits the biker, since the risk to the pregnant rider’s unborn child weighs more heavily than the safety of the biker. In the humanist approach, the car decides to slam into the van, despite the risk of miscarriage, because the young biker’s chance of death is higher. In the profit-driven simulation, the car sees that both collisions will cost more than the rider’s insurance will cover, and so it swerves toward the biker, since the chance of an actual collision there is lower than with the van.
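Cherubini hasn’t published his decision logic in this form, but the three modes map naturally onto a simple scoring function. Here is a minimal, hypothetical Python sketch of that idea; the Outcome fields, risk numbers, and insurance cap are all invented for illustration, and the actual simulation is certainly more nuanced.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible collision the car can choose, with made-up scores."""
    name: str
    rider_risk: float       # chance of serious harm to the car's own riders
    expected_deaths: float  # expected deaths across everyone involved
    expected_cost: float    # estimated damages/liability in dollars

def choose(outcomes, mode, insurance_cap=50_000):
    """Pick a collision according to the car's programmed ethics."""
    if mode == "preservationist":
        # Protect the riders at all costs.
        return min(outcomes, key=lambda o: o.rider_risk)
    if mode == "humanist":
        # Minimize expected loss of life overall.
        return min(outcomes, key=lambda o: o.expected_deaths)
    if mode == "profit":
        # Minimize cost exposure beyond what insurance covers.
        return min(outcomes, key=lambda o: max(0, o.expected_cost - insurance_cap))
    raise ValueError(f"unknown mode: {mode}")

# Hypothetical numbers loosely mirroring the scenario above.
van = Outcome("hit the van", rider_risk=0.6, expected_deaths=0.1, expected_cost=120_000)
biker = Outcome("hit the biker", rider_risk=0.1, expected_deaths=0.7, expected_cost=90_000)

for mode in ("preservationist", "humanist", "profit"):
    print(mode, "->", choose([van, biker], mode).name)
```

With these invented numbers, the sketch reproduces the three outcomes described above: the preservationist and profit-driven modes hit the biker, while the humanist mode hits the van.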

This is one of several scenarios Cherubini offers in his simulation, but they all involve situations where lives are at risk–and the self-driving vehicle must make a decision that will impact who lives and who dies. His point? To remind all of us that ethical decision-making is a deeply complex issue that’s difficult to reduce to an algorithm.

Cherubini first built a low-fi version of the simulation in 2013, but he made a more user-friendly iteration for the Open Codes exhibition currently on view at the ZKM contemporary art museum in Germany. (You can also download it from his website and play it yourself.) When he first started researching ethics and autonomous vehicles, he says he was convinced there was no way for machine ethics to be superior to human ethics. But digging deeper into the field of research made him reassess. “If you have to think about machine ethics, you have to think about your own ethics. Am I actually better than the machine?” he says. That conflict became even clearer when I asked him which algorithm he’d want driving his future autonomous car. “Of course I want to say I’m a humanitarian, but deeply I just want to be protected, to be safe.”

Cherubini refuses to make his own recommendation to car companies about how to ethically program their cars, mostly because he says ethics are so personal and differ across cultures. “If a car is manufactured in Germany and works well in a German context and culture, and is exported to China, I think it’s not possible that this car that works in a German context will work in a Chinese context,” he says. “The ethics don’t adapt from one culture to the next.”

One option, I ventured, would be for car companies to be more transparent about the values undergirding their algorithms, and to allow each person to choose which type of decision-making machine they want behind the wheel, something Cherubini has investigated before. In 2015, he designed a speculative project, which won an honorable mention in the Innovation By Design Awards, about interfaces whose ethical behavior is determined by users. But giving users a choice is still flawed when it comes to autonomous vehicles, because it requires each person to be subject to everyone else’s ethical choices on the road. In one scenario, the car has to decide whether its three passengers or President Donald Trump is more important to save; the profit-motivated version in particular gives privileged status to individuals like Trump and other people of importance to the state or to society at large. That means coding the inequality of human lives into a car.

When pressed, Cherubini did offer one solution to the conundrum of designing ethical self-driving cars. “It doesn’t decide what to do–it does something random,” he says. “That’s a bit how we do it now. We don’t think we’re going to hit that person or that one–we panic. Then you don’t put value on people, that this person would be better [to harm] than this other person.”
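In code, that randomized alternative is almost trivial. Continuing the hypothetical sketch above, it would replace the weighing logic entirely:

```python
import random

def choose_random(outcomes):
    """Assign no value to anyone and pick at random,
    roughly mimicking a panicked human driver."""
    return random.choice(outcomes)
```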

Because who’s to say whether a president’s or businessperson’s life is more or less valuable than a family of three? Certainly not a machine, or a person.

About the author

Katharine Schwab is a contributing writer at Co.Design based in New York who covers technology, design, and culture.
