
When Self-Driving Cars Crash: Who Lives?


Isaac Asimov's three laws of robotics state that a robot should never harm, or allow harm to come to, a human being. It must also obey orders, except where those orders would conflict with the first law. The third says it must protect itself, as long as doing so doesn't hurt people.

But what happens when a robot car -- or autonomous vehicle -- must choose between hurting its owner in an accident and hurting other people?

RELATED: Self-Driving Cars Roam Snow-Covered Ghost Town

It turns out that the rise of machine-driven cars may stoke a psychological and social dilemma in which the needs of the few outweigh the needs of the many. In other words, people say they want self-driving cars to protect pedestrians and make the roads safer in general, but they aren't willing to give up their own autonomy in the decision-making.

This conundrum could make driverless cars harder to adopt, according to a team of researchers who published a new study examining the problem today in the journal Science.

Consider the situation in which a driverless car "can avoid hitting several pedestrians if you're on the road, but would need to swerve into a wall or off a bridge or some other action that would actually imperil the lives of the passengers who are the very people who bought the car," said Azim Shariff, assistant professor of psychology and social behavior at the University of California, Irvine.

RELATED: Self-Driving Cars Might Make Traffic Worse

"If (the autonomous vehicle) is programmed to make the ethical decision to sacrifice the passenger, how willing are people going to be to actually buy such a car and yield their own autonomy to that programming?"

Shariff and colleagues published findings of an online survey showing that the public is conflicted over such scenarios, taking an inconsistent approach to the safety of autonomous vehicles.

Driverless cars promise a safer future on the road, where 90 percent of the 40,000 traffic deaths and 4.5 million injuries each year are blamed on human error. Robot drivers, backed up by new sensors and onboard controls, can reduce these errors and take inattention, texting, "road rage" and plain old bad judgment off the road.

WATCH VIDEO: How Self-Driving Cars Will Change Your Life


At the same time, someone, somewhere will have to write the code telling the car's computer "brain" what to do in an accident that involves another person. Society as a whole would benefit from safer driverless cars, as long as people buy into the idea.

"When we asked them if they wanted to have such a car for themselves, they tell us its great if other people get these cars, but I prefer not to have one myself," said study co-author Jean-François Bonnefon, a psychological scientist at the Toulouse School of Economics (France) for the Centre National de la Recherche Scientifique.

The dilemma of whom to protect has already been felt in the aviation world. Technology that prevents a bad decision by a pilot (or a suicide mission), known as "controlled flight into terrain" avoidance, already exists but has not been implemented by major commercial airlines, according to Ella Atkins, professor of aerospace engineering at the University of Michigan and a consultant to several car companies on autonomous driving.

RELATED: 10 Wild Ways We'll Travel

Atkins believes autonomous cars will be in use well before autonomous planes, and that the answer is a federal standard on crash avoidance that can't be overridden by individual car companies or car owners.

"Somebody somewhere has to decide what the weights are in the cost of braking for the person versus swerving and going over the cliff," Atkins said about auto engineer programming. "The optimization will come up with extremely high cost for both decisions, it will be the weights decided by a consortium, provided by government, not by a car company."

Both Atkins and the study authors note that improvements in vehicle safety systems such as automatic braking and sensors to detect and predict the path of pedestrians and other vehicles will make some of these scenarios less likely, or at least extremely rare.

Perhaps by then, car buyers will have gotten used to the idea of someone else making life-or-death decisions. The authors have launched the Moral Machine website to discuss these technological and ethical issues in more depth.

A second expert believes that building trust between people and the machines that drive them will go a long way toward avoiding these kinds of no-win scenarios. Cars can build that trust by slowing down in crowded areas or when the weather is bad, for example.

"By signaling their good behavior in situations where dire dilemmas don't arise, autonomous cars can earn a degree of trust from the public," Benjamin Kuipers, professor computer science at the University of Michigan said in an e-mail. "That trust is actually a capital asset. In the one-in-a-million situation when a dire dilemma does arise, if autonomous cars have accumulated the public's trust, there is much more likely to be understanding, and even forgiveness, for a bad outcome.

"


