Ethics Versus Data: Moral Dilemmas for the Self-Driving Car
Image: Crowd waiting for a trolley beside traffic (Pixabay)

Should humans be able to control autonomous vehicles in emergency situations?

August 22, 2016

It's a modern-day trolley problem. If a self-driving vehicle carrying a passenger is about to hit a group of pedestrians, should it swerve away from the pedestrians into a nearby wall and risk the passenger's life, or should it stay on course and hit the pedestrians to save its passenger?

A study published in Science asks this and other similar questions. Researchers at the Massachusetts Institute of Technology (MIT), the University of Oregon, and the Toulouse School of Economics and France's National Center for Scientific Research (CNRS) believe that such moral and ethical dilemmas must be solved, or at least addressed, by automakers, car buyers, and industry regulators before vehicles are given complete autonomy and allowed out on the road.

The situation is rife with paradoxes, as explained by Scientific American. "Most of the 1,928 research participants in the Science report indicated that they believed vehicles should be programmed to crash into something rather than run over pedestrians, even if that meant killing the vehicle's passengers," says the magazine. "Yet many of the same study participants balked at the idea of buying such a vehicle, preferring to ride in a driverless car that prioritizes their own safety above that of pedestrians."

In other words, what's good for the goose apparently isn't good for the gander: other people's cars should put the lives of pedestrians first, but mine shouldn't have to.

This attitude led the researchers to conclude that, if regulators and lawmakers prioritized pedestrians over passengers in self-driving vehicles, fewer people would be likely to buy them. That would, in turn, slow the development of the technology, in spite of research showing that self-driving cars may reduce traffic, pollution, and, most importantly, accidents. Such vehicles may save thousands of lives per year: an estimated 90 percent of vehicle crashes occur at least partially because of human error, according to a 2013 blog post from The Center for Internet and Society at Stanford Law School (CIS), while the National Highway Traffic Safety Administration (NHTSA) reported an even higher rate of 94 percent in 2015.

The survey conducted by the researchers was based largely on a thought experiment in ethics known as the "trolley problem." Although there are several versions of the problem, says Scientific American, the scenario at the heart of the experiment remains relatively straightforward: a trolley is about to run over a group of people, and a watching bystander has to make a choice "between an intervention that sacrifices one person for the good of the group or protects an individual at the expense of the group."
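
To make the structure of that choice concrete, here is a minimal sketch, in Python, of the two competing policies the survey asked about. The scenario numbers and function names are invented for illustration; they do not come from the study or from any real vehicle software.

```python
# A toy model of the dilemma described in the survey: the car either stays
# on course (harming the pedestrians ahead) or swerves into a barrier
# (harming its own passenger). All names and numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str              # "stay" or "swerve"
    pedestrians_harmed: int
    passengers_harmed: int

def utilitarian_choice(stay: Outcome, swerve: Outcome) -> Outcome:
    """Minimize total harm, regardless of who is inside the car."""
    return min((stay, swerve),
               key=lambda o: o.pedestrians_harmed + o.passengers_harmed)

def passenger_priority_choice(stay: Outcome, swerve: Outcome) -> Outcome:
    """Protect the people inside the car first, then minimize other harm."""
    return min((stay, swerve),
               key=lambda o: (o.passengers_harmed, o.pedestrians_harmed))

# Illustrative scenario: one passenger on board, ten pedestrians ahead.
stay = Outcome("stay", pedestrians_harmed=10, passengers_harmed=0)
swerve = Outcome("swerve", pedestrians_harmed=0, passengers_harmed=1)

print(utilitarian_choice(stay, swerve).action)         # -> swerve
print(passenger_priority_choice(stay, swerve).action)  # -> stay
```

Even in this toy form, the survey's paradox is easy to state: most respondents endorsed the first rule for cars in general and the second rule for the car they would actually buy.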

Although the question is worth asking, the study is not perfect. It does not, for instance, account for how the technology being developed to control the vehicles actually works. One critic is Ragunathan "Raj" Rajkumar, a professor of electrical and computer engineering in Carnegie Mellon University's CyLab, who participated in that university's efforts to develop autonomous vehicles but was not involved in this study.

"This question of ethics has become a popular topic with people who don't work on the technology," he told the magazine. "AI does not have the same cognitive capabilities that we as humans have."

As such, the technology makes its decisions based on data gathered by sensors, including factors such as speed, weather, road conditions, and distance. The biggest problem is gathering and processing all of the relevant data fast enough to avoid dangerous situations in the first place. While Rajkumar acknowledges that it will not be possible for the vehicles to do this in every single situation, he is more concerned with another problem: "the ability to keep [the vehicles] protected from hackers who might want to take over their controls while someone is onboard."
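
As a rough illustration of the kind of sensor-driven calculation Rajkumar is describing, the sketch below estimates a stopping distance from speed and road friction, which is standard physics, and compares it with the measured distance to an obstacle. It is only a sketch under stated assumptions: the friction values, latency, and safety margin are invented for the example and have nothing to do with Carnegie Mellon's vehicles or any production system.

```python
# Simplified braking check driven by sensor data. The physics is standard:
# stopping distance = distance covered during sensing/processing latency
#                     + v^2 / (2 * mu * g).
# The friction coefficients, latency, and margin are illustrative assumptions.

G = 9.81  # gravitational acceleration, m/s^2

# Rough friction coefficients by road condition (assumed for illustration).
FRICTION = {"dry": 0.7, "wet": 0.4, "icy": 0.1}

def stopping_distance(speed_mps: float, road: str,
                      latency_s: float = 0.5) -> float:
    """Distance needed to stop: travel during sensing/processing latency
    plus braking distance for the given road condition."""
    mu = FRICTION[road]
    return speed_mps * latency_s + speed_mps ** 2 / (2 * mu * G)

def should_brake(speed_mps: float, obstacle_distance_m: float,
                 road: str, margin_m: float = 5.0) -> bool:
    """Brake if the obstacle lies within the stopping distance plus a margin."""
    return obstacle_distance_m <= stopping_distance(speed_mps, road) + margin_m

# Example: 20 m/s (about 72 km/h) on a wet road, obstacle detected 60 m ahead.
print(round(stopping_distance(20.0, "wet"), 1))                  # ~61.0 m
print(should_brake(20.0, obstacle_distance_m=60.0, road="wet"))  # True
```

Even this toy check makes Rajkumar's point: the quality of the decision depends on how quickly and accurately speed, distance, and road condition can be measured, which is a data problem before it is an ethics problem.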

Meanwhile, the philosophical question at the heart of the issue also has practical implications for business, law, and government. "Should manufacturers create vehicles with various degrees of morality programmed into them, depending on what a consumer wants?" asks The New York Times (NYT). "Should the government mandate that all self-driving cars share the same value of protecting the greatest good, even if that's not so good for a car's passengers? And what exactly is the greatest good?"
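
One way to read the first of those questions in concrete terms is as a single tunable weight between passenger harm and pedestrian harm. The sketch below is entirely hypothetical, with an invented ethics_setting parameter; it extends the earlier two-policy sketch into a continuous "degree of morality" and is a thought experiment, not a proposal or a real interface.

```python
# Hypothetical "degree of morality" knob: ethics_setting = 0.0 counts only
# the passenger's harm, 1.0 counts every life equally. An invented
# illustration of the NYT's question, not anything that exists.

def weighted_harm(pedestrians_harmed: int, passengers_harmed: int,
                  ethics_setting: float) -> float:
    """Passengers always count fully; pedestrians count in proportion
    to the setting chosen for (or by) the car's owner."""
    return passengers_harmed + ethics_setting * pedestrians_harmed

def choose(options: dict, ethics_setting: float) -> str:
    """Pick the action with the lowest weighted harm."""
    return min(options, key=lambda a: weighted_harm(*options[a], ethics_setting))

# (pedestrians harmed, passengers harmed) for each available action.
options = {"stay": (10, 0), "swerve": (0, 1)}

print(choose(options, ethics_setting=1.0))   # swerve: fully utilitarian
print(choose(options, ethics_setting=0.05))  # stay: passengers prioritized
```

Put this way, the regulatory question becomes sharp: whoever fixes the value of that one parameter, whether the buyer, the manufacturer, or the government, is the one answering the NYT's question about "the greatest good."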

And who will be at fault in the event of a fatal accident? "The law now assumes that a human being is in the driver's seat," explains IEEE Spectrum, flagship magazine of the Institute of Electrical and Electronics Engineers. "No matter how the laws and infrastructure evolve and how smart the cars become, bad things will still happen and manufacturers will end up in court. So far, we have no strictly applicable case law, for although Google cars have been involved in 17 accidents to date, the robot was at fault in none of them…. Most legal scholars think that an accident will lead to a major design-defect lawsuit."

As far as regulation goes, the federal government has already run into grey areas in regulating other machines using artificial intelligence technologies: drones. More specifically, armed drones. "In 2012, the Pentagon released a directive that tried to draw a line between semiautonomous and completely autonomous weapons," NYT reports. "They are not outlawed, but they must be designed to allow 'appropriate levels' of human judgment over their use."

And attempting to place regulations around self-driving vehicles will prove still more complicated, according to University of Oregon researcher Azim Shariff. "Having government regulation might turn off a large chunk of the population from buying these AVs, which would maintain more human drivers, and thus more accidents caused by human error," he told The Guardian. "By trying to make a safer traffic system through regulation, you end up creating a less safe one, because people entirely opt out of the AVs altogether."

Nevertheless, argues Alan Winfield of the Bristol Robotics Laboratory, regulation will be necessary.

"Think of passenger airplanes," he said. "The reason we trust them is because it's a highly regulated industry with an amazing safety record, and robust, transparent processes of air accident investigation when things do go wrong. There's a strong case for a driverless car equivalent to the Civil Aviation Authority, with a driverless car accident investigation branch. Without this, it's hard to see how the technology will win public trust."

Source: The Trolley Problem, Science, Scientific American, CIS, NHTSA, NYT, IEEE Spectrum, The Guardian
