    Auto Sponsored
    Wednesday, May 01, 2024

    Should autonomous vehicles be able to sacrifice themselves for the greater good?

    The trolley problem is a classic thought experiment that offers a tough moral choice. It sets up a scenario where a trolley is barreling toward five people trapped on the tracks. You have the ability to throw a switch to send the trolley onto a different track, where a single person is standing.

    If you do nothing, the runaway trolley will kill five people. Throwing the switch saves their lives, but also makes you directly responsible for the death of a person on the other track. Which option do you choose?

    Researchers recently applied a similar dilemma to self-driving vehicles, posing the question of whether self-preservation or the public good should prevail in an emergency. They conducted a series of surveys to see whether people would support programming an autonomous vehicle to sacrifice itself—and its passengers—to avoid harming others on the road.

    The results of this study were recently published in the journal "Science." The study was authored by psychological scientist Jean-François Bonnefon of the Toulouse School of Economics in France, psychology professor Azim Shariff of the University of Oregon, and Iyad Rahwan, an associate professor at the MIT Media Lab.

    The study found that respondents' views were conflicted. A majority said they approved of autonomous vehicles that could sacrifice themselves and their occupants to save others, but said they would be less willing to ride in a vehicle capable of making that sacrifice. Most respondents also opposed regulations requiring a vehicle to sacrifice itself if necessary, and said they would be less likely to purchase an autonomous vehicle if such regulations were in place.

    Proponents of autonomous vehicles say they offer a number of benefits, including a sharp reduction in crashes due to the elimination of human factors such as distracted driving and driving under the influence. However, the researchers say it is unlikely that all crashes can be eliminated. Some situations could require an autonomous vehicle to make a difficult choice, such as when it unexpectedly encounters pedestrians in the road.

    These situations would require the vehicle to decide, almost instantly, whether to protect the pedestrians or its own occupants. For example, the vehicle might face a choice between swerving to hit a single pedestrian and continuing straight into a group of pedestrians. It might also have to choose between staying on course and swerving into a wall, sparing the pedestrians but potentially harming the vehicle occupants.

    "Although these scenarios appear unlikely, even low-probability events are bound to occur with millions of AVs on the road," the study states. "Moreover, even if these situations were never to arise, AV programming must still include decision rules about what to do in such hypothetical situations. Thus, these types of decisions need be made well before AVs become a global commodity."

    Between June and November 2015, the researchers conducted six online surveys with a combined total of 1,928 participants in the United States. There was a general consensus that autonomous vehicles should make a choice that would save the greatest number of lives. In the first study, 76 percent of 182 respondents said it would be a more moral choice for an autonomous vehicle to sacrifice one passenger rather than continue driving and kill 10 pedestrians.

    In the second survey, only 23 percent of 451 respondents said they thought it was the right moral choice for an autonomous vehicle to sacrifice a passenger to save a single pedestrian. However, they were more supportive of self-sacrifice as the number of pedestrians increased. This support continued even after respondents were asked to consider themselves, a co-worker, or a family member as a passenger in the scenario.

    Despite these preferences, respondents said they would be unlikely to purchase an autonomous vehicle programmed to make such moral choices. Asked to rate their likelihood of buying a self-driving vehicle on a 100-point scale, respondents gave a median score of 19 for a vehicle programmed to minimize casualties, even if that meant sacrificing the respondent and a family member in an incident. The median score rose to 50 for vehicles whose programming emphasized self-protection, even if it meant killing 10 to 20 pedestrians.

    "Most people want to live in a world where cars will minimize casualties. But everybody wants their own car to protect them at all costs," said Rahwan. "If everybody does that, then we would end up in a tragedy...whereby the cars will not minimize casualties."

    Respondents were also reluctant to accept regulations that would require an autonomous vehicle to make moral choices that could potentially harm its occupants, and were less likely to consider purchasing a vehicle with this type of programming. Among 393 respondents, the median score for the likelihood of buying an unregulated autonomous vehicle was 59. It fell to 21 for a regulated autonomous vehicle.

    The researchers say these attitudes create a social dilemma. While people are naturally interested in reducing casualties on the road, they also have a personal interest in self-protection while riding in an autonomous vehicle. The researchers also warn that public reluctance toward vehicles programmed with such moral algorithms, and any resulting delay in adoption, could reduce the number of lives autonomous vehicles might otherwise save.

    "For the time being, there seems to be no easy way to design algorithms that would reconcile moral values and personal self-interest—let alone account for different cultures with various moral attitudes regarding life-life trade-offs—but public opinion and social pressure may very well shift as this conversation progresses," the study concludes.

    The study was published shortly before the automaker Tesla announced that a Model S had been involved in the first known fatal crash involving automated driving technology, raising concerns about its safety. Tesla's Autopilot software—which allows the vehicle to automatically control functions such as steering, speed, and emergency lane changes—was activated when the vehicle collided with a tractor trailer on May 7, killing the driver.

    Tesla said the crash was a result of "extremely rare" circumstances involving the height of the trailer and its positioning as the truck crossed the road. The automaker said the incident was also the first fatality in more than 130 million miles of Autopilot-activated driving, comparing it to the national average of one fatality for every 94 million miles traveled.
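    For context, Tesla's comparison reduces to a simple rate calculation. The short sketch below just reworks the figures quoted above into miles per fatality; the variable names are illustrative, and the numbers are only those cited in the paragraph above.

        # Fatality-rate arithmetic using only the figures quoted above.
        autopilot_miles = 130_000_000             # Autopilot-engaged miles cited by Tesla
        autopilot_fatalities = 1                  # the crash described above
        national_miles_per_fatality = 94_000_000  # national average cited by Tesla

        autopilot_miles_per_fatality = autopilot_miles / autopilot_fatalities
        ratio = autopilot_miles_per_fatality / national_miles_per_fatality

        print(f"Autopilot: 1 fatality per {autopilot_miles_per_fatality:,.0f} miles")
        print(f"National:  1 fatality per {national_miles_per_fatality:,.0f} miles")
        print(f"Autopilot interval is {ratio:.2f}x the national average")  # ~1.38x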

    Autonomous vehicles are expected to become more commonplace as several automakers and other companies work to develop fully self-driving cars. In June, the research firm IHS Automotive predicted that 76 million autonomous vehicles would be sold by 2035.

    Response to the idea of riding in a self-driving vehicle has been mixed. In a recent survey of 618 U.S. drivers, the University of Michigan Transportation Research Institute found that only 15.5 percent said they would accept a fully autonomous vehicle, while 45.8 percent preferred a vehicle with no autonomous features. However, 73 percent of respondents in a survey of 1,517 American drivers by AlixPartners said they would be willing to turn over all driving responsibilities to an autonomous vehicle.
