Survey Polls the World: Should a Self-Driving Car Save Passengers, or Kids in the Road?   

In 2014 the driver of a truck lost his brakes on a hill in Ithaca, N.Y., and had to decide between running over construction workers and plowing into a café. The man chose the latter, and a bartender died. It was a real-life case of one of moral philosophers’ favorite thought experiments: the trolley problem. In their book Driverless: Intelligent Cars and the Road Ahead, Columbia University professor Hod Lipson and technology journalist Melba Kurman offer it as an example of the awful dilemmas that designers of self-driving cars must prepare for.

Today a team led by Edmond Awad, a postdoc at Massachusetts Institute of Technology’s Media Lab, has released the results of its online gamified version of the dilemma, the “Moral Machine” experiment. Some 2.3 million volunteers across the world played out nearly 40 million scenarios, passing judgment on who should live and who should die in accidents involving a runaway self-driving car. (The results of this crowdsourcing experiment were reported October 24 in Nature.)

In its starkest form the dilemma is thankfully rare, and self-driving cars, being safer overall, may make it still rarer. But many philosophers and engineers argue it will be implicit in the routine choices that an autonomous machine must make on the road. “The core problem, I think, is going to occur many times a day in the real world, just not in a crazy crash dilemma,” says Patrick Lin, a philosophy professor at California Polytechnic State University, San Luis Obispo, who specializes in the ethics of emerging technologies and was not involved with the study. Indeed, drivers already make such choices, often without realizing it. In a statistical sense, every California stop, Pittsburgh left or other dubious maneuver is a decision to kill some fraction of a person.
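To see why, consider the arithmetic behind that statistical framing. Here is a minimal sketch in Python; the per-maneuver risk and maneuver counts are invented purely for illustration, not measured crash statistics:

```python
# Expected-fatality arithmetic behind "killing some fraction of a
# person." All numbers are hypothetical, chosen only to illustrate
# how tiny per-maneuver risks add up across a population.
added_risk_per_maneuver = 2e-9   # assumed extra chance of a fatal crash per rolling stop
maneuvers_per_year = 5e9         # assumed rolling stops nationwide per year

expected_deaths = added_risk_per_maneuver * maneuvers_per_year
one_driver_share = added_risk_per_maneuver * 500   # one driver's assumed yearly maneuvers

print(f"population-wide expected deaths per year: {expected_deaths:.1f}")        # 10.0
print(f"one driver's expected 'fraction of a person': {one_driver_share:.0e}")   # 1e-06
```

No single maneuver seems consequential, but under these assumed numbers the population as a whole has decided, statistically, to kill about ten people a year.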

The Moral Machine Web site presented users with a cartoon of a car hurtling toward pedestrians in a crosswalk, and they could decide to swerve and run over some other group of people instead. Sometimes the alternative was a concrete barrier that would kill the car’s occupants. Unlike some other versions of the trolley problem, it posed a straight choice between two groups of victims, without the extra complication of imagining shoving a person bodily into harm’s way. There was no time pressure, so you could weigh the two options. The prospective victims varied in number, age, gender and other features. They might be shown walking with a cane, pushing a stroller or carrying a bag of stolen money, and pedestrians might cross with or against the walk signal. You might see a car filled only with cats and dogs—it was self-driving, after all.

The site went live in June 2016, and the authors present data they collected through last December. News of it spread mostly through word of mouth, but also via prominent YouTubers PewDiePie and jacksepticeye. To see how your decisions compared with everyone else’s, you had to fill in a demographic questionnaire, and half a million people did so. They skewed male (by a three-to-one ratio) and young (with a peak age of 18).

They broadly agreed on what to do. They strongly preferred to save humans over pets, groups over lone people, kids over seniors, law-abiding pedestrians over jaywalkers and people carrying briefcases over hunched figures wearing tattered coats. They also had milder preferences for women over men, pedestrians over car passengers and joggers over heavyset people. Respondents were as likely to make the car swerve as to let it continue on its present course—they had no bias toward inaction. “That was the least of their concerns,” says one of the authors of the study, Azim Shariff of the University of British Columbia.

Demographic groups and nationalities disagreed only on the emphasis they placed on different factors. For instance, both men and women threw men under the bus, but the men were slightly less inclined to do so. “There’s no place for any of those moral dimensions where, for example, older people were preferred to younger people on average, or action was prioritized over inaction on average,” Shariff says. “It’s just that those moral priorities in that direction were held to be less, for certain countries, compared to the other moral priorities.”

The researchers found the countries sorted into three distinct clusters, which they labeled Western, Eastern and Southern. For instance, youth received the highest priority in Latin America, less so in Europe and North America, and least in Asia. To be sure, geography was not always destiny: Czechs responded like Latin Americans, and Sri Lankans like western Europeans.

The national variations tracked other indicators such as income, which the researchers took as evidence that the predominantly young, male respondents were generally representative of their countries. People from poorer countries were less inclined to run down jaywalkers; those from high-inequality countries were more deferential to people with briefcases; and those from gender-equal countries were more chivalrous toward women. This last tendency had an odd consequence: because the average preference already skewed in women’s favor, the gender-equal countries skewed even further, making them, paradoxically, the most gender-unequal in this game.

The most telling disagreements were not among the peoples of the world but between them and the experts. “They [people] suggest some unethical actions such as prioritizing dogs over criminals,” says Noah Goodall, a research scientist at the Virginia Transportation Research Council. People’s moral instincts on the trolley problem (and much else) are notoriously inconsistent. In an earlier series of studies, Shariff and two other Moral Machine co-authors—Jean-François Bonnefon of the Toulouse School of Economics and Iyad Rahwan of MIT’s Media Lab—probed nearly 2,000 people’s views on self-driving cars. Participants said they thought a car ought to value the lives of occupants and pedestrians equally, but they themselves would prefer to buy a car that prioritized the occupants. “That, we argued, is part of the social dilemma of these autonomous cars,” Shariff says.

Last year the German transport ministry, adopting the recommendations of a 14-member ethics commission, prohibited using gender or age to resolve trolley dilemmas. “The big value I see in the Moral Machine experiment is that it helps to sniff out key areas of disagreement that we need to address,” Lin says. Given time to reflect, most survey respondents might well agree with the experts. “We don’t know how people’s preferences when they’re playing a game online translates into actual behavior,” Shariff says.

Many moral philosophers also draw a stronger distinction between action and inaction than the survey participants did. “If you had to choose between two evils, and one is killing and the other is letting die,” Lin says, “then letting someone die is a lesser evil—and that’s why inaction is okay in the trolley problem.” He says defaulting to inaction has limits—what if the choice is between one person and 10?—but so does the utilitarian calculus preferred by survey participants. In a 2015 paper he cites the dilemma of whether to hit a biker who wears a helmet or one who does not. The helmeted one is likelier to survive—but if that were the deciding factor, who would ever wear a helmet? (Indeed, some cyclists already cite drivers’ reactions to helmets as an excuse not to wear one.)

Skeptics object that the whole exercise was far too simplistic to be of much use. “The scenarios are plausible, but just barely,” Goodall says. They imply an improbable cascade of failures: losing your brakes while driving at highway speeds in a pedestrian zone. Worse, they assume perfect knowledge and unequivocal outcomes, whereas we normally operate in an epistemic fog. “Although the authors acknowledge this as a shortcoming, they fail to realize that it is a fatal shortcoming,” says Aimee van Wynsberghe, a professor at Delft University of Technology who specializes in techno-ethics. She thinks self-driving cars should be restricted to dedicated highways, where they would not need to make fiddly moral calculations.

Lin, though, sees the project as being like any other scientific experiment that strips a problem down to its essence to make it tractable. “Yes, it’s contrived and artificial, just like most science experiments, but that doesn’t say anything about how useful it is or not,” he says. The trolley problem lets people hash out several contentious principles: whether inaction is tantamount to action; whether the number of people involved matters; and whether some lives are worth more than others. And the time for car companies to think through these trade-offs is now, before a jaywalker absorbed in his phone steps into the street. “They’re scripting out these decisions one to five years in advance,” Lin says. “They have a lot more time than a real person would if they’re in an actual crash dilemma.”

Shariff says the Moral Machine could also be bent toward other questions of machine ethics, such as the algorithms used by courts and parole boards to predict recidivism risk. A system might be more accurate—reducing the overall crime rate and keeping fewer people locked up—but less equal if the reduction does not occur evenly across races or other categories. Do people consider that an acceptable trade-off? “Should we be prioritizing accuracy or should we be prioritizing equality, even if it means putting more people unnecessarily in jail in order to be equal about it?” he adds. A dilemma, by definition, has no easy solution, but we should at least think through the options and be able to defend whichever we choose.
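As a rough illustration of that tension, the sketch below uses invented toy data to show how one decision rule can score higher on overall accuracy while detaining one group at a much higher rate, and how equalizing detention rates costs accuracy. Every case, score and outcome is hypothetical; nothing here models a real court, risk tool or population.

```python
# Toy illustration of the accuracy-vs-equality trade-off in recidivism
# prediction. All cases, scores and outcomes are invented; nothing here
# models any real court, risk tool or population.
from dataclasses import dataclass

@dataclass(frozen=True)
class Case:
    group: str        # hypothetical demographic group, "A" or "B"
    score: float      # hypothetical risk score from some predictor
    reoffended: bool  # hypothetical ground truth

CASES = [
    Case("A", 0.2, False), Case("A", 0.4, False),
    Case("A", 0.6, True),  Case("A", 0.3, True),
    Case("B", 0.7, False), Case("B", 0.8, True),
    Case("B", 0.9, True),  Case("B", 0.6, True),
]

def evaluate(rule, cases):
    """Return overall accuracy and the detention rate for each group."""
    accuracy = sum(rule(c) == c.reoffended for c in cases) / len(cases)
    rates = {}
    for g in sorted({c.group for c in cases}):
        members = [c for c in cases if c.group == g]
        rates[g] = sum(rule(c) for c in members) / len(members)
    return accuracy, rates

def score_rule(c):
    # Detain whenever the risk score is high: more accurate overall,
    # but detains group B (whose scores skew high) far more often.
    return c.score > 0.5

def equal_rate_rule(c):
    # Detain the top half of each group by score: detention rates are
    # equal across groups, at a cost in overall accuracy.
    peers = sorted((x for x in CASES if x.group == c.group),
                   key=lambda x: -x.score)
    return c in peers[:len(peers) // 2]

for name, rule in [("score-based", score_rule), ("equal-rate", equal_rate_rule)]:
    acc, rates = evaluate(rule, CASES)
    print(f"{name:12s} accuracy={acc:.2f} detention rates={rates}")
```

On this toy data the score-based rule is right 75 percent of the time but detains group B four times as often as group A; the equal-rate rule treats the two groups identically and drops to 62 percent accuracy. Which rule is "better" is exactly the question Shariff says the Moral Machine approach could put to the public.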
