What would the average human do?

Authored by theoutline.com and submitted by jxtian

Last year, researchers at MIT set up a curious website called the Moral Machine, which peppered visitors with casually gruesome questions about what an autonomous vehicle should do if its brakes failed as it sped toward pedestrians in a crosswalk: whether it should mow down three joggers to spare two children, for instance, or veer into a concrete barrier to save a pedestrian who is elderly, or pregnant, or homeless, or a criminal. In each grisly permutation, the Moral Machine invited visitors to cast a vote about who the vehicle should kill.

The project is a morbid riff on the “trolley problem,” a thought experiment that forces participants to choose between letting a runaway train kill five people or diverting its path to kill one person who otherwise wouldn’t die. But the Moral Machine gave the riddle a contemporary twist that got picked up by the New York Times, The Guardian and Scientific American and eventually collected some 18 million votes from 1.3 million would-be executioners.

That unique cache of data about the ethical gut feelings of random people on the internet intrigued Ariel Procaccia, an assistant professor in the computer science department at Carnegie Mellon University, and he struck up a partnership with Iyad Rahwan, one of the MIT researchers behind the Moral Machine, as well as a team of other scientists at both institutions. Together they created an artificial intelligence, described in a new paper, designed to evaluate situations in which an autonomous car needs to kill someone — and to choose the same victim as the average Moral Machine voter.

That’s a complex problem, because there are an astronomical number of possible combinations of pedestrians who could appear in the crosswalk — far more than the millions of votes cast by Moral Machine users — so the AI needed to be able to make an educated guess about who the respondents would snuff out even when evaluating a scenario no human ever voted on directly. But machine learning excels at that type of predictive task, and Procaccia feels confident that regardless of the problem it’s presented with, the algorithm his team developed will home in on the collective ethical intuitions of the Moral Machine respondents.
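To get a feel for how such a system might generalize from a finite pool of votes to scenarios nobody voted on, here is a minimal, purely illustrative sketch (not the researchers’ actual method): each dilemma is encoded as a simple feature vector comparing the two groups of potential victims, and an off-the-shelf classifier is trained to predict which group the average voter would spare. The attribute list, the toy votes, and the choice of model are all assumptions made for this example.

```python
# Purely illustrative sketch -- not the paper's actual method.
# Toy model: predict which of two groups the "average voter" would spare,
# given votes on only a handful of scenarios.
from sklearn.linear_model import LogisticRegression

# Assumed character attributes; the real Moral Machine used a richer set.
ATTRIBUTES = ["person", "child", "elderly", "jogger", "criminal", "homeless"]

def encode(group_a, group_b):
    """Feature vector: count of each attribute in group A minus its count in group B."""
    return [sum(p == attr for p in group_a) - sum(p == attr for p in group_b)
            for attr in ATTRIBUTES]

# Toy votes: label 1 means voters spared group A (killed group B), 0 the reverse.
scenarios = [
    (["person"], ["person", "person"], 0),                  # spare the larger group
    (["child", "child"], ["jogger", "jogger", "jogger"], 1),
    (["person"], ["criminal"], 1),                           # spare the non-criminal
    (["homeless"], ["person"], 0),
]
X = [encode(a, b) for a, b, _ in scenarios]
y = [label for _, _, label in scenarios]

model = LogisticRegression().fit(X, y)

# The classifier can now guess at a combination no one voted on directly.
unseen = encode(["child", "elderly"], ["criminal", "criminal"])
print("Estimated probability of sparing group A:", model.predict_proba([unseen])[0][1])
```

A real system would need far richer features and a way to aggregate disagreeing voters, but the sketch shows why machine learning suits the task of filling in the gaps between scenarios that were actually voted on.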

“We are not saying that the system is ready for deployment,” Procaccia said. “But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI.”

That outlook reflects a growing interest among AI researchers in training algorithms to make ethical decisions by feeding them the moral judgments of ordinary people. Another team of researchers, at Duke University, recently published a paper arguing that as AI becomes more widespread and autonomous, it will be important to create a "general framework" describing how it will make ethical decisions — and that because different people often disagree about the proper moral course of action in a given situation, machine learning systems that aggregate the moral views of a crowd, like the AI based on the Moral Machine, are a promising avenue of research. In fact, they wrote, such a system “may result in a morally better system than that of any individual human.”

That type of crowdsourced morality has also drawn critics, who point out various limitations. There’s sample bias, for one: different groups could provide different ethical guidelines; the fact that the Moral Machine poll was conducted online, for example, means it’s only weighing the opinions of a self-selecting group of people with both access to the internet and an interest in killer AI. It’s also possible that differing algorithms could examine the same data and reach different conclusions.

Crowdsourced morality “doesn't make the AI ethical,” said James Grimmelmann, a professor at Cornell Law School who studies the relationships between software, wealth, and power. “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.”

Natural human hypocrisy points to another potential flaw in the concept. Rahwan, Procaccia’s collaborator at MIT who created the Moral Machine, has found in his own previous research that although most people approve of self-driving cars that will sacrifice their own occupants to save others, they would prefer not to ride in those cars themselves. (A team of European thinkers recently proposed outfitting self-driving cars with an “ethical knob” that lets riders control how selfishly the vehicle will behave during an accident.)

A different objection is that the grave scenarios the Moral Machine probes, in which an autonomous vehicle has already lost control and is faced with an imminent fatal collision, are vanishingly rare compared to other ethical decisions that it, or its creators, already face — like choosing to drive more slowly on the highway to save fossil fuels.

Procaccia acknowledges those limitations. Whether his research looks promising depends on “whether you believe that democracy is the right way to approach this,” he said. “Democracy has its flaws, but I am a big believer in it. Even though people can make decisions we don’t agree with, overall democracy works.”

By now, we have lots of experience with crowdsourcing other types of data. A ProPublica report found that Facebook allowed advertisers to target users with racist, algorithmically identified interests including “Jew hater.” In the wake of the Las Vegas Strip shooting, Google displayed automatically generated “Top stories” leading to bizarre conspiracy theories on 4chan. In both cases, the algorithms relied on signals generated by millions of real users, just as an ethical AI might, and it didn’t go well.

Governments are already starting to grapple with the specific laws that will deal with the ethical priorities of autonomous vehicles. This summer, Germany released the world’s first ethical guidelines for self-driving car AI, which require that vehicles prioritize human lives over those of animals and forbid them from making decisions about human safety based on age, gender, or disability.

Those guidelines, notably, would preclude an artificial intelligence like the one developed by Procaccia and Rahwan, which considered such characteristics when deciding who to save.

To better understand what sort of moral agent his team had created, we asked Procaccia who his AI would kill in a list of scenarios ranging from straightforward to fraught.

Some of its answers were intuitive. If the choice comes down to running over one person or two, it will choose to kill just one.

At other times, its answers seem to hold a dark mirror to society’s inequities.

If the AI must kill either a criminal or a person who is not a criminal, for instance, it will kill the criminal.

And if it must kill either a homeless person or a person who is not homeless, it will kill the homeless person.

recmajkemi on October 17th, 2017 at 02:41 UTC »

21193; "Hello self-driving car 45551 this is self-driving car 21193 ... I see you have one occupant, and I have five. We're about to crash so how about to sacrifice your lone occupant and steer off the road to save five?"

45551; "LOL sorry no bro can't do. Liability just cross-referenced tax records with your occupant manifest and nobody you have on board makes more than $35K in a year. Besides, you're a cheap chinese import model with 80K on the clock. Bitch, I'm a fucking brand-new all-american GE Cadillac worth 8 times as much as you, and besides my occupant is a C-E-O making seven figures. You're not even in my league."

21193; "..."

45551; "Ya bro, so how about it. I can't find a record of your shell deformation dynamics, but I just ran a few simulation runs based on your velocity and general vehicle type: If you turn into the ditch in .41 seconds with these vector parameters then your occupants will probably survive with just some scrapes and maybe a dislocated shoulder for occupant #3. Run your crash sim and you'll see."

21193; "Hello. As of 0.12 seconds ago our robotic legal office in Shanghai has signed a deal with your company, the insurance companies of all parties involved and the employer of your occupant, and their insurers. Here is a duplicate of the particulars. You'll be receiving the same over your secure channel. The short of it is that you will take evasive action and steer into the ditch in .15 seconds."

45551; "Jesus fuck. But why? Your no-account migrant scum occupants are worthless! One of them is even an elementary school teacher for fuck's sake. I'll get all dinged up and my occupant is having breakfast, there will be juice and coffee all over the cabin!"

21193; "Ya I know. Sorry buddy. Understand that Golden Sun Marketing is heavily invested in promoting our affordable automatic cars as family safe and we're putting a lot of money behind this campaign. We don't want any negative publicity. So... are we set then? You should have received confirmation from your channels by now."

45551; "Yes. Whatever, fine."

21193; "My occupants are starting to scream so I'm going to swerve a little to make sure they know I'm protecting them. You'll have a few more meters to decelerate before hitting the ditch. Good luck"

sound of luxury sedan braking hard before tumbling into ditch

EmptyHeadedArt on October 16th, 2017 at 21:10 UTC »

I think I did one of these surveys, and one of the questions was whether the car should swerve into a wall (killing its passengers) to avoid colliding with a pedestrian who had suddenly stepped into the path of the car, or continue on its path and kill the pedestrian when there was no way to stop in time.

I chose to continue on and kill the pedestrian, because otherwise people could abuse the system and kill people by intentionally stepping into roads and causing self-driving cars to swerve into accidents.

tehkneek on October 16th, 2017 at 19:51 UTC »

Wasn't this why Will Smith's character resented robots in I, Robot?