Self-driving cars have to be safer than regular cars. The question is how much.

One of the biggest questions surrounding self-driving cars isn't technological but philosophical: How safe is safe enough?

It’s not something for which there’s an easy answer.

Ever since the 2004 DARPA Grand Challenge that kicked off the autonomous vehicle push, excitement about the prospect of fleets of self-driving cars on the road has grown. Multiple companies have gotten into the game, including tech giants Google (with Waymo), Uber, and Tesla, as well as more traditional automakers such as General Motors, Ford, and Volvo. The global autonomous vehicle market is valued at an estimated $54 billion and is projected to grow 10-fold over the next seven years.

As with any innovation, self-driving cars bring with them a lot of technical issues, but there are moral ones as well. Namely, there are no clear parameters for how safe is safe enough to put a self-driving car on the road. At the federal level in the United States, the guidelines in place are voluntary, and laws vary from state to state. And if and when parameters are defined, there's no set standard for measuring whether they've been met.

Human-controlled driving today is already a remarkably safe activity: in the United States, there is approximately one death for every 100 million miles driven. Self-driving cars would, presumably, need to do better than that, which is what the companies behind them say they will do. But how much better is not an easy question to answer. Do they need to be 10 percent safer? 100 percent safer? And is it acceptable to wait for autonomous vehicles to meet super-high safety standards if it means more people die in the meantime?

Testing safety is another challenge. Gathering enough data to prove self-driving cars are safe would require hundreds of millions, even billions, of miles to be driven. It’s a potentially enormously expensive endeavor, which is why researchers are trying to figure out other ways to validate driverless car safety, such as computer simulations and test tracks.

Different players in the space have different approaches to gathering the data needed to test safety. As The Verge pointed out, Tesla is leaning on the data that the cars it already has on the road are producing with its autopilot feature, while Waymo is combining computer simulations with its real-world fleet.

“Most people say, in a loose manner, that autonomous vehicles should be at least as good as human-driven conventional ones,” Marjory Blumenthal, a senior policy researcher at research think tank RAND Corporation, said, “but we’re having trouble both expressing that in concrete terms and actually making it happen.”

Completely self-driving cars on the road everywhere are probably a long way away

To lay some groundwork, there are six levels of autonomy established for self-driving cars, ranging from 0 to 5. A Level 0 car has no autonomous capabilities; a human driver just drives the car. A Level 4 vehicle can pretty much do all the driving on its own, but only in certain conditions, for example, in set areas or when the weather is good. A Level 5 vehicle can do all the driving in all circumstances, without a human having to be involved at all.

Right now, the automation systems on the road from companies such as Tesla, Mercedes, GM, and Volvo are Level 2, meaning the car controls steering and speed on a well-marked highway but a driver still has to supervise. By comparison, a Honda vehicle equipped with the company's "Sensing" suite of technologies, including adaptive cruise control, lane keeping assistance, and automatic emergency braking, is a Level 1.
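
For a quick reference point, that taxonomy can be sketched as a simple enumeration. The following minimal Python sketch uses informal shorthand names for the SAE J3016 levels, including Level 3, the "conditional automation" tier not described above.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Informal shorthand for the six SAE J3016 driving-automation levels."""
    NO_AUTOMATION = 0           # human does all of the driving
    DRIVER_ASSISTANCE = 1       # single assist feature, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # car steers and manages speed; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # car drives itself but driver must take over on request
    HIGH_AUTOMATION = 4         # car drives itself, but only in set areas or conditions
    FULL_AUTOMATION = 5         # car drives itself everywhere; no human needed

def driver_must_supervise(level: AutomationLevel) -> bool:
    """At Levels 0 through 2, a human still has to monitor the road at all times."""
    return level <= AutomationLevel.PARTIAL_AUTOMATION
```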

So when we’re talking about completely driverless cars, that’s a Level 4 or a Level 5. Daniel Sperling, founding director of the Institute of Transportation Studies at the University of California, Davis, told Recode that fully driverless cars — which don’t require anyone in the car at all and can go anywhere — are “not going to happen for many, many decades, maybe never.” But driverless cars in a preset “geofenced” area are possible in a few years, and some places already have slow-moving, self-driving shuttles in very restricted areas.

To be sure, some in the industry insist that an era of completely self-driving cars all over the roads is closer. Tesla has released videos of cars driving themselves from one destination to another and parking unassisted, though a human driver is present. But maybe don’t take Tesla CEO Elon Musk’s promise of 1 million completely self-driving taxis by next year very seriously.

Safety is a societal question for which there are no easy answers

Human-controlled driving in the US today is already a relatively safe activity, though there is obviously a lot of room for improvement: about 37,000 people died in motor vehicle crashes in 2017, and road incidents remain a leading cause of death. So if we are going to put fleets of self-driving cars on the road, we want them to be safer. And that doesn't just mean self-driving technology; safety gains are also being made because cars are heavier, have airbags and other safety equipment, brake better, and roll over less frequently. Still, exactly how much safer we want a driverless car to be remains an open question.

“How many millions of miles should we drive then before we’re comfortable that a machine is at least as safe as a human?” Greg McGuire, the director of the MCity autonomous vehicle testing lab at the University of Michigan, told me in a recent interview. “Or does it need to be safer? Does it need to be 10 times as safe? What’s our threshold?”

A 2017 study from RAND Corporation found that the sooner highly automated vehicles are deployed, the more lives will ultimately be saved, even if the cars are just slightly safer than cars driven by humans. Researchers found that in the long term, deploying cars that are just 10 percent safer than the average human driver will save more lives than waiting until they are 75 percent or 90 percent better.
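
The intuition behind that finding can be illustrated with a deliberately simple toy model, not RAND's actual model; the adoption share, improvement schedule, and time horizon below are invented for illustration. The point it captures is that a modestly safer fleet deployed early keeps improving while it racks up miles, so its savings compound over time.

```python
# Toy illustration of the RAND finding that deploying a slightly safer AV fleet
# early can save more lives than waiting for a much safer one.
# All scenario numbers are invented for illustration only.

HUMAN_RATE = 1.16e-8      # deaths per mile, roughly 1.16 per 100 million miles
ANNUAL_MILES = 3.2e12     # rough US vehicle miles traveled per year
AV_SHARE = 0.5            # assumed fraction of miles driven by AVs once deployed

def cumulative_deaths(deploy_year, start_improvement, years=30):
    """Sum road deaths over `years`, assuming AVs launch in `deploy_year`
    at `start_improvement` below the human fatality rate and then improve
    by 4 percentage points per year, capped at 90 percent safer."""
    total = 0.0
    for year in range(years):
        if year < deploy_year:
            rate = HUMAN_RATE                      # no AVs on the road yet
        else:
            imp = min(0.90, start_improvement + 0.04 * (year - deploy_year))
            av_rate = HUMAN_RATE * (1 - imp)
            rate = AV_SHARE * av_rate + (1 - AV_SHARE) * HUMAN_RATE
        total += ANNUAL_MILES * rate
    return total

early = cumulative_deaths(deploy_year=0, start_improvement=0.10)   # deploy now, 10% safer
later = cumulative_deaths(deploy_year=15, start_improvement=0.90)  # wait for 90% safer

print(f"Deploy at 10% safer now:     ~{early:,.0f} deaths over 30 years")
print(f"Wait 15 years for 90% safer: ~{later:,.0f} deaths over 30 years")
```

In this toy setup, the early-deployment scenario ends up with roughly 100,000 fewer cumulative deaths over 30 years, even though it starts with a barely better car.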

In other words, while we wait for self-driving cars to be perfect, more lives could be lost.

Beyond what counts as safe, there is also a conundrum around who is responsible when something goes wrong. When a human driver causes an accident or fatality, there is often little doubt about who’s to blame. But if a self-driving car crashes, it’s not so simple.

Sperling compared the scenario to another type of transportation. “If a plane doesn’t have the correct software and technology in it, then who’s responsible? Is it the software coder? Is it the hardware? Is it the company that owns the vehicle?” he said.

We’ve already seen the liability question on self-driving cars play out after an Uber testing vehicle hit and killed a woman in 2018. The incident caused a media uproar, and Uber reached a settlement with the victim’s family. The family of a man killed while driving a Tesla in 2018 sued the automaker earlier this year, saying that its autopilot feature was at fault, and the National Transportation Safety Board said in a preliminary report that Tesla’s autopilot was active in a fatal Florida crash in March. The family of a man who died in a 2016 Tesla crash, however, has said it doesn’t blame him or the company.

There is also a debate about what choices the vehicles should make if faced with a tough situation — for example, if an accident is unavoidable, should a self-driving car veer onto a pedestrian-filled sidewalk, or run into a pole, which might pose more danger to the people in the vehicle?

The MIT Media Lab launched a project, dubbed the “Moral Machine,” to try to use data to figure out how people think about those types of tradeoffs. It published a study about its findings last year. “[In] cases where the harm cannot be minimized any more but can be shifted between different groups of people, then how do we want cars to do that?” Edmond Awad, one of the researchers behind the study, said.

But as Vox’s Kelsey Piper explained at the time, these moral tradeoff questions, while interesting, don’t really get to the heart of the safety debate in self-driving cars:

[The] entire “self-driving car” setup is mostly just a novel way to bring attention to an old set of questions. What the MIT Media Lab asked survey respondents to answer was a series of variants on the classic trolley problem, a hypothetical constructed in moral philosophy to get people to think about how they weigh moral tradeoffs. The classic trolley problem asks whether you would pull a lever to move a trolley racing towards five people off-course, so instead it kills one. Variants have explored the conditions under which we’re willing to kill some people to save others. It’s an interesting way to learn how people think when they’re forced to choose between bad options. It’s interesting that there are cultural differences. But while the data collected is descriptive of how we make moral choices, it doesn’t answer the question of how we should. And it’s not clear that it’s of any more relevance to self-driving cars than to every other policy we consider every day — all of which involve tradeoffs that can cost lives.

At the time the study was released in Nature, Audi said it could help start a discussion around self-driving car decision-making, while others, including Waymo, Uber, and Toyota, stayed mum.

Measuring safety is going to be really hard

The harder question to answer when it comes to self-driving car safety might actually be how to test it.

There were 1.16 fatalities for every 100 million miles driven in the US in 2017. That means self-driving cars would have to drive hundreds of millions of miles, even hundreds of billions, to demonstrate their reliability. Waymo last year celebrated its vehicles driving 10 million miles on public roads since its 2009 launch.
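
To see where numbers like "hundreds of millions of miles" come from, here is a rough back-of-the-envelope sketch, not RAND's full analysis, using the standard Poisson zero-failure bound (sometimes called the "rule of three"): to claim with 95 percent confidence that a fleet's fatality rate is at or below some target, it needs to drive about -ln(0.05) divided by that rate, fatality-free.

```python
import math

# Rough back-of-the-envelope: fatality-free miles needed to claim, with 95%
# confidence, that an AV fleet's fatality rate is at or below a target rate.
# Uses the Poisson zero-failure bound: miles >= -ln(alpha) / rate.

HUMAN_RATE = 1.16e-8      # deaths per mile, roughly 1.16 per 100 million miles
ALPHA = 0.05              # 1 minus the desired confidence level

miles_needed = -math.log(ALPHA) / HUMAN_RATE
print(f"Match the human rate: ~{miles_needed / 1e6:.0f} million fatality-free miles")

# Demonstrating that AVs are some margin *better* than humans, rather than
# merely no worse, pushes the requirement further up:
for margin in (0.10, 0.50):                       # 10% safer, 50% safer
    target = HUMAN_RATE * (1 - margin)
    print(f"{margin:.0%} safer: ~{-math.log(ALPHA) / target / 1e6:.0f} million miles")
```

And that is the optimistic case. A real fleet will eventually have some fatalities rather than none, and statistically demonstrating an improvement over the human rate, rather than simply matching it, is what pushes the requirement into the billions or even hundreds of billions of miles in RAND's analysis.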

Accumulating billions of miles of test driving for self-driving cars is an almost impossible endeavor. Such a project would be hugely expensive and time-consuming — by some estimates, taking dozens or even hundreds of years. Plus, every time there’s a change to the technology, even if it’s just a couple of lines of code, the testing process would, presumably, have to start all over again.

“Nobody could ever afford to do that,” Steven Shladover, a retired research engineer at the University of California Berkeley, said. “That’s why we have to start looking for other ways of reaching that level of safety assurance.”

In 2018, RAND proposed a framework for measuring safety in automated vehicles, which includes testing via simulation, closed courses, and public roads with and without a safety driver. It would take place at different stages — when the technology is being developed, when it’s being demonstrated, and after it’s deployed.

As RAND researcher Blumenthal explained, crash testing under the National Highway Traffic Safety Administration's practices focuses on vehicle impact resistance and occupant protection, but "there is a need to test what results from the use of software that embodies the automation." Companies do that testing, but there's no broad, agreed-upon framework in place.

MCity at the University of Michigan in January released a white paper laying out safety test parameters it believes could work. It proposed an “ABC” test concept of accelerated evaluation (focusing on the riskiest driving situations), behavior competence (scenarios that correspond to major motor vehicle crashes), and corner cases (situations that test limits of performance and technology).

On-road testing of completely driverless cars is the last step, not the first. “You’re mixing with real humans, so you need to be confident that you have a margin of safety that will allow you not to endanger others,” McGuire, from MCity, said.

Even then, where the cars are being tested makes a difference. The reason so many companies are testing their vehicles in places such as Arizona is that the terrain is relatively flat and dry; in more varied landscapes or inclement weather, vehicle detection and other autonomous capabilities become more complex and less dependable.

In November 2018, Waymo CEO John Krafcik said even he doesn’t think self-driving technology will ever be able to operate in all possible conditions without some human interaction. He also said that he believes it will be decades before autonomous cars are ubiquitous.

“If you listen to some of the public pronouncements, most companies have become much more modest over time as they encounter real-world problems,” Blumenthal said.

It comes down to public trust

It’s not just researchers, engineers, and corporations in the self-driving car sector that are working on parameters for defining and measuring safety — there’s a role for regulators to play. In the US, there’s not much of a regulatory framework in place right now, and policy on the matter is an unanswered question.

Regulators are still trying to determine what sort of data they can realistically expect to get and analyze in order to evaluate self-driving car safety.

Shladover explained that another part of the problem is how we've historically handled laws and regulations around driving in the US. At the federal level, the National Highway Traffic Safety Administration is in charge of setting vehicle safety standards and regulating the equipment that gets built into vehicles. It falls under the aegis of the Department of Transportation, which is in the executive branch. In 2018, an NHTSA rule went into effect that requires new cars to have rearview technology. The rule stems from legislation enacted by Congress in 2008.

States, however, typically regulate driving behavior — setting speed limits, licensing drivers, etc. — and cities and municipalities can enact rules of their own, including around driverless cars. Self-driving car systems cut across the traditional boundaries between federal, state, and city government.

“Some of the driving behavior is actually embedded inside the vehicle, and that would normally be a federal responsibility, but the driving behavior and the interaction with other drivers is a state responsibility,” Shladover said. “It gets confused and complicated at that point.”

The NHTSA is currently seeking public comment on whether cars without steering wheels or brake pedals should be allowed on the road. (They're currently prohibited, though companies can apply for exceptions.) There was a push in Congress last year to pass self-driving legislation, but it fell short. Meanwhile, federal, state, and local governments are still trying to figure out how to ensure the safe behavior of automated driving systems, and who should be in charge of doing so.

But putting guidelines in place and assuring the public that self-driving technology is safe are essential to moving the technology forward. "Social trust of these systems and how these companies are operating is as important as the engineering, if not more," McGuire said.

Self-driving cars — in their limited use — and automated technology have proven to be very safe thus far, but they’re not foolproof. The question we have to answer as a society is how we define safe, both in what it means and how we prove it. The idea of putting your life in the hands of a camera and a car is a daunting one, even if it is indeed safer.

We're accustomed to the idea that sometimes accidents happen, and that human error can cause harm or take a life. But grappling with a technology and a corporation doing so is perhaps more complicated. Boeing's planes are still very safe, but after a pair of crashes that might be tied to one of its automated systems, its entire fleet of 737 MAX planes has been grounded. Yes, there's a need to think about self-driving Tesla and Waymo cars rationally rather than out of fear, but it's understandable to be wary of the idea that a line of code could kill us.

Sperling told me he thinks Wall Street could play a role in improving safety — namely, investors aren’t going to back a company whose vehicles they deem unsafe. “If you build a car that has multiple flaws in it that leads to deaths, you’re not going to be in business very long,” he said.

It's in the interest of Tesla, Waymo, GM, and everyone else involved to get the safety question right. They have invested a lot in self-driving and automated technology, and they have made a lot of advances. Cars with self-driving capabilities are increasingly a reality, and their presence on the road is only likely to grow.

