Take a look at Capital and Main for the podcast version of this piece, and the original appearance of these ideas, among far more interesting ones. Now, to it…
Whether it takes 15 years or 50, it is almost certain that self-driving cars will be on the road within my lifetime. Waymo, the Alphabet company, is at 3 million test miles driven. Uber, BMW, Cruise (in partnership with General Motors), Mercedes, Volvo, Nissan, Ford, and others have also been logging hundreds of thousands of miles with no one behind the wheel. Tesla drivers, meanwhile, have notched 300 million miles using the company’s semi-autonomous Autopilot feature.
At the exponential pace of software adoption, the technology is nearly ready for the road. But technological hurdles are one thing; cultural barriers are bound to prove far trickier to overcome.
A 2017 study by Deloitte found that three-quarters of Americans do not trust driverless vehicles. The American Automobile Association found that 54 percent of drivers feel less safe even sharing the road with fully autonomous cars. But how safe is safe enough?
To better understand this question – the main question vexing insurers and others – think about the psychology of the driver. Implicit in my decision to get behind the wheel is an understanding that there are other drivers on the road as well. And every interaction with another moving vehicle is an exercise in game theory: intuitively modeling what I expect the other driver to do as a function of what I’m going to do. Yet how does game theory work with a self-driving car? Am I supposed to anticipate what an algorithm would do?
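To make that intuition concrete, here’s a toy version of the guessing game two drivers play at a four-way stop. The payoffs are invented, purely for illustration – the point is that my best move depends entirely on my guess about yours:

```python
# Toy model of the driver-vs-driver guessing game at an intersection.
# Payoffs are invented for illustration: a crash is very bad, mutual
# yielding wastes time, and proceeding while the other yields is best.

PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("go", "go"):       (-100, -100),  # collision
    ("go", "yield"):    (2, 0),        # I proceed, they wait
    ("yield", "go"):    (0, 2),        # they proceed, I wait
    ("yield", "yield"): (-1, -1),      # awkward standoff, everyone waits
}

def best_response(their_expected_move: str) -> str:
    """Pick my move given my guess about what the other driver will do."""
    return max(["go", "yield"],
               key=lambda my_move: PAYOFFS[(my_move, their_expected_move)][0])

# The whole game hinges on the guess:
print(best_response("yield"))  # -> "go"
print(best_response("go"))     # -> "yield"
```

With a human across the intersection, I have a lifetime of practice forming that guess. With an algorithm, I don’t.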
Self-driving cars tend to come in fleets. Each one is automatically part of a network of cars that share its software. This allows the machine learning to train on *all* the driving data captured by every car on the road, instead of just one car’s.
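Here is a minimal sketch of what that pooling might look like. Everything in it – `Car`, `FleetModel`, the event format – is a hypothetical stand-in of my own, not any vendor’s actual system:

```python
# Illustrative only: these classes are hypothetical stand-ins,
# not any real fleet's API.

class FleetModel:
    """One shared model that learns from every car's experience."""
    def __init__(self):
        self.training_data = []

    def update(self, events):
        # In a real system this would be a distributed training step;
        # here we simply pool the raw events.
        self.training_data.extend(events)

class Car:
    def __init__(self, car_id, fleet_model):
        self.car_id = car_id
        self.fleet_model = fleet_model
        self.local_events = []

    def log_event(self, event):
        self.local_events.append((self.car_id, event))

    def sync(self):
        # Every car's data improves every other car's driving.
        self.fleet_model.update(self.local_events)
        self.local_events = []

fleet = FleetModel()
cars = [Car(i, fleet) for i in range(3)]
cars[0].log_event("pedestrian stepped off curb at dusk")
cars[0].sync()
# All three cars now benefit from car 0's experience.
print(len(fleet.training_data))  # -> 1
```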
This also means that the cars can talk to each other at the speed of a wireless connection. In an interaction between two self-driving cars, the game theory is not between them, but between the entire fleet and any other entity, since the cars have perfect information about one another. Imagine being a driver, then, and approaching an intersection where all the cars know exactly what the others will do. Of course, each self-driving fleet will have its own software, and so far there is no sign that competing companies’ fleets will be integrated. So the likelier scenario is: You approach an intersection, and you don’t know whether the cars are talking about you or not. And if they are, what are they saying? Now consider a split-second decision at high speed, or in a confined space. Unnerving, right?
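The asymmetry is easy to sketch. In the made-up illustration below (this is not any real vehicle-to-vehicle protocol), fleet cars publish their intentions to a shared channel and coordinate perfectly, while the human driver sees none of it:

```python
# Made-up illustration of the information asymmetry at an intersection.
# Fleet cars share a common channel; the human driver can only guess.

intent_channel = {}  # shared state, visible to fleet members only

def fleet_car_announce(car_id: str, plan: str) -> None:
    intent_channel[car_id] = plan

def fleet_car_decide(car_id: str) -> str:
    # Perfect information: read everyone else's committed plan,
    # then pick the non-conflicting move. No guessing required.
    others_going = any(plan == "enter intersection"
                       for cid, plan in intent_channel.items()
                       if cid != car_id)
    return "wait" if others_going else "enter intersection"

def human_driver_decide() -> str:
    # No access to intent_channel: back to the guessing game.
    return "creep forward and hope"

fleet_car_announce("fleet-1", "enter intersection")
print(fleet_car_decide("fleet-2"))  # -> "wait" (coordinated)
print(human_driver_decide())        # -> "creep forward and hope"
```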
Each year, more than 30,000 Americans die and many more are injured in car accidents, the vast majority of which are caused by human error. Driverless cars could eliminate 90% of these deaths and injuries, according to experts. But these numbers—impressive as they are—may not matter very much.
The fear of a road full of self-driving cars is a fear that machine game theory, even if it is explicitly designed to avoid accidents, is not perfectly compatible with human empathy. And in the moments of inconsistency, what will the robot do? For example: If there is a chance to protect two pedestrians, but it requires potentially injuring a rider, which will a fleet of self-driving cars choose? What if you’re the one sitting in the passenger seat? (A deliberately crude version of that calculation appears in the sketch below.) Colloquially, ‘what if the software malfunctions?’ gets at the same point. Human malfunction is reasonably predictable: I know that accidents happen when a driver is drunk, distracted, sleepy, or overly upset. But what might cause a machine to crash – literally? Being unable to relate to it on a human level makes it much scarier, even though the reality may be much safer.

Solving the social resistance to self-driving cars (beyond concerns that they could cost millions of jobs) will not come down to safety statistics, or even to miles driven without incident. Those don’t actually matter nearly as much as policymakers and technologists think. Signaling, and communicating a clear sense of fairness – and if not empathy, then something that rhymes with it – will matter much more.
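About that pedestrian-versus-rider choice: here is the crudest possible version of it as code. Every number is invented and no real system is this simple; the point is that some explicit rule has to exist, and whoever wrote it made the choice for you:

```python
# Deliberately crude: a toy 'minimize expected harm' rule with invented
# probabilities and weights. The trade-off has to be encoded somewhere.

def expected_harm(outcomes):
    """outcomes: list of (probability_of_injury, people_affected)."""
    return sum(p * n for p, n in outcomes)

# Option A: swerve -> small chance of injuring the rider.
swerve = [(0.3, 1)]
# Option B: brake straight -> larger chance of injuring two pedestrians.
brake = [(0.4, 2)]

choice = "swerve" if expected_harm(swerve) < expected_harm(brake) else "brake"
print(choice)  # -> "swerve": the math protects the pedestrians at the
               # rider's expense. Would you buy a ticket in that seat?
```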
Thanks to Rick Wartzman at Capital and Main for editing and publishing me on this one!