AV Technology: The Safety of Autonomous Vehicles
Overwhelmingly, the highest priority for developers of autonomous vehicle (AV) platforms is safety. AV startups and established automakers alike consistently cite this mandate as the most important quality for their vehicles to possess.
For all the testing that Society of Automotive Engineers (SAE) Level 4 ("high automation") AVs have undergone, the companies putting them on the road have been reluctant to place them in environmental conditions or locales that might pose a risk to passengers. In many quarters of the industry, a consensus is emerging that for SAE Level 4 or Level 5 ("full automation") driving to succeed universally, an AV will need to be 10 to 1,000 times as safe as a car driven by a human.
In the United States, about 32,000 people die every year from automobile accidents, and a further two million are injured. The economic and social costs of motor vehicle accidents are estimated to be at least $800 billion per year. AV platform developers have vowed to bring those numbers down drastically.
Until now, when a company set out to develop AV hardware and software, safety has almost always been the first and most important trait specified for the platform. An unsafe AV creates tremendous liability for the maker of the vehicle it is installed in, and possibly for the platform creator as well. This is to say nothing of the bad publicity and reputational damage that any negative incident (particularly one involving a fatality) can engender. But perhaps the most consequential long-term outcome of an accident involving an AV platform is the loss of sales and services for the company that developed it.
So far in the AV world, several notable incidents have occurred in which people have been killed.
In March 2018 in Tempe, Arizona, pedestrian Elaine Herzberg was struck and killed by an Uber test vehicle traveling 43 mph while the AV was in SAE Level 3 self-driving mode. It was the AV industry's first fatal accident involving a pedestrian, and the most serious accident Uber's Advanced Technologies Group (ATG) had experienced to that point.
An investigation by Tempe police determined that the backup safety driver, Rafaela Vasquez, had been streaming the television show The Voice on her cellphone when the accident happened. Under the rules governing AV testing, safety drivers are supposed to keep their hands near the vehicle's steering wheel during SAE Level 3 operation, but video from inside the car showed this was not the case when the collision occurred.
Uber settled out of court with Herzberg's family for an undisclosed sum and voluntarily did not renew its permit to drive AVs in Arizona later that year. For its part, the state of Arizona did not charge the company, finding no criminal liability.
Tesla has had four fatal accidents involving its Autopilot platform. The first was in Handan, China, in January 2016, when driver Gao Yaning activated the platform two minutes before crashing into a parked truck. In May 2016, a Tesla Model S driven by Joshua Brown passed under the trailer of a freight truck crossing its path in Williston, Florida. Almost two years later, in March 2018, Walter Huang's Tesla Model X hit a concrete barrier, collided with two other vehicles, and caught fire in Mountain View, California. And in March 2019, a Tesla Model 3 driven by Jeremy Banner struck a semi-trailer truck making a left-hand turn across its path, killing him.
Besides these incidents, three other non-fatal accidents involving Tesla's Autopilot have been publicized: one in Culver City, California in January 2018; another in South Jordan, Utah in May 2018; and one in Moscow in August 2019.
In August 2018, it was widely reported that an AV belonging to computer maker Apple's Project Titan had an accident in Sunnyvale, California. According to news reports, another car rear-ended the Apple test vehicle at low speed; there were no injuries.
Other AV platform companies experiencing non-fatal accidents have included Aurora, Drive.ai, JingChi, Zoox, Cruise, Toyota, and Google sister company Waymo. At least one writer speculated that in Waymo's case, a number of the fender-benders were due to Waymo vehicles obeying traffic laws too precisely; in other words, the cars were driving "too perfectly" for other drivers' tolerances.
Looking at the safety records of AV platform developers, hardly any have gone without at least a few incidents: near-misses, crashes, or collisions with objects, other vehicles, cyclists, or pedestrians.
In the state of California, only a handful of companies have gotten a license to operate driverless AVs on public roads. All other AV platform companies have been required to have a backup safety driver in the driver's seat to take control of a vehicle in case of incidents or situations the platform cannot handle.
The way the state of California tracks the roadworthiness of these AVs is via a metric called "disengagements." A disengagement occurs when an AV platform fails to handle a situational or environmental hazard and disengages, transferring control of the vehicle to the safety driver, or when the driver judges it necessary to take over.
This is considered a failure of the platform, and the developer's goal is for it never to occur, but in reality, this happens far more than platform companies would like to admit. Platform developers have "black boxes" built into each test AV to record all disengagements, which are then reported to the state's Department of Motor Vehicles.
In 2019, the five companies with the lowest disengagement rates (ranked from fewest to most) were Waymo, Cruise, Zoox, Nuro, and Pony.ai. Waymo experienced 0.09 disengagements per 1,000 miles driven, while Pony.ai experienced 0.98.
Not all disengagements are reported publicly. For instance, Russian AV platform developer and Internet search engine company Yandex—which has tested AVs in Las Vegas, but not California—has steadfastly refused to release any of its disengagement data, instead only reporting kilometers driven in an autonomous mode.
Of course, to get a true picture of a company's AV safety record, the number of miles or kilometers a platform has been driven in all locales should be divided by the total number of disengagements it's experienced. Then, one would naturally want to look at the disengagements in question and try to find out why they occurred.
Were they all for the same reason? Did the platform have an innate weakness or disability with a particular driving scenario or weather condition? Did it fail to respond properly to particular hazards or moving objects? Does it have trouble with specific turns, sightlines, or blind spots?
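The arithmetic behind these two metrics can be sketched in a few lines of code. The mileage and disengagement counts below are made-up examples, not figures from any actual DMV report:

```python
# Illustrative disengagement statistics for hypothetical platforms.
# Both views of the data are computed: miles per disengagement
# (higher is better) and disengagements per 1,000 miles (lower is better).

fleet_reports = {
    "Platform A": {"miles": 1_450_000, "disengagements": 110},
    "Platform B": {"miles": 175_000, "disengagements": 170},
}

for name, r in fleet_reports.items():
    miles_per_dis = r["miles"] / r["disengagements"]
    per_1000_miles = r["disengagements"] / (r["miles"] / 1000)
    print(f"{name}: {miles_per_dis:,.0f} miles per disengagement, "
          f"{per_1000_miles:.2f} disengagements per 1,000 miles")
```

Note that the per-1,000-mile rate only normalizes exposure; as the questions above suggest, a meaningful safety comparison still requires examining why each disengagement occurred.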
"Brute-force" real-world testing
It's the goal of all AV platform developers to "work out the kinks" in their AV software and hardware. Some of this can be done with 3D simulations on computers, but much of it needs to be done in the real world, by putting a prototype vehicle through its paces in various studies, scenarios, and practice runs. One paper put out by the RAND Corporation estimated that AVs would need to accumulate at least 200 billion miles of real-world testing to be pronounced "safe."
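The scale of that 200-billion-mile figure can be put in perspective with some back-of-the-envelope arithmetic. The fleet size and annual per-vehicle mileage below are illustrative assumptions, not figures from any real test program:

```python
# Rough feasibility check of the RAND figure cited above: how long
# would a dedicated test fleet need to log 200 billion real-world miles?

TARGET_MILES = 200_000_000_000    # 200 billion miles (RAND figure)
FLEET_SIZE = 1_000                # assumed number of test vehicles
MILES_PER_CAR_PER_YEAR = 100_000  # assumed near-continuous test driving

years = TARGET_MILES / (FLEET_SIZE * MILES_PER_CAR_PER_YEAR)
print(f"~{years:,.0f} years for a {FLEET_SIZE:,}-car fleet")  # ~2,000 years
```

Even under these generous assumptions, the timescale is measured in millennia, which is one reason simulation remains an essential complement to on-road testing.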
While some vendors, such as Intel/Mobileye, have stated that 99 percent of all road crashes can be traced to 37 distinct driving situations, the remaining one percent still adds up to a lot of accidents, and those accidents likely have thousands (or even millions) of distinct causes.
As Bryan Salesky, CEO of AV platform company Argo AI, puts it, one of the problems is that "you see all kinds of crazy things on the road"—by which he means other vehicles and pedestrians occasionally behave in a manner that they shouldn't (possibly even illegally). "The hard part is anticipating what they're going to do next," he says. The industry refers to these as "corner cases."
Hence, companies like Waymo that can afford it are employing a "brute force" approach to gaining the kind of real-world AV driving experience that will be necessary to be able to say that a platform is capable of handling whatever the road—and nature—throws at it.
Waymo and several other platform makers have constructed fake "cities" made up of a small geofenced area of sample streets and intersections designed to mimic real-life urban and highway environments. Along these thoroughfares, the companies have laid props to simulate buildings, road signs, obstacles, construction vehicles, and other local features. In many cases, passing vehicles and pedestrians are the real thing.
Says Steph Villegas, a former driver at Waymo's top-secret "Castle" city in California's Central Valley, "[Waymo] made conscious decisions in designing [Castle City] to make residential streets, expressway-style streets, cul-de-sacs, parking lots, things like that, so we'd have a representative concentration of features that we could drive around."
Nonetheless, "if a car is overly cautious, this [can become] a nuisance," states Huei Peng, the director of M City, an AV research center and testing facility at the University of Michigan.
A goal of 10x safety
As the AV industry matures, there's a growing awareness that universal SAE Level 4 functionality (under all types of weather conditions, in any locale worldwide, urban or rural) may be further off than many earlier estimates suggested.
There's also a growing recognition that consumers, particularly in the U.S., have doubts about the safety of AVs: 75 percent of Americans expressed a lack of trust in a 2017 Deloitte study. In China, consumers appear much more eager to accept AVs, with 50 percent of those surveyed saying they would like to turn over driving duties to AVs sometime in the next five years, according to a study by the Capgemini Research Institute.
More and more, vendors are beginning to believe that universal acceptance of AVs at an SAE 4 or 5 Level won't happen until they're substantially safer than cars driven by humans. After all, even if a trip in a driverless AV could be faster and more efficient than one in a car driven by a human, if the passenger doesn't get to their destination in one piece, what's the point?
Currently, 94 percent of all auto accidents are attributed to human error. Drunkenness, distraction, and fatigue account for 41, 10, and 2.5 percent of that total, respectively.
According to Wesley Shao, Director of Autonomous Driving at Chinese electric vehicle (EV) manufacturer Byton, there's a common belief among AV platform developers that public acceptance of AVs won't get up to 100 percent until AVs are at least ten times—one order of magnitude—as safe as vehicles driven by humans.
According to some AV company executives, such as Tesla CEO Elon Musk, this measurement should become an industry standard. Other industry figures, such as Herbert Diess, CEO of Volkswagen (the world's largest automobile group), believe the figure should be 100 to 1,000 times (two to three orders of magnitude) as safe.
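These multiples can be translated into concrete fatality-rate targets. As an illustrative sketch, the baseline of roughly 1.1 fatalities per 100 million vehicle miles traveled is a commonly cited U.S. figure, used here as an assumption rather than a number from this article:

```python
# Converting "N times as safe as a human driver" into target fatality
# rates, relative to an assumed human-driven baseline.

HUMAN_RATE = 1.1  # fatalities per 100 million miles (assumed baseline)

for factor in (10, 100, 1000):
    target = HUMAN_RATE / factor
    print(f"{factor}x safer -> {target:.4f} fatalities per 100M miles")
```

Under this framing, Musk's one-order-of-magnitude target implies roughly 0.11 fatalities per 100 million miles, while Diess's upper bound implies about 0.0011.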
Whether these multiples are achievable is another story. For now, some companies, such as Waymo, are consciously test-driving their vehicles in "manageable" environments like the suburbs of Phoenix, Arizona, where the weather is reliably sunny and traffic on local roads is light. The idea is that once this environment has been mastered (Waymo currently operates its Waymo One SAE Level 4 robo-taxi service there), the company can move on to more "difficult" locales with unpredictable weather and heavier traffic.
Other firms, such as General Motors-backed Cruise, are deliberately testing their vehicles in extremely challenging urban environments, in cities such as San Francisco, precisely because of the difficulty they pose. These firms believe that once their platforms can overcome the many driving challenges inherent in such places, they will be fully qualified to operate in more forgiving locations.
In either case, AV platform companies are slowly closing in on a target of "10x safety." Many industry observers believe that by 2030, such a target is achievable, but it may take an unpredictable amount of further simulation testing and real-world driving to get there.