Accidents involving AVs get a lot of publicity because they make unique stories and because companies that are vulnerable to AVs want to delay them as much as possible. An AV accident, especially one involving a fatality, will be reported hundreds of times across a range of publications. Fatal accidents involving human drivers are so commonplace that they receive only a brief mention in the local press, if at all.
Are AVs safe?
In theory, and so far in limited real-world experience, AVs are proving to be much safer than human drivers.
I read about the pedestrian who was killed by an Uber autonomous vehicle; what’s the story behind that?
The tragic death of a pedestrian hit by an Uber autonomous vehicle (AV) provides some valuable lessons. Elaine Herzberg, 49, was walking her bicycle far from the crosswalk on a four-lane road in the Phoenix suburb of Tempe about 10 PM on Sunday, March 18, 2018, when she was struck by an Uber autonomous vehicle traveling at about 38 miles per hour. The Uber Volvo XC90 SUV test vehicle was in autonomous mode with an operator behind the wheel.
“The pedestrian was outside of the crosswalk. As soon as she walked into the lane of traffic she was struck,” Tempe Police reported at a news conference. The video footage shows the Uber vehicle traveling in the rightmost of two lanes. Herzberg was moving from left to right, crossing from the center median to the curb on the far right.
Video from the accident made it clear that a human driver could not have reacted in the roughly two seconds between the victim becoming visible and the impact. However, it was subsequently determined that the Uber AV's lidar had detected her several seconds earlier, while she was still in the dark and not yet visible. The AV's software did not identify her as a person with a bike; it classified her as a harmless object and did not brake. Many AV experts see this as a failure of the Uber software. Uber suspended its AV testing for about eight months following the accident.
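To make that failure mode concrete, here is a minimal, hypothetical sketch in Python (the names, labels, and thresholds are invented for illustration; this is not Uber's actual code) of a braking policy that reacts only to objects the classifier recognizes, alongside a more conservative alternative:

    from dataclasses import dataclass

    @dataclass
    class DetectedObject:
        distance_m: float    # distance ahead of the vehicle, in meters
        crossing_lane: bool  # is the object moving into the vehicle's lane?
        label: str           # classifier output: "pedestrian", "vehicle", "unknown", ...

    # Labels this naive policy treats as hazards worth braking for.
    HAZARD_LABELS = {"pedestrian", "cyclist", "vehicle"}

    def should_brake(obj: DetectedObject) -> bool:
        # Naive policy: brake only for objects the classifier recognizes.
        # A detected but "unknown" object crossing the lane triggers no
        # braking, mirroring the failure described above.
        return obj.label in HAZARD_LABELS and obj.distance_m < 40.0

    def should_brake_conservative(obj: DetectedObject) -> bool:
        # Safer policy: brake for anything on a collision course,
        # whether or not the classifier recognizes it.
        return obj.distance_m < 40.0 and obj.crossing_lane

    # Lidar detects something crossing the lane; the classifier says "unknown".
    obj = DetectedObject(distance_m=35.0, crossing_lane=True, label="unknown")
    print(should_brake(obj))               # False: the naive policy does nothing
    print(should_brake_conservative(obj))  # True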
This case is both an example of an accident that a human driver could not have prevented and one that an AV should have avoided but didn't.
What about the accidents involving Tesla vehicles operating in semi-autonomous mode?
Tesla vehicles have been involved in a couple of fatal accidents. On March 23, 2018, a Tesla Model X crashed into a freeway divider, killing the driver. The NTSB concluded that the vehicle accelerated from 62 to 71 MPH before the crash, with no braking or evasive steering. Autopilot was engaged, the cruise speed was set at 75 MPH, and the driver's hands were not on the steering wheel for the prior six seconds. A review of video of the location, along with a video from another Tesla driver who later drove past the same spot on Autopilot, suggests that the vehicle may have misread the distorted painted lane lines that veered off to the left and steered in that direction, accelerating because the car that had been in front of it was no longer in its path. Tesla pointed out that the crash barrier was defective because it had been hit earlier by another vehicle.
A Tesla was also involved in an earlier fatal accident when it drove into a truck because it could not distinguish the white side of the tractor-trailer from the sky: the monochrome cameras read the large white side of the truck stretched across the road as sky, so the vehicle did not stop. The bigger problem, though, was that the driver had put the car into Autopilot mode, was not paying attention, and ignored the car's warnings to pay attention to driving.
Tesla was not judged to be at fault in either accident.
Are companies required to file accident reports when testing AVs?
Yes, they are. California, where much of the testing is done, requires an accident report to be filed with the state for every accident involving an AV.
Any preliminary assessments from these reports?
The reports from 2014 (when few vehicles were operating) through the first half of 2018 showed 38 accident reports while AVs were in operation. Only one of these was caused by the AV; the remainder were caused by human drivers.
How can you tell who is at fault when an AV has an accident?
Unlike human drivers, AVs have an exact memory: they record video and digital data of all activity before and during an accident, providing a detailed, objective record of how the accident occurred.
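As an illustration only (recording formats vary by manufacturer, and the structure below is an assumption, not any company's actual format), the logged data can be pictured as a time-stamped record that investigators replay after an accident:

    from dataclasses import dataclass, field

    @dataclass
    class EventRecord:
        timestamp_s: float         # seconds relative to the collision (negative = before)
        speed_mps: float           # vehicle speed in meters per second
        steering_angle_deg: float  # commanded steering angle
        brake_applied: bool        # whether the brakes were engaged
        detected_objects: list[str] = field(default_factory=list)  # classifier labels

    def replay(log: list[EventRecord]) -> None:
        # Print the vehicle's state leading up to an accident,
        # as an investigator might review it.
        for rec in log:
            print(f"t={rec.timestamp_s:+.1f}s  speed={rec.speed_mps:.1f} m/s  "
                  f"brake={rec.brake_applied}  objects={rec.detected_objects}")

    # Example: the final two seconds before an impact.
    replay([
        EventRecord(-2.0, 17.0, 0.0, False, ["unknown"]),
        EventRecord(-1.0, 17.0, 0.0, False, ["pedestrian"]),
    ])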
Can AVs learn from accidents?
Any accident involving an AV is taught to all other AVs operated by that company, and in some cases to all AVs being developed. After the Uber pedestrian accident, for example, all companies developing AVs made sure that their AVs would handle that same situation. Human drivers, in contrast, don't learn from the accident experiences of others.
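One way to picture this fleet-wide learning, as a rough sketch rather than any company's actual system (every name below is invented for illustration): each accident becomes a recorded scenario that any future software release must handle before it ships.

    # Hypothetical sketch: a real incident becomes a regression test
    # that every vehicle's next software release must pass.
    scenario_library: list[tuple[str, str]] = []

    def record_accident_scenario(sensor_log: str, correct_action: str) -> None:
        # Turn a real incident into a simulated test case for the whole fleet.
        scenario_library.append((sensor_log, correct_action))

    def release_passes(planner) -> bool:
        # A new software version ships only if it handles every
        # previously recorded accident scenario correctly.
        return all(planner(log) == action for log, action in scenario_library)

    # After the Uber pedestrian accident, for example, the required action for
    # a detected-but-unclassified object crossing the lane is to brake.
    record_accident_scenario("unknown_object_crossing_lane", "brake")
    print(release_passes(lambda log: "brake"))  # True: this planner always brakes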
So, are AVs safe?
Yes. They are currently safer than human drivers and will continue to get better and safer. However, there will continue to be negative publicity: every accident will be dramatically reported, often followed by editorials biased against AVs. Several large factions stand to lose if auto accidents are drastically reduced, including auto insurance agencies and companies, as well as trial lawyers, and they have paid industry associations to encourage media campaigns that delay AVs.
Is this a little like the Luddites from old England?
The Luddites were a group of English textile workers in the 19th century with a radical faction that destroyed textile machinery as a form of protest. The group was protesting the use of machinery in a "fraudulent and deceitful manner" to get around standard labor practices. It is a misconception that the Luddites protested the machinery itself to halt the progress of technology. Over time, however, the term has come to mean one opposed to industrialization, automation, computerization, or new technologies in general.
Will AVs be introduced slowly or thrust upon us all at once?
AVs are being introduced in baby steps. They have been tested with safety drivers for years and over millions of miles. Now they are being introduced as ARS serving a minimal set of routes and a select group of passengers. Gradually, as they prove to be safe, the number of routes and eligible passengers will increase.