A Blog by Jonathan Low

 

Nov 7, 2019

The Reason Software Design Decisions Led To Uber's Fatal Driverless Car Crash

Machine learning is having trouble processing erratic and sometimes illegal (which is to say, typical) human behavior. JL

Timothy Lee reports in ars technica:

Radar in Uber's self-driving vehicle detected pedestrian Elaine Herzberg more than five seconds before the SUV crashed into her. Design decisions prevented the software from taking any action until 0.2 seconds before the deadly crash. Badly written software, not failing hardware, was responsible for the crash. Constantly switching classifications prevented Uber's software from accurately computing her trajectory and realizing she was on a collision course with the vehicle. At no point did the system classify her as a pedestrian because "the system design did not include consideration for jaywalking pedestrians."
Radar in Uber's self-driving vehicle detected pedestrian Elaine Herzberg more than five seconds before the SUV crashed into her, according to a new report from the National Transportation Safety Board. Unfortunately, a series of poor software design decisions prevented the software from taking any action until 0.2 seconds before the deadly crash in Tempe, Arizona.
Herzberg's death occurred in March 2018, and the NTSB published its initial report on the case in May of that year. That report made clear that badly written software, not failing hardware, was responsible for the crash. But the new report, released Tuesday, marks the end of the NTSB's 20-month investigation, and it provides a lot more detail about how Uber's software worked and how everything went wrong in the final seconds before Herzberg was killed.

A timeline of misclassification

Like most self-driving software, Uber's software tries to classify each object it detects into one of several categories—like car, bicycle, or "other." Then, based on this classification, the software computes a speed and likely trajectory for the object. This system failed catastrophically in Tempe.
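To make that classify-then-predict pipeline concrete, here is a minimal Python sketch. Everything in it (the names ObjectClass, TrackedObject, and predict_path, and the straight-line extrapolation) is invented for illustration; the report only says the software computes a speed and likely trajectory for each object from its classification and observed positions.

```python
from dataclasses import dataclass
from enum import Enum

class ObjectClass(Enum):
    # Illustrative labels drawn from the categories the article mentions.
    VEHICLE = "vehicle"
    BICYCLE = "bicycle"
    OTHER = "other"
    UNKNOWN = "unknown"

@dataclass
class TrackedObject:
    object_class: ObjectClass
    positions: list  # observed (x, y) points, oldest first; assumed non-empty

def predict_path(obj: TrackedObject, dt: float = 0.1, steps: int = 20):
    """Extrapolate a straight-line path from the last two observations.

    With fewer than two points there is no velocity estimate, so the
    object is effectively treated as static.
    """
    if len(obj.positions) < 2:
        return [obj.positions[-1]] * steps  # "static": assumed to stay put
    (x0, y0), (x1, y1) = obj.positions[-2:]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, steps + 1)]
```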
The NTSB report includes a second-by-second timeline showing what the software was "thinking" as it approached Herzberg, who was pushing a bicycle across a multi-lane road far from any crosswalk:
  • 5.2 seconds before impact, the system classified her as an "other" object.
  • 4.2 seconds before impact, she was reclassified as a vehicle.
  • Between 3.8 and 2.7 seconds before impact, the classification alternated several times between "vehicle" and "other."
  • 2.6 seconds before impact, the system classified Herzberg and her bike as a bicycle.
  • 1.5 seconds before impact, she became "unknown."
  • 1.2 seconds before impact, she became a "bicycle" again.
Two things are noteworthy about this sequence of events. First, at no point did the system classify her as a pedestrian. According to the NTSB, that's because "the system design did not include consideration for jaywalking pedestrians."
Second, the constantly switching classifications prevented Uber's software from accurately computing her trajectory and realizing she was on a collision course with the vehicle. You might think that a self-driving system that sees an object moving into its path would hit the brakes, even if it wasn't sure what kind of object it was. But that's not how Uber's software worked.

The system used an object's previously observed locations to help compute its speed and predict its future path. However, "if the perception system changes the classification of a detected object, the tracking history of that object is no longer considered when generating new trajectories," the NTSB reports.
What this meant in practice was that, because the system couldn't tell what kind of object Herzberg and her bike were, the system acted as though she wasn't moving.
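Continuing the hypothetical sketch above, the rule the NTSB quotes could look roughly like this. The name update_track is invented, but the discard-on-reclassification behavior follows the report's wording.

```python
def update_track(obj: TrackedObject, new_class: ObjectClass,
                 new_position: tuple) -> None:
    """Fold a fresh detection into an existing track.

    Per the NTSB: "if the perception system changes the classification
    of a detected object, the tracking history of that object is no
    longer considered when generating new trajectories."
    """
    if new_class is not obj.object_class:
        obj.positions.clear()        # history discarded on reclassification
        obj.object_class = new_class
    obj.positions.append(new_position)
```

Against the timeline above, each reclassification (at 4.2 seconds, repeatedly between 3.8 and 2.7 seconds, then at 2.6, 1.5, and 1.2 seconds before impact) would wipe the history, so the velocity estimate kept collapsing back to "static."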
From 5.2 to 4.2 seconds before the crash, the system classified Herzberg as a vehicle and decided that she was "static"—meaning not moving—and hence not likely to travel into the car's path. A little later, the system recognized that she was moving but predicted that she would stay in her current lane.
When the system reclassified her as a bicycle 2.6 seconds before impact, the system again predicted that she would stay in her lane—a mistake that's much easier to make if you've thrown out previous location data. At 1.5 seconds before impact, she became an "unknown" object and was once again classified as "static."
It was only at 1.2 seconds before the crash, as she was starting to enter the SUV's lane, that the system realized a crash was imminent.

“Action suppression”

At this point, it was probably too late to avoid a collision, but slamming on the brakes might have slowed the vehicle enough to save Herzberg's life. That's not what happened. The NTSB explains why:
"When the system detects an emergency situation, it initiates action suppression. This is a one-second period during which the [automated driving system] suppresses planned braking while the system verifies the nature of the detected hazard and calculates an alternative path, or vehicle operator takes control of the vehicle."
NTSB says that according to Uber, the company "implemented the action suppression process due to the concerns of the developmental automated detection system identifying false alarms, causing the vehicle to engage in unnecessary extreme maneuvers." As a result, the vehicle didn't begin to apply the brakes until 0.2 seconds before the fatal crash, far too late to save Herzberg's life.
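As a rough sketch, the gate the NTSB describes amounts to a one-second check like the following. The function and variable names are invented; only the one-second window comes from the report.

```python
import time

SUPPRESSION_WINDOW = 1.0  # seconds, per the NTSB's description

def braking_command(hazard_first_seen: float, planned_decel: float) -> float:
    """Suppress planned emergency braking for one second after a hazard
    is detected, while the system re-verifies the hazard and looks for
    an alternative path (or the operator takes over).
    """
    if time.monotonic() - hazard_first_seen < SUPPRESSION_WINDOW:
        return 0.0  # action suppression: no braking issued yet
    return planned_decel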
Even after this one-second delay, the NTSB says, the system didn't necessarily apply the brakes with full force. If a collision could be avoided with hard braking, the system braked hard, up to a fixed maximum level of deceleration. But if a crash was unavoidable, the system applied less braking force, initiating a "gradual vehicle slowdown" while alerting the driver to take over.
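That braking policy might look roughly like the sketch below. The deceleration ceiling, the fraction used for the "gradual slowdown," and every name here are placeholder assumptions; the report describes the behavior but gives no numbers.

```python
MAX_DECEL = 7.0  # m/s^2; placeholder ceiling, the report gives no figure

def choose_deceleration(speed: float, gap: float) -> float:
    """Brake hard when stopping in time is physically possible under the
    deceleration ceiling; otherwise start a gradual slowdown and hand
    the problem to the human operator. Assumes speed in m/s and gap > 0.
    """
    required = speed ** 2 / (2 * gap)  # stopping deceleration, from v^2 = 2*a*d
    if required <= MAX_DECEL:
        return required                   # collision avoidable: brake hard
    print("operator alert: take over")    # stand-in for the real alert
    return 0.3 * MAX_DECEL                # "gradual vehicle slowdown"
```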
A 2018 report from Business Insider's Julie Bort suggested a possible reason for these puzzling design decisions: the team was preparing to give a demo ride to Uber's recently hired CEO Dara Khosrowshahi. Engineers were asked to reduce the number of "bad experiences" for riders. Shortly afterward, Uber announced that it was "turning off the car's ability to make emergency decisions on its own, like slamming on the brakes or swerving hard."
Swerving was eventually re-enabled, but the restrictions on hard braking remained in place until the fatal crash in March 2018.
The Uber vehicle was a modified Volvo XC90, which comes with a sophisticated emergency braking system of its own. Unfortunately, prior to the 2018 crash, Uber automatically disabled Volvo's collision-prevention system whenever its own self-driving technology was active. One reason for this, the NTSB said, was that Uber's experimental radar used some of the same frequencies as the Volvo radar, creating a risk of interference.
Since the crash, Uber has redesigned its radar to work on different frequencies than the Volvo radar, allowing the Volvo emergency braking system to remain engaged while Uber is testing its own self-driving technology.
Uber also says it has redesigned other aspects of its software. It no longer has an "action suppression" period before braking in an emergency situation. And the software no longer discards past location data when an object's classification changes.
Update: An Uber spokesperson sent us the following comment.
We regret the March 2018 crash involving one of our self-driving vehicles that took Elaine Herzberg’s life. In the wake of this tragedy, the team at Uber ATG has adopted critical program improvements to further prioritize safety. We deeply value the thoroughness of the NTSB’s investigation into the crash and look forward to reviewing their recommendations once issued after the NTSB’s board meeting later this month.
