A Blog by Jonathan Low

 

Jul 5, 2018

Automakers Are Trying To Determine How Much Uncertainty Is Acceptable For Autonomous Vehicles

Humans are actually pretty good - which is to say, safe - drivers.

But the popular perception is that they are fallible in a way machines should not be. That means that for driverless/self-driving/autonomous vehicles to achieve popular acceptance, their performance may need to exceed that of humans by a margin that is both uneconomical and unrealistic. JL


Alexander Wang reports in VentureBeat:

AI may be well-versed in basic driving, or identifying pedestrians under expected circumstances. (But) edge cases abound. Addressing the unpredictable is important to autonomous driving because every unexpected obstacle is potentially a life-or-death safety issue. We expect humans to make mistakes, but we find it less acceptable when machines fail. Autonomous cars may need to be 10 times safer than human drivers to earn widespread acceptance. Deep learning isn’t well suited to provide such assurances.
To meet the goal of autonomous vehicles that can operate safely and without any need for human input (that is, Level 5, or L5, automation), automakers must train AI systems to navigate the myriad conditions they’ll run into in the real world, so that they don’t actually run into anything in the real world. Our highways and roads are, as we all know from experience behind the wheel, wholly unpredictable places, and they’ll continually require self-driving cars to instantly interpret and react to “edge case” scenarios.
While machine learning can guide AI to develop a recognition of, and reaction to, scenarios that it has seen many times before, there’s an immense hurdle in training AI for one-in-a-million (or billion) situations. For example, AI may be well-versed in basic freeway driving, or identifying pedestrians under expected circumstances. However, edge cases abound. Freeways may be littered with everything from tire scraps to sofas to grandmothers chasing after ducks; Halloween costumes can make pedestrians difficult to detect; you can set traps for autonomous vehicles; and even electric scooters can prove problematic for AVs. There will always exist “unknown unknowns” that companies cannot simulate because no one could foresee what to simulate.
Successfully addressing the unpredictable is particularly important to autonomous driving because every unexpected or undetected obstacle is potentially a life-or-death safety issue. Because of this, the stakes for addressing edge cases are enormous for the industry.

Solving deep learning’s edge case constraints

The application of deep learning to these edge cases snags on a major issue: deep learning isn’t well suited to providing safety assurances. While it’s possible to determine an AI application’s accuracy against a known dataset, there’s no guarantee of performance in real-world situations where edge cases occur and unfamiliar data must be processed. Deep learning systems deliver stunning results and beat expectations when dealing with datasets that are very similar to what they’ve previously encountered. But, because they have limited abilities to extrapolate information, there’s no way to predict how they’ll function in those outlier scenarios. In fact, a good deal of deep learning theory supports the idea that, at some level, it’s not really possible for these systems to understand a domain of data different from what they’ve been trained on.
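To make the extrapolation problem concrete, here is a minimal, hypothetical sketch (Python with scikit-learn, not anyone’s production AV stack): a toy classifier that scores almost perfectly on held-out data from its training distribution still emits a confident label for an input unlike anything it has seen, with no built-in signal that it is out of its depth.

```python
# Minimal sketch (illustrative only): in-distribution accuracy tells us little
# about behavior on out-of-distribution "edge case" inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Familiar training data: two well-separated clusters of 2-D features.
X_train = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2)),  # class 0
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2)),  # class 1
])
y_train = np.array([0] * 200 + [1] * 200)
model = LogisticRegression().fit(X_train, y_train)

# Held-out data from the same distribution: accuracy looks excellent.
X_test = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2)),
    rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2)),
])
y_test = np.array([0] * 100 + [1] * 100)
print("in-distribution accuracy:", model.score(X_test, y_test))

# An "edge case": a point nothing like the training data. The model has no
# notion of "I don't know" -- it still returns a near-certain probability.
edge_case = np.array([[40.0, -25.0]])
print("edge-case class probabilities:", model.predict_proba(edge_case))
```

A linear model stands in for a deep network here, but the failure mode is the same: the confidence score reflects distance from a learned decision boundary, not familiarity with the input.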
Faced with this limitation, the most successful strategy today is simply to provide an AI application with huge amounts of data so that it becomes familiar with as many potential edge cases as possible. This brute-force method calls for cars to be driven constantly to build experience with ever more unfamiliar scenarios, and for that data to be fed back into the system. Doing so then requires that autonomous vehicle system manufacturers have the infrastructure to label and support those incredible volumes of data.
However, even if manufacturers could have that near-infinite amount of data right this minute, it would still be impossible to prepare autonomous cars for everything they might see on the roads. The world changes, human behavior changes, new cars and objects (like the aforementioned scooters) are introduced, etc. Given this reality, there’s a tremendous onus placed on vehicle system manufacturers to determine how they’ll ultimately address the uncertainty of the real world, while providing a safe and comfortable autonomous vehicle experience that isn’t littered with false positives (i.e., unnecessary braking) for every unknown.
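A toy example, with entirely made-up numbers, shows why that false-positive trade-off is so hard to escape: whatever confidence threshold triggers braking, lowering it to catch more real obstacles also multiplies phantom stops on harmless clutter, and raising it does the reverse.

```python
# Hypothetical sketch of the braking-threshold trade-off (all numbers invented).
import numpy as np

rng = np.random.default_rng(1)
is_obstacle = rng.integers(0, 2, size=10_000)               # ground truth: real obstacle?
# Detector confidence: higher on average for real obstacles than for clutter.
scores = np.clip(is_obstacle * 0.6 + rng.normal(0.2, 0.2, size=10_000), 0.0, 1.0)

for threshold in (0.3, 0.5, 0.7):
    brake = scores >= threshold
    phantom_braking = np.mean(brake[is_obstacle == 0])       # false positives
    missed_obstacles = np.mean(~brake[is_obstacle == 1])     # false negatives
    print(f"threshold={threshold}: phantom braking {phantom_braking:.1%}, "
          f"missed obstacles {missed_obstacles:.1%}")
```

Shifting the threshold only moves error from one column to the other; shrinking both at once requires a detector that actually handles unfamiliar inputs better.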

Establishing ways to measure success

Today, AV industry players naturally take an adversarial stance in competing over who has the best technology. However, the questions faced here are so existential and challenging — such as determining how best to test whether autonomous vehicle technology is truly ready for safe use — that industry leaders will likely need to band together to answer them. At the same time, customers and society as a whole need to be convinced that this technology is safe and beneficial. I believe an open standard, one established by the industry to test and verify the safety of autonomous vehicles, could help serve this purpose. Collaboration and edge case data sharing might very well be the best strategy for everyone.
As to the level of safety that autonomous vehicles must achieve, it’s important to recognize two things. First, humans are very good drivers; we only have one accident every 165,000 miles. Second, there’s a double standard in our expectations when it comes to humans and machines. We expect humans to make mistakes, but we find it much less acceptable when machines fail. Given these expectations, autonomous cars may need to be 10 times safer than human drivers to earn widespread acceptance. While we’re likely years away from seeing L5 fully autonomous vehicles able to navigate through the infinite edge case scenarios the roadways throw at them, we’ll be an order of magnitude safer when we get there.
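The arithmetic behind that bar is simple. Taking the article’s own figure of one human accident per 165,000 miles, a tenfold improvement works out as follows (the calculation is purely illustrative):

```python
# Back-of-the-envelope calculation using the accident rate cited above.
human_miles_per_accident = 165_000
safety_factor = 10
av_miles_per_accident = human_miles_per_accident * safety_factor
print(f"Target AV record: about 1 accident per {av_miles_per_accident:,} miles")
# -> about 1 accident per 1,650,000 miles driven
```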
