A Blog by Jonathan Low

 

Sep 20, 2018

Driverless Hype Collides With Merciless Reality

This might take a while. JL

Christopher Mims reports in the Wall Street Journal:

Cars can’t learn to drive simply by being trained on data about how real humans do it, no matter how much data you have. “That’s why companies like Waymo have to break [self driving] into pieces that can be engineered rather than treating it like one giant data problem.” It’s not keeping to a lane that’s hard. It’s predicting what all those capricious and distracted human drivers around you might do next.
Mercedes-Benz unveiled its dream of a fully autonomous multipurpose vehicle this week. The announcement was full of buzzwords—the modular Vision Urbanetic “enables on-demand, sustainable and efficient movement of people and goods” and “reduces traffic flows, relieves inner-city infrastructures and contributes to an improved quality of urban life.”
Hardly a week goes by without fresh signposts that our self-driving future is just around the corner. Only it’s probably not. It will likely take decades to come to fruition. (Even a car like this Mercedes is more a sketch of what’s to come than an actual blueprint.) And many of the companies that built their paper fortunes on the idea we’d get there soon are already adjusting their strategies to fit this reality.
Uber, for example, recently closed its self-driving truck project and suspended road testing of self-driving cars after one of its vehicles killed a pedestrian. Uber’s chief executive even announced he would be open to partnering with its biggest competitor in self-driving tech, Alphabet Inc. subsidiary Waymo. Meanwhile, Waymo CEO John Krafcik recently said it will be “longer than you think” before self-driving vehicles are everywhere.
“Self-driving technology has the potential to make our roads safer and cities more livable, but it will take a lot of hard work, and time, to get there,” says an Uber spokeswoman.
In the past two years, Tesla CEO Elon Musk planned, then scrapped, a coast-to-coast autonomous road trip. And Lyft CEO John Zimmer’s 2016 prediction that self-driving cars would “all but end” car ownership by 2025 now seems borderline ridiculous.
There are many reasons the self-driving tech industry has suddenly found itself in this “trough of disillusionment,” and chief among them is the technology. We don’t yet know how to pull off a computer driver that can perform as well or better than a human under all conditions.
It turns out that the human ability to build mental models isn’t something that current AI can just learn, no matter how much data it’s fed. And even once we have the technology, we’ll still have to deal with all those unpredictable humans in cars, on bikes and scooters, and on foot. The more self-driving vehicles hit the road, the more pressing the safety concerns and legal and regulatory issues will become.
This means worries—mainly in academic circles—that America’s truck drivers will face “eroding job quality” because of autonomy are premature. It means cities don’t yet need to wonder what will become of their mass transit. And it means Uber and Lyft aren’t likely to ditch human drivers soon, and their investors should value them accordingly.
In the meantime, we’ll have to adjust to the reality that autonomous driving could be headed for narrower—but still transformative—applications. And if our desire for driverless taxis and delivery vans is strong enough, we might need to create dedicated roads for them.
Cars can’t learn to drive simply by being trained on data about how real humans do it, no matter how much data you have, says Gary Marcus, New York University professor and former head of Uber’s AI division. “That’s why companies like Waymo have to break [self driving] into pieces that can be engineered rather than treating it like one giant data problem,” he adds.
In Chandler, Ariz., Waymo actually set up a self-driving car service. It deserves credit for solving an enormously difficult problem—creating a driverless, fully autonomous taxi service. But the company accomplished this in part by carefully constraining the circumstances under which its vehicles drive.
The service only operates in an area the team has thoroughly mapped. Chandler is “well laid out and has modern roads and conditions,” says Nathaniel Fairfield, principal software engineer at Waymo.
Self-driving cars generally rely on lidar, a laser-based detection system that can be foiled by inclement weather. “It doesn’t rain a whole lot there and there’s no snow,” Mr. Fairfield says. Chandler also has fewer than 4,000 people per square mile, making it about 1/20th as dense as Manhattan.
Mr. Fairfield notes that Waymo is also constantly training its vehicles under far more difficult conditions. But this tells us nothing about when self-driving technology will come to places with actual seasons, less-than-perfect roads or higher population density.
Over a lifetime of driving, humans become expert at countless subtasks, from noticing distracted pedestrians to questioning the judgment of construction workers waving them through a work site. While much has been made of the total number of miles that various self-driving systems have racked up, conquering these little annoyances actually requires an enormous amount of intellectual labor by many teams of engineers.
Auto makers and investors are pouring huge sums into this field, but competition is likely to remain thin until that spending yields another Waymo’s worth of effort on the technology. Waymo has been at it since 2009, with some of the best-compensated engineers on earth.
Even when (or if) we get a working, go-anywhere self-driving system, we would face myriad legal and behavioral challenges, says Meredith Broussard, author of “Artificial Unintelligence: How Computers Misunderstand the World.”
When a Tesla slammed into the back of a stopped firetruck at 60 miles an hour, the driver sued the auto maker, claiming the company misrepresented the capability of its Autopilot software. Who is liable when a self-driving car gets into an accident? We have yet to resolve that question, and settling it could lead to a sea change in how we insure vehicles.
It’s also not true that we must transition to self-driving cars because human-piloted ones are so lethal, Prof. Broussard says. Countless innovations have made cars radically safer since the 1950s and continue to do so. Meanwhile, new distractions, such as smartphones, can be addressed more cheaply, without resorting to full autonomy. Our love affair with self-driving cars is a form of “techno-chauvinism,” Prof. Broussard says. “It’s the idea that technology is always the highest and best solution, and is superior to the people-based solution.”
While we work all of that out, we’ll also need to start spending big money refashioning our cities, bike lanes and sidewalks so that they are friendlier to self-driving vehicles. This would have to coincide with the rollout of widespread and robust 5G wireless internet that could power a massive vehicle-to-vehicle communications infrastructure. If every human-driven car were tracked, along with every autonomous vehicle, then perhaps humans and machines could share the road.
After all, it’s not keeping to a lane that’s hard. It’s predicting what all those capricious and distracted human drivers around you might do next. That’s what keeps the computer programmers up at night.
 
