A Blog by Jonathan Low


Jul 6, 2016

Tesla and Google Take Very Different Roads To Self-Driving Car

What may be most interesting of all is that this article - and many like it - assumes that Tesla and Google are the presumptive favorites to create the first self-driving cars, as opposed to the German, Japanese, Korean or American automakers. JL

John Markoff reports in the New York Times:

Google engineers (became) convinced it might not be possible to have a human driver quickly snap back to “situational awareness,” the reflexive response required for a person to handle a split-second crisis. So Google (is) taking the human driver completely out of the loop.
In Silicon Valley, where companies big and small are at work on self-driving cars, there have been a variety of approaches, and even some false starts.
The most divergent paths may be the ones taken by Tesla, which is already selling cars that have some rudimentary self-driving functions, and Google, which is still very much in experimental mode.
Google’s initial efforts in 2010 focused on cars that would drive themselves, but with a person behind the wheel to take over at the first sign of trouble and a second technician monitoring the navigational computer.
As a general concept, Google was trying to achieve the same goal that Tesla claims for the Autopilot feature it has promoted on the Model S, a hands-free technology that has come under scrutiny after a fatal accident on a Florida highway.
But Google decided to play down the vigilant-human approach after an experiment in 2013, when the company let some of its employees sit behind the wheel of the self-driving cars on their daily commutes.
Engineers using onboard video cameras to remotely monitor the results were alarmed by what they observed — a range of distracted-driving behavior that included falling asleep.
“We saw stuff that made us a little nervous,” Christopher Urmson, a former Carnegie Mellon University roboticist who directs the car project at Google, said at the time.
The experiment convinced the engineers that it might not be possible to have a human driver quickly snap back to “situational awareness,” the reflexive response required for a person to handle a split-second crisis.
So Google engineers chose another route, taking the human driver completely out of the loop. They created a fleet of cars without brake pedals, accelerators or steering wheels, and designed to travel no faster than 25 miles an hour.
For good measure, they added a heavy layer of foam to the front of the cars and a plastic windshield, in case the car made a mistake. While not suitable for high-speed interstate road trips, such cars might one day be able to function as, say, robotic taxis in stop-and-go urban settings.
Google says it hopes to put such vehicles on the market by 2019 at the earliest.
“Safety has been paramount for the Google self-driving car team from the very beginning,” said Sebastian Thrun, the artificial intelligence researcher who created the Google project. “We wanted it to be significantly safer to the point where there would be no accidents ever.”
So far Google’s robotic cars have largely succeeded, with just one slow-speed fender bender caused by the robotic driver.
Still, no engineering consensus exists about the best path to vehicle autonomy. In and around Silicon Valley, at least 19 commercial self-driving efforts are underway, ranging from big carmakers like Nissan and Ford, to technology giants like Google, Baidu and Apple, to shoestring operations like comma.ai.
Significantly, Toyota, the world’s largest carmaker, has not joined the rush to self-driving. It has established a research laboratory in Palo Alto, Calif., that does not aim to make a car that drives itself, but instead a “guardian angel” — a computerized system that would take over only when the human driver made an error.
Tesla’s Autopilot system was introduced with great fanfare and customer enthusiasm last October. It has prompted many videos by Tesla owners demonstrating a variety of hands-free driving, including videos posted by Joshua Brown, the driver killed in Florida.
Automotive engineers at work on autonomous vehicles refer to a design challenge they call “overtrust,” the possibility that humans may not fully understand the limitations of the self-driving safety features they rely on.
Tesla notes that Autopilot is meant only to assist drivers, not to replace them. And its onscreen warnings and owner’s manual emphasize that drivers should remain vigilant and keep their hands on or near the wheel at all times.
But in tempting its customers with a feature still so experimental that the company refers to it as a “beta” test, Tesla has sped far ahead of the rest of the industry in the public’s perception of the state of self-driving technology.
Like Mr. Brown, the company’s customers tend to be technically knowledgeable and adventuresome types, who are not necessarily willing to wait for the go-slow approach taken by the rest of the industry. That includes one of Silicon Valley’s best-known engineers.
“Beta products shouldn’t have such life-and-death consequences,” said Steve Wozniak, co-founder of Apple, who owns a Tesla Model S.
Mr. Wozniak concedes he worries that the Autopilot feature might lead him to take his eyes off the road at the wrong moment. But that is a risk he is willing to run.
“Even though I might have a slight attention lapse at the exact wrong moment,” he said, “it’s easier to drive this way and not feel as tired.”

