A Blog by Jonathan Low

 

Jun 11, 2017

Can Robotic Cars Trust Humans?

In short, probably not. We're too easily distracted, plus we take too long to respond effectively in fast-moving vehicles. And then there's that whole emotion thing to which humans are prone. JL

John Markoff reports in the New York Times:

We humans are easily distracted by our games, phones and mates. Automotive engineers, computer interaction designers and, yes, lawyers, wonder if self-driving cars will ever be able to count on us in an emergency. 23% of Americans would refuse to drive in autonomous cars and 36% would be so nervous that they would not take their eyes off the road. Current trips in light-duty vehicles average only about 19 minutes, a short duration for sustained productive activity or invigorating sleep.
Three years ago, Google’s self-driving car project abruptly shifted from designing a vehicle that would drive autonomously most of the time while occasionally requiring human oversight to designing a slow-speed robot without a brake pedal, accelerator or steering wheel. In other words, human driving was no longer permitted.
The company made the decision after giving self-driving cars to Google employees for their work commutes and recording what the passengers did while the autonomous system did the driving. In-car cameras recorded employees climbing into the back seat, climbing out of an open car window, and even smooching while the car was in motion, according to two former Google engineers.
“We saw stuff that made us a little nervous,” Chris Urmson, a roboticist who was then head of the project, said at the time. He later mentioned in a blog post that the company had spotted a number of “silly” actions, including the driver turning around while the car was moving.
Johnny Luu, a spokesman for Google’s self-driving car effort, now called Waymo, disputed the accounts that went beyond what Mr. Urmson described, but said behavior like an employee’s rummaging in the back seat for his laptop while the car was moving and other “egregious” acts contributed to shutting down the experiment.
We humans are easily distracted by our games, phones and mates. And automotive engineers, computer interaction designers and, yes, lawyers, wonder if the self-driving cars they are working on will ever really be able to count on us in an emergency.
Engineers say they believe that cars will be intelligent enough to do all the driving, somewhere between five years and a decade from now, depending on whom you ask. But until then, what passes for autonomous driving will be a delicate ballet between human and machine: Humans may be required to take the wheel at a moment’s notice when the computer can’t decide what to do.
To outline a development path to complete autonomy, the automotive industry has established six levels of human-to-machine control, ranging from manual driving (Level 0) up through complete autonomy (Level 5). In the middle, Level 3 is an approach in which the artificial intelligence driving the car may ask humans to take over in an emergency.
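For readers keeping the terminology straight, here is a minimal sketch of that scale in Python; the level names and one-line descriptions are paraphrases of the commonly cited SAE definitions, not official wording from the article.

from enum import IntEnum

class AutomationLevel(IntEnum):
    """Driving-automation levels, paraphrasing the SAE scale the article refers to."""
    NO_AUTOMATION = 0           # the human does all of the driving
    DRIVER_ASSISTANCE = 1       # one assist at a time, e.g. adaptive cruise control
    PARTIAL_AUTOMATION = 2      # steering and speed together; the human monitors constantly
    CONDITIONAL_AUTOMATION = 3  # the car drives, but may hand control back to the human
    HIGH_AUTOMATION = 4         # no handoff needed, within a limited operating domain
    FULL_AUTOMATION = 5         # no human driver needed anywhere

# Level 3 is the contested case discussed below: the system may demand a human takeover.
requires_human_backup = AutomationLevel.CONDITIONAL_AUTOMATION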
But many automotive technologists are skeptical that the so-called handoff from machine to human can be counted on, because of the challenge of quickly bringing a distracted human back into control of a rapidly moving vehicle.
“Do you really want last-minute handoffs?” said Stefan Heck, chief executive of Nauto, a start-up based in Palo Alto, Calif., that has developed a system that simultaneously observes both the driver and the outside environment and provides alerts and safety information. “There is a really good debate going on over whether it will be possible to solve the handoff problem.”
Nauto’s data shows that a “driver distraction event” occurs, on average, every four miles. Mr. Heck said there was evidence that the inattention of human drivers was a factor in half of the approximately 40,000 traffic fatalities in the United States last year.
Last month, a group of scientists at Stanford University presented research showing that most drivers required more than five seconds to regain control of a car when — while playing a game on a smartphone — they were abruptly required to return their attention to driving.
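To give a sense of scale, here is a rough back-of-the-envelope calculation of how far a car travels during a five-second handoff; the five-second delay echoes the Stanford finding above, but the speeds are illustrative assumptions, not figures from the study.

# Distance covered while a distracted driver regains control of the car.
MPH_TO_FEET_PER_SECOND = 5280 / 3600  # 1 mph is roughly 1.47 ft/s

def distance_during_handoff(speed_mph: float, delay_seconds: float) -> float:
    """Feet traveled at a constant speed before the human is back in control."""
    return speed_mph * MPH_TO_FEET_PER_SECOND * delay_seconds

for speed in (37, 65):  # 37 mph matches the Audi limit mentioned below; 65 mph is a typical highway speed
    print(f"At {speed} mph, a 5-second handoff covers about "
          f"{distance_during_handoff(speed, 5):.0f} feet")

At highway speed that works out to nearly 480 feet, well over the length of a football field, traveled before the driver is fully re-engaged.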
Another group of Stanford researchers published research in the journal Science Robotics in December that highlighted a more subtle problem. Taking back control of a car is a very different experience at a high speed than at a low one, and adapting to the feel of the steering took a significant amount of time even when the test subjects were prepared for the handoff.
“There is a motor-learning process if I haven’t been controlling the vehicle and I have to take control,” said J. Christian Gerdes, a Stanford University mechanical engineering professor who was one of the authors of the study.
The handoff challenge is compounded by what is known as “over-trust” by automotive engineers.
Over-trust was what Google observed when it saw its engineers not paying attention during commutes with prototype self-driving cars. Driver inattention also figured in a recent National Highway Traffic Safety Administration investigation that absolved Tesla of blame in a 2016 Florida accident in which a Model S sedan drove under a tractor-trailer rig, killing the driver.
Solving the over-trust issue is a key to autonomous vehicles in the Level 3 category, where the computer hands off to humans.
The first commercial vehicle to offer Level 3 autonomy is expected to be released next month by Audi. A version of its luxury A8 model will be able to drive in stop-and-go freeway traffic up to 37 miles an hour while allowing drivers to pursue other tasks. The vehicle reportedly will notify drivers in emergencies, giving them eight to 10 seconds to intervene.
Despite these limited advances, many automotive technologists remain uncertain about whether technology will ever be able to operate smoothly with a human driver who may be reading email or playing World of Warcraft.
“I believe that Level 3 autonomous driving is unsolvable,” said John Leonard, a mechanical engineering professor at the Massachusetts Institute of Technology who has collected detailed examples of driving situations that are currently impossible for state-of-the-art autonomous driving systems. “The notion that a human can be a reliable backup is a fallacy.”
Yet, despite widespread skepticism, the automotive industry is spending heavily on artificial intelligence technologies designed to make cars safer before they are fully autonomous. The idea is that self-driving technology (warning lights, emergency braking) can help humans be safer drivers.
Gill Pratt, a roboticist who heads an ambitious Toyota research effort in Silicon Valley; Ann Arbor, Mich.; and Cambridge, Mass., said he did not see the automation ratings, zero through five, as a straight line of technical progress.
Instead, he said, he saw the ratings as different ways of addressing the same car-safety question, regardless of who or what is in control.
Unlike many in the industry who say that advances in machine learning will soon make self-driving cars safer than those driven by humans, Mr. Pratt has pushed for less futuristic “guardian” technologies that could be added to a car the same way that anti-lock brakes, stability control, blind-spot warning lights and other features have become common.
One possible new feature being designed by the Toyota Research Institute would give a car the ability not just to stop when a pedestrian is detected, but also to swerve to avoid an accident, he said.
Toyota is also working on technologies that will assist human drivers in remaining vigilant when they are required to oversee an autonomous driving system for long stretches of time. There is already a rich literature that explores the challenges of keeping airplane pilots vigilant; Toyota researchers say they will be able to develop techniques to maintain human driver attention.
Mr. Pratt said Toyota had not given up on the challenge of Level 3 driving. But to make a safe Level 3 car, he said, it may be necessary to develop technologies that see risks as much as 15 seconds in the future.
Still, over-trust will be a tough challenge to overcome. “Imagine if the autopilot disengages once in 10,000 miles,” he said. “You will be very tempted to over-trust the system. Then when it does mess up, you will be unprepared.”
And if all those issues do get resolved, there is one more question: Will people really use self-driving cars?
Last September, researchers at the University of Michigan Transportation Research Institute published results of a survey reporting that, for 62 percent of Americans, self-driving cars were unlikely to bring an increase in productivity.
The researchers found that 23 percent of Americans would refuse to drive in autonomous cars and 36 percent would be so nervous that they would not take their eyes off the road. An additional 3 percent said they would be too motion-sick to take advantage of the cars.
“Also of importance is the fact that current trips in light-duty vehicles average only about 19 minutes, a rather short duration for sustained productive activity or invigorating sleep,” the researchers concluded.
