A Blog by Jonathan Low

 

Oct 23, 2018

Why Human Drivers Keep Rear-Ending Self-Driving Cars

Autonomous vehicles appear to be involved in more rear-end collisions than any other kind of accident. The reason may be that AVs are programmed to drive very cautiously: their lidar and other sensors pick up potential threats that human drivers would recognize as harmless but the computers do not, so the AV stops unexpectedly while the driver behind it, having already dismissed the hazard, keeps going.

The computerized cars will learn over time, but the problems may multiply in the meantime. JL


Jack Stewart reports in Wired:

Evidence suggests these vehicles drive in ways humans might not expect, and might not want them to. AVs do something that makes the cars behind them more likely to hit them: driving herkily-jerkily, or stopping for no clear reason. That indicates a focus on safety: better to stop for a fire hydrant than run down a preschooler. But part of being a good driver is behaving in a way others expect, which doesn’t include constantly stamping on the brakes. The runner-up among crash types is sideswipes, which often appear to involve human drivers frustrated at getting stuck behind a slow AV, trying to overtake it, and not making it.
The self-driving-car crashes that usually make the news are, unsurprisingly, either big and smashy or new and curious. The Apple car that got bumped while merging into traffic. The Waymo van that got T-boned. And of course, the Uber that hit and killed a woman crossing the street in Tempe, Arizona, in March.
Look at every robocar crash report filed in California, though, and you get a more mundane picture—but one that reveals a striking pattern. In September of this year, for example, three self-driving cars were sideswiped. Another three were rear-ended—one of them by a bicycle. And that’s not even the strangest one: In June, an AV operated by General Motors’ self-driving arm, Cruise, got bumped in the back—by a human driving another Cruise.
The people developing self-driving cars pitch them as a tool for drastically reducing the nearly 40,000 fatalities on US roads every year. Getting there will take years at least, decades probably, and that means a lot more time spent testing on public roads. And so these sorts of crashes raise a few questions: What’s the best way to handle what could become a nationwide experiment in robotics and AI, where the public participants haven’t willingly signed on and the worst-case scenario is death?
We don’t have the answers. But chipping away at these questions starts with understanding the problem. And that means looking at the data.
Unfortunately, the publicly available data is quite limited. These are companies in a competitive field, and they don’t voluntarily share much in the way of details. They invite the press or public officials into their vehicles only in tightly controlled situations where they perform well. And anecdotal evidence of weaknesses—like The Information’s report that Waymo cars have trouble with left turns into traffic and frustrate human drivers—is, well, anecdotal.
Of the states where most AV developers do their on-road testing—Arizona, California, Michigan, Nevada, and Pennsylvania—only the Golden State requires companies to report details about their programs. Once a year, they must submit a report to the DMV explaining how many miles they’ve driven and how often the human sitting in the car took the wheel. Anytime one of their cars is in any sort of collision, no matter how minor, the developer must submit a collision report within 10 business days explaining what happened.
Since these regulations took effect in 2014, the California DMV has received (and published) 104 autonomous-vehicle collision reports, including 49 so far in 2018, as more and more cars hit the streets. Most crashes are minor; few are newsworthy. But taken together, they present a picture of how these tests are progressing and how well robots are sharing the road. And they hint at a conclusion similar to what anecdotal evidence suggests—that these vehicles drive in ways humans might not expect, and might not want them to.
GM’s Cruise has filed by far the most reports in 2018, but don’t read too much into that. If the pattern from 2016 and 2017 holds (we won’t have full 2018 numbers until early next year), Waymo has been dialing down its testing in California in favor of Arizona, while Cruise has been ramping it up and does its driving in the chaos of San Francisco. Waymo has the second-most collisions, followed by Zoox, a startup that also tests in the city.
These reports, written and filed by the companies running the cars, consist mostly of check boxes, with a line or two explaining what happened. Some detail thankfully freaky, presumably rare incidents: “The Cruise AV was struck by a golf ball from a nearby golf course.” Some reveal what we’ll call exasperation on the part of other road users: “The driver of the taxi exited his vehicle, approached the Cruise AV, and slapped the front passenger window, causing a scratch.”
Other sorts of crashes happen more frequently.
Drilling down into the data shows that autonomous vehicles being rear-ended accounts for 28 of the 49 reports filed in 2018, more than half. Next come sideswipe collisions, with pedestrians, hitting objects, and “other” all trailing behind. (These categories are provided as check boxes on the DMV report form. The two pedestrian impacts reported are people approaching and hitting the cars.)
So let’s look at those rear-end crashes. Under state law, if someone hits you from behind, it’s their fault. And yes, today’s drivers are dangerously distracted, and no, it doesn’t take much of a mistake to knock into somebody in stop-and-go traffic.
But combine that with the fact that the computer was in charge in 22 of those 28 rear-end crashes, and you have reason to believe that the AVs are doing something that makes cars behind them more likely to hit them. Maybe that’s driving herkily-jerkily (as we experienced in a Cruise car in San Francisco in November 2017), or stopping for no clear reason (as we experienced in an Uber car in Pittsburgh last year). That’s not necessarily a bad thing. It indicates a conservative focus on safety: Better to stop for a fire hydrant than run down a preschooler. But part of being a good driver is behaving in a way others expect, which doesn’t include constantly stamping on the brakes.
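For anyone who wants to check the arithmetic, here is a minimal sketch in Python using only the counts quoted above; it is an illustration of the shares being discussed, not an official DMV dataset.

```python
# Counts taken from the figures quoted in the article: 49 collision reports
# filed in 2018, 28 of them rear-end crashes, 22 of those with the car in
# autonomous mode. Illustrative only.

reports_2018 = 49
rear_end = 28
rear_end_autonomous_mode = 22

print(f"Rear-end share of 2018 reports: {rear_end / reports_2018:.0%}")  # ~57%, i.e. more than half
print(f"Autonomous-mode share of rear-end crashes: {rear_end_autonomous_mode / rear_end:.0%}")  # ~79%
```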
The runner-up in terms of crash types is sideswipes, many of which appear to involve human drivers frustrated at getting stuck behind a slow or stopped AV, trying to overtake it, and not quite making it.
It’s not possible to say definitively that the AVs are driving like jerks, because the reports don’t ask the companies what percentage of miles the cars have covered in each mode. It could be there are fewer crashes in manual mode because the cars are hardly ever in that mode. “They’re autonomous vehicles, they spend most of their time being autonomous,” says Kyle Vogt, cofounder and CEO at Cruise. But Cruise declined a request to give exact numbers.
Cruise is heavily represented in these reports because of the size of its fleet in San Francisco. That city is tough even for an experienced human driver to navigate, a land of knotty intersections, trolleys, cyclists, pedestrians, road work, steep hills, and aggressive drivers. Cruise says that helps it learn much faster than it does on the relatively simple, boring streets of Arizona, where it also tests. It says its cars encounter emergency vehicles 46 times more frequently in SF than in the Phoenix suburbs, and construction 39 times more often.
Researchers agree that AVs won’t get to a point where they can make driving safer without testing on public roads. But that brings up other questions, says Matthew Johnson-Roberson, who codirects the Ford Center for Autonomous Vehicles at the University of Michigan. “Should they all be allowed to be on public roads before passing some level of baseline performance?” he says. California requires companies to apply for the right to test AVs in public, but that doesn’t involve any kind of exam. “My personal advice is to treat the vehicles incredibly cautiously.”
Vogt says the California crash reports make clear that humans expect other humans to bend or break traffic rules, rolling through four-way stops, accelerating to make a yellow light, or cruising over the speed limit. But his robots won’t follow suit.
“We’re not going to make vehicles that break laws just to do things like a human would,” he says. “If drivers are aware of the fact that AVs are being lawful, and that’s fundamentally a good thing because it’s going to lead to safer roads, then I think there may be a better interaction between humans and AVs.”
So maybe the key there is awareness. The public would benefit from knowing more about these vehicles, how they work, how they’re tested, and how they’re likely to behave. That takes companies communicating openly and honestly about how development is going and how capable their cars are, rather than releasing their usual fare: glossy, edited videos or PR documents showing their tech at its best.
Short of requiring some sort of test, one easy change could be basic standards for how these vehicles are marked, alerting other road users to how they usually drive. Think of the stickers many countries require newer drivers to display in their vehicles, like the L for learners in the UK, or something like the signs on vans and trucks that say “This vehicle makes frequent stops,” “This vehicle stops at all railroad crossings,” or “Does not turn right on red.” The big tech companies might not like this kind of messaging, but it could help the other folks on the road adapt to their new robotic friends.
Self-driving cars aren’t necessarily going to be worse than your standard teenager, but they will certainly do things that humans wouldn’t. So if you do encounter one, don’t get distracted by your phone when you’re behind it. Give it a lot of stopping distance. Expect to see something weird. And hope to get a ride someday.
