A Blog by Jonathan Low


Nov 10, 2019

The Biggest Challenge For Self-Driving Cars Is the Choice Between Caution and Competitive Advantage

Self-driving cars are far from perfect - and may remain that way for a long, long time. The impetus from investors, company executives and, to some extent, an impatient public is to hasten testing.

Given the rising concerns about technology generally, the question is whether the ends justify the means. JL


Will Oremus reports in OneZero:

The real choice at this stage of self-driving car development is not between one innocent victim and other innocent victims; it’s between caution and competitive advantage. In a survey of Tesla owners, 13% said Autopilot has put them in a dangerous situation. But 28% said it has saved them from a dangerous situation. Accelerating the development of self-driving cars could be a net lifesaver, even if it means putting imperfect systems on the road. Which points to a moral conundrum far older than the trolley problem and more urgent in the context of today’s self-driving car industry: Do the ends justify the means?
The advent of self-driving cars revived the decades-old philosophical conundrum known as the “trolley problem.” The basic setup is this: A vehicle is hurtling toward a group of five pedestrians, and the only way to save them is to swerve and run over a single pedestrian instead.
For philosophers and psychologists, it’s pure thought experiment — a tool to tease out and scrutinize our moral intuitions. Most people will never face such a stark choice, and even if they did, studies suggest their reaction in the moment would have little to do with their views on utilitarianism or moral agency. Self-driving cars have given the problem a foothold in the real world. Autonomous vehicles can be programmed to have policies on such matters, and while any given car may never face a split-second tradeoff between greater or lesser harms, some surely will. It actually matters how their creators evaluate these situations.
Solving “the trolley problem” for self-driving cars has gotten a lot of attention. But it may actually be a distraction from a far more pressing moral dilemma. U.S. safety investigators released a report on a self-driving car crash this week that suggests the real choice at this stage of self-driving car development is not between one innocent victim and other innocent victims; it’s between caution and competitive advantage.
The report examines a March 2018 accident in which a self-driving Uber, with an inattentive test driver behind the wheel, hit and killed a woman crossing a street in Tempe, Arizona. An investigation by the National Transportation Safety Board (NTSB) shows that part of the problem was that Uber’s systems simply didn’t take into account the possibility of pedestrians jaywalking. So even though the car’s sensors detected the woman walking into the road long before the crash, it struggled to identify her as human and failed to predict that she would keep walking.
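To make that failure mode concrete, here is a minimal, hypothetical sketch - not Uber’s actual code - of how a prediction module that ties its path forecasts to an object’s classification and to crosswalk locations can end up with no forecast at all for a jaywalking pedestrian. All names, rules, and thresholds below are assumptions for illustration only.

```python
# Illustrative sketch only -- not Uber's code. It shows how keying path
# prediction to object class and crosswalk location leaves a jaywalking
# pedestrian with no predicted trajectory. All details are hypothetical.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class TrackedObject:
    classification: str            # e.g. "pedestrian", "vehicle", "other"
    position: Tuple[float, float]  # (x, y) in road coordinates
    velocity: Tuple[float, float]  # (vx, vy) in meters per second
    near_crosswalk: bool

def extrapolate(position, velocity, seconds=3.0, step=0.5) -> List[Tuple[float, float]]:
    """Constant-velocity extrapolation of future positions."""
    x, y = position
    vx, vy = velocity
    return [(x + vx * t, y + vy * t)
            for t in (i * step for i in range(1, int(seconds / step) + 1))]

def predict_path(obj: TrackedObject) -> Optional[List[Tuple[float, float]]]:
    """Return a predicted trajectory, or None if no motion model applies."""
    if obj.classification == "pedestrian" and obj.near_crosswalk:
        # In this model pedestrians are only expected to cross at crosswalks.
        return extrapolate(obj.position, obj.velocity)
    if obj.classification == "vehicle":
        return extrapolate(obj.position, obj.velocity)
    # An object with an uncertain classification, or a pedestrian away from a
    # crosswalk, gets no predicted path -- so the planner never anticipates
    # that it will keep walking into the car's lane.
    return None
```

Under these hypothetical rules, a person pushing a bicycle across a dark road mid-block would be detected, yet the planner would have nothing to plan around until far too late.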
That seems like an egregious oversight, and one that should have been addressed in safe testing environments and simulations long before it happened in real life on a public street. The problems didn’t stop there.
When the system finally understood that the car was on a collision course with a pedestrian, 1.2 seconds before impact, it didn’t take emergency evasive action. Instead, per the NTSB report, it was programmed to wait a full second while calculating various options and alerting the human driver to take over. Uber told investigators this instruction was meant to minimize false alarms, in which the vehicle started braking or swerving when it wasn’t really necessary.
Even when the second had passed with no human intervention — the driver had apparently been watching The Voice instead of the road — there was a second mechanism that prevented the Uber from slamming on its brakes. It was programmed to do so only if it could avoid the collision entirely. If it was too late to avoid impact, it would alert the driver once more, and brake more gradually rather than doing everything it could to reduce the severity of the crash — presumably, again, to minimize false positives. It’s a calculus that Ars Technica’s Timothy B. Lee described as “sociopathic.”
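The two gates described above - a one-second “action suppression” window after a collision is predicted, and hard braking only when the crash can still be avoided entirely - can be sketched as a simple decision function. This is an illustration of the behavior the NTSB report describes, not Uber’s code; the function name and the stopping-time figure in the example are assumptions.

```python
# Illustrative sketch of the decision logic described in the NTSB report.
# Not Uber's code; names and the example stopping time are hypothetical.

ACTION_SUPPRESSION_S = 1.0  # wait while alerting the driver, to limit false alarms

def respond_to_predicted_collision(time_to_impact_s: float,
                                   stopping_time_s: float,
                                   driver_has_taken_over: bool) -> str:
    """Decide the vehicle's response once a collision is predicted.

    time_to_impact_s -- seconds until the predicted collision
    stopping_time_s  -- seconds of hard braking needed to stop short of it
    """
    if driver_has_taken_over:
        return "driver in control"

    # Gate 1: suppress automatic action for one second while calculating
    # options and alerting the human driver.
    time_remaining_s = time_to_impact_s - ACTION_SUPPRESSION_S

    # Gate 2: brake hard only if doing so can still prevent the collision outright.
    if time_remaining_s >= stopping_time_s:
        return "emergency braking"

    # Otherwise: alert the driver again and slow gradually, rather than braking
    # as hard as possible to reduce the severity of the impact.
    return "alert driver again and brake gradually"

# In the Tempe timeline described above, the collision was predicted about
# 1.2 seconds out; after the one-second suppression window almost no time
# remained, so the emergency-braking branch could never be reached.
# (The 1.5-second stopping time here is an assumed figure for illustration.)
print(respond_to_predicted_collision(time_to_impact_s=1.2,
                                     stopping_time_s=1.5,
                                     driver_has_taken_over=False))
```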
That Uber put a self-driving car on the road before even considering jaywalkers suggests the company was more concerned with beating rivals such as Alphabet’s Waymo to market than minimizing harm. That its systems were more concerned with avoiding unnecessary safety measures like braking quickly than they were with avoiding potentially fatal collisions reinforces the point.
At the time, self-driving car companies were in the heat of a race to commercialize fully autonomous vehicles, which were initially tested on closed courses and in simulations. It wasn’t only a matter of pride: The prize was the data that could be gleaned from real-world situations, which was viewed as crucial to the technology’s development. Though Waymo had a head start, Uber was hell-bent on passing it. Its self-driving car chief, Anthony Levandowski, was shown in court documents to have put speed ahead of safety. When a Tesla on Autopilot killed its driver in 2016, he reportedly told his team he was “pissed we didn’t have the first death.” (Levandowski denies saying this.)
The inherent premise of the trolley problem is that the goal is to do what’s morally right. The conundrum lies in how to weigh the interests and rights of the potential victims against one another. But what if the trolley conductor isn’t thinking much about the victims at all? What if he is almost solely focused on which track gets him to the next station ahead of schedule? That’s outside the scope of the classic thought experiment, but an apt analogy for Uber’s moral calculus.
We could mark Uber’s oversight of jaywalkers down as a simple case of choosing might over right — the product of a singularly unscrupulous company, one driven to dominate by any means necessary. And given that Uber’s fatal accident led it to suspend its autonomous vehicle program, we might happily conclude that justice was served and caution rewarded after all.
It isn’t quite that simple, however. While Uber has been sidelined, Waymo, Tesla, and other rivals continue to face the same class of tradeoff between speed and safety, and it isn’t all going smoothly. States such as Arizona are similarly weighing the risks of accidents against the potential economic gains of becoming hubs for self-driving car testing. If anything, the best model for safe autonomous vehicle development might be found not in Silicon Valley but in China, whose government has been much stingier with regulatory approvals.
Self-driving cars as a sector, meanwhile, are competing with human-operated vehicles that make indefensible choices and kill people every day. Tesla CEO Elon Musk, in particular, has long justified his company’s aggressive implementation of Autopilot technology by comparing its accident rate to that of the country at large.
In a new survey of Tesla owners by Bloomberg, 13% said Autopilot has put them in a dangerous situation. But 28% said it has saved them from a dangerous situation. While studies don’t entirely support Musk’s bold claims about Autopilot’s safety, there is at least a theoretical case to be made that accelerating the development of self-driving cars could be a net lifesaver, even if it means putting imperfect systems on the road. Though even if that were true, it’s not clear that it would excuse the ways in which Tesla has oversold Autopilot’s capabilities, leading the public to think it’s safer than it really is.
All of which points to a moral conundrum far older than the trolley problem, and more urgent in the context of today’s self-driving car industry: Do the ends justify the means?
The trolley problem is appealing in its simplicity: It neatly captures the tension between conflicting moral frameworks. But when it comes to self-driving cars and safety, for the time being, the trolley problem is the least of our problems.
