As it happens, Foot offers a different example that has more in common with what actually transpired in Tempe than the trolley does. Imagine five patients in a hospital. Their lives could be saved by a certain gas, but administering it releases lethal fumes into the room of another patient, who cannot be moved. In this case, the calculus of effect is identical to the classic trolley problem, and yet, to many, the conclusion is not nearly so obvious. That’s not just because of the difference between intended and merely foreseeable effects, but also because the moral imperative to avoid causing injury operates differently.
In the trolley problem, the driver is faced with a conflict between two similar harms, neither of which he or she chooses. But in the hospital-gas example, the doctor is faced with a conflict between delivering aid and causing harm. In truth, Uber’s situation is even more knotted, because none of the parties involved seemed to possess sufficient knowledge of the vehicle’s current (not future) capacity for doing harm—neither the company that makes the car, nor the driver who operates it, nor the government that regulates it. That makes the moral context for the Uber crash less about the future of vehicular casualty, and more about the present state of governmental regulation, corporate disclosure, and transportation policy. But those topics are far less appealing to think about than a runaway trolley is.

If it’s a precedent in moral philosophy that technologists, citizens, and policy makers really want, they might do better to look at Uber’s catastrophe as an example of moral luck, an idea proposed by the philosopher Thomas Nagel. Here’s a classic example: An intoxicated man gets in his car to drive home at night. Though drunk, he reaches his destination without incident. Now consider a man in the same circumstances, but while driving he strikes and kills a child who crosses the road unexpectedly. It seems natural to hold the latter man more blameworthy than the former, but both took the same voluntary actions. The only difference was the outcome.
Seen in this light, the Uber fatality does not represent the value-neutral, or even the righteous, sacrifice of a single pedestrian in the interest of securing the likely safety of pedestrians in a hypothetical future of effective, universally deployed robocars. Instead, it highlights the fact that positive outcomes—safer cars, safer pedestrians, and so on—might just as easily be functions of robocars’ moral luck in not having committed a blameworthy act. Until now, of course.
Moral luck opens other avenues of deliberation for robocars, too. In the case of self-driving cars, voluntary action is harder to pin down. Did the Uber driver know and understand all the consequences of his or her actions? Is it reasonable to assume that a human driver can intervene in the operation of a machine he or she is watching, and not actively operating? Is Uber blameworthy even though the State of Arizona expressly invited experimental, autonomous-car testing on real roads traversed by its citizenry? All of these questions are being asked now, in Arizona and elsewhere. But that’s cold comfort for Elaine Herzberg. The point of all this isn’t to assign blame or praise to particular actors in the recent Uber pedestrian collision. Nor is it to celebrate or lament the future of autonomous vehicles. Rather, it is to show that much greater moral sophistication is required to address and respond to autonomous vehicles, now and in the future.
Ethics isn’t a matter of applying a simple calculus to any situation—nor of applying an aggregate set of human opinions about a model case to apparent instances of that model. Indeed, to take those positions is to assume the utilitarian conclusion from the start. When engineers, critics, journalists, or ordinary people adopt the trolley problem as a satisfactory (or even just a convenient) way to think about autonomous-vehicle scenarios, they are refusing to consider the more complex moral situations in which these apparatuses operate.
For philosophers, thought experiments offer a way to consider unknown outcomes or to reconsider accepted ideas. But they are just tools for thought, not recipes for ready-made action. In particular, the seductive popularity of the trolley problem has allowed people to misconstrue autonomous cars as a technology that is already present, reliable, and homogeneous—such that abstract questions about their hypothetical moral behavior can be posed, and even answered. But that scenario is years away, if it ever comes to pass. In the meantime, citizens, governments, automakers, and technology companies must ask harder, more complex questions about the moral consequences of robocars today, and tomorrow. It’s time to put the brakes on the trolley before it runs everyone down.