A Blog by Jonathan Low

 

Jun 16, 2018

Is It Human Error When a Robot Screws Up?

Electronic kiosks, robots, systems and their like may well be more efficient - and certainly more docile - than the humans they are replacing.

But it is increasingly evident that, despite the creation of an entire field called 'human factors engineering' (or, how to anticipate the many different ways humans will screw up their interactions with technology), there comes a point at which said humans have a right to wonder whether a system - or perhaps the humans who designed it, incapable of understanding how to make something faster and easier - is the culprit rather than the customer. JL


Abigail Shrier comments in the Wall Street Journal:

News reports talk of “smart” cities, cars, classrooms. Everything seems smart to us, except us. Whenever I stumble at these tasks, some employee on the verge of being supplanted by an AI-enhanced screen hustles over to tell me I’m “doing it wrong.” As if I were the idiotic new hire instead of a customer. “Human error” presupposes its own standard of judgment: error-free robots.
When my family landed in Israel a few weeks ago, we headed straight to our kids’ favorite Jerusalem vacation destination: kosher McDonald’s. Since we observe Jewish dietary laws, eating at McDonald’s in the U.S. is strictly off-limits. But in Israel our kids have the chance to enjoy an iconic part of the American experience, to march up to the counter and order a Big Mac just like the people do in all those sunny commercials.
“Go ahead,” I said, nudging them toward the register. “Order in Hebrew,” I whispered, desperate to see our private-school tuition put to good use. A stocky Russian Israeli, with a trademark McDonald’s visor crushing straw-colored hair, waited behind the counter. He pointed to a nearby digital kiosk. “You must order at the computer,” he said.
We trudged over to a screen. I fumbled with the icons, full of irritation, while the kids changed their minds about their orders, until we ended up with nothing we really wanted in our digital basket. Sensing my frustration, the employee came over and handled the order for us. “It’s not hard,” he said, not unkindly.
He meant something like: You’ll get the hang of it, eventually. Apparently so. Last week McDonald’s announced it would roll out similar ordering kiosks all across America, “upgrading” 1,000 stores each quarter. Other restaurants likely won’t be far behind.
My pathetic efforts to work the self-serve kiosk are what technologists refer to as “the human factor,” by which they mean stupid things people do. We make mistakes, require sleep, become emotional, engender guilt and obligation, engage in small talk, demand raises, answer back.
But don’t computers make mistakes? Last month, Amazon Echo, the AI-enhanced speaker that answers requests like a plastic genie, misconstrued one family’s private talk as a prompt. It recorded their conversation and sent it to a friend.
“That’s because some human programmed it to get triggered by a specific word,” one software CEO in Los Angeles explained to me. “But the appeal of the algorithm is it really just does what it’s supposed to do.” When a person makes a mistake, it’s “human error”; when a machine does, by definition that’s “human error,” too. Is it any wonder we have such a low opinion of us?
Every technologist I’ve talked to in recent weeks shares, more or less, this view: Robots are better at doing things. This accounts for their enthusiasm for autonomous vehicles. It explains their love of cryptocurrencies, which are untethered to any central bank and virtually unregulable, as well as their related love of blockchain, a distributed ledger designed to exist free from the burden of having to trust any human being to keep it.
As if succumbing to a technological Stockholm syndrome, even nonfuturists have begun apologizing for the sin of humanity, accepting the superiority of their new robotic overlords, planning their own obsolescence. School time is squandered for activism, as if it no longer matters what children learn, as if the only thing they’ll have left to offer the world is a vote. Society is newly eager to legalize marijuana, as if sober labor is already of diminished worth. Nearly half of Americans now support a universal basic income, a plan by which government would keep citizens fed like pets, rewarded merely for being alive. News reports talk of “smart” cities, cars, classrooms. Everything seems smart to us, except us.
Perhaps the greatest philosopher of the 20th century, Ludwig Wittgenstein, argued in his canonical work, “Philosophical Investigations,” that complaints about the imprecision of language actually presuppose an accepted standard for judging it. In his day, that unnamed standard was symbolic logic, which virtually all philosophers regarded as a perfected form of thought and reasoning. The mistake they’d made, according to Wittgenstein, was to assume that symbolic logic was the correct standard for judging human language. In fact, he argued, language is exactly as precise as people need it to be. Philosophers were creating the problem.
The same might be said of “human error,” which presupposes its own standard of judgment: error-free robots. True, people make imperfect cashiers, drivers and administrators. We mistype orders; we run red lights.
We also show up late to work because a child is sick, because we forgot to set an alarm, because we stayed up partying the night before. Each of these reflects a choice we might make: To put our children ahead of our jobs. To forgive ourselves a character flaw. To indulge in vices perhaps we shouldn’t. Every one is part of the soaring glory of being human, with all the exasperating effects and limitations. But this is a feature, not a flaw. No human taint inheres in the occasional failure of judgment. We remake our lives each day, acting on values, priorities and fondest wishes—and then upend it all tomorrow by changing our minds.
The past decade has brought remarkable change. The internet soaks us daily with unimaginable floods of information. I no longer worry about navigating the streets of unfamiliar cities, and I can rely on Google to translate foreign text. Automated bill payment feels like a dream. Online shopping is nothing short of a miracle; even the thought of scraping hangers across a rack now makes me reach for an aspirin.
But I’ve also learned to scan my own airplane ticket and check out my own groceries. I’ve been harassed by numberless robocalls. Now I’ve rung up myself at kosher McDonald’s. Most of these tasks I’ve dispatched poorly—and always with pique. None have I enjoyed. Worse, I’ve watched elderly and disabled people struggle to manage them, in the absence of sympathetic human help.
This isn’t a “fault” of the robots, exactly, but it also isn’t a “fault” of humans to want a little empathy. Whenever I stumble at these tasks, as I did most recently at the “smart” border control in Toronto’s Pearson Airport, some employee on the verge of being supplanted by an AI-enhanced screen hustles over to tell me I’m “doing it wrong.” As if I were the airport’s idiotic new hire instead of its customer.
Kiosks will be great at fast-food restaurants, the L.A. software exec assured me, because “you can just click on the things that you want, and you get your order, and from a manager’s standpoint, that’s so much easier than a human being that sometimes doesn’t show up, sometimes shows up late, sometimes has attitude, now has a much higher minimum salary.”
He’s probably right. Not everyone heads to McDonald’s for the repartee. But Americans didn’t use to see everything from the perspective of managers, to say nothing of the tech barons whose devices now herd, tag, censor and track people like packages. We once saw things from the perspective of the consumer. Businesses insisted that “the customer is always right.” Antitrust laws blocked monopolies for the “benefit of consumers.”
Aristotle called man a “social animal,” and the titans of Facebook, Instagram and Twitter have capitalized on that trait. But we’re subjective creatures, too. We like to know that the people on the other side have some idea of the situation we’re in, and that they might, knowing this, offer the smile or suggestion we were hoping for. We’re really good at that.
