A Blog by Jonathan Low

 

May 28, 2016

Why People Dump AI Advisors That Give Bad Advice But Forgive Humans For Same

Device-ism, the bias against machines. You heard it here first. JL

Michael Coren reports in Quartz:

Efficiency gains from automation in the workplace could be lost if people lose trust and stop using these systems. People have come to expect nearly flawless performance from basic automated systems. (But) participants felt they held “more in common” with human than with automated advisors: a common sense of being imperfect, a willingness to self-correct, and a sense of wanting to do well. Artificial intelligence can be programmed to exhibit these traits.
We accept that to err is human. Not so with machines. When our electronic counterparts fail us—whether it’s baggage-screening software or the latest artificial intelligence—we are quick to shun their advice in the future. That has big implications as machines infiltrate the workplace, offering services once provided by human colleagues.
University of Wisconsin researchers recently sought to test how we might get along with our future AI coworkers. The researchers asked 160 college undergraduates to forecast scheduling for hospital rooms, an unfamiliar task. They relied on advice from either an “advanced computer system” or a human specialist in operating room management. After seven of 14 scheduling trials, participants were given faulty advice from their respective advisors.
People reported equal trust in both sources at the outset of the experiment, but that quickly changed after the error. When the computer advisor made a mistake, many participants abandoned it and ignored its advice in subsequent trials. Computer consultations fell from 70% to less than 45%. Human advisors, on the other hand, were essentially forgiven. Researchers saw only about a 5% drop in how often participants relied on their human advisor after the error.
Andrew Prahl, one of the study’s authors and a researcher at the university, says their findings suggest that potential efficiency gains from automation in the workplace could be lost if people lose trust and stop using these systems. While people have come to expect nearly flawless performance from the basic automated systems that run our laptops and process credit cards, we are entering an era when computers will make forecasts and judgments for us. Artificial intelligence is already making legal judgments, recommending investments, and making inroads in other professional fields. The advice these algorithms offer, however, deals with an inherently uncertain world. They will be wrong far more often than we have come to expect automated systems to be.
“This is more a psychological problem for the users of such systems,” wrote Prahl by email. As people transpose their expectations from more basic automated systems into the realm of judgment and forecast, employees “need to understand that this system, like their colleague, cannot be right all of the time. So managers should be encouraging their employees to forgive the automation when it makes a mistake (as funny as that may sound).”
Prahl and his co-author Lyn Van Swol are now trying to understand what psychological forces underlie this phenomenon. Participants in his study reported they felt they held “more in common” with the human advisor than the automated advisor, but exactly which characteristics people believe they share is unclear. Previous research suggests that a common sense of being imperfect, a willingness to self-correct after mistakes, and a sense of wanting to do well are all candidates. Prahl believes artificial intelligence can be programmed to exhibit some of these traits, and plans to tackle this question next.
The findings will be presented in June at the 66th Annual Conference of the International Communication Association in Fukuoka, Japan.
