Why New Research Reports AI Users Are Viewed As "Lazy, Incompetent" By Co-Workers
A new study published by the US National Academy of Sciences reveals that those who use AI at work are judged as incompetent and unmotivated by co-workers.
While this may seem surprising in an economy where technological adaptation has been viewed as a sign of intelligence and entrepreneurial acumen, the negative bias follows a historical pattern consistent with the introduction of new technologies. In other words, it may have more to do with resentment and jealousy than with actual performance. But the short-term reputational impacts could slow adoption, which means tech firms need to address them. JL
Benj Edwards reports in Ars Technica:
The Proceedings of the National Academy of Sciences on Thursday published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers. The findings reveal a consistent pattern of bias against those who receive help from AI: "the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one." Managers who didn't use AI themselves were less likely to hire candidates who regularly used AI tools, while managers who used AI favored AI-using candidates. Similar concerns about stigma have historically accompanied new technologies: people have long worried that labor-saving tools might reflect poorly on users' abilities.
Using AI can be a double-edged sword, according to new research from Duke University. While generative AI tools may boost productivity for some, they might also quietly damage your professional reputation.
On Thursday, the Proceedings of the National Academy of Sciences (PNAS) published a study showing that employees who use AI tools like ChatGPT, Claude, and Gemini at work face negative judgments about their competence and motivation from colleagues and managers.
"Our findings reveal a dilemma for people considering adopting AI tools: Although AI can enhance productivity, its use carries social costs," write researchers Jessica A. Reif, Richard P. Larrick, and Jack B. Soll of Duke's Fuqua School of Business.
The Duke team conducted four experiments with over 4,400 participants to examine both anticipated and actual evaluations of AI tool users. Their findings, presented in a paper titled "Evidence of a social evaluation penalty for using AI," reveal a consistent pattern of bias against those who receive help from AI.
What made this penalty particularly concerning for the researchers was its consistency across demographics. They found that the social stigma against AI use wasn't limited to specific groups.
Fig. 1 from the paper "Evidence of a social evaluation penalty for using AI." Credit: Reif et al.
"Testing a broad range of stimuli enabled us to examine whether the target's age, gender, or occupation qualifies the effect of receiving help from Al on these evaluations," the authors wrote in the paper. "We found that none of these target demographic attributes influences the effect of receiving Al help on perceptions of laziness, diligence, competence, independence, or self-assuredness. This suggests that the social stigmatization of AI use is not limited to its use among particular demographic groups. The result appears to be a general one."
The hidden social cost of AI adoption
In the first experiment, conducted by the Duke team, participants imagined using either an AI tool or a dashboard-creation tool at work. Those in the AI group expected to be judged as lazier, less competent, less diligent, and more replaceable than those using the conventional technology. They also reported less willingness to disclose their AI use to colleagues and managers.
The second experiment confirmed these fears were justified. When evaluating descriptions of employees, participants consistently rated those receiving AI help as lazier, less competent, less diligent, less independent, and less self-assured than those receiving similar help from non-AI sources or no help at all.
Fig. 3 from the paper "Evidence of a social evaluation penalty for using AI." Credit: Reif et al.
The researchers discovered this bias affects real business decisions. In a hiring simulation, managers who didn't use AI themselves were less likely to hire candidates who regularly used AI tools. However, managers who frequently used AI showed the opposite preference, favoring the AI-using candidates.
The final experiment revealed that perceptions of laziness directly explain this evaluation penalty. The researchers found this penalty could be offset when AI was clearly useful for the assigned task. When using AI made sense for the job, the negative perceptions diminished significantly.
Notably, the study showed that evaluators' own experience with AI significantly influenced their judgments. In the study, those who used AI frequently were less likely to perceive an AI-using candidate as lazy.
A complicated picture
The Duke AI study also notes that similar concerns of stigma have historically accompanied other new technologies. From Plato questioning whether writing would undermine wisdom, to modern debates about calculators in education, people have long worried that labor-saving tools might reflect poorly on users' abilities.
Reif and colleagues suggest this social impact may present a hidden barrier to AI adoption in workplaces. Even as organizations push AI implementation, individual employees might resist due to concerns about how they'll be perceived.
That dilemma is apparently already here. In August last year, while covering ChatGPT hitting 200 million active weekly users, we mentioned that Wharton professor Ethan Mollick (who frequently researches AI) called people who use AI without telling their bosses "secret cyborgs." Because many companies ban the use of AI outputs, workers have anecdotally turned to secret AI use.
And if that doesn't complicate the picture enough, we previously covered a study that adds a different layer to AI workplace issues. Research from economists at the University of Chicago and University of Copenhagen found that while 64–90 percent of workers reported time savings from AI tools, these benefits were sometimes offset by new tasks created by the technology. The study revealed that AI tools actually generated additional work for 8.4 percent of employees, including non-users tasked with checking AI output quality or detecting AI use in student assignments.
So, although using AI to do some tasks potentially saves time, employees may be creating additional work for themselves or others. In fact, the World Economic Forum's Future of Jobs Report 2025 suggested that AI may create 170 million new positions globally while eliminating 92 million jobs, resulting in a net gain of 78 million jobs by 2030. The picture is complicated, but the impact of AI on work is clearly still unfolding.
As a Partner and Co-Founder of Predictiv and PredictivAsia, Jon specializes in management performance and organizational effectiveness for both domestic and international clients. He is an editor and author whose works include Invisible Advantage: How Intangibles are Driving Business Performance. Learn more...