A Blog by Jonathan Low

 

Mar 1, 2021

Is Emotion Recognition For the Workplace Based On Inaccurate Science?

A meta-review of 1,000 studies of the popular theory of 'seven universal emotions' found that people make the expected facial expression matching their emotional state only 20% to 30% of the time.

The challenge for organizational psychologists is not just endorsing or rejecting technology that may provide a misleading view of employee wellness, but evaluating the larger practical and ethical questions of why technology is being employed for this purpose in the workplace at all. JL

Dave Gershgorn reports in OneZero:

Most emotion recognition technology (is) based on the work of a psychologist who published work on the similarities between facial expressions around the world and “seven universal emotions.” (But) a meta-review of 1,000 studies found the science tying facial expressions to emotions isn’t universal. People make the facial expression to match their emotional state only 20% to 30% of the time. Still, this technology is being pushed on those who don’t have the power to refuse it (like) job candidates performing virtual interviews and Amazon workers with cameras on them. Why are entities using technology to make assessments about character on the basis of physical appearance in the first place?

Facial recognition isn’t just for verifying a person’s identity. In recent years, researchers and startups have focused on other ways to apply the technology, like emotion recognition, which tries to read facial expressions to understand what a person is feeling.

For instance, Find Solution AI, a company based in Hong Kong that was recently featured in CNN Business, is selling its technology to schools and colleges, where it scans students’ faces and monitors their feelings in virtual classrooms. Theoretically, systems like these could detect whether children are paying attention or expressing frustration that indicates difficulty with learning the class material.

Academics and A.I. ethics researchers, however, are quick to point out that this technology relies on questionable science and that there are serious ethical concerns around who the technology is used to surveil.

Kate Crawford, co-founder of the AI Now Institute and senior principal researcher at Microsoft Research, pushed back on Find Solution AI’s claims that its technology could tell what children were feeling.

Find Solution AI, and most other emotion recognition startups, base their technology on the work of Paul Ekman, a psychologist who published popular work on the similarities between facial expressions around the world and popularized the idea of “seven universal emotions.” Actor Tim Roth even played a dramatized version of Ekman in the Fox drama Lie to Me.

That research has not translated well into the real world. A TSA program that trained agents to spot terrorists using Ekman’s work found little scientific basis, didn’t result in arrests, and fueled racial profiling, according to reports from the Government Accountability Office and the ACLU.

A meta-review of 1,000 studies found that the science tying our facial expressions to our emotions isn’t entirely universal. People make the expected facial expression to match their emotional state only 20% to 30% of the time, the researchers said.

But this technology is still being pushed on those who don’t have the power to refuse it: children in virtual classrooms, job candidates performing virtual interviews, Amazon workers with cameras on them while they deliver packages, and even people being questioned by police.

“We need to scrutinize why entities are using faulty technology to make assessments about character on the basis of physical appearance in the first place,” researchers from AI Now wrote in their 2019 report.
