A Blog by Jonathan Low


Dec 16, 2019

AI Research Institute Recommends Emotion Recognition Be Banned

The scientific basis underpinning this sort of analysis is suspect, and when the technology is used anyway, it appears to amplify biases and inaccuracies.

But it's tough to stuff those tech genies back in the bottle. JL


Charlotte Jee reports in MIT Technology Review:

There’s little scientific basis to emotion recognition technology, so it should be banned from use in decisions that affect people’s lives, says research institute AI Now in its annual report.
A booming market: Despite the lack of evidence that machines can work out how we’re feeling, emotion recognition is estimated to be at least a $20 billion market, and it’s growing rapidly. The technology is currently being used to assess job applicants and people suspected of crimes, and it’s being tested for further applications, such as in VR headsets to deduce gamers’ emotional states.
Further problems: There’s also evidence emotion recognition can amplify race and gender disparities. Regulators should step in to heavily restrict its use, and until then, AI companies should stop deploying it, AI Now said. Specifically, it cited a recent study by the Association for Psychological Science, which spent two years reviewing more than 1,000 papers on emotion detection and concluded it’s very hard to use facial expressions alone to accurately tell how someone is feeling. 
Other concerns: In its report, AI Now called for governments and businesses to stop using facial recognition technology for sensitive applications until the risks have been studied properly, and attacked the AI industry for its “systemic racism, misogyny, and lack of diversity.” It also called for mandatory disclosure of the AI industry’s environmental impact.
