A Blog by Jonathan Low

 

Sep 21, 2020

Why YouTube Had To Replace AI With Humans For Pandemic Content Moderation

As pandemic lockdowns began in March, YouTube moved to hand more authority to algorithms in judging the appropriateness of content on its site.

But it soon discovered that machines and software do not possess the subtle, nuanced judgment the task requires. JL

Alex Barker and Hannah Murphy report in the Financial Times:

YouTube has reverted to using human moderators to vet harmful content after the machines it relied on during lockdown proved to be overzealous censors. When some of YouTube’s 10,000-strong team filtering content were “put offline” by the pandemic, YouTube gave its machine systems greater autonomy to stop users seeing hate speech, violence or other harmful content or misinformation. Reducing human oversight (caused) a jump in the number of videos removed. Human evaluators “make decisions that tend to be more nuanced in areas like hate speech, medical misinformation or harassment.”
Google’s YouTube has reverted to using more human moderators to vet harmful content after the machines it relied on during lockdown proved to be overzealous censors of its video platform. When some of YouTube’s 10,000-strong team filtering content were “put offline” by the pandemic, YouTube gave its machine systems greater autonomy to stop users seeing hate speech, violence or other forms of harmful content or misinformation.

But Neal Mohan, YouTube’s chief product officer, told the Financial Times that one of the results of reducing human oversight was a jump in the number of videos removed, including a significant proportion that broke no rules. Almost 11m were taken down in the second quarter between April and June, double the usual rate. “Even 11m is a very, very small, tiny fraction of the overall videos on YouTube . . . but it was a larger number than in the past,” he said.
“One of the decisions we made [at the beginning of the pandemic] when it came to machines who couldn’t be as precise as humans, we were going to err on the side of making sure that our users were protected, even though that might have resulted in a slightly higher number of videos coming down.”

A significantly higher proportion of machine-led takedown decisions were overturned on appeal. About 160,000 videos were reinstated, half the total number of appeals, compared with less than 25 per cent in previous quarters.

The acknowledgment sheds light on the crucial relationship between the human moderators and the artificial intelligence systems that vet the material flowing into the internet’s biggest platform for user-generated videos.

Amid widespread anti-racism protests and a polarising US election campaign, social media groups have come under increasing pressure to better police their platforms for toxic content. In particular, YouTube, Facebook and Twitter have been updating their policies and technology to stem the growing tide of election-related misinformation, and to prevent hate groups from stoking racial tensions and inciting violence. Failing to do so risks advertisers taking their business elsewhere; in July, some brands expanded an advertising boycott against Facebook to include YouTube.

As part of its efforts to address misinformation, YouTube will this week be rolling out a fact-checking feature in the UK and Germany, expanding a machine-triggered system first used in India and the US. Fact-check articles will be automatically triggered by specific searches on breaking news or topical issues that fact-checking services or established publishers have chosen to tackle.
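To make that trigger mechanism concrete, here is a minimal sketch of how a search-triggered fact-check panel could work, assuming a simple keyword registry. YouTube has not published its matching system, so the registry, function names and URLs below are purely hypothetical.

    # Hypothetical sketch (Python) of a search-triggered fact-check panel.
    # YouTube's actual matching system is not public; the registry and
    # URLs below are illustrative assumptions only.
    from typing import Optional

    FACT_CHECKS = {
        # topic keywords chosen by fact-checkers -> their published article
        frozenset({"election", "fraud"}): "https://example.org/checks/election-fraud",
        frozenset({"vaccine", "microchip"}): "https://example.org/checks/vaccine-microchip",
    }

    def fact_check_panel(query: str) -> Optional[str]:
        """Return a fact-check article if the search hits a tackled topic."""
        terms = set(query.lower().split())
        for keywords, article in FACT_CHECKS.items():
            if keywords <= terms:  # every topic keyword appears in the query
                return article
        return None  # no panel shown for ordinary searches

    print(fact_check_panel("was the election stolen by fraud"))  # article URL
    print(fact_check_panel("cute cat videos"))                   # None

The design point is that the panel fires only on topics fact-checkers have chosen to tackle, which is why the feature can be rolled out country by country as publisher coverage grows.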
Mr Mohan said that while YouTube’s machines were able to provide such functions, and rapidly remove clear-cut cases of harmful content, there were limits to their abilities. While algorithms could identify videos that might potentially be harmful, they were often not so good at deciding what should be removed. “That’s where our trained human evaluators come in,” he said, adding that they take videos highlighted by machines and then “make decisions that tend to be more nuanced, especially in areas like hate speech, or medical misinformation or harassment.”

The speed at which machines can act in addressing harmful content is invaluable, said Mr Mohan. “Over 50 per cent of those 11m videos were removed without a single view by an actual YouTube user and over 80 per cent were removed with less than 10 views. And so that’s the power of machines,” he said.

Claire Wardle, co-founder of First Draft, a non-profit group addressing misinformation on social media, said artificial intelligence systems had made progress in tackling harmful graphic content such as violence or pornography. “But we are a very long way from using artificial intelligence to make sense of problematic speech [such as] a three-hour rambling conspiracy video,” she said. “Sometimes it is a nod and a wink and a dog whistle. [The machines] just can’t do it. We are nowhere near them having the capacity to deal with this. Even humans struggle.”
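The division of labour Mr Mohan describes, fast machine takedowns for clear-cut cases and human evaluators for the nuanced calls, can be sketched as a simple triage rule. The thresholds and names below are illustrative assumptions, not YouTube's actual system; the point is only that lowering the machine-only removal bar, as YouTube did when reviewers went offline, mechanically produces both more removals and more overturned appeals.

    # Minimal sketch (Python) of the two-stage moderation flow described
    # above: machines auto-remove high-confidence violations and route
    # borderline cases to humans. All thresholds and names are assumptions.
    from dataclasses import dataclass

    AUTO_REMOVE = 0.98           # assumed normal bar for machine-only removal
    AUTO_REMOVE_LOCKDOWN = 0.90  # lowered bar: "err on the side" of protection

    @dataclass
    class Video:
        video_id: str
        harm_score: float  # classifier confidence that the video breaks policy

    def triage(video: Video, reduced_human_oversight: bool) -> str:
        bar = AUTO_REMOVE_LOCKDOWN if reduced_human_oversight else AUTO_REMOVE
        if video.harm_score >= bar:
            return "remove"        # often before a single user view
        if video.harm_score >= 0.5:
            return "human_review"  # nuanced: hate speech, medical misinfo, harassment
        return "keep"

    v = Video("abc123", harm_score=0.93)
    print(triage(v, reduced_human_oversight=True))   # "remove": more takedowns, more appeals
    print(triage(v, reduced_human_oversight=False))  # "human_review"

Under this toy rule, a borderline video that a human evaluator would have kept gets auto-removed once the bar drops, which is consistent with the jump in reinstated appeals the article reports.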
