A Blog by Jonathan Low

 

Mar 21, 2019

Facebook Admits Its AI Failed To Stop New Zealand Mosque Video From Going Viral

The most momentous lessons from this incident are not that bad people manipulate social media to their own ends and that Facebook is incapable of stopping them - which increasing numbers of people realize - but that artificial intelligence is not yet the panacea Facebook and others claim it to be, and may never be.

And that in Facebook's heedless quest to dominate the internet despite chronic red flags, tools such as Live Video may become financial liabilities whose costs - in stock price performance, regulatory demands and fines - could begin to outweigh the economic benefits of audience use and growth. JL


Niharika Mandhana reports in the Wall Street Journal:

Criticism focused on Facebook’s live broadcast tool, ask(ing) whether Facebook should be allowed to offer such services if it can’t control them. The company’s artificial intelligence tools failed to catch the video of the terrorist attack in Christchurch aired live on the platform by the shooter. The video wasn’t prioritized for expedited review because the user flagged the video after it ended, not during the live broadcast. Facebook accelerates its reviews only if there is a report of a suicide attempt. The failure of its systems to catch the video of the massacre as it was being streamed, and for many minutes afterward, underscored the major gaps in its strategies.

Facebook acknowledged that the gruesome video of the New Zealand mosque shootings revealed gaps in its handling of live broadcasts by users, but pushed back against the idea of setting up a time delay.

Guy Rosen, Facebook’s vice president for integrity, said in a post late Wednesday that the company’s artificial intelligence tools had failed to catch the video of the terrorist attack in Christchurch last week that was aired live on the social media platform by the shooter. The 17-minute video shows men, women and children being gunned down in a mosque. Mr. Rosen also said the video wasn’t prioritized for an expedited review when it was flagged by a user. That is because the user flagged the video after it ended, not during the live broadcast. In those cases, Facebook accelerates its reviews only if there is a report of a suicide attempt.

Facebook ultimately took down the video after being alerted by New Zealand authorities. By then the video had been viewed 4,000 times on the site, Facebook said, and copied on other sites beyond Facebook’s control.

“We recognize that the immediacy of Facebook Live brings unique challenges,” Mr. Rosen said in the post. The shooter, Brenton Tarrant, has been charged in New Zealand with murder. He hasn’t entered a plea.

The details of the way in which the video was uploaded, spread and detected show the limitations of Facebook’s efforts to police its platform. Critics say the site has expanded aggressively for years and offered new features without putting adequate safeguards in place. The social media giant has come under fire in many parts of the world—from Myanmar to Sri Lanka—for failing to take action against hate speech, fake news and misinformation.

The company says it is doing more to better moderate the large numbers of posts that go online. It has doubled down on AI as a solution, in addition to using some 15,000 human content reviewers. But the failure of its systems to catch the video of the New Zealand massacre as it was being streamed, and for many minutes afterward, has underscored the major gaps in its strategies.

Criticism after the attack has focused on Facebook’s live broadcast tool. Australian Prime Minister Scott Morrison asked over the weekend whether Facebook should be allowed to offer such services if it can’t control them.

Some critics have called on the site to impose a time delay during which videos could be checked. Mr. Rosen said a delay would be swamped by the millions of live videos broadcast daily. He also said it would slow reports by users that could help authorities provide help on the ground.

Facebook has touted the success of its technical tools in tackling some kinds of terrorist content. The company said last year that Facebook itself—not its users—was identifying nearly all of the material related to Islamic State and al Qaeda that it removed. But the platform, accessed each month by some 2 billion users posting in more than 80 languages, hasn’t had that success with other kinds of extremist content.

In his post, Mr. Rosen said artificial intelligence has worked well to identify nudity and terrorist propaganda, because there are many examples of such content to train the machines. But the AI didn’t have enough material to learn to recognize content like the mosque massacre, he said, because “these events are thankfully rare.”
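To make Mr. Rosen's point about rarity concrete, here is a toy Python illustration - not Facebook's models or data; the counts are invented - of why a classifier trained on almost no examples of a rare event class can look near-perfect while missing every instance of it:

```python
# Toy illustration only: not Facebook's models or data.
# With extreme class imbalance, a model that minimizes overall error
# can predict "benign" for everything and still score near-perfect
# accuracy while catching zero rare events.
n_benign = 1_000_000   # invented count of ordinary live videos
n_attacks = 3          # invented count of first-person attack videos

always_benign_accuracy = n_benign / (n_benign + n_attacks)
print(f"accuracy of a model that never flags anything: "
      f"{always_benign_accuracy:.6f}")   # ~0.999997
print(f"attack videos it catches: 0 of {n_attacks}")
```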

A user flagged the Christchurch video to Facebook 12 minutes after it ended, Mr. Rosen said. The social media site accelerates its review of any live video flagged by users while it is being broadcast. The policy also applies to videos that have recently ended, but only in the case of suicide reports. The Christchurch video wasn’t reported as a suicide and as a result wasn’t prioritized.
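As a reading aid, the triage rule described above can be sketched in a few lines of Python. This is a hypothetical reconstruction of the logic as the article reports it; the function and field names are invented, not Facebook's actual code:

```python
from dataclasses import dataclass
from enum import Enum

class ReportType(Enum):
    SUICIDE_OR_SELF_HARM = "suicide_or_self_harm"
    VIOLENCE = "violence"
    OTHER = "other"

@dataclass
class VideoReport:
    is_live_now: bool        # was the broadcast still live when flagged?
    report_type: ReportType

def is_expedited(report: VideoReport) -> bool:
    """Hypothetical reconstruction of the triage rule in the article.

    A flagged video jumps the review queue if the broadcast is still
    live; once it has ended, only a suicide report accelerates review.
    """
    if report.is_live_now:
        return True
    return report.report_type is ReportType.SUICIDE_OR_SELF_HARM

# The Christchurch case: flagged 12 minutes after the stream ended,
# and not reported as a suicide, so it fell into the ordinary queue.
christchurch = VideoReport(is_live_now=False,
                           report_type=ReportType.VIOLENCE)
assert not is_expedited(christchurch)
```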

The company will re-examine its procedures and expand the categories for accelerated review, Mr. Rosen said.

Facebook’s systems caught 1.2 million videos of the attack as they were being uploaded in the first 24 hours; another 300,000 made it to the site and were removed afterward. But users were able to make copies of the video on other sites to keep it alive.

Mr. Rosen said a user of the website 8chan, which is often home to white supremacist and anti-Muslim content, posted a link to a copy of the video on a file-sharing site, from which it spread more broadly.

Facebook struggled to keep the video from reappearing on its own site, he said, because a “core community of bad actors” worked to continually re-upload edited versions to defeat the company’s detection tools. Facebook found and blocked more than 800 variants of the video that were circulating, he said.
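The article doesn't say how Facebook's detection tools match variants, but one common technique for catching edited re-uploads is perceptual hashing, in which similar media produce similar fingerprints. The sketch below is a hypothetical illustration with invented 64-bit fingerprints and a Hamming-distance threshold, not Facebook's system:

```python
# Hypothetical sketch of variant matching via perceptual hashing.
# An exact cryptographic hash changes completely when a video is
# cropped, watermarked or re-encoded, which is why a simple blocklist
# fails against edited copies. A perceptual hash changes only a
# little, so near-duplicates can be caught by bit distance. The
# fingerprints here are invented stand-ins; real systems use
# purpose-built video fingerprinting.

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits between two 64-bit fingerprints."""
    return bin(h1 ^ h2).count("1")

def matches_known_variant(fingerprint: int,
                          known_variants: set[int],
                          max_distance: int = 6) -> bool:
    """Flag an upload whose fingerprint is near any banned variant."""
    return any(hamming_distance(fingerprint, v) <= max_distance
               for v in known_variants)

# Each newly discovered edit of the video is added to the banned set,
# which is how a corpus of 800+ variants accumulates.
banned = {0xA3F1_9C2B_4D87_10EE}
upload = 0xA3F1_9C2B_4D87_10CE   # a lightly edited copy: 1 bit differs
print(matches_known_variant(upload, banned))  # True
```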

