A Blog by Jonathan Low

 

Jan 9, 2020

Is Facebook's Supposed 'Deepfake Ban' Yet Another Semantic Evasion Of Responsibility?

Yes, of course. That is what Facebook does in response to all crises.

The company has exempted politicians, which is where much of the current abuse and concern lies. Plus, the policy is dependent on the company's definition of what constitutes a deepfake, which, given its history, is unlikely to assuage any fears. JL

David McCabe and Davey Alba report in the New York Times:

Facebook’s new policy was not meant “to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress. Banning deepfakes should be an incredibly low floor in combating disinformation.”
Facebook’s automated systems for detecting such videos would have limited reach, and there would be “significant incentive” for people to develop fakes that fool Facebook’s systems. “The question is where you draw the line. It raises the question of intent and semantics.”
Facebook says it will ban videos that are heavily manipulated by artificial intelligence, the latest in a string of changes by the company to combat the flow of false information on its site.
A company executive said in a blog post late Monday that the social network would remove videos, often called deepfakes, that artificial intelligence has altered in ways that “would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” The videos will also be banned in ads.
The policy will have a limited effect on slowing the spread of false videos, since the vast majority are edited in more traditional ways: cutting out context or changing the order of words. The policy will not extend to those videos, or to parody or satire, said the executive, Monika Bickert.
Ms. Bickert said all videos posted would still be subject to Facebook’s system for fact-checking potentially deceptive content. Content that is found to be factually incorrect appears less prominently on the site’s news feed and is labeled false.
But the announcement underscores how Facebook, by far the world’s largest social network, is trying to thwart one of the latest tricks used by purveyors of disinformation ahead of this year’s presidential election. False information spread furiously on the platform during the 2016 campaign, leading to widespread criticism of the company.
By banning deepfakes before the technology becomes widespread, Facebook is trying to calm lawmakers, academics and political campaigns who remain frustrated by how the company handles political posts and videos about politics and politicians.
But some Democratic politicians said the new policy did not go nearly far enough. Last year, Facebook refused to take down a video that was edited to make it appear that Speaker Nancy Pelosi was slurring her words. At the time, the company defended its decision despite furious criticism, saying that it had subjected the video to its fact-checking process and had reduced its reach on the social network.
The new policy, though, does not apply to the video of Ms. Pelosi. Disinformation researchers have referred to similar videos as “cheapfakes” or “shallowfakes”: deceptive content edited with simple video-editing software, in contrast to the more sophisticated deepfake videos generated by artificial intelligence.
Ms. Pelosi’s deputy chief of staff, Drew Hammill, said in a statement that Facebook “wants you to think the problem is video-editing technology, but the real problem is Facebook’s refusal to stop the spread of disinformation.”
Facebook would also keep up a video, widely circulated last week, in which a long response that former Vice President Joseph R. Biden Jr. gave to a voter in New Hampshire was heavily edited to wrongly suggest that he made racist remarks.
Bill Russo, deputy communications director of Mr. Biden’s presidential campaign, said Facebook’s new policy was not meant “to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress.”
“Banning deepfakes should be an incredibly low floor in combating disinformation,” Mr. Russo said.
The company’s new policy was first reported by The Washington Post.
Computer scientists have long warned that new techniques used by machines to generate images and sounds that are indistinguishable from the real thing can vastly increase the volume of false and misleading information online.
Deepfakes have become much more prevalent in recent months, especially on social media. And they have already begun challenging the public’s assumptions about what is real and what is not.
Last year, for instance, a Facebook video released by the government of Gabon, a country in central Africa, was meant to show proof of life for its president, who was out of the country for medical care. But the president’s critics claimed it was fake.
In December 2017, the technology site Motherboard reported that people were using A.I. technology to graft the heads of celebrities onto nude bodies in pornographic videos. Websites like Pornhub, Twitter and Reddit suppressed the videos, but according to the research firm Deeptrace Labs, these videos still made up 96 percent of deepfakes found in the last year.
Tech companies are researching new techniques to detect deepfake videos and stop their spread on social media, even as the technology to create them quickly evolves. Last year, Facebook participated in a “Deepfake Detection Challenge” and, along with other tech firms like Google and Microsoft, offered a bounty for outside researchers who develop the best tools and techniques to identify A.I.-generated deepfake videos.
Because Facebook is the No. 1 platform for sharing false political stories, according to disinformation researchers, it has an added urgency to spot and halt novel forms of digital manipulation. Renée DiResta, the technical research manager for the Stanford Internet Observatory, which studies disinformation, pointed out that a challenge of the policy was that the deepfake content “is likely to have already gone viral prior to any takedown or fact check.”
On Wednesday, Ms. Bickert, Facebook’s vice president of global policy management, is expected to join other experts to testify on “manipulation and deception in the digital age” before the House Energy and Commerce Committee.
Ms. DiResta urged lawmakers to “delve into the specifics around how quickly the company envisions it could detect or respond to a viral deepfake, or to the ‘shallowfakes’ material which it won’t take down but has committed to fact-checking.”
Subbarao Kambhampati, a professor of computer science at Arizona State University, described Facebook’s effort to detect deepfakes as “a moving target.” He said that Facebook’s automated systems for detecting such videos would have limited reach, and that there would be “significant incentive” for people to develop fakes that fool Facebook’s systems.
There are many ways to manipulate videos with the help of artificial intelligence, added Matthias Niessner, a professor of computer science at the Technical University of Munich, who works with Google on its deepfake research. There are deepfake videos in which faces are swapped, for instance, or in which a person’s expression and lip movement are altered, he said.
“The question is where you draw the line,” Mr. Niessner said. “Eventually, it raises the question of intent and semantics.”
