A Blog by Jonathan Low

 

Sep 19, 2019

Why AI Can't Save Us From Deep Fakes

Because of the increasing ease with which they can be crafted and the speed with which they can be disseminated, deep fakes may overpower purely technological solutions.

A combination of social, legal and technical fixes will have to be employed to address the threat. JL  

Zoe Schiffer reports in The Verge:

Deepfakes are unlikely to be fixed by technology alone. (And) deepfakes can’t be solved just through the courts. Almost all solutions fight manipulation at the point of capture or at the detection level. Deepfakes let people manipulate videos and images using machine learning, with results that are almost impossible to detect with the human eye, and the fakes can go viral on social media in a matter of seconds. AI could actually make things worse by concentrating more data and power in the hands of private corporations. “Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life.”
A new report from Data & Society raises doubts about automated solutions to deceptively altered videos, including machine-learning-altered videos called deepfakes. Authors Britt Paris and Joan Donovan argue that deepfakes, while new, are part of a long history of media manipulation — one that requires both a social and a technical fix. Relying on AI could actually make things worse by concentrating more data and power in the hands of private corporations.
“The panic around deepfakes justifies quick technical solutions that don’t address structural inequality,” says Paris. “It’s a massive project, but we need to find solutions that are social as well as political so people without power aren’t left out of the equation.”
As Paris and Donovan see it, deepfakes are unlikely to be fixed by technology alone. “The relationship between media and truth has never been stable,” the report reads. In the 1850s, when judges began allowing photographic evidence in court, people mistrusted the new technology and preferred witness testimony and written records. By the 1990s, media companies were complicit in misrepresenting events by selectively editing images out of evening broadcasts. During the Gulf War, reporters constructed a conflict between evenly matched opponents by failing to show the starkly uneven death toll between US and Iraqi forces. “These images were real images,” the report says. “What was manipulative was how they were contextualized, interpreted, and broadcast around the clock on cable television.”
Today, deepfakes have taken manipulation even further by allowing people to manipulate videos and images using machine learning, with results that are almost impossible to detect with the human eye. Now, the report says, “anyone with a public social media profile is fair game to be faked.” Once the fakes exist, they can go viral on social media in a matter of seconds.
To combat this issue, Facebook recently announced that it was releasing a dataset to let people test new models aimed at detecting deepfakes. Startups like TruePic, which uses AI and blockchain technology to detect manipulated photos, have started to gain momentum as well. More recently, the US Defense Advanced Research Projects Agency (DARPA) invested in MediFor, which looks at differences in video pixels to determine when something has been doctored. But almost all of these solutions aim to fight manipulation at the point of capture (i.e., when a photo or video is taken) or at the detection level (by making it easier to differentiate between doctored and undoctored content).
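For a sense of what “looking at differences in pixels” can mean in practice, here is a minimal sketch of one classic forensic heuristic, error level analysis (ELA): re-save an image as a JPEG at a known quality and see which regions change more than others, since spliced-in content often carries a different compression history. To be clear, this is an illustration only, not how MediFor or TruePic actually works, and the file name and quality setting below are placeholder assumptions.

```python
# Minimal error-level-analysis (ELA) sketch. Illustrative only -- this is
# not MediFor's or TruePic's actual method; "photo.jpg" and quality=90
# are placeholder assumptions.
import io

import numpy as np
from PIL import Image, ImageChops

def ela_map(path: str, quality: int = 90) -> np.ndarray:
    """Re-save an image as JPEG at a known quality and return the
    per-pixel absolute difference. Regions with a different compression
    history (e.g., pasted-in content) tend to show a different error level."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf).convert("RGB")
    diff = ImageChops.difference(original, resaved)
    return np.asarray(diff, dtype=np.float32)

if __name__ == "__main__":
    errors = ela_map("photo.jpg")
    # A large spread in error levels across the image is one crude hint
    # that parts of it may have been edited.
    print(f"mean error: {errors.mean():.2f}, spread (std): {errors.std():.2f}")
```

Real detection systems combine many such signals with learned models trained on large image collections — which is exactly why the report’s authors worry about who ends up holding all that data.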
Legal experts see this approach as positive, noting that the issues with deepfakes can’t be solved just through the courts. David Greene, civil liberties director at the Electronic Frontier Foundation, says faked videos can also have important uses in political commentary, parody, and anonymizing people who need identity protection.
“If there’s going to be a deepfakes law, it needs to account for free speech,” he added, noting that if people use deepfakes to do something illegal — like blackmail or extort someone, for example — they can be prosecuted under existing legislation.
But Paris worries AI-driven content filters and other technical fixes could cause real harm. “They make things better for some but could make things worse for others,” she says. “Designing new technical models creates openings for companies to capture all sorts of images and create a repository of online life.”
Bobby Chesney, co-author of the paper “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” does not view data collection as an issue. “I see the point that if left unregulated, private entities will have access to more information,” he said. “But the idea that that’s inherently bad strikes me as unpersuasive.”
Chesney and Paris agree that some sort of technical fix is needed and that it must work alongside the legal system to prosecute bad actors and stop the spread of faked videos. “We need to talk about mitigation and limiting harm, not solving this issue,” Chesney added. “Deepfakes aren’t going to disappear.”
