A Blog by Jonathan Low

 

Sep 24, 2020

The Supply Of Disinformation Is Becoming Infinite

The issue will not be whether a specific article, video, photo or quote is real, but whether the source is trustworthy.

And as off-putting as determining that may be, how will people know whether to trust the entity verifying the source? JL

Renee DiResta reports in The Atlantic:

AI-generated content will continue to become more sophisticated, and it will be increasingly difficult to differentiate it from the content that is created by humans … In the meantime, we’ll need to keep our guard up as we take in information, and learn to evaluate the trustworthiness of the sources we’re using. We will continue to have to figure out how to believe, and what to believe. (But) in a future where machines are increasingly creating our content, we’ll have to figure out how to trust.
Someday soon, the reading public will miss the days when a bit of detective work could identify completely fictitious authors. Consider the case of “Alice Donovan.” In 2016, a freelance writer by that name emailed the editors of CounterPunch, a left-leaning independent media site, to pitch a story. Her Twitter profile identified her as a journalist. Over a period of 18 months, Donovan pitched CounterPunch regularly; the publication accepted a handful of her pieces, and a collection of left-leaning sites accepted others.
Then, in 2018, the editor of CounterPunch received a phone call from The Washington Post. A reporter there had obtained an FBI report suggesting that Alice Donovan was a “persona account”—a fictitious figure—created by the Main Directorate, the Russian military-intelligence agency commonly known as the GRU. Skeptical of the Russia link, but concerned about having potentially published content from a fake person, the CounterPunch editors pored over Donovan’s oeuvre, which spanned topics as varied as Syria, Black Lives Matter, and Hillary Clinton’s emails. They found her to be not only suspicious, but also a plagiarist: Some of the articles bearing her byline appeared to have been written instead by another woman, Sophia Mangal, a journalist affiliated with something called the Inside Syria Media Center.
The ISMC’s “About” page claimed that the group, ostensibly a cross between a think tank and a news outlet, was founded in 2015 by a team of journalists. But as the CounterPunch editors dug further, they realized that Sophia Mangal was also a fabrication. So, it seemed, were the others at ISMC whom they tried to track down. CounterPunch published a January 2018 postmortem detailing what its investigation had found: articles plagiarized from The New Yorker, the Saudi-based Arab News, and other sources; prolific “journalists” who filed as many as three or four stories a day, but whose bylines disappeared after inquiries were made to verify that they existed; social-media profiles that featured stolen photos of real people; lively Twitter accounts that sycophantically defended the Syrian dictator and Russian ally Bashar al-Assad. The ISMC, it seemed, was a front. Its employees were purely digital personas controlled by Russian-intelligence agents.
A year after CounterPunch filed its story, in mid-2019, the Senate Select Committee on Intelligence turned over a data set to my team at the Stanford Internet Observatory. Facebook had attributed the material to the GRU; the official Facebook page for the ISMC (among other entities) was part of that trove. As we combed through online archives and obscure message boards to investigate the data, we found even more fake journalist personas with plagiarized portfolios and stolen photos, more front publications, and clusters of fake “amplifier” personas who shared the fake journalists’ content with audiences on Twitter, Reddit, and Facebook. All of this activity overlapped with the work of Russia’s other manipulation team, the Internet Research Agency. The IRA had its own accounts, which were producing even more tweets and Facebook commentary.
In other words, we found a sprawling web of nonexistent authors turning Russian-government talking points into thousands of opinion pieces and placing them in sympathetic Western publications, with crowds of fake people discussing the same themes on Twitter. Not all of these personas or stories were hits—in fact, very few of the ISMC’s articles achieved mass reach—but in the strange world of online manipulation, popularity isn’t the only goal. If fake op-eds circulate widely and change American minds about Syria or the upcoming election, that’s a success. If a proliferation of fake comments convinces the public that a majority feels some particular way about a hot topic, that’s a success. But even merely creating cynicism or confusion—about what is real and who is saying what—is a form of success too.
Because photos and text are easily searchable, cribbing real people’s photos and real writers’ work can make an operation easy to unravel. So America’s adversaries are adapting. Earlier this month, Facebook shut down yet another Russian influence operation, this one built around a website named PeaceData, which belonged to the IRA. This latest effort also involved a dubious media outlet as a front. But this time, instead of stealing photos, the trolls filled out fictitious authors’ social-media profiles with images of entirely unique faces generated by artificial intelligence. (Websites such as ThisPersonDoesNotExist.com show how realistic these faces can be.) And while they republished some stories from elsewhere, their new, more robust fake personas hired unwitting American journalists to write original ones. But even this approach left evidence behind; some of those journalists have since given interviews about their experiences, revealing operational details.
The ideal scenario for the modern propagandist, of course, is to have convincing personas produce original content. Generative text is the next frontier. Released in a beta version in June by the artificial-intelligence research lab OpenAI, a tool called GPT-3 generates long-form articles as effortlessly as it composes tweets, and its output is often difficult to distinguish from the work of human beings. In fact, it wrote parts of this article. Tools like this won’t just supercharge global propaganda operations; they will force internet platforms and average users alike to find new ways of deciding what and whom to trust.
When I prompted GPT-3 to opine on these issues, it captured the problem succinctly:
For the moment, at least, it seems unlikely that generative media will be effective in the same way as traditional media at promoting political messages. However, that’s not to say that it couldn’t be. What it will do is muddle the waters, making it much harder to tell what’s real and what’s not.
The letters in GPT-3 stand for “generative pre-trained transformer.” It works by taking text input and predicting what comes next. The model was trained on several massive data sets, including Wikipedia and Common Crawl (a nonprofit dedicated to “providing a copy of the internet to internet researchers”). In generating text, GPT-3 may return facts or drop the names of relevant public figures. It can produce computer code, poems, journalistic-sounding articles that reference the real world, tweets in the style of a particular account, or long theoretical essays on par with what a middling freshman philosophy student might write.
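For readers curious about the mechanics, here is a minimal sketch of that next-word prediction step. GPT-3 itself is reachable only through OpenAI’s hosted API, so the freely downloadable GPT-2 (an earlier, smaller model of the same family) stands in for it here; the prompt and every parameter are illustrative assumptions, not details drawn from the reporting above.

```python
# A minimal sketch of "take text input and predict what comes next," using the
# publicly available GPT-2 as a stand-in for GPT-3. Assumes the transformers
# and torch packages are installed; the prompt text is illustrative only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The supply of disinformation is becoming"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# One forward pass yields a probability distribution over the next token.
with torch.no_grad():
    logits = model(input_ids).logits[0, -1]
probs = torch.softmax(logits, dim=-1)

# Print the model's five most likely next tokens for this prompt.
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p.item():.3f}")
```

Generating an article is just that step in a loop: pick a token, append it to the input, and predict again, thousands of times.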
Indeed, if you want GPT-3 to speculate about the implications of a future full of AI-generated writing, it will. I fed it a couple of paragraphs of an early draft of this essay, asked it to generate 200 more words on the same subject, and instructed it to be a little creative. Here, verbatim, is some of what it returned:
In addition to the potential for AI-generated false stories, there’s a simultaneously scary and exciting future where AI-generated false stories are the norm. The rise of the software engineer has given us the power to create new kinds of spaces: virtual reality and augmented reality are now possible, and the “Internet of things” is increasingly entering our homes. This past year, we’ve seen a new type of art: that which is created by algorithms and not humans. In this future, AI-generated content will continue to become more sophisticated, and it will be increasingly difficult to differentiate it from the content that is created by humans. One of the implications of the rise in AI-generated content is that the public will have to contend with the reality that it will be increasingly difficult to differentiate between generated content and human-generated content.
It’s somewhat disconcerting to have a machine plausibly imitating your writing style based on a few paragraphs—to see it mindlessly generating “thoughts” that you have had. Even though I knew GPT-3 was putting words together based on probabilities, I found myself giving it prompts when I was stuck on how best to express a concept or wondering how it would answer a question. The AI’s output tends to ramble a bit and can become somewhat nonsensical if it goes on for too long, but this is a problem humans face as well; it’s why freshman philosophy students have TAs and publications have editors. But given a prompt, GPT-3 can produce any number of unique takes, which a person can quickly and easily polish and post.
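The workflow just described (feed in a draft, ask for a fixed number of additional words, dial up the creativity) maps onto a handful of sampling settings. The sketch below mimics it with the same stand-in GPT-2 model; the prompt, word budget, and temperature are assumptions chosen to echo the experiment above, not the author’s actual configuration.

```python
# A sketch of prompting for several distinct continuations of the same passage.
# GPT-2 again stands in for GPT-3; all parameter values here are assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "It's somewhat disconcerting to have a machine plausibly imitating your "
    "writing style based on a few paragraphs. "
)

# do_sample with a higher temperature is roughly the "be a little creative"
# knob; num_return_sequences asks for several unique takes on one prompt.
takes = generator(
    prompt,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.9,
    num_return_sequences=3,
)
for i, take in enumerate(takes, start=1):
    print(f"--- take {i} ---")
    print(take["generated_text"])
```

Each run returns fresh drafts; a person only has to pick one and polish it.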
The Guardian, in fact, recently did just that: Editors had the AI write eight essays about being an AI. Then they stitched them together into what has been widely touted as the first machine-authored op-ed. The cheeky headline—“A robot wrote this entire article. Are you scared yet, human?”—raised the prospect that self-aware computers will create mischief. But the more pressing question is how humans will adapt to a technology that enables anyone with access to push out content, undetectably, quickly, and cheaply. With minimal effort, GPT-3 can be guided to write in a range of styles: In a recent study, the Middlebury Institute of International Studies researchers Kris McGuffie and Alex Newhouse found that it could be prompted to generate plausible pro-Nazi posts, reproduce the writing style of mass-shooter manifestos, and answer questions like a QAnon disciple. The developers of GPT-3 understand the potential for abuse and have limited the number of people with access, though hostile countries will likely develop copycat versions soon enough.
In the past, propaganda needed human hands to write it. Eager to create the illusion of popularity, authorities in China began hiring people in 2004 to flood online spaces with pro-government comments. By 2016, members of the “50-cent party”—so named for the amount they were said to be paid per post—were putting up an estimated 450 million social-media comments a year. Similar comment armies, troll factories, and fake-news shops in the Philippines, Poland, Russia, and elsewhere have attempted to manipulate public opinion by flooding online spaces with fake posts. One 2018 “opinion-rigging” operation in South Korea, spearheaded by a popular blogger, used a combination of human commenters and an automated program to post and boost comments critical of a particular politician. Seoul police noted the volume of two days of activity: “They manipulated about 20,000 comments on 675 news articles, using 2,290 different IDs from January 17 to 18.” In the quaint early days of social-media manipulation, such efforts were limited by human constraints. That will soon no longer be the case.
Writing tweets, comments, and entire articles for a fake media outlet is time-consuming. The GRU agents who ran “Alice Donovan” and the imaginary ISMC team got sloppy. They plagiarized others’ writing and recycled their own; the stolen profile photos cemented investigators’ conviction that they were fake. In many other influence operations, the need to produce high volumes of text content means that researchers regularly observe repetitive phrasing from manipulative accounts. Advances in AI-generated content will eliminate those tells. In time, operators far less sophisticated than the Russian government will have the ability to robo-generate fake tweets or op-eds. The consequences could be significant. In countries around the world, coordinated propaganda campaigns in print as well as social media have sown social unrest, pushed down vaccination rates, and even promoted ethnic violence. Now imagine what happens when the sources of such postings are untraceable and the supply is essentially infinite.
Our information ecosystem is trending toward unreality. Of course, society has managed to adapt to technology that alters humans’ perception of reality before: The introduction of Adobe Photoshop in 1990 popularized the ability to edit pictures of real people. Computer-generated imagery (CGI) offered another leap forward; artists and moviemakers use computers to design life forms and even entire worlds from whole cloth. Today, Snapchat and Instagram filters that put canine features on human faces have made altering selfies not just effortless but socially desirable.
Whether these digital alterations alarm people depends on the context in which they’re experienced. By now, readers of celebrity or fashion magazines have come to assume that photos in them are digitally airbrushed. Movie viewers intent on being entertained do not feel misled by special effects. But in other domains, the discovery that a video or photo has been edited is a scandal—it’s manipulation. Americans who read an article in an online newspaper or a comment on an internet message board today might fairly assume that it’s written by a real person. That assumption won’t hold in the future, and this has significant implications for how we parse information and think about online identity.
“On the Internet,” declared a 1993 New Yorker cartoon, “nobody knows you’re a dog.” In authoritarian countries where the government routinely cranks out propaganda and manipulates the discourse with so-called sock-puppet accounts, the public reacts with weary resignation: Identifying what’s authentic, what’s true, often requires significant effort. The impact that pervasive unreality within the information space will have on liberal democracies is unclear. If, or when, the flooding of the discourse comes to pass, our trust in what we read and who we are speaking with online is likely to decrease. This has already begun to happen: As awareness of deepfake videos, automated trolling, and other manipulative tactics has increased, internet users have developed a new vocabulary with which to try to discredit their critics and ideological opponents. Some supporters of Donald Trump have speculated, against all evidence, that his comments on the infamous Access Hollywood tape were digitally generated. Twitter users regularly accuse each other of being bots.
The rise of generative text will deepen those suspicions and change the information environment in other ways. In the media, editors will find themselves exercising extra vigilance to avoid publishing synthesized op-eds by future algorithmic Alice Donovans and Sophia Mangals. Major internet companies will work to make detection of generated content as fast and effective as possible. Still, as the detection technology grows in sophistication, so too will tools that generate images, videos, and text even more seamlessly.
Amid the arms race surrounding AI-generated content, users and internet companies will give up on trying to judge authenticity tweet by tweet and article by article. Instead, the identity of the account attached to the comment, or person attached to the byline, will become a critical signal toward gauging legitimacy. Many users will want to know that what they’re reading or seeing is tied to a real person—not an AI-generated persona. Already, the internet has divided into more and less sanitized spaces: Sites such as 4chan exist for internet users who want unfettered anonymous forums, but the majority of internet users prefer more moderated platforms. The proliferation of machine-generated messaging will enhance the appeal of internet communities in which all participants must validate their identity, or at least their physical existence, in some way. Political debate may migrate to entirely new speech platforms—or carved-out sections of existing platforms such as Twitter or Facebook—that prioritize the postings of users with verified identities or validated pseudonyms.
Then again, these adjustments could put even more power in the hands of internet platforms that many Americans believe already have too much influence over how information circulates in the United States. When Twitter confers a blue check mark on a public figure’s profile, the company is officially saying only that it has verified who owns the account. But the user community often views those decisions as an endorsement. And when the company took check marks off the accounts of several far-right and white-supremacist leaders in 2017, it was, in some sense, establishing bounds for respectable debate.
The idea that a verified identity should be a precondition for contributing to public discourse is dystopian in its own way. Since the dawn of the nation, Americans have valued anonymous and pseudonymous speech: Alexander Hamilton, James Madison, and John Jay used the pen name Publius when they wrote the Federalist Papers, which laid out founding principles of American government. Whistleblowers and other insiders have published anonymous statements in the interest of informing the public. Figures as varied as the statistics guru Nate Silver (“Poblano”) and Senator Mitt Romney (“Pierre Delecto”) have used pseudonyms while discussing political matters on the internet. The goal shouldn’t be to end anonymity online, but merely to reserve the public square for people who exist—not for artificially intelligent propaganda generators.
Without even existing, Alice Donovan and Sophia Mangal could become harbingers of the future. To quote GPT-3, the dilemma is this:
In this future, AI-generated content will continue to become more sophisticated, and it will be increasingly difficult to differentiate it from the content that is created by humans … In the meantime, we’ll need to keep our guard up as we take in information, and learn to evaluate the trustworthiness of the sources we’re using. We will continue to have to figure out how to believe, and what to believe. In a future where machines are increasingly creating our content, we’ll have to figure out how to trust.
