A Blog by Jonathan Low


Sep 11, 2019

As 2020 Election Nears, Twitter Bots Getting Better At Seeming Human

They are learning and co-evolving. JL

Patrick Kulp reports in Ad Week:

The posting habits of bots, which overwhelmingly support Donald Trump, grew more similar to those of their human counterparts as bots learned to rein in high-volume spam, better engage with replies and conduct polls. "There is an arms race between bots and detection algorithms.” In 2016, a bot was more likely to share identical text multiple times, while in 2018, the text was tweeted one time by multiple accounts. "Distributed activity of bots can elicit the illusion of a consensus and, possibly, avoid detection." Bots tended to reply to other posts with more sentiment (and) shifted sharing from retweeting to promoting Twitter polls of issues or candidates.
Politics-focused Twitter bots that warped news coverage and online discussions of the 2016 election have only grown more sophisticated in the few years since, according to a new study from computer scientists at the University of Southern California.
The researchers first identified about 250,000 Twitter accounts that posted political keywords during both the 2016 general election and 2018 midterm elections. Of those accounts that had been tweeting during both election cycles, 30,000 were determined to be probable bot accounts.
The USC team found that the posting habits of these 30,000 bots —which overwhelmingly supported Donald Trump—grew more similar to those of their human counterparts during the latter election as bots learned to rein in high-volume spam, better engage with replies and even conduct polls.
“Our study further corroborates this idea that there is an arms race between bots and detection algorithms,” said USC computer science professor Emilio Ferrara, the lead author on the study. “We need to devote more efforts to understand how bots evolve and how more sophisticated ones can be detected. With the upcoming 2020 U.S. elections, the integrity of social media discourse is of paramount importance to allow a democratic process free of external influences.”
The threat of social media bots manipulating public discourse with misinformation first gained mainstream recognition in the wake of the 2016 election, when various reports revealed the extent of efforts by the Russian government and other organizations to shape its outcome. But at the time, strict-definition bots—as opposed to paid social media trolls à la Russia’s infamous Internet Research Agency—remained fairly rudimentary, pumping out torrents of repetitive text that better served to elevate the prominence of certain topics or links than actually engage with humans.
That’s been changing in the years since, according to USC’s analysis. Whereas the difference in tweet frequency between bots and humans was stark in 2016, the gap had narrowed in 2018. So too had bots reduced their propensity for excessive retweeting.
Bots have also shifted how they coordinate messages; in 2016, a bot was more likely to share identical text multiple times, while in 2018, the text was more likely to be tweeted one time by multiple accounts.
“We hypothesize that the distributed activity of bots can be a strategy to elicit the illusion of a consensus and, possibly, to avoid detection,” the authors write.
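To make the distinction concrete, here is a rough sketch (not from the study; the field names and toy data are invented for illustration) of how one might surface the two patterns: the same text repeated by a single account, as in 2016, versus the same text posted once each by many accounts, as in 2018.

```python
# Illustrative sketch only: group tweets by normalized text and compare
# how many posts versus how many distinct accounts each text has.
from collections import defaultdict

tweets = [
    {"account": "user_a", "text": "RT if you agree: voter ID now"},
    {"account": "user_b", "text": "RT if you agree: voter ID now"},
    {"account": "user_c", "text": "RT if you agree: voter ID now"},
    {"account": "user_d", "text": "Election day is coming!"},
    {"account": "user_d", "text": "Election day is coming!"},
]

# Collect the accounts that posted each distinct piece of text.
accounts_by_text = defaultdict(list)
for tweet in tweets:
    accounts_by_text[tweet["text"].strip().lower()].append(tweet["account"])

for text, accounts in accounts_by_text.items():
    distinct = len(set(accounts))
    if distinct > 1:
        label = "distributed across accounts (2018-style)"
    elif len(accounts) > 1:
        label = "repeated by one account (2016-style)"
    else:
        label = "posted once"
    print(f"{label}: {text!r} ({len(accounts)} posts, {distinct} accounts)")
```

The distributed pattern is harder to flag with per-account heuristics alone, which is consistent with the authors' hypothesis that it helps bots evade detection.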
The bots also tended to reply to other posts with more positive or negative sentiment and less neutral language in 2018 versus 2016, although human users did too, to a lesser extent.
Automated accounts also shifted their sharing tactics from retweeting pro-Trump or anti-Clinton posts—as they largely had in 2016—to focusing on promoting Twitter polls of issues or candidates. For example, the accounts shared other users’ polls with questions like “RT if you agree: We need ICE agents at every polling station during elections” or “Should voters in federal elections be required to show ID at the polls?”
The authors theorized that this behavior was an attempt to influence perception by gaming the number of respondents, such as when the shared polls included the phrase “Please vote and retweet for bigger sample size.” They also found evidence of human-bot coordination, wherein a human user might create a poll, then rely on bots for promotion.
“Although poll-tweets seem harmless and aimed only at surveying human opinion, their turnout might impact the human perception on the polled issues,” the researchers wrote.
The authors used a machine learning tool from Indiana University called Botometer to assess the probability that a given user was a bot. The program generates a likelihood score that accounts for characteristics like follower bases, tweet frequency, language and sentiment. They also used a straightforward set of election keywords to separate bots of a political nature, including terms like “election” or “debate,” candidate names, and popular hashtags like “#NeverTrump” and “#ImWithHer.”
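As a minimal sketch of that two-step filter, the snippet below assumes a precomputed bot-likelihood score per account; the keyword list beyond the terms quoted above, the 0.5 cutoff, and the record layout are illustrative assumptions, not the study's actual parameters.

```python
# Hedged sketch: keep accounts whose tweets contain election-related
# keywords, then flag those whose bot-likelihood score crosses a threshold.
# Keywords beyond those named in the article, the 0.5 cutoff, and the data
# layout are assumptions for illustration.
ELECTION_KEYWORDS = {
    "election", "debate", "trump", "clinton", "#nevertrump", "#imwithher",
}

accounts = [
    {"handle": "@example_one", "bot_score": 0.91,
     "tweets": ["The debate tonight was wild #NeverTrump"]},
    {"handle": "@example_two", "bot_score": 0.12,
     "tweets": ["Just posted a photo of my lunch"]},
]

def is_political(account):
    """True if any of the account's tweets contains an election keyword."""
    for text in account["tweets"]:
        if set(text.lower().split()) & ELECTION_KEYWORDS:
            return True
    return False

probable_political_bots = [
    a["handle"]
    for a in accounts
    if is_political(a) and a["bot_score"] >= 0.5  # assumed cutoff
]
print(probable_political_bots)  # -> ['@example_one']
```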
