A Blog by Jonathan Low

 

Nov 1, 2020

Facebook And YouTube Have Fixed Some Things Since 2016. But Not Much

The fundamental problem remains that the very essence of their algorithms and business models rewards the extreme, violent, and hateful behavior they piously declare they are attempting to manage. JL

Will Oremus reports in OneZero:

Without changing the dominant platforms’ scale, the incentives created by their engagement-based algorithms, or the format of feeds that give equal weight to reliable and unreliable sources, the battle against online misinformation, conspiracies, and extremism will be uphill. By and large, however, Facebook and YouTube have resisted any suggestion that their misinformation problems may be endemic to their core products. And no wonder: For all the controversies, backlashes, fines, and regulatory posturing, they’re more profitable than ever.

In 2016, everyone from Russian agents to British political consultants to Macedonian teenagers to randos in the American suburbs used social platforms with impunity to spread misinformation ahead of the U.S. presidential election. Much of it favored Trump, who pulled off a stunning upset victory.

Just how much influence those campaigns had on the outcome has never been established, and probably never will be. There is at least some research to suggest that Russian trolls, for instance, are not particularly effective at changing Americans’ political opinions. Still, it was widely agreed in the aftermath (though not by Trump) that the platforms had been far too easily exploited for the purpose of election interference, and that their dynamics had favored hoaxes and hyperpartisanship over reliable information sources.

On the eve of the next U.S. presidential election, it is fair to say that the platforms have come a long way in acknowledging and taking steps to address the most blatant of those abuses. What isn’t yet clear is how much of a difference those steps will make.

The Pattern

Social media companies are fully prepared for this moment… right?

  • It’s almost hard to believe now, but the last time the United States elected a president, Facebook, Twitter, and YouTube had essentially no policies on misinformation, conspiracy theories, or foreign election interference. Now, all the major platforms have such policies — but they are constantly evolving, inconsistently applied, and often hotly contested, both on the national stage and within the companies themselves. Together they amount to a convoluted patchwork in which the same post, ad, or user might be banned on one platform, slapped with a warning label on another, and fully permitted on a third.
  • When it comes to political speech, social platforms have gone from being essentially anarchies to something more like a Wild West. There are laws now, and there are sheriffs, but outlaws and banditry still abound. Perhaps the most effective changes, relatively speaking, have been the companies’ approaches to foreign interference campaigns. Facebook and Twitter in particular now proactively investigate and take down networks of accounts linked to state-sponsored influence operations, often in cooperation with federal authorities.
  • Domestic misinformation continues to flourish on social media, though the platforms have managed to limit specific categories of lies and hoaxes. Facebook is the only one to implement broad policies against false and misleading news stories, and it partners with fact-checkers to label them. But there remain gaping holes in that policy and its enforcement. Twitter still has no blanket policy against misinformation — perhaps understandably, given the nature of its platform — but has enacted rules against specific categories of false claims that it considers particularly harmful, such as those concerning Covid-19, voting, and the integrity of elections. (Facebook has also created more stringent rules against these varieties of misinformation.) Both companies have begun to apply some of their policies even to public figures, though they differ in the specifics of what they take down, what they label, and what they let stand. And they’re taking heavy flak, understandably, for what looks to critics like selective enforcement.
  • YouTube, meanwhile, has taken the most laissez-faire approach, seemingly content to let its rivals take the heat for failing to enforce their policies while implementing few of its own (until recently, at least). This despite YouTube being a major hub for conspiracy theories in particular. Even TikTok, allegedly such a threat to American democracy that Trump is still trying to ban it, has rolled out election misinformation policies that go further than YouTube’s in some respects.
  • Speaking of conspiracy theories, the leading platforms have now cracked down on QAnon groups and accounts — but only after it gained a massive nationwide following and sowed widespread mistrust of democratic institutions, not to mention incidents of real-world violence. While Twitter took aggressive action starting in July, Facebook and YouTube waited until October, ensuring that QAnon would remain an influential force in the November election. Still, early research suggests the crackdowns have had a significant effect, belated though they may be.
  • Conspiracy theories pose a legitimately thorny problem for social platforms because enabling nontraditional information sources to challenge official narratives is one of their core value propositions, especially in authoritarian contexts. But the approach of isolating and banning particularly influential conspiracies that have been widely debunked by a free press and civil society, such as QAnon, seems to hold at least some promise, in the short term.
  • Political ads are one realm in which the platforms have made significant progress, both because they’re easier to control and because Congress forced their hand on disclosures and transparency measures. Twitter last year announced a full ban on political ads, including issue ads — a policy I critiqued at the time — while Facebook continued to allow political ads and plays a major role in campaigns’ digital strategies. Facebook did ban new political ads starting this week — a policy whose implementation came with glitches that mistakenly disallowed numerous preapproved ads. It has also announced a temporary ban on all political advertising beginning the day after the election, as a bulwark against efforts to fuel civil unrest about the election results. Here too, YouTube has been the most hands-off, which is why the Trump campaign is making it a centerpiece of its final ad push.
  • At least one expert with firsthand experience believes all the changes add up to a substantially healthier online information environment in 2020. Alex Stamos was Facebook’s chief security officer for the 2016 election and its aftermath, and left the company in 2018 amid reports that he was frustrated with its handling of the concerns he and his team had raised about foreign interference. He is now director of the Stanford Internet Observatory. “I think Facebook and Twitter have reacted well to 2016, especially to potential foreign influence,” Stamos told me. “They are trying to react to how to best respond to the fact that the majority of disinformation is domestic, and are bumping up against the limits of consensus of what their roles should be there.” He did not have the same praise for YouTube, which he said “still has the least comprehensive policies of the big three and is, by far, the least transparent.”
  • Still, the platforms’ misinformation problems are far from solved, he said. Some of the same issues that plagued Facebook in the 2016 U.S. election have migrated to smaller social networks, and many persist in a virulent form in countries around the world, including on Facebook. “Facebook in particular needs to figure out a more sustainable model that can be applied globally,” he said. Finally, tech companies will need to build on the targeted misinformation policies they developed for the 2020 U.S. election to tackle future risks, such as disinformation about Covid vaccines.
  • But even Stamos’ careful optimism about the platforms’ election preparedness isn’t shared by some of the other experts I’ve spoken with. Dipayan Ghosh, who has worked both as a technology policy adviser in the Obama administration and on security and privacy at Facebook, is now director of the Platform Accountability Project at Harvard’s Kennedy School. “Four years on, it’s hard to substantiate a claim that the big tech companies are doing any better than they did in 2016,” Ghosh told me.
  • “On one hand, they are catching more policy-offending content including disinformation in absolute terms,” Ghosh went on. “On the other, the number of bad actors online has exploded — and we all knew that it would. As such, we’re still seeing high rates of disinformation, which may in the end have an electoral impact. Meanwhile, companies like Facebook have left the door open for politicians to openly lie and spread misinformation because of a false commitment to free speech — and some politicians are taking advantage. Overall, as such, we continue to see a noisy digital media environment that could well mislead many voters.”
  • Ghosh’s comments hint at what I view as the hard nut at the core of the problem: the fundamental structure of social media. Over the past four years, the major social platforms have reluctantly acknowledged that they have a role to play in preventing blatant abuse and exploitation of their platforms by obviously bad-faith actors, and they’ve taken real steps toward addressing that. Halting, often confusing, and in many ways unsatisfying steps, but real steps nonetheless. (Again, this is less true of YouTube, which has consistently done the minimum it can get away with and has been years behind Facebook and Twitter in taking responsibility for its platform.) But reining in the most obvious and clear-cut abuses does very little to change the overall impact of social media on political discourse.
  • Social media, in the broadest sense, has both democratized public speech (in the sense of giving a megaphone with potentially global reach to people who didn’t previously have one) and systematically amplified the speech that best plays to each individual user’s biases and emotions. There are tremendous upsides to this — the old media regime unjustly marginalized many voices, including those of minorities and social movements — and there are vast, scary downsides that we’re still just starting to reckon with. (Stamos, for his part, argues that these sorts of vague concerns about the societal downsides of algorithmic feeds are hard for platforms to address in the absence of clear, empirical evidence to back them.) Without changing the dominant platforms’ ungovernable scale, the incentives created by their engagement-based algorithms, or the format of feeds that give equal weight to reliable and unreliable sources, the battle against online misinformation, conspiracies, and extremism will be forever uphill.
  • To put it in tech company terms, political misinformation on social media isn’t just a policy and enforcement problem. It’s a product problem. Twitter has at least begun to acknowledge that, with its project to promote “healthy conversations” via product tweaks to disincentivize dunking, for instance. But any given five minutes using the app should be more than enough to see how far that effort has gotten it. And Facebook may be just beginning to look at product changes, as evidenced by its suspension of political group recommendations this week and Instagram’s temporary removal of its “recent” tab. By and large, however, Facebook and YouTube have resisted any suggestion that their misinformation problems may be endemic to their core products. And no wonder: For all the controversies, backlashes, fines, and regulatory posturing, they’re more profitable than ever.
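To make the "product problem" concrete, here is a minimal toy sketch of an engagement-based feed ranker. Everything in it (the names, the scores, the scoring rule) is invented for illustration and does not reflect any platform's actual code. The point is simply that when the ranking score is predicted engagement alone, a reliable report and a viral hoax compete on the same axis, and nothing in the system penalizes unreliability.

```python
# Toy sketch of an engagement-based feed ranker. Illustrative only:
# all names and numbers are invented, not any platform's real system.
from dataclasses import dataclass


@dataclass
class Post:
    source: str
    reliable: bool               # known to editors, but never consulted below
    predicted_engagement: float  # e.g., modeled clicks, shares, watch time


def rank_feed(posts: list[Post]) -> list[Post]:
    # The score is predicted engagement alone: source reliability
    # carries zero weight in the ordering of the feed.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


feed = rank_feed([
    Post("local newspaper report", reliable=True, predicted_engagement=0.3),
    Post("viral conspiracy post", reliable=False, predicted_engagement=0.9),
])
print([p.source for p in feed])
# ['viral conspiracy post', 'local newspaper report']
```

Under this scoring rule the conspiracy post always ranks first, which is why policy enforcement layered on top of such a ranker is fighting the product itself.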
