A Blog by Jonathan Low


Mar 5, 2020

If Tech Firms Can Manage COVID-19 Misinformation, Why Not Other Types?

Their priority is avoiding angering political leaders and their followers while supporting their engagement-oriented business model.

And concerted attempts to politicize the coronavirus pandemic may yet cause tech to back off editing false claims and fraudulent products. JL

Craig Timberg and colleagues report in the Washington Post:

While tech companies often express reluctance to act as “arbiters of truth,” their response to the coronavirus outbreak makes clear they are willing to arbitrate some truths — so long as they are seen as uncontroversial and not politically charged. “This is the first social media pandemic. A social media environment is not designed to produce accurate claims. It’s designed to keep you on the site.”
As misinformation about the coronavirus has spread online, YouTube has steered its viewers to credible news reports. Facebook has swept away some posts about phony cures. And Amazon has removed 1 million products related to dubious health claims.
These efforts have drawn praise from misinformation experts, who long have complained that tech companies should do more to confront misleading claims about other subjects, such as the Holocaust and fake cancer cures.
But this praise has come with a caveat: If tech companies can move to promote truth on a fast-moving public-health crisis, why do they struggle to do the same on other important issues?

Such questions have been particularly pointed when it comes to how technology companies handle misleading political claims, which Facebook and other social media companies have been wary of policing, even when information is demonstrably false. While tech companies often express reluctance to act as “arbiters of truth,” their response to the coronavirus outbreak makes clear they are willing to arbitrate some truths — so long as they are seen as uncontroversial and not politically charged.
“This is what it looks like when they really decide to take a stand and do something,” said Danny Rogers, co-founder of the Global Disinformation Index, a research group. “They haven’t had the policy will to act [on political misinformation]. Once they act, they can clearly be a force for good.”
Despite the work of tech companies, falsehoods about the new coronavirus have burgeoned along with the outbreak itself, drawing together long-running conspiracy narratives about the contrails of jetliners, the rollout of 5G wireless technology and the supposed development of secret bioweapons.
Much of this has been accompanied by ads making unproved claims about the value of nutritional supplements, protective masks or supposed medicines in combating the outbreak. Other claims have appeared online with ads for survivalist gear and durable packaged food, or pitches for investors to stock up on gold and other precious metals as a hedge against a potentially devalued dollar.

“This is the first social media pandemic, if you will,” said Carl Bergstrom, a University of Washington biology professor who studies disinformation. “A social media environment is not designed to produce accurate claims. It’s designed to keep you on the site.”
The surge of news coverage in recent days also has sparked complaints from President Trump’s political allies that fears about the new coronavirus are overblown and being intentionally hyped by his opponents — adding a dose of unproved political spin to what previously had been nonpolitical public conversation about the outbreak.

While technology companies permit many dubious claims to appear on their platforms, they have sought to check the spread of particularly egregious misinformation about the outbreak — such as claims that a certain medicine or nutritional supplement can cure coronavirus — and to direct users to authoritative information sources.

This has been especially visible on YouTube, which for years has been criticized for its role in the spread of conspiracy theories, hateful ideologies and propaganda. Major reforms announced last year, however, curbed the platform’s propensity to direct users to such content through its recommendation algorithm, said longtime critic Guillaume Chaslot, a former YouTube engineer who founded the watchdog group AlgoTransparency.
The result is that searches for “coronavirus” prompt lists of mainstream news reports, and YouTube’s autoplay is offering videos from authoritative sources — and not conspiracy theories about the outbreak — according to tracking by AlgoTransparency.
Chaslot said that YouTube still hosts numerous videos featuring conspiracy theories, including about the coronavirus, but that they are less likely to be recommended to users.

“They’ve made a lot of improvements, but it’s still far from being enough,” Chaslot said.
YouTube videos about coronavirus also carry a label below saying, “Get the latest information from the World Health Organization about coronavirus.” The words link to the WHO’s information page on the outbreak.
“We have clear policies that prohibit videos promoting medically unsubstantiated methods to prevent the coronavirus in place of seeking medical treatment, and we quickly remove videos violating these policies when flagged to us,” said Farshad Shadloo, a YouTube spokesman.
Facebook users who search for “coronavirus” receive suggestions that they visit the Centers for Disease Control and Prevention site to “help you stay healthy and help prevent the spread of the virus,” with a link to a CDC Web page on the outbreak. Twitter takes a similar approach, with a box titled “Know the facts.”

Reddit imposed a “quarantine” on a coronavirus group that hosted misinformation — warning users before they enter — and, early in the outbreak, posted a banner to direct users to “r/AskScience,” a community featuring accurate answers to medical questions, including about the coronavirus.
Facebook, meanwhile, announced that it would take down posts that promoted bogus cures for the coronavirus, going a step beyond its policy for addressing alleged cures to most other diseases, such as cancer and AIDS, which are permitted on the site.
The responses from the tech companies build on earlier efforts to battle misinformation about vaccines. Facebook’s information box that pops up when users search for “coronavirus” — saying “Looking for coronavirus info?” — is almost identical to the one that appears for users searching for “vaccine.” (Both lead to CDC information pages.)

Claire Wardle, the U.S. director of First Draft, a nonprofit group that combats misinformation, said the companies have been more aggressive on targeting false claims about the coronavirus because the outbreak has largely lacked political controversy.
The tech companies also are able to direct users searching for information to widely acknowledged international authorities, such as the WHO and the CDC. There are no equivalents for many political claims in today’s polarized world, where nearly every utterance has its detractors and defenders — and companies that appear to favor one part of the ideological spectrum often come under attack from the other side.
“It’s this question of consensus,” Wardle said. “There aren’t two sides to health stuff.”
Facebook has come under the most sustained criticism for its handling of political misinformation, especially since the company announced in September that it would not subject statements or ads by politicians to the scrutiny of its independent fact-checking teams.

Chief executive Mark Zuckerberg later outlined his logic in a speech at Georgetown University in which he argued that democracies are best served by giving political speech an especially wide berth so that truths can emerge through robust debate.
But Democrats, backed by numerous misinformation researchers, have criticized the decision as serving Facebook’s business priorities by encouraging engagement on the platform and avoiding alienating politically powerful constituencies. This includes followers of President Trump, whose remarks have been riddled with well-documented falsehoods, exaggerations and misstatements.
“Facebook is committed to supporting the global health community’s work by limiting the spread of coronavirus misinformation and connecting people to authoritative and helpful information about the virus and its prevention,” said company spokesman Andy Stone.

Amazon said it has blocked or removed 1 million products sold by third-party merchants on its retail site for suspect or misleading claims about the coronavirus. The company warned at least one seller of surgical face masks that it would remove its listing for making false medical claims, according to a CNBC report. (The Washington Post is owned by Amazon chief executive Jeff Bezos.)
“Amazon has always required sellers to provide accurate information on product detail pages and we remove those that violate our policies,” said company spokeswoman Cecilia Fan.
To counter misinformation, Amazon this month began showing shoppers who search for the term “coronavirus” a message at the top of the results to “Learn more about Coronavirus protective measures” that links to the CDC website. Typically, that spot in Amazon’s search results features advertiser-sponsored products.
Amazon also no longer allows merchants to bid on “coronavirus” as a keyword in search results, or on related terms such as “covid-19,” the name of the illness caused by the coronavirus.
“They are always going to try to avoid controversy when it comes to advertising,” said John Ghiorso, the chief executive of the Amazon-focused ad agency and consulting firm Orca Pacific.
Still, some sponsored links have slipped through. Three survival books showed up as sponsored links this week on searches for “covid19,” as did disinfectant products on searches for “kill coronavirus.”