A Blog by Jonathan Low

 

Mar 22, 2022

The Best Metric For Measuring Misinformation Is Harm

And there is now plenty of data to analyze. JL 

Tom Siegel reports in Fortune (image: American Psychological Association):

Instead of attempting to identify whether something is true or not, which inevitably leads to debate, the real measure is the severity of harm it can cause to a person times the number of people affected. Velocity of information dispersion means enforcement systems can't intervene in time. Technology can help us understand how we collectively view a claim by analyzing what people on the internet think about it, what authoritative sources say about it, and the history of how it spread. This understanding, with an emphasis on the harm caused, can be a powerful way forward to clarify the murky misinformation landscape.

It’s easy to think that everyone knows what “misinformation” means: “False information that is spread, regardless of whether there is intent to mislead.” It’s also easy to underestimate its importance even though it guides people’s choices, behaviors, and actions.

 

When it comes to misinformation, we need a better yardstick–and that’s harm. Instead of attempting to identify whether something is true or not, which inevitably leads to debate, the real measure is the severity of harm it can cause to a person times the number of people affected. The most potentially harmful misinformation matters much less if it's not seen by many people.
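To make that concrete (an illustrative sketch, not a formula from the article; the 1-to-5 severity scale and the example numbers are hypothetical), the yardstick can be written as a simple score: severity multiplied by reach.

```python
# Hypothetical sketch of the harm yardstick described above:
# harm = severity of the claim x number of people who saw it.
# The 1-5 severity scale and the example figures are illustrative only.

def harm_score(severity: float, people_reached: int) -> float:
    """Estimate harm as severity multiplied by reach."""
    return severity * people_reached

# A dangerous claim seen by almost no one...
niche_claim = harm_score(severity=5.0, people_reached=200)        # 1,000

# ...can matter less than a milder claim that goes viral.
viral_claim = harm_score(severity=2.0, people_reached=3_000_000)  # 6,000,000

print(niche_claim < viral_claim)  # True
```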

For starters, it helps to look at why there’s so much talk about misinformation. Misinformation, meaning harmful false information, is everywhere. Social media makes it easy to spread quickly around the globe at virtually no cost, and virality multiplies the harm because it reaches so many more people.

Sensational and shocking content tends to garner more attention and gets shared more widely. This puts misinformation at an advantage over the often less sensational truth, which gets pushed aside. The resulting velocity of information dispersion means enforcement systems can't intervene in time.

The illusion of truth

In the old days, there was maybe one person in a village who believed in conspiracy theories. Other sympathizers likely lived far away, limiting everyone’s ability to know each other or be in contact. People were more isolated and unable to spread the message.

 

With social media, it's easier to find like-minded people, and they can hype each other up. This creates the illusion of much larger communities, all with the feeling that they are involved in the same movement. Those conversations often exist in bubbles where no discourse over merit happens and alternative viewpoints get no room or attention. Echo chambers, caused by algorithmic amplification and filter bubbles, enhance the effect and push people's thinking to the extreme.

There are no substantive consequences for people who spread misinformation: no reputational harm, no real penalty, and almost no cost. This makes it harder and harder for people to sort truths from falsehoods and establish a sound basis for their judgments and decisions. It also makes it critical for social media platforms to figure out ways to identify, measure, and isolate misinformation.

Who’s the arbiter?

Who gets to decide what is misinformation and what is not? Telling truth from fiction is often hard and depends on the viewpoint of the observer. Conclusive facts are not always available. It’s important to understand the various viewpoints and credibility of the sources, but who has time for that?

What constitutes misinformation is almost always extremely nuanced–yet claims quickly get politicized or rolled into policy decisions, beliefs, and actions without balanced assessment and thoughtful consideration.

At the beginning of the coronavirus pandemic, there were dissenting opinions about whether the virus was spread via surface contact or whether it was airborne. The World Health Organization advised that handwashing, not masks, was the primary means of prevention. Minority voices in the medical community argued that the virus was airborne.  We now know that stifling those dissenting voices could have increased contagion.

This is just one example of how difficult it is to read the misinformation barometer, and how quickly people can jump to ill-advised conclusions.

Technology can predict harm

Technology can help us understand how we collectively view a claim by analyzing what people on the internet think about it, what authoritative sources say about it, and the history of how it spread.

This understanding, in combination with an emphasis on the harm caused, can be a powerful way forward to clarify the murky misinformation landscape, enact positive change, and make the internet a safer place for all of us.
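As a rough picture of how such signals might be combined (a hypothetical sketch, not the author's or any platform's actual system; the signal names, equal weights, and 0-to-1 scales are assumptions for illustration):

```python
from dataclasses import dataclass

@dataclass
class ClaimSignals:
    """Illustrative inputs, each normalized to 0..1 (hypothetical)."""
    crowd_disbelief: float          # how strongly internet discussion disputes the claim
    authority_contradiction: float  # how strongly authoritative sources contradict it
    spread_velocity: float          # how fast the claim is currently spreading

def misinformation_risk(s: ClaimSignals) -> float:
    """Blend the three signal families into a rough 0..1 risk estimate.
    Equal weights are placeholders; a real system would learn them from data."""
    return (s.crowd_disbelief + s.authority_contradiction + s.spread_velocity) / 3

def estimated_harm(s: ClaimSignals, severity: float, people_reached: int) -> float:
    """Weight the severity-times-reach yardstick by how likely the claim is misinformation."""
    return misinformation_risk(s) * severity * people_reached

# Example: a claim disputed by the crowd and by authoritative sources, spreading fast.
print(estimated_harm(ClaimSignals(0.8, 0.9, 0.7), severity=4.0, people_reached=500_000))
```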

Social media and messaging platforms can’t do it alone, but with the right expertise and technology they are in the driver’s seat to identify the potential for harm, isolate it, and act to prevent it–provided they:

  • Prioritize misinformation detection based on the harm it causes (a minimal sketch of this follows the list).
  • Measure the extent of the problem with reliable, data-science-grade metrics and have an honest conversation about where approaches to keep users safe fall short.
  • Leverage data and signals from around the web and many sources to know what content to trust; fact-checkers are too few, too slow, and sometimes too biased to solve the problem alone.
  • Empower users with more knowledge to decide for themselves, while maintaining easy access.
  • Monitor and manage the harmful effects of algorithmic amplification caused by AI-powered recommendations and information feeds.
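On the first bullet, harm-based prioritization could look something like the sketch below (names and fields are hypothetical): flagged claims are queued for human review in order of estimated harm rather than in order of arrival.

```python
from typing import NamedTuple

class FlaggedClaim(NamedTuple):
    claim_id: str
    severity: float        # illustrative 1-5 scale
    people_reached: int    # measured or projected reach

def review_queue(claims: list[FlaggedClaim]) -> list[FlaggedClaim]:
    """Order flagged claims so the highest estimated harm is reviewed first."""
    return sorted(claims, key=lambda c: c.severity * c.people_reached, reverse=True)

queue = review_queue([
    FlaggedClaim("a", severity=5.0, people_reached=200),
    FlaggedClaim("b", severity=2.0, people_reached=3_000_000),
])
print([c.claim_id for c in queue])  # ['b', 'a'] – the viral claim gets reviewed first
```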

Social messaging companies and other responsible stewards of information must do more and can do better. They should partner with third parties and use all available information to make well-informed decisions in identifying and helping prevent misinformation—and, better yet, start mitigating harm.
