A Blog by Jonathan Low

 

Mar 28, 2022

Cognitive Science Explains Why Propaganda Works

The truth is often complicated. False information can be framed and communicated in ways that make it engaging and easy to believe. Repetition reinforces simple messages.

And cognitive science reveals that the way the human brain works makes it highly susceptible to those messages. JL 

David Epstein reports in Range Widely (image: Wikimedia Commons):

The illusory truth effect is that repetition of statements leads to familiarity and also to feelings of truth. One of the big takeaways from research on misinformation is that, given the way our brains work, we're all vulnerable to these effects. With false information you can make it really engaging, catchy, easy to believe. The truth is often complicated, nuanced and much more complex. So it can be hard to come up with easy ways of describing complicated information in a way that makes it as easy to believe as false information.

This coda to a New York Times piece about the war in Ukraine caught my eye:

Mr. Kucher, the former independent TV host, said he was taken aback at how often the Kremlin talking points about fighting Nazis in Ukraine were echoed back to him in telephone conversations with former classmates.
“I was so stunned,” Mr. Kucher said. “I never would have thought that propaganda would have such an effect on people.”

Your Twitter timeline may be filled with profile pics featuring azure and yellow flags, showing support for Ukraine, but (to borrow a Washington Post headline) Putin is probably still “winning the information war that counts” — the one at home and in China.

Some of the Russian propaganda seems so transparent that it’s hard to believe it’s effective. But propaganda works. To learn a bit about why it works, and how to combat misinformation in general, I called Lisa Fazio, a Vanderbilt psychologist who studies misinformation. Below is an edited version of our conversation, which had some concrete takeaways for how I plan to think about and behave on social media.

This post is longer than usual (more like a 10-minute read than the usual 5-minute read) because I thought the conversation was interesting and important. I broke the chat up into sections, in case you want to pick one that seems interesting. The section in which Lisa says what she’d do as emperor of anti-misinformation is the last one. Let’s begin:

The “Illusory Truth Effect” — Repetition, Repetition, Repetition... Repetition...

David Epstein: Putin has been repeating this idea that he wants to “De-Nazify Ukraine,” which sounds random and ridiculous to a lot of the rest of the world. But apparently state-controlled media has been building up that idea in Russia for years now. As Novaya Gazeta, one of the last independent outlets in Russia, put it: “Russian television never tires of reminding about the Nazis.” I think this might be related to the “illusory truth effect” you’ve studied. Can you explain that?

Lisa Fazio: This is a term we use for the finding that when you hear something multiple times, you're more likely to believe that it's true. So, for example, in studies, say that you know that the short, pleated skirt that men wear in Scotland is called a “kilt,” but then you see something that says it’s a “sari.” You’re likely to think that’s definitely false. If you see it twice, most people still think it's false, but they give it a slightly higher likelihood of being true. The illusory truth effect is simply that repetition of these statements leads to familiarity and also to this feeling of truth.

DE: Is it possible those people just didn’t know the correct answer to begin with?

LF: We’ve studied that, and this is true even for people who answered the question correctly two weeks earlier. When you present the false statement twice, they’re still more likely to think that it’s true.

DE: This reminds me of a phrase I saw in your work: “knowledge neglect.” So these people know the right answer; are they just not thinking about the knowledge they actually have?

LF: Exactly. You can think of two main ways that we could determine the truth of the statement. One would be to actually consult our knowledge base — to think about everything else we know about the topic. And the other would just be to use this quick heuristic or "gut level" feeling of “Does this feel true?” And it's that kind of quick "gut level" feeling that's affected by things like repetition.

DE: So if you can get people to slow down and check with their prior knowledge, does that help?

LF: It does seem to help. We've done studies where we get people to pause and tell us how they know that the statement is true or false. And when people do that, they seem to be less likely to rely on repetition.

DE: Does this hold for more impactful statements than suggesting that a kilt is actually a sari?

LF: We tried some really bizarre, health-related claims that are false, like that women retain DNA from every man they’ve ever slept with. And with those, people were more likely to slow down and consider their existing knowledge. But it’s complicated, and plausibility doesn’t necessarily matter. So crazy statements like the Earth is a perfect square, or smoking prevents lung cancer — we still see some increase in how likely people are to think those are true when they’re repeated. It’s a smaller effect, but it’s still there.

DE: How do you measure the effect size?

LF: We have people rate the statement on a scale from “definitely false” to “definitely true,” so you see less movement with outlandish statements, but there’s still an effect.

DE: So if repeating something twice makes a difference, does repeating it 30 times make a massive difference, or does the effect diminish?

LF: We just published a study where we actually texted people different trivia statements. So they were just going about their daily lives and we would text them some statement. Later, when we asked them to rate the truth of these statements, some were new to them, some they may have seen two, four, eight, sixteen times. And we got this pretty logarithmic curve — those initial repetitions cause a larger increase in truth ratings than do later repetitions. But it’s still going up from eight to sixteen repetitions.
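As a rough illustration of that shape, here is a minimal Python sketch with invented numbers: the baseline rating and the per-doubling bump are assumptions for illustration only, not values from Fazio's study. The point is simply that each doubling of exposures adds roughly the same increment, so the earliest repetitions move ratings the most.

```python
import math

# Hypothetical illustration of a logarithmic dose-response curve:
# each doubling of the number of exposures adds roughly the same
# increment to the average truth rating. All numbers are invented.
BASELINE_RATING = 3.0     # assumed mean rating for a never-seen statement (1-6 scale)
BUMP_PER_DOUBLING = 0.25  # assumed gain in rating per doubling of exposures

for repetitions in [0, 1, 2, 4, 8, 16]:
    rating = BASELINE_RATING + BUMP_PER_DOUBLING * math.log2(repetitions + 1)
    print(f"{repetitions:>2} exposures -> predicted mean truth rating {rating:.2f}")
```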

DE: Does this hold for various levels of cognitive ability?

LF: We’ve seen the illusory truth effect from five-year-olds to Vanderbilt undergrads, and other adults. I think one of the big takeaways from all of the research we've done on misinformation is that we all like to believe this is something that only happens to other people. But, in reality, just given the way our brains work, we're all vulnerable to these effects.

DE: Right, I get that. It applies to pretty much everyone. But, I don’t know, maybe not to me. I mean, this is me we’re talking about.

LF: Exactly.

“Information Deficit Model,” and Effective Debunking — i.e. "Truth Sandwich"

DE: In your review paper, you and colleagues suggest that the traditional “information deficit model” — the idea that people just don’t have enough information, and will accept the truth if they get more info — isn’t really adequate. The idea that you just provide correct facts, and that’ll fix everything, isn’t borne out by the research. So what might effective debunking look like?

LF: One idea is what we call a “truth sandwich.” Facts are useful, but not enough to actually fix the issue. You have to address the false information directly. So in a truth sandwich, you start with true information, then discuss the false information and why it’s wrong — and who might have motivation for spreading it — and come back to the true information. It’s especially useful when people are deliberately misinforming the public. So for someone who has a false belief about climate change, you can pull back the curtain and say, “No, actually this is a narrative that's been pushed by these oil companies with these motivations for having you believe this. Here's why it's wrong and here is what's actually true."

DE: So if you just tell someone that something isn’t true, but don’t replace it with truth, does that not work because it just leaves an information vacuum? Like, you have to be Indiana Jones, swapping the idol for a bag of sand.

LF: Exactly. People have already created this causal story in their mind of how something happened. So in a lot of the experiments, there’s a story about how a warehouse fire happens. And initially people are provided with some evidence that it was arson — there were gas cans found on the scene of the crime. And then in one case you just tell people, "Oh, oops, sorry, that was wrong. There were no gas cans found there." Versus in another you give them an alternative story to replace it — that there weren't any gas cans at all; instead, it turns out that there was a faulty electrical switch that caused the fire. If you only tell people the gas cans weren't there, they still think it's arson. They just are like, "Oh, yeah. The gas cans weren't there, but it was still arson, of course." Whereas in the second story, they'll actually revise the story they had in mind and now remember it was actually accidental.

DE: So even when people are told that the evidence they based their judgment on is gone, the judgment stays behind anyway, unless it’s replaced?

LF: Exactly.

DE: Fascinating. That also seems like a difficult challenge because what if you just don’t know what caused the fire?

LF: Yeah, and with false information you can make it really engaging, really catchy, really easy to believe. And the truth is often complicated and nuanced and much more complex. So it can be really hard to come up with easy ways of describing complicated information in a way that makes it as easy to believe as the false information.

DE: And given the importance of repetition, does that mean you have to attempt to match disinformation repetition with debunking repetition?

LF: Yes. Unfortunately, memories fade, and current evidence is that debunkings fall into the same category as everything else. A week later, or a couple of weeks later, you’ve forgotten it. You might also forget the false information too, but if you keep seeing it again, then a one-time correction isn’t doing too much for you. A good example was the situation with the Sharpies in Arizona. If one time you read a debunking that it wasn’t actually evidence of election fraud, but then later you see 20 posts talking about it being fraud, that one correction doesn’t have much of a chance.

What Lisa Fazio Would Do Tomorrow As Emperor of Anti-Disinformation

DE: Ok so some of this is a little depressing. False information is readily available, catchy, and you just have to repeat it a lot, and make it hard to replace. You’re reminding me of a line I just read in a New York Review of Books piece in which author Emily Witt describes the internet as “structurally engineered to shove a bouquet of the dumbest arguments in human history in our faces several times a day.” So is there any hope for us?

LF: Yes, I think there is hope, but I do think it requires revamping some of the ways that information is currently spread. Right now, there are so many ways that the advantage is with people pushing disinformation. And there's not one simple fix to it, but I think it's the case where if we did a bunch of small fixes, we'd be in a much better place. So one simple thing social media companies can do is provide more scrutiny of larger accounts, and accounts with more followers. Around the 2020 election, the Election Integrity Partnership found that a small number of accounts were spreading most of the disinformation about the U.S. election.

DE: Well, this newsletter is published on Bulletin, a new platform from Facebook’s parent company, Meta, and I happen to know that some people who work there read this newsletter. So if I can put you on the spot: If you were named emperor of anti-disinformation tomorrow, what would be your first move?

LF: Ooh, I’ll give two. One is the bigger focus on large accounts. Right now it’s easier to get banned as a small account than as a large one, because banning a small account isn’t controversial. The idea should be that greater platforms come with greater responsibility. And if I’m YouTube or Twitter or Facebook, and I have someone I’m broadcasting to millions of people, I think I want to have more say on whether the information they share is correct. And I am concerned about private businesses making decisions about what’s true and false and all of that, but they’re already making decisions about who to amplify and who not to amplify. It’s already happening, so they should use that power.

DE: You and your colleagues wrote that “freedom of speech does not include the right to amplification of speech.”

LF: And then a second, simpler thing that I think we could fix tomorrow, if people would pay attention to it and implement it, is adding some metadata to photos and videos to prevent some of these cheap fakes. So there's a set of photos that we know will be used anytime there's a new conflict, as evidence of missile strikes. And having that set up so that it's easy to identify — that this is reused, this is a photo that someone took four years ago — would prevent a lot of the easy-to-create misinformation we're seeing right now.

DE: Wait, there’s like a usual suspects of misinformation images?

LF: Yeah, the classic one is anytime we get a hurricane, there's a photoshopped image of a shark swimming on a flooded highway. That one goes around anytime there's a storm. And meteorologists have picked up on this — they know it's coming. And yet it goes viral every time.

DE: Well, maybe every storm washes a large shark onto the highway, ever think of that??

LF: Haha, unfortunately someone photoshopped it years ago.
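As a hedged aside on what "easy to identify" reuse could look like in practice: the sketch below matches an incoming photo against a small library of known recycled images using a perceptual hash, a related technique rather than the provenance metadata Fazio describes. The folder name, file name, and distance threshold are hypothetical, and it assumes the third-party Pillow and imagehash packages.

```python
from pathlib import Path

import imagehash              # third-party perceptual-hashing library
from PIL import Image         # Pillow

# Hypothetical folder of photos known to be recycled in past hoaxes.
KNOWN_RECYCLED_DIR = Path("known_recycled_images")

# Pre-compute a perceptual hash for each known photo.
known_hashes = {
    path.name: imagehash.phash(Image.open(path))
    for path in KNOWN_RECYCLED_DIR.glob("*.jpg")
}

def likely_recycled(candidate_path: str, max_distance: int = 8) -> list[str]:
    """Return names of known photos the candidate closely resembles."""
    candidate_hash = imagehash.phash(Image.open(candidate_path))
    # Subtracting two hashes gives the Hamming distance; small = near-duplicate.
    return [
        name for name, h in known_hashes.items()
        if candidate_hash - h <= max_distance
    ]

# Example: flag a viral "storm shark" photo against the known set.
print(likely_recycled("viral_shark_photo.jpg"))
```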

DE: Right. So, in both cases there’s an easy-targeting advantage here: a small number of large accounts and a small number of known images do a lot of the dirty work. So that at least bodes well for theoretical fixes. Thanks so much, Lisa, for sharing insights that are actionable for individuals and institutions. I’m less depressed than I was a few minutes ago!
