A Blog by Jonathan Low

 

May 16, 2020

Can the Social Sciences Help Humanity Navigate the Pandemic?

Someone should probably study that. JL

Cathleen O'Grady reports in Ars Technica:

If humans didn’t insist on being so messily human, pandemic response would be much simpler. People would stay physically separated; leaders would be proactive and responsive to evidence; our fight could be concentrated on the biomedical tools we need. The problem is that our imperfect humanity gets in the way, and getting around those imperfections demands that we understand human behavior. Are the social sciences ready to help us navigate the pandemic? Experts disagree. The coronavirus crisis forces a tough, society-wide lesson on scientific uncertainty.
In mid-March, just before President Trump declared COVID-19 a national emergency, Stanford psychology professor Robb Willer posted a call to arms on Twitter, asking for suggestions on how the social and behavioral sciences could help to address the pandemic. “What ideas might we have to recommend? What research could we do?” he asked. “All ideas, half-baked or otherwise, are welcome!”
Given the importance of our social interactions to the spread of the pandemic, behavioral sciences should have a lot to tell us. So Willer got a large response, and the result was a huge team effort coordinated by Willer and New York University social psychology professor Jay van Bavel. The goal: to sum up all the best and most relevant research from psychology, sociology, public health, and other social sciences. Published in the journal Nature Human Behaviour last week—a lightning-fast turnaround for academia—the resulting paper highlights research that addresses behavioral questions that have come up in the pandemic, from understanding cultural differences to minimizing scientific misinformation.
Different sections, each written by researchers with expertise in that particular field, summarize research on topics from social inequality to science communication and fake news. Responding to the crisis requires people to change their behavior, the paper’s authors argue, so we need to draw on behavioral research to “help align human behavior with the recommendations of epidemiologists and public health experts.”
But while Willer, van Bavel, and their colleagues were putting together their paper, another team of researchers put together their own, entirely opposite, call to arms: a plea, in the face of an avalanche of behavioral science research on COVID-19, for psychology researchers to have some humility. This paper—currently published online in draft format and seeding avid debates on social media—argues that much of psychological research is nowhere near the point of being ready to help in a crisis. Instead, it sketches out an “evidence readiness” framework to help people determine when the field will be.
So are the social sciences ready to help us navigate the pandemic? Evidently, experts disagree, and their scuffle is part of a broader debate about how much evidence we need before we act. The coronavirus crisis forces a tough, society-wide lesson on scientific uncertainty. And with such escalated stakes, how do we balance the potential harm of acting prematurely with the harm of not acting at all?
Humanity, being all complicated again at the Pennsylvania capital. (Photo: Nicholas Kamm/AFP via Getty Images)

Leaning on the evidence

If humans didn’t insist on being quite so messily human, pandemic response would be much simpler. People would stay physically separated whenever possible; leaders would be proactive and responsive to evidence; our fight could be concentrated on the biomedical tools we so urgently need. The problem is that our maddening, imperfect humanity gets in the way at every turn, and getting around those imperfections demands that we understand the human behavior underlying them.
It's also clear that we need to understand the differences between groups of people to get a handle on the pandemic. Speculation has been rampant about how cultural differences might influence what sort of responses are palatable. And some groups are suffering disproportionately: death rates are higher among African-American and Latinx communities in the US, while a large analysis from the UK found that black, minority ethnic, and poorer people are at higher risk of death—our social inequalities, housing, transport, and food systems all play a role in shaping the crisis. We can’t extricate people and our complicated human behavior and society from the pandemic: they are one and the same.
In their paper, van Bavel, Willer, and their group of behavioral research proponents point to studies from fields like public health, sociology, and psychology. They cover work on cultural differences, social inequality, mental health, and more, pulling out suggestions for how the research could be useful for policymakers and community leaders.
Those recommendations are pretty intuitive. For effective communications, it could be helpful to lean on sources that carry weight in different communities, like religious leaders, they suggest. And public health messaging that emphasizes protecting others—rather than fixating on just protecting oneself—tends to be persuasive, the proponents argue.
But not everyone is convinced that it would necessarily be a good idea to act on the recommendations. “Many of the topics surveyed are relevant,” write psychologist Hans IJzerman and a team of critics in their draft. The team's concern isn’t the relevance of the research; it’s how robust that research is. If there are critical flaws in the supporting data, then applying these lessons on a broad scale could be worse than useless—it could be actively harmful.
“I was pretty disappointed,” says Simine Vazire, a UC Davis psychology professor and one of the team of critics. In the introduction to their paper, van Bavel and the other proponents write that each section describes the quality of the evidence that it rests on. But there was nowhere near the level of evidence evaluation Vazire expected, she says. She points to a section on healthy mindsets, which suggests that with the right mindset, difficult experiences can lead to “stress-related growth”—and that mindsets can be changed with just short interventions.
“That literature is really flawed,” she says. “There are probably individuals who grow from stress, but it’s not the norm.” It’s an irresponsible thing to claim, she argues: “It could make people feel bad if they think most people grow from trauma and stress, and if they don’t—which is much, much more typical—that could add to their depression and anxiety.”
Sander van der Linden, a psychologist at Cambridge University and one of van Bavel’s co-authors, argues that the paper was cautious in its claims, taking care to phrase things using words that convey uncertainty, avoid direct prescriptions for policy, and point out where more research is needed. The paper is intended to function more as an opinion piece, he says, and less as a claim of what’s true and what’s false.

Taking things too far

Vazire and the other critics are part of a movement in psychology that is very publicly concerned about the shaky ground beneath a huge amount of published research in the field. Thanks to small sample sizes, small effects, and a lot of research looking exclusively at college students in Western societies, many important results in psychology haven't reappeared when experiments are repeated by other researchers.
There are a bunch of reasons why a result may not show up a second time around, including important but unnoticed differences in how the experiment is run or differences between subjects from disparate times or cultures. But one of the potential reasons is a particularly worrying one: the possibility that the finding wasn’t real in the first place.
Experiments use statistical tests to tease out the signal from the noise. When you have noisy data from small groups of complicated humans, running lots of those tests increases your chances of spotting a pattern that isn’t really there. That phantom pattern might only be recognized when other people try the same thing and find no result.
With a small enough set of noisy data, it's even possible that the initial finding is completely backward. The first experiment could suggest that an intervention makes something better, when in reality, it might actually make it worse; that initial positive result could just be a random fluke. Basing a large-scale pandemic policy on weak evidence like this could mean, at best, no effect—and at worst, outright harm.
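To make those two statistical pitfalls concrete, here is a minimal Python simulation; it is a sketch of the general point rather than anything from either paper, and the sample sizes and the modest 0.2-standard-deviation "true" effect are illustrative assumptions. It shows that running many tests on pure noise produces some "significant" results, and that very small, noisy studies of a real effect can point in the wrong direction entirely.

```python
# Sketch: why noisy data from small groups can mislead (illustrative, not from the article).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 1) Many tests on data with NO true effect: how often do we "find" one anyway?
n_tests, n_per_group = 50, 20
false_positives = 0
for _ in range(n_tests):
    a = rng.normal(0, 1, n_per_group)  # control group
    b = rng.normal(0, 1, n_per_group)  # "treated" group drawn from the same distribution
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_positives += 1
print(f"{false_positives} of {n_tests} tests were 'significant' despite no real effect")

# 2) Tiny studies of a real but modest effect (+0.2 SD): how often does the
#    observed difference point the wrong way?
n_studies, n_small = 1000, 10
sign_flips = sum(
    (rng.normal(0.2, 1, n_small).mean() - rng.normal(0, 1, n_small).mean()) < 0
    for _ in range(n_studies)
)
print(f"{sign_flips / n_studies:.0%} of the small studies got the direction backward")
```

With these assumed numbers, a handful of the no-effect tests come back "significant," and roughly a third of the tiny studies estimate the real effect with the wrong sign, which is the kind of fluke that only surfaces when others repeat the experiment.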
There’s another important question that worries IJzerman and his team: whether psychology experiments can be generalized from one context to another—will the result from California undergrads show up when you run the same experiment with Indonesian farmers? That’s a crucial question in a pandemic, because research from a lab might not work the same way when implemented at a mass scale in the wild of a virus-ridden city. Conducting an experiment in a very specific group of people is fine for basic research, says social psychologist Farid Anvari, another of the critics. “But then to get it to a level where you can apply it in an applied setting, there’s a lot of work that needs to be done. Our concern is that this massive gap is just being stepped over as though it doesn’t exist.”
But van Bavel argues that the criticisms themselves are overly general. “Many studies in psychology are not ready,” he agrees. But it’s still worthwhile to pick the work that’s most likely to be generally valid and to replicate, he argues. “That's all this is: a snapshot of what we think is the best.” And many behavioral researchers—including van Bavel—are in the process of running huge international studies that aim to rapidly improve the evidence they can offer policymakers.
He points to the paper’s discussion of research on public health messaging in Sierra Leone during the Ebola crisis, as well as its section on how science communication can deal with the problem of fake news, as being particularly critical areas to pay attention to: “You can have all the world’s best science, but if you can’t convince the public to adopt those behaviors or trust it, then it’s going to be of little utility.”
A lot of literature in this. (Photo: Liyao Xie/Getty Images)

Not rocket science

IJzerman and his team’s critique isn’t aimed just at van Bavel and his fellow proponents; it was also prompted by the dozens of draft papers published by psychology researchers over the last few weeks. Many of these examine ideas that might be pandemic-relevant but don't necessarily provide evidence that is particularly robust or guidance on the state of the field. To give policymakers a better idea of which work might be appropriate for policy, the critics suggest adapting a framework used by NASA to establish when a technology is ready to be used. The critics sketch out an analogous idea of “evidence readiness” in psychology, with weak evidence at Level 1 and only research at Level 9 being ready to roll out in a crisis.
To get there, the research needs to reliably be able to show how an intervention works, what its side effects are, and how that effect appears in different large-scale settings. Unfortunately, only a tiny proportion of psychology research even gets past Level 1, write IJzerman and the other critics. The team focused their critiques on their own discipline, but there’s evidence of the same problems operating in a range of other fields, says Stuart Ritchie, a psychologist who has written a book on the problems of evidence quality in science.
Van der Linden agrees with the basic idea of giving evidence a quality score, he says—but not the actual levels that IJzerman and his team have drawn up, because a framework based on rocket science “doesn’t make sense” for psychology. He and a group of colleagues have been working on a different evidence scoring system for pandemic policy that does not lean on NASA's framework, despite the team including NASA Chief Scientist James Green. And there’s already a lot of published work on assessing quality of evidence that IJzerman and his team don’t seem to have used, says van der Linden: “Why are they proposing a framework that’s not relevant to psychology while there’s this whole field that’s already put frameworks out there?”
The claim that psychology is not ready for the crisis is wrong, and possibly even dangerous, argues van Bavel. With conspiracy theories running riot and misinformation spreading rapidly, “there’s no shortage of people trying to fill in the information gap.” Scientists should be communicating the best information they have available at the time, he says. “We don’t want to let the perfect be the enemy of the good.”
But it’s easy to underestimate the problem of weak evidence, says Ritchie. In many cases, low-quality studies can be worse than no study at all, he says, because they can trick you into thinking that something is true when it isn’t, or even lead you in entirely the wrong direction. “If you’re going on really poor-quality evidence, then you might actually be worse than wrong.”
The potential harm of running an intervention based on weak evidence often gets ignored, says Anne Scheel, who researches how to make psychology research more reproducible. Just like with medicine, if a behavioral intervention does actually work, then it should also have side effects. “What’s the worst thing that could happen? We have to take that really seriously,” she says. This doesn’t mean that we should never act on incomplete evidence, because we never have complete evidence, she points out—but the conversation about risk needs to happen in behavioral science just as much as in medical science: “We have to decide if that risk is worth the potential benefits.”
That’s a problem that’s plagued responses to the pandemic, driving disagreements over everything from the best way to avoid economic devastation, to emergency use of potential treatments based on preliminary evidence, to the premature end of clinical trials. Some questions fall clearly on one or the other side of the line—like whether it’s worth avoiding the catastrophe of hundreds of thousands of deaths in the US. But for behavioral science, there’s no consensus on what evidence is good enough to act on, how much weight to put on your uncertainty, and how much uncertainty to communicate. Striking the balance, says van Bavel, is an “existential question” for scientists. “And I don’t know how to solve it.”
