A Blog by Jonathan Low


Sep 24, 2013

Political Views Negatively Affect Humans' Ability to Reason Mathematically

Yes, politics makes us irrational, incomprehensible, emotional and, frequently, crazy.

New research reveals that it also negatively affects our ability to reason, and to do math. The problem resembles cognitive dissonance: the inability to accept an answer that contradicts one's beliefs.

In the case of math, it appears that numeracy, the facility with which we do sums, can falter when the result we are about to arrive at somehow contradicts or undermines a firmly held tenet of political ideology.

Anyone who watches political debates or TV shows featuring avatars of the right or left has probably, at one time or another, grabbed their head in pain at the astoundingly wrong answer provided by a political guru of one sort or another who is 'interpreting' data derived from a poll, an election result or some other ostensibly objective data set. The research suggests that such a person is not always a willful liar. They may actually believe what they are saying, because their emotions and beliefs do not permit them to 'see' any other conclusion.

There will be those who dismiss this scientific approach and cling to their insistent view that everyone in politics, especially those who hold opposing views, is an idiot. Some of them may well be, but the research suggests they may just be more emotionally involved than is healthy - or rational. JL

Chris Mooney reports in Mother Jones:

Farewell, Enlightenment: New research suggests that people even solve math problems differently if their political ideology is at stake
Everybody knows that our political views can sometimes get in the way of thinking clearly. But perhaps we don't realize how bad the problem actually is. According to a new psychology paper, our political passions can even undermine our very basic reasoning skills. More specifically, the study finds that people who are otherwise very good at math may totally flunk a problem that they would otherwise probably be able to solve, simply because giving the right answer goes against their political beliefs.
The study, by Yale law professor Dan Kahan and his colleagues, has an ingenious design. At the outset, 1,111 study participants were asked about their political views and also asked a series of questions designed to gauge their "numeracy," that is, their mathematical reasoning ability. Participants were then asked to solve a fairly difficult problem that involved interpreting the results of a (fake) scientific study. But here was the trick: While the fake study data that they were supposed to assess remained the same, sometimes the study was described as measuring the effectiveness of a "new cream for treating skin rashes." But in other cases, the study was described as involving the effectiveness of "a law banning private citizens from carrying concealed handguns in public."
The result? Survey respondents performed wildly differently on what was in essence the same basic problem, simply depending upon whether they had been told that it involved guns or whether they had been told that it involved a new skin cream. What's more, it turns out that highly numerate liberals and conservatives were even more—not less—susceptible to letting politics skew their reasoning than were those with less mathematical ability.
But we're getting a little ahead of ourselves—to fully grasp the Enlightenment-destroying nature of these results, we first need to explore the tricky problem that the study presented in a little bit more detail.
Survey respondents were presented with a fictional study purporting to assess the effectiveness of a new skin cream, and informed at the outset that "new treatments often work but sometimes make rashes worse" and that "even when treatments don't work, skin rashes sometimes get better and sometimes get worse on their own." They were then presented with a table of experimental results, and asked whether the data showed that the new skin cream "is likely to make the skin condition better or worse." So do the data suggest that the skin cream works? The correct answer in the scenario above is actually that patients who used the skin cream were "more likely to get worse than those who didn't." That's because the ratio of those who saw their rash improve to those whose rash got worse is roughly 3:1 in the "skin cream" group, but roughly 5:1 in the control group—which means that if you want your rash to get better, you are better off not using the skin cream at all. (For half of study subjects asked to solve the skin cream problem, the data were reversed and presented in such a way that they did actually suggest that the skin cream works.)
This is no easy problem for most people to solve: Across all conditions of the study, 59 percent of respondents got the answer wrong. That is, in significant part, because trying to intuit the right answer by quickly comparing two numbers will lead you astray; you have to take the time to compute the ratios.
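For readers who want the arithmetic spelled out, here is a minimal sketch in Python of the ratio comparison the problem requires. The counts are illustrative stand-ins, chosen only to match the roughly 3:1 and 5:1 ratios described above; they are not quoted from the study's actual table.

    # Illustrative counts (assumed, consistent with the ~3:1 and ~5:1 ratios above)
    improved_with_cream, worsened_with_cream = 223, 75        # treatment group
    improved_without_cream, worsened_without_cream = 107, 21  # control group

    # The intuitive shortcut compares raw improvement counts (223 vs. 107),
    # which favors the cream. The correct move is to compare the ratio of
    # improved to worsened patients within each group.
    treatment_ratio = improved_with_cream / worsened_with_cream      # ~2.97, about 3:1
    control_ratio = improved_without_cream / worsened_without_cream  # ~5.10, about 5:1

    print(f"with cream: {treatment_ratio:.2f}:1, without cream: {control_ratio:.2f}:1")
    if treatment_ratio < control_ratio:
        print("Patients who used the cream were more likely to get worse.")
    else:
        print("The data favor the cream.")

Run as written, this prints that the cream group fared worse, which is exactly the counterintuitive conclusion most respondents missed.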
Not surprisingly, Kahan's study found that the more numerate you are, the more likely you are to get the answer to this "skin cream" problem right. Moreover, it found no substantial difference between highly numerate Democrats and highly numerate Republicans in this regard. The better members of both political groups were at math, the better they were at solving the skin cream problem.
But now take the same basic study design and data, and simply label it differently. Rather than reading about a skin cream study, half of Kahan's research subjects were asked to determine the effectiveness of laws "banning private citizens from carrying concealed handguns in public." Accordingly, these respondents were presented not with data about rashes and whether they got better or worse, but rather with data about cities that had or hadn't passed concealed carry bans, and whether crime in these cities had or had not decreased.
So how did people fare on the handgun version of the problem? They performed quite differently than on the skin cream version, and strong political patterns emerged in the results—especially among people who are good at mathematical reasoning. Most strikingly, highly numerate liberal Democrats did almost perfectly when the right answer was that the concealed weapons ban does indeed work to decrease crime (version C of the experiment)—an outcome that favors their pro-gun-control predilections. But they did much worse when the correct answer was that crime increases in cities that enact the ban (version D of the experiment). The opposite was true for highly numerate conservative Republicans: They did just great when the right answer was that the ban didn't work (version D), but poorly when the right answer was that it did (version C).
For study author Kahan, these results are a fairly strong refutation of what is called the "deficit model" in the field of science and technology studies—the idea that if people just had more knowledge, or more reasoning ability, then they would be better able to come to consensus with scientists and experts on issues like climate change, evolution, the safety of vaccines, and pretty much anything else involving science or data (for instance, whether concealed weapons bans work). Kahan's data suggest the opposite—that political biases skew our reasoning abilities, and this problem seems to be worse for people with advanced capacities like scientific literacy and numeracy. "If the people who have the greatest capacities are the ones most prone to this, that's reason to believe that the problem isn't some kind of deficit in comprehension," Kahan explained in an interview.

So what are smart, numerate liberals and conservatives actually doing in the gun control version of the study, leading them to give such disparate answers? It's kind of tricky, but here's what Kahan thinks is happening.
Our first instinct, in all versions of the study, is to leap to the wrong conclusion. If you just compare which number is bigger in the first column, for instance, you'll be quickly led astray. But more numerate people, when they sense an apparently wrong answer that offends their political sensibilities, are both motivated and equipped to dig deeper, think harder, and even start performing some calculations—which in this case would have led to a more accurate response.
"If the wrong answer is contrary to their ideological positions, we hypothesize that that is going to create the incentive to scrutinize that information and figure out another way to understand it," says Kahan. In other words, more numerate people perform better when identifying study results that support their views—but may have a big blind spot when it comes to identifying results that undermine those views.
What's happening when highly numerate liberals and conservatives actually get it wrong? Either they're intuiting an incorrect answer that is politically convenient and feels right to them, leading them to inquire no further—or else they're stopping to calculate the correct answer, but then refusing to accept it and coming up with some elaborate reason why 1 + 1 doesn't equal 2 in this particular instance. (Kahan suspects it's mostly the former, rather than the latter.)
The Scottish Enlightenment philosopher David Hume famously described reason as a "slave of the passions." Today's political scientists and political psychologists, like Kahan, are now affirming Hume's statement with reams of new data. This new study is just one out of many in this respect, but it provides perhaps the most striking demonstration yet of just how motivated, just how biased, reasoning can be—especially about politics.
