Sun, 08 Sep 2013 02:32 UTC
The study, by Yale law professor Dan Kahan and his colleagues, has an ingenious design. At the outset, 1,111 study participants were asked about their political views and also asked a series of questions designed to gauge their "numeracy," that is, their mathematical reasoning ability. Participants were then asked to solve a fairly difficult problem that involved interpreting the results of a (fake) scientific study. But here was the trick: While the fake study data that they were supposed to assess remained the same, sometimes the study was described as measuring the effectiveness of a "new cream for treating skin rashes." But in other cases, the study was described as involving the effectiveness of "a law banning private citizens from carrying concealed handguns in public."
The result? Survey respondents performed wildly differently on what was in essence the same basic problem, simply depending upon whether they had been told that it involved guns or whether they had been told that it involved a new skin cream. What's more, it turns out that highly numerate liberals and conservatives were even more - not less - susceptible to letting politics skew their reasoning than were those with less mathematical ability.
But we're getting a little ahead of ourselves - to fully grasp the Enlightenment-destroying nature of these results, we first need to explore the tricky problem that the study presented in a little bit more detail.
Let's start with the "skin cream" version of this brain twister. You can peruse the image below to see exactly what research subjects read (and try out your own skill at solving it), or skip on for a brief explanation:
So do the data suggest that the skin cream works? The correct answer in the scenario above is actually that patients who used the skin cream were "more likely to get worse than those who didn't." That's because the ratio of those who saw their rash improve to those whose rash got worse is roughly 3 to 1 in the "skin cream" group, but roughly 5 to 1 in the control group - which means that if you want your rash to get better, you are better off not using the skin cream at all. (For half of study subjects asked to solve the skin cream problem, the data were reversed and presented in such a way that they did actually suggest that the skin cream works.)
This is no easy problem for most people to solve: Across all conditions of the study, 59 percent of respondents got the answer wrong. That is, in significant part, because trying to intuit the right answer by quickly comparing two numbers will lead you astray; you have to take the time to compute the ratios.
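The ratio logic above can be made concrete with a quick calculation. This is a sketch, not the study's materials: the specific patient counts below are illustrative assumptions chosen only to match the article's stated "roughly 3 to 1" and "roughly 5 to 1" ratios.

```python
# Illustrative counts for the skin-cream version of the problem.
# (Assumed numbers, consistent with the ratios quoted in the article.)
cream = {"better": 223, "worse": 75}      # patients who used the cream
control = {"better": 107, "worse": 21}    # patients who did not

# The intuitive shortcut: compare the raw "got better" counts.
# The cream group's count is bigger, which misleads many respondents.
naive_verdict = cream["better"] > control["better"]   # True, but wrong test

# The correct approach: compare the improved-to-worsened ratio in each group.
cream_ratio = cream["better"] / cream["worse"]        # roughly 3 to 1
control_ratio = control["better"] / control["worse"]  # roughly 5 to 1

print(f"cream ratio:   {cream_ratio:.2f}")
print(f"control ratio: {control_ratio:.2f}")
print("cream helps?", cream_ratio > control_ratio)
```

Because the control group's ratio is higher, skipping the cream leaves you better off, even though the cream group has more raw improvements — exactly the trap the quick comparison sets.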
Not surprisingly, Kahan's study found that the more numerate you are, the more likely you are to get the answer to this "skin cream" problem right. Moreover, it found no substantial difference between highly numerate Democrats and highly numerate Republicans in this regard. The better members of both political groups were at math, the better they were at solving the skin cream problem.
But now take the same basic study design and data, and simply label it differently. Rather than reading about a skin cream study, half of Kahan's research subjects were asked to determine the effectiveness of laws "banning private citizens from carrying concealed handguns in public." Accordingly, these respondents were presented not with data about rashes and whether they got better or worse, but rather with data about cities that had or hadn't passed concealed carry bans, and whether crime in these cities had or had not decreased.
Overall, then, study respondents were presented with one of four possible scenarios, depicted below with the correct answer in bold:
Highly numerate liberal Democrats did just great when the right answer was that the ban worked (version C), but poorly when the right answer was that it didn't (version D). The opposite was true for highly numerate conservative Republicans: They did just great when the right answer was that the ban didn't work (version D), but poorly when the right answer was that it did (version C).
Here are the results overall, comparing subjects' performances on the "skin cream" versions of the problem (above) and the "gun ban" versions of the problem (below), and relating this performance to their political affiliations and numeracy scores:
So what are smart, numerate liberals and conservatives actually doing in the gun control version of the study, leading them to give such disparate answers? It's kind of tricky, but here's what Kahan thinks is happening.
Our first instinct, in all versions of the study, is to leap to the wrong conclusion. If you just compare which number is bigger in the first column, for instance, you'll be quickly led astray. But more numerate people, when they sense an apparently wrong answer that offends their political sensibilities, are both motivated and equipped to dig deeper, think harder, and even start performing some calculations - which in this case would have led to a more accurate response.
"If the wrong answer is contrary to their ideological positions, we hypothesize that that is going to create the incentive to scrutinize that information and figure out another way to understand it," says Kahan. In other words, more numerate people perform better when identifying study results that support their views - but may have a big blind spot when it comes to identifying results that undermine those views.
What's happening when highly numerate liberals and conservatives actually get it wrong? Either they're intuiting an incorrect answer that is politically convenient and feels right to them, leading them to inquire no further - or else they're stopping to calculate the correct answer, but then refusing to accept it and coming up with some elaborate reason why 1 + 1 doesn't equal 2 in this particular instance. (Kahan suspects it's mostly the former, rather than the latter.)
The Scottish Enlightenment philosopher David Hume famously described reason as a "slave of the passions." Today's political scientists and political psychologists, like Kahan, are now affirming Hume's statement with reams of new data. This new study is just one out of many in this respect, but it provides perhaps the most striking demonstration yet of just how motivated, just how biased, reasoning can be - especially about politics.