It all began with this message from Dan Kahan, a law professor who does psychology experiments:
My graphs– what do you think??
I guess what do you think of the result too, but the answer is, “That’s obvious!” If it hadn’t been, then it would have been suspicious in my book. Of course, if we had found the opposite result, that would have been “obvious!” too. We are submitting to LR ≠ 1 Journal.

This is the latest study in a series looking at the relationship between critical reasoning capacities and “cultural cognition” — the tendency of individuals to conform their perceptions of risk & other policy-relevant facts to their group commitments. The first installment was an observational study that found that cultural polarization (political too; the distinction relates not to the mechanism for polarization over decision-relevant science but only to how to measure what is hypothesized to be driving it) increases as people become more science literate. This paper and another one that looked at how “cognitive reflection” magnifies this kind of biased processing of evidence are experimental followups aimed at testing the conjecture that the phenomenon is a consequence of *too much* rationality, not *too little*, in an environment in which people have a bigger stake in forming group-congruent beliefs than truth-congruent ones.

That’s our “science communication environment,” sadly. The conditions that have degraded it in this way — and that generate such an appalling and despicable deformation of our reason — are a bigger threat to our species than any of the particular risks (climate change, nuclear power, guns, HPV vaccine, etc.) that we are fighting about….
Why does public conflict over societal risks persist in the face of compelling and widely accessible scientific evidence? We conducted an experiment to probe two alternative answers: the “Science Comprehension Thesis” (SCT), which identifies defects in the public’s knowledge and reasoning capacities as the source of such controversies; and the “Identity-protective Cognition Thesis” (ICT), which treats cultural conflict as disabling the faculties that members of the public use to make sense of decision-relevant science. In our experiment, we presented subjects with a difficult problem that turned on their ability to draw valid causal inferences from empirical data. As expected, subjects highest in Numeracy—a measure of the ability and disposition to make use of quantitative information—did substantially better than less numerate ones when the data were presented as results from a study of a new skin-rash treatment. Also as expected, subjects’ responses became politically polarized—and even less accurate—when the same data were presented as results from the study of a gun-control ban. But contrary to the prediction of SCT, such polarization did not abate among subjects highest in Numeracy; instead, it increased. This outcome supported ICT, which predicted that more Numerate subjects would use their quantitative-reasoning capacity selectively to conform their interpretation of the data to the result most consistent with their political outlooks. We discuss the theoretical and practical significance of these findings.
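The covariance-detection problem the abstract refers to is easy to make concrete. Here is a minimal sketch of the task’s logic in Python; the counts are illustrative stand-ins rather than the exact published stimuli, and the function name is mine:

```python
# A 2x2 covariance-detection problem of the kind described in the
# abstract: did the skin rash of patients who used the new treatment
# improve more often than the rash of those who did not?
# Counts are chosen for illustration.

treated = (223, 75)     # (improved, got worse) with treatment
untreated = (107, 21)   # (improved, got worse) without treatment

def improvement_rate(improved, worse):
    """Fraction of a group whose condition improved."""
    return improved / (improved + worse)

rate_treated = improvement_rate(*treated)      # 223/298 ≈ 0.748
rate_untreated = improvement_rate(*untreated)  # 107/128 ≈ 0.836

# The intuitive heuristic compares raw counts (223 > 107) and
# concludes the treatment works. The valid inference compares rates:
print(f"treated:   {rate_treated:.3f}")
print(f"untreated: {rate_untreated:.3f}")
print("treatment helps" if rate_treated > rate_untreated
      else "treatment does not help")
```

With these counts the raw-count heuristic and the rate comparison point in opposite directions, which is what makes the problem a test of quantitative reasoning rather than of reading comprehension.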
This is indeed consistent with the red-state-blue-state constellation of findings, in which political polarization shows up more in higher-income groups.
Incidentally, I find this whole area of political polarization research (my own red-blue work but also the work of others) to be a fascinating example illustrating the value of a “statistical” or “descriptive” approach to social science, in which researchers (such as myself) go around looking for patterns in data. This approach to research can be distinguished from the so-called “empirical implications of theoretical models” (EITM) approach, which is a Popperian world in which researchers form very specific falsifiable hypotheses and test them. My impression is that EITM is the dominant paradigm in social science (consider, for example, all those psychology researchers who indignantly insist that, no, they were not fishing thru their data and, yes, they had formulated their hypotheses ahead of time). Often, in our criticisms of sloppy research, people assume we’re working within the EITM paradigm and that we’re just saying that people are doing things wrong (e.g., by not preregistering their research hypotheses).
But really my (and, I think, your) criticisms are more fundamental than this. It’s related to what you describe as the problem with so-called tabloid research. On one hand, the researcher has to claim that the result is an amazing surprise, hence it’s big news. On the other hand, the researcher has to claim that the result follows logically from substantive theory, indeed that the result was anticipated from theory, hence it’s no big deal at all, just another confirmation of the fundamental politically-incorrect truths of evolutionary biology (or whatever).
To which Kahan responded:
1. Interesting on the link w/ income. I have a question for you on that — but it requires a bit of a set up…

There is a collection of characteristics associated with intensity of polarization that seem independent of the strength of partisan id. The contribution that intensity of partisan id makes to “polarization” is uninteresting, b/c essentially tautological (people who “feel more strongly” about politics are “more divided” — of course). What’s more puzzling is why people who are comparably opposed in partisanship disagree more intensely conditional on other factors — such as income, as you find.

Indeed, one of the central questions, as I see it, is *whether* these other factors *are* in fact only indicators of partisan id, which is best viewed as some latent or unobservable disposition itself only imperfectly measured by the observable response someone will give when asked to respond to the standard 5- or 7-point measures of party self-identification or liberal-conservative ideology. If so, they help us measure partisanship, but don’t explain “magnified polarization” as I’ve defined it. Indeed, if every one of the factors can be explained that way, they make the puzzle go away. But once we have identified the “magnification” factors that really *aren’t* just indicators of partisanship, then those can play key roles as we test hypotheses about what really does explain the “polarization magnification” phenomenon.

E.g., as you know, in political science it has long been recognized that “political knowledge/sophistication” sharpens the relationship between self-reported ideology or party self-identification and positions on various issues (and hence the extent of polarization; this actually figured in an exchange that you & I had about ideological “coherence” of positions across issues). Is this b/c “political knowledge” measures how reliably individuals can make sense of information relevant to determining policy positions that “fit” their political outlooks?
In that case, “greater polarization” conditional on partisanship reflects the more accurate evaluation of policy-significant information. Cf. Zaller. Or is it b/c “political knowledge” — which is really just a sort of collection of high-school-level civics-test items — is a measure of intensity of partisan engagement with issues that helps to compensate for the measurement imprecision of the standard party self-id & ideology measures? If the latter, it doesn’t explain “heightened polarization” so much as suggest that the perception of it is an artifact of the typical measures of partisan id. Cf. Fiorina.

Recently, political scientists have begun to debate whether partisanship actually *distorts* or biases information processing. That is the premise of Lodge & Taber’s work, which they summarize in their new book The Rationalizing Voter. They have lots of results showing that partisans who score higher on “political knowledge” are more likely to opportunistically adjust the weight they give to evidence in patterns that reflect or reinforce conclusions that are congenial to their political outlooks. Because this is not consistent with the position that “political knowledge” amplifies polarization by improving the accuracy of citizens’ assessment of how policies advance their political values (Zaller’s view), L&T treat it as evidence of the “irrationality” of partisanship. Superimposing on their results, in a cookie-cutter fashion, cognitive psychology’s “heuristic-driven vs. reflective”/“System 1 vs. System 2” dual-process reasoning framework, they treat “political knowledge” as a measure of partisanship, which they posit interacts with “confirmation bias,” one of the myriad deficiencies in reasoning associated with System 1.

I think that’s incorrect.
There is overwhelming evidence that the phenomenon of politically motivated reasoning — the opportunistic weighting of evidence in patterns that reflect and reinforce ideologically or culturally congenial beliefs — is *amplified* by dispositions associated with use of “System 2” reasoning, the ones opposed to indulging “fast,” “heuristic” sorts of thinking. The phenomenon of politically motivated reasoning can’t plausibly be viewed as originating in “bounded rationality,” as L&T & a legion of scholars who study science communication assert, because more “boundedly rational” people display this pattern of cognition less powerfully than those who are most disposed to think reflectively rather than heuristically.

Our account of this is that politically motivated reasoning *is* rational. To know whether a style of reasoning is “rational,” one has to know what people are trying to do by engaging information. When positions on issues become understood as badges of membership in & loyalty to competing cultural groups, then individuals will have a stake in forming identity-supportive beliefs that dominates their stake in forming evidence-justified beliefs. What an individual believes about climate change won’t have any impact on the climate or on policies to address whatever risks climate change poses; his or her personal behavior as consumer, as voter, as participant in public discussion, etc., will be too inconsequential to matter. So anything he or she does as a result of a “mistaken” view of what evidence signifies will not affect the level of risk he or she or anyone he or she cares about faces. But if that person makes a “mistake” in the position he or she adopts in relation to positions that signify his or her character & reliability to peers, then the consequences for that person (loss of trust, shunning, etc.) can be devastating.
So it makes sense — is rational at an individual level — for people to engage information in a manner geared more reliably to promoting beliefs in line with those that predominate in their group than with ones supported by the best available evidence. Even minimally sophisticated people can do this well enough. But those who are proficient in critical reasoning can do an even better job, b/c they can more effectively identify evidence supportive of their group’s position and explain away evidence hostile to it. They thus end up even more polarized.

This is the gist of a series of studies that we have done now. One was an observational study showing that polarization is highest among persons who are highest in science comprehension (measured w/ a scale that combined science literacy & quantitative reasoning ability). The next two were experimental, and were designed to “catch in the act” the contribution that critical reasoning dispositions of one or another sort make to the stake people have in persisting in beliefs consistent with their group identity. The results of these experiments help to corroborate that people more proficient in critical reasoning are more polarized because they are using their skills and capacities to promote identity-supportive beliefs…. These are the two papers I sent you, I think, in the last msg.

Of course, to say politically motivated reasoning is “individually rational” is not to say it is morally desirable. In fact, this state of affairs is awful, for if everyone engages in this style of reasoning simultaneously, then citizens in a pluralistic democratic society are less likely to converge in their understandings of decision-relevant science essential to their collective well-being. The prospect of that happening, of course, doesn’t change any individual’s psychic incentive to process information in an identity-supportive rather than truth-discerning manner.
This is the “tragedy of the science communication commons….”

Where does “political knowledge” fit in? My sense is that it generates the results L&T observe not b/c it is a measure of partisanship but b/c it correlates with & is thus an indicator of critical reasoning dispositions. Of course, those dispositions aren’t being used in the way that Zaller and others think — to help people more accurately discern policies that promote their values; they are being used to identify positions that more reliably express their identity but that might well undermine their values if those positions are in fact contrary to the best available evidence…. I should figure out how to test this conjecture about why political knowledge amplifies polarization….

Now, how do you understand the correlation between income & greater polarization to fit in? Income might be correlated with ability to accurately assess how positions promote values — the conventional (Zaller) account of why political knowledge predicts a greater connection between partisan id & polarization. Or maybe it is correlated with critical reasoning dispositions of the sort that, our studies suggest, enable expressive rationality — an engagement of information aimed at maximizing congruence between belief & political or cultural identity. Or perhaps it is correlated with intensity of partisanship? In that case, it is only a pseudo-“polarization magnifier” (because of course people who disagree more on values disagree more about policies that correspond to those values).

Question: Would you predict that *income* predicts intensification of the sort of identity-supportive reasoning that we measure in our experiments? I.e., if I substituted income for Numeracy as a predictor in the “Motivated Numeracy” experiment, would I see the same result (more dramatic political distortion of accuracy in covariance-detection conditional on income)?! I’m not sure what to think!!

2.
Geez, is it realistic to expect you to continue reading at this point — or to imagine you even had the time, patience, interest, forbearance, etc., to make it through that mini-treatise?! But maybe you can infer from all that my response to your questions? I am “Popperian” in the sense you describe. And I think you are too, despite characterizing yourself as an exploratory model builder. You aren’t just collecting facts in some unmotivated manner & imagining *causal relationships* will magically jump out of massive piles of data. You are *choosing* to model political behavior & attitudes in a manner that reflects — or will shed light on — competing understandings of what the wellsprings of those are. Surely Red State, Blue State was like that; you were curious whether claims people make about the disconnect between voting behavior and “interests” — and the contribution that “values” might play in that — were right. It wasn’t an accident that you slapped down Haidt on this issue not so far back. You had the very claims he (along w/ lots of others; the “What’s the Matter w/ Kansas?” trope) was making in mind as one of the things to take into account in structuring your models. No? The problem w/ WTF isn’t that it is theory driven. It is that (a) it is faux-theory driven, (b) divorced from investigation of competing plausible conjectures, and (c) distorted by the impact that “mindless statistics” — the package of rituals associated with NHT — has had in displacing “valid causal inference” in structuring how statistics are used to guide and discipline empirical investigation.
I don’t really have anything to add right now. I just thought this was worth blogging in part to convey our current state of confusion. So much of published writing is issued with such an air of authority, it seemed useful to me to present a discussion in this form in which various important issues are left hanging.
Feel free to offer your thoughts in the comments, both on the topic of political polarization and on the importance (or lack thereof) of the “empirical implications of theoretical models” perspective, which I see as currently dominant in social science. The link between these two topics here is Kahan’s suggestion of experiments that could be designed to discriminate between (or, more generally, refine) different existing theories of political polarization.
My impression is that a lot of published science is nominally about empirical implications of theoretical models, in the sense that the researchers say they have a hypothesis that they are testing—but really what is going on is that they have a hypothesis that they are seeking to confirm via a significance test. For example, Daryl Bem has a hypothesis that ESP has large effects, and so he designs experiments to confirm that hypothesis, to nail down via a definitive experiment what he already feels he’s observed less systematically. But Kahan’s idea above is a bit different in that he’d like to design an experiment to learn something new (see his “I’m not sure what to think!!” just above). I’m not quite sure how to classify these different sorts of scientific investigations, but I think the generic pathway (scientific hypothesis . . . statistical hypothesis . . . data collection . . . statistical analysis) is too broad, and that it maps too crudely into our Popperian/Lakatosian notions of the progress of scientific understanding.