The way we social science now

This is a fun story. Jeff pointed me to a post on the sister blog by Christopher Hare and Robert Lupton, entitled “No, Sanders voters aren’t more conservative than Clinton voters. Here’s the data.”

My reaction: “Who would ever think that Sanders supporters are more conservative than Clinton supporters? That’s counterintuitivism gone amok.”

It turned out that Hare and Lupton were responding to a recent newspaper op-ed by Chris Achen and Larry Bartels, who had written:

In a survey conducted for the American National Election Studies in late January, supporters of Mr. Sanders were more pessimistic than Mrs. Clinton’s supporters about “opportunity in America today for the average person to get ahead” and more likely to say that economic inequality had increased.

However, they were less likely than Mrs. Clinton’s supporters to favor concrete policies that Mr. Sanders has offered as remedies for these ills, including a higher minimum wage, increasing government spending on health care and an expansion of government services financed by higher taxes.

Hare and Lupton argue that Achen and Bartels had made a mistake in their data analysis:

The study asked Democrats, independents, and Republican respondents alike to say which Democratic primary candidate they preferred: Hillary Clinton, Martin O’Malley, Bernie Sanders, “another Democratic candidate,” or none.

More than twice as many Republican respondents chose Sanders as chose Clinton.

That means that in analyzing this group of Sanders “supporters,” Achen and Bartels were examining a group that may well have been farther to the right than actual Sanders voters. We don’t believe that the ANES Republican respondents were actually Sanders backers. We think it’s far more likely that they just strongly dislike Hillary Clinton.

Hare and Lupton look at several issue attitudes (ugly-ass bar charts but, hey, no one’s perfect) and conclude:

On most issues, actual Sanders supporters – Democrats and independents who voted for him or would be likely to, as opposed to Republicans who are holding their noses and selecting the Democrat they dislike least – are indeed to the left of Clinton supporters. Primary voters are in fact able to pick the candidate whose positions they find most ideologically compatible. And that lines up more accurately with other scholarly evidence.

I’m with Hare and Lupton on the substance, as I’ve said before in other contexts (for example in my disagreement with Bartels’s claim that flashing a subliminal smiley face on a computer screen can induce big changes in attitudes toward immigration), but I think they’re being a bit unfair in titling their post. As far as I could tell, Achen and Bartels never claimed that Sanders voters are more conservative than Clinton voters. Achen and Bartels’s main point was that the Sanders/Clinton choice is predicted more from demographics than from issue attitudes:

Exit polls conducted in two dozen primary and caucus states from early February through the end of April reveal only modest evidence of ideological structure in Democratic voting patterns, but ample evidence of the importance of group loyalties.

Mr. Sanders did just nine points better, on average, among liberals than he did among moderates. By comparison, he did 11 points worse among women than among men, 18 points worse among nonwhites than among whites and 28 points worse among those who identified as Democrats than among independents.

It is very hard to point to differences between Mrs. Clinton and Mr. Sanders’s proposed policies that could plausibly account for such substantial cleavages. They are reflections of social identities, symbolic commitments and partisan loyalties.

This seems reasonable enough; indeed, it's broadly consistent with a "reasoning voter" model, in that Clinton and Sanders really aren't so far apart on the issues. This is related to the general point that primary elections are hard to predict.

So, overall, I don’t think Achen/Bartels and Hare/Lupton are that far apart in their analysis of the Clinton-Sanders divide, even though the two pairs of political scientists are coming from much different perspectives.

This is social science

What interests me most about this story, though, is not the content but the medium. This is a political science debate, conducted by serious academic researchers using real data, and it’s taking place in the newspapers: in particular, a New York Times online op-ed and a Washington Post blog.

Traditionally we in academia have thought of op-eds as a way to publicize our research: we do a study, write a paper, publish the journal article, and then try (usually without much success) to get some news coverage or to get some op-eds. I remember with Red State Blue State we wrote the research article, we wrote the academic book, then we notified journalists and we wrote some newspaper articles—not a lot, but I did get something in the Wichita Eagle, I think, which was appropriate because we did have some Kansas material in our book.

Nowadays, though, more and more, we don't bother with the research article. What's the point of knocking yourself out writing a jargon-free academic article, then struggling through a years-long referee and revision process, and then finally reaching success, only to find that nobody reads the journal anyway? I'd rather post—that is, publish—my findings here and on the sister blog, thus reaching more people than would read the journal and getting useful comments to boot. Even when we do write academic papers, we'll still typically present our work online first, to get the ideas out there and to get immediate feedback.

That’s what Achen and Bartels did. Actually they published online in the New York Times, which has more prestige and, I assume, much more readership than we get here, but it’s the same general idea.

And then Hare and Lupton read that op-ed, did their own analysis, and published right back. This is great. What would’ve taken two years under traditional academic publishing took only one week under this new regime.

As with old-style publishing, there remains a link-forward problem: Anyone who reads Hare and Lupton can click to read Achen and Bartels, but the reverse does not hold. Of the thousands of people who read Achen and Bartels when it came out, most will never see Hare and Lupton. It would make sense for Achen and Bartels to add a link to their article so that future readers can see Hare and Lupton’s critique, but (a) I don’t know if the Times likes to add links a week after an article appears, and (b) Achen and Bartels might not want to link to criticism. Perhaps, though, they will do the bloggy good thing of following up with a post discussing Hare and Lupton.

Under the old system, Achen and Bartels could’ve submitted a paper to a journal, and it’s possible that the problem with including Republicans in the data analysis would’ve been caught in the review process. So that would’ve been good. More likely, though, they never would’ve submitted this as is. Rather, they would’ve had to bundle it with some other findings. One thing that’s not so much discussed when it comes to academic publishing is the importance of framing and packaging: You need to make the case that what you’ve done is a big deal. On balance I don’t know if this need to package is good or bad. It’s bad in that it encourages hype, and it discourages the publication of solid but not counterintuitive results. But it’s good in that it pushes researchers to place their work in context and to think about the big picture. Maybe all this blogging is not so good for my own research, for example; I don’t know.

From this perspective, the modern practice of publishing research in the newspaper and not in scholarly journals is just a continuation of a trend toward shorter, less contextual pieces. Little snippets of research that need to be gathered up and synthesized.

Anyway, I think this is an interesting case study. It’s a great example of post-publication review. I’m glad that Achen and Bartels published their findings right away, giving Hare and Lupton a chance to correct some of their analysis (even if, as I said, I think Hare and Lupton caricatured Achen and Bartels’s work a bit). All on the foundation of shared, public data.

But the new system is far from perfect. Two problems are access and a bias toward findings that are counterintuitive (i.e., often false). Access: If you’re Andrew Gelman or Tyler Cowen or one of our friends, you can post your results whenever and as often as you’d like. Chris Achen and Larry Bartels are pretty well connected too and were able to post at the New York Times. Other researchers might have a more difficult time getting people to read their papers. Yes, academic journals have gatekeepers too but of a different sort. Bias toward the counterintuitive: Newspapers want to publish news, so it’s a plus to make a claim that at first might seem surprising. As noted, this is also a bias toward error.

In any case, this is the mode of publication we’re moving toward, so it’s good for us to understand its strengths and weaknesses, in order to do better. For example, the Monkey Cage has made efforts both to widen access and to favor the true over the sensational.

P.S. Achen and Bartels follow up, pointing out some problems in Hare and Lupton’s analysis. I like to see these responses by blog; this is so much cleaner than having to fight with journals to get letters published.

8 thoughts on “The way we social science now”

  1. The second half of this post makes a similar point to one I’ve been discussing with colleagues recently: “refereed archival journals” are less relevant every day… as are books and manuals of all kinds. I’m not saying they’re not relevant or important at all, just that they are already far less important than they used to be and the trend is continuing for now. When I’m looking for an algorithm or modeling approach, I’ll happily accept it from a blog post or “white paper” or even a presentation someone has posted, if it seems credible. Sometimes I do more poking around to see if my initial judgment of credibility is justified. A journal article does immediately confer a fairly high level of credibility, but lots of other things can also confer credibility.

  2. One problem of conducting a serious academic debate in the medium of the NYT is the quality of the readership & commenters. There’s a degree of “playing to the gallery” that can take place and the debater or conclusion the rabble cheers may not be the outcome we want.

      • But are we saying that the average quality of judgment & feedback provided by the cohort of NYT commentators is the same as that of the PhDs from the technical field under debate? If so, we must be producing particularly crappy PhD holders. (e.g. I’m sure I get better feedback on my reactor design from Chem. Eng. PhDs than from random people on the street)

        Or are we saying that the journal referees suffer from a particularly bad adverse selection bias compared to the PhD holders from the field?

  3. And here’s the opposite view from Duncan Watts on why blog posting of research is a problem (a big problem) versus peer review:

    http://online.liebertpub.com/doi/pdfplus/10.1089/big.2014.1521

    “There’s a world of difference between having to sort out the problems in your argument before you are allowed to publish it and having to satisfy people who know something, and writing whatever you can get away with and then have people pile in afterward. Once that content is out there, it is out there, right? Let us say that what the author said was wrong, and commenter number 350 points that out. Will anyone else notice? Will the author even notice? If she does, is she under any obligation to respond, or to correct her text? What if she disagrees with the commenter? What if her mistake makes her look bad? What if she’s just too busy writing her next piece to take the time? I don’t think any of these questions have been answered in even a close to satisfactory way”. p.61

    • Lkt:

      Maybe Duncan is right, but I don’t think we’re going back to the old peer-reviewed system in any case.

      And, regarding his question, “Will anyone else notice? Will the author even notice? If she does, is she under any obligation to respond, or to correct her text? What if she disagrees with the commenter? What if her mistake makes her look bad? What if she’s just too busy writing her next piece to take the time?,” just read this story which is one of many many examples illustrating the difficulty of publishing criticisms and obtaining data for replication.

      Whether it’s the New York Times or the American Sociological Review, you’re not likely to see the correction published in the same place where the original error was published.

    • Was that interview peer reviewed?

      I gave up when I followed the link to Mary Ann Liebert, Inc., and saw this:

      PAY PER VIEW Big Data – 2(2):57-62; An Interview with Duncan Watts (access for 24 hours for US $51.00)

      No, thank you.
