A few years ago I noted the following quote from applied microeconomist Steven Levitt:
Is it surprising that scientists would try to keep work that disagrees with their findings out of journals? When I told my father that I [Levitt] was sending my work saying car seats are not that effective to medical journals, he laughed and said they would never publish it because of the result, no matter how well done the analysis was. (As is so often the case, he was right, and I eventually published it in an economics journal.)
Within the field of economics, academics work behind the scenes constantly trying to undermine each other. I’ve seen economists do far worse things than pulling tricks in figures. When economists get mixed up in public policy, things get messier.
At the time, I expressed dismay about Levitt’s air of (as I read it) amused, world-weary tolerance of scientists behaving against the interest of science. But I took his story about the car seats at face value.
But now I’m not so sure about the car seats story. Joseph Delaney looked at Levitt’s paper (coauthored with Joseph Doyle and appearing in 2008 in Economic Inquiry), along with a 2006 paper by Michael Elliott, Michael Kallan, Dennis Durbin, and Flaura Winston in Archives of Pediatrics and Adolescent Medicine, and a follow-up by Kristy Arbogast, Jessica Jermakian, Michael Kallan, and Dennis Durbin that appeared in Pediatrics in 2009.
The other research teams found a protective effect of child car seats, but Doyle and Levitt did not.
So what is different? Well, the completeness of the interview data is a hint as to what could be happening differently. It is very hard to publish a paper in a medical journal using weaker data than is available elsewhere. Even more interestingly, [Elliott et al., 2006] found protective associations. . . . Doyle and Levitt cite Elliott et al., but still claim that they are the first to consider this issue:
This study provides the first analysis of the relative effectiveness of seat belts and child safety seats in preventing injury based on representative samples of police-reported crash data.
So now let us consider reasons that a medical journal may have had issues with this paper. First, it does not seem to engage well with the previous literature. Second, it doesn’t explain why crash-testing results do not seem to translate into an actual reduction in injuries. The gap might be due to misuse of the equipment, but it is not clear to me what the conclusion should be in that case.
So is the explanation Levitt’s father gave possible? Yes. But far more likely was the difficulty of jumping into a field with a highly counterintuitive claim and hoping for an immediate high-impact publication. Medical journals are used to seeing experiments (randomized controlled drug trials, for example) overturn otherwise compelling observational data. So it isn’t a mystery why the paper had trouble with reviewers, and it does not require any conspiracy theories about public health researchers not being open to new ideas or to data.
More generally, this raises a tough question: the role of outsiders in research. Ideally, Levitt and Doyle would collaborate with experts in traffic safety and epidemiology, which might give them a better sense of the research and data in this area. On the other hand, it’s not always so easy to form such working relationships. In this case, I wonder what would’ve happened if Doyle and Levitt, before submitting their paper to a journal, had sat around a table with Michael Elliott and Kristy Arbogast and tried to understand why their estimates differed.