Scientists behaving badly

Steven Levitt writes:

My view is that the emails [extracted by a hacker from the climatic research unit at the University of East Anglia] aren’t that damaging. Is it surprising that scientists would try to keep work that disagrees with their findings out of journals? When I told my father that I was sending my work saying car seats are not that effective to medical journals, he laughed and said they would never publish it because of the result, no matter how well done the analysis was. (As is so often the case, he was right, and I eventually published it in an economics journal.)

Within the field of economics, academics work behind the scenes constantly trying to undermine each other. I’ve seen economists do far worse things than pulling tricks in figures. When economists get mixed up in public policy, things get messier. So it is not at all surprising to me that climate scientists would behave the same way.

I have a couple of comments, not about the global-warming emails–I haven’t looked into this at all–but regarding Levitt’s comments about scientists and their behavior:

1. Scientists are people and, as such, are varied and flawed. I get particularly annoyed with scientists who ignore criticisms that they can’t refute. The give and take of evidence and argument is key to scientific progress.

2. Levitt writes, about scientists who “try to keep work that disagrees with their findings out of journals.” This is or is not ethical behavior, depending on how it’s done. If I review a paper for a journal and find that it has serious errors or, more generally, that it adds nothing to the literature, then I should recommend rejection–even if the article claims to have findings that disagree with my own work. Sure, I should bend over backwards and all that, but at some point, crap is crap. If the journal editor doesn’t trust my independent judgment, that’s fine, he or she should get additional reviewers. On occasion I’ve served as an outside “tiebreaker” referee for journals on controversial articles outside of my subfield.

Anyway, my point is that “trying to keep work out of journals” is ok if done through the usual editorial process, not so ok if done by calling the journal editor from a pay phone at 3am or whatever.

I wonder if Levitt is bringing up this particular example because he served as a referee for a special issue of a journal that he later criticized. So he’s particularly aware of issues of peer review.

3. I’m not quite sure how to interpret the overall flow of Levitt’s remarks. On one hand, I can’t disagree with the descriptive implications: Some scientists behave badly. I don’t know enough about economics to verify his claim that academics in that field “constantly trying to undermine each other . . . do far worse things than pulling tricks in figures”–but I’ll take Levitt’s word for it.

But I’m disturbed by the possible normative implications of Levitt’s statement. It’s certainly not the case that everybody does it! I’m a scientist, and, no, I don’t “pull tricks in figures” or anything like this. I don’t know what percentage of scientists we’re talking about here, but I don’t think this is what the best scientists do. And I certainly don’t think it’s ok to do so.

What I’m saying is, I think Levitt is doing a big service by publicly recognizing that scientists sometimes–often?–engage in unethical behavior such as hiding data. But I’m unhappy with the sense of amused, world-weary tolerance that I get from reading his comment.

Anyway, I had a similar reaction a few years ago when reading a novel about scientific misconduct. The implication of the novel was that scientific lying and cheating wasn’t so bad, these guys are under a lot of pressure and they do what they can, etc. etc.–but I didn’t buy it. For the reasons given here, I think scientists who are brilliant are less likely to cheat.

4. Regarding Levitt’s specific example–his article on car seats that was rejected by medical journals–I wonder if he’s being too quick to assume that the journals were trying to keep his work out because it disagreed with previous findings.

As a scientist whose papers have been rejected by top journals in many different fields, I think I can offer a useful perspective here.

Much of what makes a paper acceptable is style. As a statistician, I’ve mastered the Journal of the American Statistical Association style and have published lots of papers there. But I’ve never successfully published a paper in political science or economics without having a collaborator in that field. There are just certain things that a journal expects to see. It may be comforting to think that a journal will not publish something “because of the result,” but my impression is that most journals like a bit of controversy–as long as it is presented in their style. I’m not surprised that, with his training, Levitt had more success publishing his public health work in econ journals.

P.S. Just to repeat, I’m speaking in general terms about scientific misbehavior, things such as, in Levitt’s words, “pulling tricks in figures” or “far worse things.” I’m not making a claim that the scientists at the University of East Anglia were doing this, or were not doing this, or whatever. I don’t think I have anything particularly useful to add on that; you can follow the links in Freakonomics to see more on that particular example.


  1. Ken K. says:

    Much more common than the scientific bias Levitt assumes he experienced is personal animus. As you say, scientists are people, and the history of good ideas being squelched because the editor doesn't like the author is very long.

    I also really resent and distrust reviewers who insist on cites to their own work. This is as bad as (or worse than) authors throwing in irrelevant cites to themselves.

  2. Andrew Gelman says:

    Ken: The personal stuff is bad, no doubt about it. Especially as it gets tied into scientific judgments. I don't like someone personally, so I think he's a sleazy scientist and his work is suspect. Or, worse: I don't like the area this guy works in, so I think he's a bad person. Or, even worse: This guy is an enemy of a friend of mine, therefore he's a bad guy and I think his work is bad.

    But I disagree with you about the references. One thing a reviewer knows very well is his or her own work, and one of a reviewer's most useful contributions can be pointing out the relevant literature to the author of a submitted article. Why exclude one's own work in these citations?

    To get back to Levitt's example: again, I don't know anything about the details, but I could well imagine that his paper was rejected by medical journals because of the presentation style, not because of personal animus or a disagreement about his research findings.

  3. Keith O'Rourke says:

    Following my rant yesterday about nothing really being new (i.e. Stigler's law) – it might be better for reviewers and editors to get more actively involved in "pointing out the relevant literature".

    A simple Google search on the phrase "Combining Biased and Unbiased Estimates" – and on multiple terms like meta-analysis, weights, biased estimates – found my (biased) favourite pre-2001 paper and online abstract ;-)

    Maybe the next time I am reviewing a paper that makes a claim of being new, or an advance, or _distinguished from this previous work_ (what the authors actually wrote), I'll do a quick Google search and append it to the review, asking them to do a more _systematic_ search.

    Or suggest the phrase – "this work is distinctive in not being widely discussed in the current literature"

    And for the record, I believe yesterday's paper is a positive contribution, and I am glad it has been published and drawn attention.


  4. Andrew Gelman says:

    Keith: All I can say is that referees, like bloggers, are doing an unpaid service.

  5. efrique says:


    While I dislike both and try to avoid mentioning my own work when acting as referee, sometimes it's unavoidable. I recently had the experience of acting as referee on two papers at the same time, both with the same pretty basic error (it's a mistake I expect my undergrad students to be able to explain clearly before they finish their degrees), but the only reference I could find that explained the issue directly was one of my own. (I didn't insist they refer to it, though – just that they read it so they could see that this has been out there for some time, understand why their papers were wrong, and – most importantly – fix them.)

    Sometimes it's very hard to avoid self-reference …

    [More annoying than both of those is authors who just can't be bothered to read around their field a bit (or even just shut up and think for five minutes), because they're too busy writing yet another paper that cites all their previous work – reference lists from which all but a few of the original authors their work barely builds on have slowly disappeared.]

  6. efrique says:

    er, my point being that basic ignorance of other related work is even worse than blatant self reference, in my eyes.

    I often find myself exclaiming "doesn't anyone else in this area *read* any more?"

  7. Malcolm Kass says:

    Granted, I am small potatoes, but I worked as a research assistant for a couple of professors and was told point-blank not to study a topic because people would not want to hear the answer. The topic was organic food. Fields like stats, physics, and engineering are far more cut and dried. But topics like economic behavior and other softer sciences are probably much more malleable and prone to influences outside the science.

  8. Barry says:

    I'd say that after Freakonomics (I and II), perhaps Levitt should stop trying to 'inform' us about anything.

  9. Keith O'Rourke says:

    Efrique – it _can_ be worse than just ignorance. It can be as bad as "let's keep those not _like_ us out of the literature." I think there is fairly good evidence of this for female authors (hopefully just in years past).

    Andrew is right that reviewers have to choose to spend their time on what they feel is most important.

    In one of my past reviews I spent a fair amount of effort getting Creasy's 1954 paper included in the references of a paper on ratios from paired data. (My reading of the literature at the time suggested to me that her work was being neglected for no good reason.)


  10. Andrew Gelman says:

    I used to write long reviews, now I write short reviews and put the more general comments on the blog.

  11. Teravel says:

    "I get particularly annoyed with scientists who ignore criticisms that they can't refute."

    Isn't this true for all people? No matter what business you are in, you should be able to accept constructive criticism in order to better yourself.