Peer review abuse flashback

Our recent discussion of the problems with peer review reminded me of this amusing/horrifying story from a few years ago, when some researchers noticed a data coding error in a published paper.

Once it was noticed, the error was obvious.

But the authors of the original paper had that never-back-down attitude. So instead of thanking these others for catching their mistake, they wrote:

When any call is made for the retraction of two peer-reviewed and published articles, the onus of proof is on the claimant and the duty of scientific care and caution is manifestly high. . . . We continue to stand by the analyses, findings and conclusions reported in our earlier publications.

Echoes of the notorious “We stand by Matt” episode.

Perhaps there’s a general rule here: when you feel you have to announce that you “stand by” something or someone, don’t. Just don’t.

My problem is not with the authors of that paper making a mistake—we all make mistakes, and some of those mistakes get into print—but with the authors hiding behind peer review.

It also reflects a disgraceful lack of interest in their own research topic.

If you think of scientific research as a game where the prizes are publication, renown, and tenured professorships—then yes, by all means, try to bury all criticism and fight so hard that nobody will ever want to mess with you again.

On the other hand, if you think of scientific research as a way to learn about reality, you should be thrilled when your expectations are confounded, when someone points out an error and you have to rethink. Sure, it’s disappointing, but out of such disappointments are new understandings made. Remember the fractal nature of scientific revolutions.

19 thoughts on “Peer review abuse flashback”

  1. “If you think of scientific research as a game…”

    yes, seems often to resemble general politics/elections game playing

    “Claim Everything, Concede Nothing, Until the Last Vote is Counted – and then Holler Fraud!”

    — George Washington Plunkitt (of NYC’s Tammany Hall)

  2. http://people.ucalgary.ca/~kibeom/Anderson%20Ones/ReplyII.pdf is cool, posting a p < 10^-76 value against the "no irrefutable proof" claim:

    “Irrefutable Proof” Finally, we note that Anderson and Ones (2008) use the phrase “irrefutable proof” in four places in their article, arguing that our evidence did not constitute irrefutable proof of a clerical error in the scoring of their data. We maintain that our evidence does indeed constitute irrefutable proof (recall, for example, the value of p < 10^-76). However, we also suggest that even if researchers were to believe that the evidence of errors in their data were not “irrefutable,” their sense of scientific responsibility should still compel them to retract their articles whenever they learn of evidence that casts such profound doubts on the integrity of those data.
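    For a sense of scale, here is a hypothetical sketch (not the reply's actual computation; the value of n is invented for illustration): a p-value below 10^-76 is roughly what you get when about 250 independent coin-flip-level events all come out the same way.

    ```python
    # Hypothetical illustration, not the linked reply's actual test: how extreme
    # is p < 10^-76? Suppose each of n independent data points matches a specific
    # recoding pattern with probability 1/2 under a chance explanation. Then the
    # probability that all n match by chance is 2**-n.
    n = 253          # hypothetical number of independently matching points
    p = 0.5 ** n     # probability of n matches under the null
    print(p)         # roughly 7e-77, i.e. below 10^-76
    ```

    At that magnitude, "the pattern arose by chance" is not a serious competing hypothesis, which is the reply's point about "irrefutable."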

  3. > On the other hand, if you think of scientific research as a way to learn about reality, you should be thrilled when your expectations are confounded, when someone points out an error and you have to rethink. Sure, it’s disappointing…

    I can see the direction you’re arguing, but I’m not normally “thrilled” and “disappointed” about the same thing in life. It hardly seems fair to _expect_ people to feel such a difficult combination of emotions.

    • A fully scientific community requires individuals to get over personal disappointments/egos so that we can all get less wrong together, as quickly as possible.

      Pleading hardship is no defense in science ;-)

  4. Hiding behind having made it through peer review once to avoid further scrutiny is like claiming we can’t judge Robert Durst or OJ Simpson because they squeezed through the system once. If your best defense is essentially that double jeopardy should protect you . . . you probably did it.

  5. Do I understand correctly that a putative data entry error was noticed 5 years after publication, then 5 years later the correction/retraction process was finally wrapped up (apparently not to the satisfaction of those who noticed the error)?

  6. If the cost of a mistake is so high, then people are going to try to get out of it. Perhaps we need to reduce the cost of making mistakes by not *retracting* the original paper but *amending* it – which would involve another (smaller) publication in the same journal around fixing the mistake. Then they’d get two publications – one of them an amendment, so the record isn’t completely fault free. And maybe it could have a name like “An amendment to the paper…” (with different names based on the degree of wrongness, e.g. tweak, rectification, etc.).

  7. Prof, have you come across this thread?

    http://www.econjobrumors.com/topic/new-family-ruptures-aer-nber-is-rip-off-of-obscure-paper

    Long story short, the authors of a forthcoming AER paper claimed novelty for their research question and identification strategy, when both had already been addressed in the literature. The authors of the AER paper “conveniently” overlooked the prior research that would have made their claim of novelty look ridiculous.

    Waiting to hear your take on this.

    • ^This one is quite outrageous. They completely ignored the pre-existing literature of nearly identical studies. Then they were cited in Barack Obama’s “Economic Report of the President”.

    • Anon:

      Twice people have pointed out fatal errors in my own papers. In each case I thanked the person for finding the error and I published a correction in the journal. It’s not hard at all!

        • Rahul:

          Don Green retracted his paper with Lacour on persuasion, and in a much more embarrassing situation! Now, you might say that Don did so only because he had to, but as we’ve seen in so many examples, people dig in and refuse to retract, all the time. Don admitted the error, and the field of political science moved on.

          And there was another example I think I remember, from psychology, where someone, maybe it was Simonsohn or Nosek or Wagenmakers, conducted an unsuccessful replication of a study, and the original authors accepted that their first finding had been spurious.

        • I don’t think Don Green’s retraction counts as a good example here for the typical situation. The guy had hardly any choice. He was cornered.

          It’s sort of like the CEOs who, after some unsavory internal incident, are given a choice by a generous board to voluntarily step down “to spend more time with their family” rather than be publicly fired.

          In hindsight Don’s retraction was rather clever and well-timed because it did deflect most of the blame to Lacour successfully & Don Green came out relatively unharmed as the poor innocent trusting Prof. who was duped.

        • Rahul:

          Sure, Don was cornered. But so were Anderson and Ones (the authors of that paper discussed in the above post), so was Matthew Whitaker, so was Ed Wegman, so was David Brooks, so was Laurence Tribe, so was Mark Hauser. For that matter, so was Michael Lacour. But none of them ever admitted any error. I have no idea what went on behind the scenes in the Lacour and Green paper, but I think it’s good that Green retracted, and I think Lacour should’ve done the same.

        • Jokes aside, I think how one reacts depends on a lot of variables, including how senior in your career you are, tenure / non-tenure, the number of papers you publish in a year (e.g. I suppose someone publishing 20 papers a year reacts more calmly to a takedown of one paper), the culture of a field (e.g. Comp Sci & Math seem to take criticism far more gracefully, perhaps because it tends to be more objective), the academic culture of a country, etc.

  8. Prof Gelman, I agree with you, but it seems to me that you are too quick to assume that “challengers” are correct and that the original paper is wrong. Just because someone claims to have found a problem with a paper does not mean they are correct. Sometimes they are, sometimes they make a mountain out of a molehill, and sometimes they just don’t understand the paper, replicate it incorrectly, and interpret the different results as proof that the original paper is wrong.

    I agree that the profession is wrong to treat any published paper as absolute truth, and that we can always use more replication and more discussion post-publication, but I think we need to take a balanced view of this difficult issue.

    • Jack:

      It depends on the case. The example given above seems pretty clear, especially since the original authors did respond and they had no explanation for the data pattern (other than the obvious answer that it was a coding error, but which they refused to consider). People like those authors, or the ESP guy, or the power-pose researchers, or the himmicanes researchers, etc., who refuse to admit that they might have made a mistake . . . these people poison the well for everyone.

      • Yes, I agree that in many cases it seems clear the original findings are either wrong or much weaker than claimed, and refusing to discuss them is bad behavior – and ad hominem responses are even worse. But the incentives are badly aligned. No surprise they don’t fess up.

        I just want people to remember that claims of errors should themselves be subject to scrutiny. They don’t get a free pass.
