Gary Smith sends along this news article from Jason Samenow, weather editor of the Washington Post, who writes:

Three years ago, a scientific study claimed that storms named Debby are predisposed to kill more people than storms named Don. The study alleged that people don’t take female-named storms as seriously. Numerous analyses have since found that this conclusion has little merit.

Good to see the record being corrected. But there’s more. Samenow continues:

When the study came out, I [Samenow] reported on it, and it generated tremendous interest. Because of the Internet and the tendency for interesting articles to continue circulating years later, the 2014 story is still being read by thousands of readers — even though its key results have largely been debunked. . . . just a day after the study was published by the Proceedings of the National Academy of Sciences, critics began poking holes in it. . . . In subsequent months, several study rebuttals were submitted to and published by the same journal that had published it originally . . . In addition to the formal rebuttals published by the journal, Gary Smith, a professor of economics at Pomona College, wrote a critique on his blog, in which he described “several compelling reasons” to be skeptical about the study results. . . . The blog critique was subsequently expanded in an article published in the journal Weather and Climate Extremes in June 2016.

Samenow concludes:

The publication of this study and the hype it generated (and continues to generate) show the potential pitfalls of coming to unqualified conclusions from a single study. It also reveals the importance of the peer review process, which encourages critical responses to new findings so that work that cannot withstand scrutiny does not endure.

Let me just emphasize that what really worked here was post-publication peer-review, an informal process using social media as well as traditional journals.

Good on Samenow for making this correction. It’s excellent to see a news editor taking responsibility for past stories. In contrast, Susan Fiske, the PPNAS editor who accepted that “himmicanes” paper and several others of comparable quality, has never acknowledged that there might be any problems with this work. Fiske lives in a traditional academic world in which all peer review occurs before publication, and where published results are protected by a sort of “thin blue line” of credentialed scientists — a world in which it’s considered OK for a published paper, no matter how ridiculous, to get extravagant publicity, but where any negative remarks, no matter how well sourced, are supposed to be whispered.

We all make mistakes, and I don’t hold it against Fiske that she published that flawed paper. Any of us can be fooled by crap research that is wrapped up in a fancy package. I know I’ve been fooled on occasion. What’s important is to learn from our mistakes.


  1. Keith O'Rourke says:

    This is my favorite phrase: “pitfalls of coming to unqualified conclusions from a single study.”

    Though rather than a conclusion “without reservation,” it might be better put as a conclusion “without confidence,” or even just a promising conjecture.

    Or a conjecture one should not ignore but may choose to disregard after further consideration.

  2. John Whelan says:

    I take this as another example of why posting to the arXiv upon journal submission is good for science. If you rely on formal peer review to catch mistakes, it’s all on the shoulders of a few (maybe only one) referees. If you let the community vet the work, the odds of finding a problem (e.g., in the BICEP2 results from a few years ago) before it becomes enshrined in a published, peer-reviewed paper are considerably higher.
