Hurricanes vs. Himmicanes

The story’s on the sister blog and I quote liberally from Jeremy Freese, who wrote:

The authors have issued a statement that argues against some criticisms of their study that others have offered. These are irrelevant to the above observations, as I [Freese] am taking everything about the measurement and model specification at their word; my starting point is the model that fully replicates the analyses that they themselves published.

A qualification is that one of their comments is that they deny they are making any claims about the importance of other factors that kill people in hurricanes. But they are. If you claim that 27 out of the 42 deaths in Hurricane Eloise would have been prevented had it been named Hurricane Charley, that is indeed a claim that diminishes the potential importance of other causes of deaths in that hurricane.

Freese also raises an important general issue in science communication:

The authors’ university issued a press release with a dramatic presentation of results. The release includes quotes from authors and a photo, as well as a quote from a prominent social psychologist calling the study “proof positive.” So this isn’t something that the media just stumbled across and made viral. My view is that when researchers actively seek media attention for dramatic claims about real deaths, they make their work available for especial scrutiny by others.

As a coda that may or may not be relevant to the case at hand, I will confess that I [Freese] have become especially impatient with the two-step in which a breathless set of claims about findings is provided in a press release, but then the authors backtrack when talking to other scientists about how of course this is just one study and of course more work needs to be done. In particular, I have lost patience with the idea the media are to blame for extreme presentations of scientists’ work, when extreme presentations of the scientists’ work are distributed to the media by the scientists’ employers [emphasis in the original].

As the saying goes, +1. The news media are how we hear about these studies (and indeed I’m contributing to that now), but the tabloid science journals such as PNAS give researchers incentives to engage in hype so as to get their papers published, and of course once a paper is published, with whatever errors it happens to contain, researchers have an understandable tendency to hang tough and not acknowledge problems with their claims. The underlying statistical issues are tricky, so when researchers don’t see a problem with their work, part of it can be simple misunderstanding of subtle statistical principles that have only recently been studied carefully.

I pointed Freese to my post and he replied:

Alas, if only your namesake hurricane had stayed farther south, all this could have been avoided.

Which made me think: Hurricane Andrew was pretty bad, and Hurricane Drew might have been similar, but if it had been named Andy it could’ve been a real killer. And if they’d called it Andi . . . well, don’t even think about it.

35 thoughts on “Hurricanes vs. Himmicanes”

  1. (1) Your risk-perception colleague said “none of them [blog commentaries] presents a particularly meaningful criticism (just lots of indignation & ridicule)”. There’s a nice analysis by Bob O’Hara at http://rpubs.com/oharar/19171 showing that a nonlinear effect of damage is substitutable for the name-gender effect … (2) I think it’s interesting that the press release pointed to by one of the commenters on the sister blog, http://www.eurekalert.org/multimedia/pub/73875.php?from=268664 , does not include the fourth author, Joseph Hilbe, who is a statistician (and has always struck me as a reasonable/thoughtful person in the stuff of his that I’ve read previously).

    • Oops. I meant to say that the photo doesn’t include co-author Hilbe. The press release does say (at the bottom) that Hilbe was a co-author, and presumably he’s not in the photo since he’s at Arizona State rather than UIUC.

    • I liked the comment at the sister blog, pointing out that it’s no surprise that the paper was marketed well, given that the authors are a student and professor of marketing!

      • A couple of these papers have corresponding authors who are grad students, who might feel that publicizing papers will have no negative consequences and might feel pressured into doing press releases. (At places I’ve been there was pressure, but we got to review the releases beforehand; on one I corrected it and was then told it wasn’t exciting enough for a press release.) Do you think there is more pressure now on people to get their papers publicized than in the past?

        • Dan:

          I dunno. I love publicity and I always have, but I’ve often had trouble getting the publicity I want. For example, in October 1992 I did some calculations and graphs with Gary King, using a multilevel model extending some ideas of Steven Rosenstone and James Campbell, to predict the presidential election state by state, computing things like the probability that the election would be tied, the probability a single vote would be decisive, the probability that Clinton would get X number of electoral votes, etc. But Gary and I had no idea of how to publicize this work. We had to wait almost two decades for this stuff to catch on with the media, and of course by that point, everybody was doing it.

          So I admire people such as these hurricane guys for being able to get attention for their work. What I don’t admire is what irritated Freese, that once legitimate problems were pointed out in their work, the authors didn’t admit it. Sure, they probably don’t know a lot of statistics so they might not have realized at first that they’d made mistakes. But at some point they should have gotten a clue that maybe they weren’t so clueful, and then gone from there.

        • Right, I didn’t realize that. I’m … surprised, I guess, as he was presumably responsible for the count data modeling, since it seems to be an area of expertise of his.

  2. In the abstract they write:

    “We use more than six decades of death rates from US hurricanes to show that feminine-named hurricanes cause significantly more deaths than do masculine-named hurricanes.”

    I looked in the paper + supps for a plot of “name-masculinity” vs “# of deaths” and couldn’t find it. It looks like there were four feminine-named hurricanes that caused high death rates. Did those hurricanes have anything in common?

    http://s4.postimg.org/dli95ne4d/hurr.png

    • Regardless of the way this info was brought to our attention, what do those four hurricanes have in common? That seems to be the important aspect of the data. If it’s obvious to those who study this stuff please say so.

  3. I didn’t feel like getting my own work done the other day, and someone on reddit had posted a link to the data, so I got the original paper and poked around a bit. I found (a) that it was easy to find models that fit better than the authors’ preferred model and in which name-femininity was not statistically significant, and (b) that two particularly deadly female-named hurricanes seem to be driving the whole effect (i.e., if you leave the two most deadly hurricanes in the set out, there does not seem to be any relationship between name-femininity and deaths, even using the authors’ preferred model).

    Here’s more detail in a comment I wrote in a reddit thread (including R code), and here’s the data (xlsx format).

    • “(b) that two particularly deadly female-named hurricanes seem to be driving the whole effect”

      Why do you say two? There are four that caused more deaths than any male-named hurricane. Other than that, I agree: this is indistinguishable from a coincidence.

      I didn’t bother to look at their model since the failure to publish a plot of the evidence for their fundamental claim told me no one involved (authors + reviewers) was thinking clearly. I applaud them for including the data so that we can analyze it for ourselves.

      • Sorry I wasn’t clear. You’re right that there are four female-named hurricanes with larger death tolls than any other storms. I fit the authors’ model on data sets excluding all four, the three most extreme, the two most extreme, and just the one most extreme; name-femininity was not statistically significant in the models that kept only the third, or the third and fourth, most deadly storms. So, even though there are four with greater death tolls, the two really extreme ones have to be in the model to get the “effect” in question to show up.

        • Noah,

          What if you drop the four datapoints in question but add in Katrina (give it a masculinity index of 9 or so)? Is a single “outlier” sufficient to get the effect using their model?

        • Hmmm… I haven’t checked, but it wouldn’t surprise me if it’s enough. The number for Katrina is enormous, isn’t it? 1800+, if I’m remembering correctly.

        • Noah,

          I saw it scanning through a link from your reddit post. Without looking at the model at all, I would guess this is just another case of “significant” deviation from an implausible model taken as evidence for a theory incapable of precise prediction.
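The leave-out sensitivity check described in this thread can be sketched in a few lines. This is a toy illustration with invented death counts (not the paper’s data), constructed so that two extreme feminine-named storms drive the apparent gap, mirroring what Noah found:

```python
# Toy leave-out check: compare mean deaths by name gender after
# dropping the k deadliest storms overall. Counts are invented.
storms = [
    ("F", 256), ("F", 200), ("F", 20), ("F", 15), ("F", 10), ("F", 5),
    ("M", 62), ("M", 50), ("M", 20), ("M", 15), ("M", 10), ("M", 5),
]

def gender_gap(storms, drop_top_k=0):
    """Mean deaths for female-named minus male-named storms after
    removing the drop_top_k deadliest storms overall."""
    kept = sorted(storms, key=lambda s: s[1], reverse=True)[drop_top_k:]
    def mean(g):
        vals = [d for sex, d in kept if sex == g]
        return sum(vals) / len(vals)
    return mean("F") - mean("M")

for k in range(4):
    print(k, round(gender_gap(storms, k), 1))
```

With these made-up numbers the gap is large with all storms included and vanishes (in fact reverses) once the two deadliest are dropped, which is exactly the kind of fragility being pointed out.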

  4. “I have lost patience with the idea…”
    Absolutely. I’ve said it before and (hotdang) I’ll say it again. We are responsible for communication as well as calculation. Handing over to the marketing office and walking away is not good. And the only thing worse than no media coverage is lots of media coverage (inverted Oscar Wilde for the 21st century). This study has however been the catalyst for a lot of timely introspection on science reporting.

  5. What was especially annoying (to me) about this paper was that their model didn’t even find an association between hurricane-gender and fatality rates! It was only the interaction between gender and severity (as measured by normalized damage and minimum pressure) that was found to be “significant.” In other words, on average there was no connection between gender and deaths, and for the majority of not-so-severe storms, female names were actually associated with lower death rates. It is only for the handful of highly-severe storms that female names were associated with higher death rates. Since there have been something like 5 highly-severe storms with male names in their “six decades” of data, this effect couldn’t possibly be a statistical fluke. I mean, the p-value was below 0.05!

    • OK, I see Freese’s post hits that point, and goes on to nail the remainder of my objections as well. Much better than most of the other objections and “debunkings” I saw in the press, which hit the correct tone of ridicule but had not looked at the data and the model very carefully.
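The interaction-only pattern described above is easy to see in a toy model. All coefficients below are invented for illustration; the point is just that a near-zero (here, slightly negative) main effect of name femininity plus a positive gender-by-severity interaction predicts fewer deaths for feminine names in mild storms and more in severe ones:

```python
import math

def expected_deaths(femininity, damage, b0=1.0, b_fem=-0.15,
                    b_dam=0.4, b_inter=0.08):
    """Toy log-link model:
    log E[deaths] = b0 + b_fem*f + b_dam*d + b_inter*f*d.
    Coefficients are made up, not the paper's estimates."""
    eta = (b0 + b_fem * femininity + b_dam * damage
           + b_inter * femininity * damage)
    return math.exp(eta)

# Ratio of expected deaths, very feminine name (9) vs masculine (2):
for damage in (1.0, 5.0):  # mild vs severe storm
    ratio = expected_deaths(9.0, damage) / expected_deaths(2.0, damage)
    print(damage, round(ratio, 2))
```

The sign of the femininity effect flips with severity (the per-unit effect is b_fem + b_inter*d), which is why the headline claim rests entirely on the interaction term and on the few severe storms in the data.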

  6. In the post-publication peer review of this article, the one thing I didn’t see addressed that I was curious about was the use of this predictor “normalized damage” (NDAM).

    It seems like a very informative calculation for catastrophe risk analysts to put past hurricane damages into the perspective of what the damages would be today, given present levels of location specific development, population density, wealth, etc.

    But I am confused as to whether or not that type of predictor makes sense in this study. Right now, I am leaning toward thinking that just putting the reported actual damages at the time of the storm in today’s dollars makes more sense to me.

    My reasoning is that including a proxy for how dense population and development were where the storm made landfall seems to make sense in a model for actual deaths, but including a proxy for how dense population and development are in that location now does not make sense to me.

    Does anyone here know if someone with expertise in weather events has examined this variable closely?

  7. Emanuel has estimated that damage from hurricanes varies as the third power of wind speed, so yeah, a few strong storms will dominate. Obviously there are other factors such as storm surge and where landfall is made.

    In their response linked by Jeremy Freese, the authors make the “Bill Gates walks into a room and the average income increases by a billion dollars” argument:

    “What’s more, looking only at severe hurricanes that hit in 1979 and afterwards (those above $1.65B median damage), 16 male named hurricane each caused 23 deaths on average whereas 14 female named hurricanes each caused 29 deaths on average”

    Given the over 1000 deaths from Katrina, these people are innumerate.
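Emanuel’s cube rule of thumb mentioned above makes the dominance of a few strong storms concrete. A minimal sketch (the cubic exponent is the only assumption, taken from the comment):

```python
# Rule of thumb: hurricane damage scales roughly with the cube of
# wind speed, so modest intensity differences dominate damage totals.
def relative_damage(v1, v2):
    """Ratio of expected damage at wind speed v2 vs v1, cubic scaling."""
    return (v2 / v1) ** 3

print(relative_damage(75, 150))  # doubling wind speed -> 8x damage
```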

    • Eli:

      It’s pretty wack that in their letter they summarize the differences with this average and then later write, “it is critical to apply the correct modeling technique when modeling count data such as fatalities. OLS regression is not an appropriate method.” It’s also naive of them to think of fatalities as “count data” given that, as you note, the cases we really care about are when the numbers are large enough that we can’t really get an exact count. But in this case they are making a common statistical mistake, which is to let the statistical analysis be driven by the form of the data rather than by the underlying questions.

      • Eli has a couple of suggestions to deal with the press release issue.

        Any grant renewal should have to attach the press releases the university issued about the research, and the project should be evaluated not only on its scientific impact but on the public impact stirred up.

        Write to the Communications Officer listed as contact on the press release. Remark about how they have dragged their university’s name into the mud. Point to some of the substantive refutations and maybe explain a bit about why they are substantive. This might affect the result when the same clowns come calling again.

        In this case, here is the press release in question.

      • Just wondering: which model would you deem appropriate for this kind of data, “when the numbers are large enough that we can’t really get an exact count”?
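One standard first diagnostic behind this question: a Poisson model assumes the variance of the counts is about equal to the mean, and hurricane fatality counts are far more dispersed than that, which is why people reach for negative binomial or heavy-tailed models instead. A sketch with invented death counts (not the paper’s data):

```python
# Overdispersion check on a toy fatality series: if variance >> mean,
# a Poisson model is a poor fit for these counts.
deaths = [1, 0, 3, 5, 2, 21, 15, 62, 200, 1, 4, 0, 7, 256, 9]

mean = sum(deaths) / len(deaths)
var = sum((d - mean) ** 2 for d in deaths) / (len(deaths) - 1)
print(round(var / mean, 1))  # dispersion ratio >> 1
```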

  8. Andrew:

    You blame the journals, but what about the other key party generating the hype that Freese mentions, i.e., the employers? How many times have we protested to our own university’s PR office or the dean about an over-hyped or misleading press release?

    Out of all the factors contributing to hype, aren’t the universities the ones academics should have the most control over?

    Sincere question to the academics on the forum: How often have you complained about a bad press release by your own institution?

  9. Even supposing that the study is completely correct, why would one think that it would be good to give all hurricanes masculine names? It is equally plausible to think one should give them all feminine names. After all, it’s not actually a good idea to treat every hurricane as if it is highly dangerous. There are other things to do in life than avoiding possible death in a hurricane (including other things that avoid death from other causes).

    • If the study were completely correct, disaster management authorities would presumably want to calibrate the scariness/perceived masculinity of names to the expected damage/danger level (although that might not be well known at the time of naming).

  10. Pingback: Hurricanes/himmicanes extra: Again with the problematic nature of the scientific publication process « Statistical Modeling, Causal Inference, and Social Science

  11. Pingback: Scientific one hit wonders (or, what’s the scientific equivalent of “Tainted Love”?) | Dynamic Ecology

  12. Pingback: The “scientific surprise” two-step – Statistical Modeling, Causal Inference, and Social Science

  13. Pingback: How to Educate Against Pseudoscience? — Joanne Jacobs
