
Of Beauty, Sex, and Power

Our article has appeared in The American Scientist. (Here’s a link to the full article; hit control-plus to make the font more readable.) I highly recommend it for your introductory (or advanced) statistics classes. We start with a silly story of a flawed statistical analysis of sex ratios that managed to sneak into a serious scientific journal, then discuss general issues of how to interpret inconclusive statistical findings (including a brief analysis of data from People Magazine’s 50 Most Beautiful People lists), and then loop back and discuss the statistical reasons that exaggerated claims can get amplified by the news media.

[Figure 4 from the article]

The article begins as follows:

In the past few years, Satoshi Kanazawa, a reader in management and research methodology at the London School of Economics, published a series of papers in the Journal of Theoretical Biology with titles such as “Big and Tall Parents Have More Sons” (2005), “Violent Men Have More Sons” (2006), “Engineers Have More Sons, Nurses Have More Daughters” (2005), and “Beautiful Parents Have More Daughters” (2007). More recently, he has publicized some of these claims in an article, “10 Politically Incorrect Truths About Human Nature,” for Psychology Today and in a book written with Alan S. Miller, Why Beautiful People Have More Daughters.

However, the statistical analysis underlying Kanazawa’s claims has been shown to have basic flaws, with some of his analyses making the error of controlling for an intermediate outcome in estimating a causal effect, and another analysis being subject to multiple-comparisons problems. These are technical errors (about which more later) that produce misleading results. In short, Kanazawa’s findings are not statistically significant, and the patterns he analyzed could well have occurred by chance. Had the lack of statistical significance been noticed in the review process, these articles would almost certainly not have been published in the journal. The fact of their appearance (and their prominence in the media and a popular book) leads to an interesting statistical question: How should we think about research findings that are intriguing but not statistically significant? . . .
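The multiple-comparisons problem mentioned above can be illustrated with a short simulation. All the numbers here (a five-category attractiveness scale, 300 families per group, a baseline daughter probability of 0.485) are assumptions chosen for illustration, not the data from Kanazawa's papers:

```python
import random

random.seed(0)

# Null simulation: parents fall into five groups (say, attractiveness
# rated 1-5), and each group's proportion of daughters is tested
# against a fixed baseline. There is no real effect anywhere.
P_GIRL = 0.485        # assumed baseline probability of a daughter
GROUPS = 5
N = 300               # assumed families per group
SE = (P_GIRL * (1 - P_GIRL) / N) ** 0.5

def max_abs_z():
    """One simulated survey under pure chance: test each group
    against the baseline and return the largest |z|-statistic."""
    zs = []
    for _ in range(GROUPS):
        girls = sum(random.random() < P_GIRL for _ in range(N))
        zs.append(abs(girls / N - P_GIRL) / SE)
    return max(zs)

# How often does chance alone make at least one group look
# "significant" at the conventional |z| > 1.96 cutoff?
surveys = 2000
false_alarms = sum(max_abs_z() > 1.96 for _ in range(surveys))
print(false_alarms / surveys)  # roughly 1 - 0.95**5, about 0.23
```

Even with no real effect in any group, singling out whichever comparison happens to stand out yields a "significant" finding in roughly a quarter of such surveys. That is the sense in which a pattern like "the most beautiful parents have more daughters" could well have occurred by chance.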

We also discuss "Why Is This Important?" and "Why Is This Not Obvious?"

[Figure 2 from the article]

[Figure 5 from the article]

7 Comments

  1. Anonymous says:

    If only we were allowed access to this article for free…

  2. anonymous says:

    My wife is a nurse, I am an engineer. She is short, I am tall. I am not violent, although I have broken dishes in a tantrum.

    Our respective mothers, and our younger children, think we are beautiful, but I doubt many other people.

    We have two sons and two daughters. How about that!

  3. Alex Cook says:

    The impact factors in the article are perhaps a little dated. Just checked Thomson Reuters and:
    2.5 JTB vs
    2.4 JASA
    2.8 JRSSB
    2.3 Ann Stat

    Also, the article influence metrics of the stats trio are all 3+ times higher than JTB. My understanding is that the influence metric is a better measure of article impact than the impact factor itself.

  4. Sergio says:

    Graphs like this should be standard in statistical packages and econometric software as well.

  5. jonathan says:

    Enjoyable. I have two daughters because my wife and I – particularly my wife – are very good looking. (Ha!) My ugly brothers have all sons.

    The article does a nice job of demonstrating that urge to assign value to results. I have this kind of argument all the time. My dad, who was a radiologist, clued me in on it when I was like 7 or 8, back when LBJ was President. The national news reported a leukemia cluster. When I said that's only a few people, my dad told me that the 5 or 7 was way too high but that it was probably an artifact. After he finished yelling at Walter Cronkite – no, that's not the way it is, Walter! – he explained that we think stuff like rare leukemia would spread evenly but it clusters and if you happen to measure a cluster you get more. Measure again and you might get nothing. He explained there might be a real cause for a clump or it might be chance.

    The kind of errors you write about usually have limited scientific importance – because most work has limited importance and time reveals the useful truths. But they do have important policy significance, which has real cost.
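The clustering intuition in jonathan's anecdote can be sketched in a few lines. The numbers (2,000 cases of a rare disease scattered over 1,000 towns) are invented for illustration: even when cases are assigned to locations completely at random, with no local cause, the single worst location looks far above the average rate.

```python
import random

random.seed(1)

# Hypothetical numbers: 2000 cases of a rare disease scattered
# at random over 1000 towns, so the average is 2 cases per town.
CASES, TOWNS = 2000, 1000

counts = [0] * TOWNS
for _ in range(CASES):
    counts[random.randrange(TOWNS)] += 1   # pure chance, no local cause

print(max(counts))   # the worst town is typically several times the mean
```

Rerun with a fresh seed and the "cluster" moves to a different town, which is exactly the point: an extreme count in one place is, by itself, weak evidence of a local cause.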

  6. Fantastic article and amazing story. This really should be required reading for anyone who does statistical work in their research.

  7. Sandy says:

    Very interesting; statistics can be very misleading.