I received a few emails today on bloggable topics. Rather than expanding each response into a full post, I thought I’d just handle them all quickly.
1. Steve Roth asks what I think of this graph:
I replied: Interesting but perhaps misleading, as of course any estimate of elasticity of -20 or +5 or whatever is just crap, and so the real question is what is happening in the more reasonable range.
2. One of my General Social Survey colleagues pointed me to this report, writing “FYI – some interesting new results, using the linked GSS-NDI data. I trust this study will raise some heated discussions.” Another colleague wrote, “I’m rather skeptical of the result but at least they spelled GSS right.”
The topic is a paper, “Anti-Gay Prejudice and All-Cause Mortality Among Heterosexuals in the United States,” published by Mark Hatzenbuehler, Anna Bellatorre, and Peter Muennig.
My reaction: Yes, it seems ludicrous to me. Especially this:
The researchers wanted to make sure they were really seeing a link between earlier death and anti-gay prejudice — not something else that might be associated with being anti-gay — so they controlled for variables that could have confounded the results, including age, income, education, marital status, gender, religiosity, and even racial prejudice.
I love the way, in a study of “who had died by the end of the study period,” the factor “age” is just listed as one among many control variables. Of course the results will be completely sensitive to the modeling of age. I can’t be sure but it looks like they just included age as a linear factor. Given the huge variation of anti-gay prejudice by age and cohort, I can’t take this sort of analysis at all seriously. I’d think the American Journal of Public Health could do better than this!
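To see why a linear age term is not enough, here’s a toy simulation of my own (entirely made-up numbers, not the paper’s data or model): both mortality and prejudice are made to rise nonlinearly with age, and prejudice has no causal effect on death at all. Controlling for age with a single linear term still produces a sizable spurious “prejudice effect,” while controlling with 5-year cohort dummies makes it mostly go away.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical setup: mortality risk and anti-gay prejudice both rise
# nonlinearly with age.  Prejudice has NO causal effect on death here.
age = rng.uniform(20, 80, n)
x = (age - 20) / 60                              # age rescaled to [0, 1]
prejudice = (rng.random(n) < x**2).astype(float) # more common in older cohorts
death = (rng.random(n) < 0.9 * x**4).astype(float)  # rises even faster with age

def coef_of(cols, y, idx):
    """Ordinary least squares; return the coefficient of column idx."""
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[idx]

# "Controlling for age" as a single linear term:
b_linear = coef_of([np.ones(n), age, prejudice], death, 2)

# Controlling for age flexibly, with 5-year cohort dummies (12 bins):
bins = np.minimum(((age - 20) // 5).astype(int), 11)
dummies = (bins[:, None] == np.arange(12)).astype(float)
b_flex = coef_of([dummies, prejudice], death, 12)

print(f"spurious prejudice 'effect', linear age control: {b_linear:.3f}")
print(f"prejudice 'effect', 5-year age dummies:          {b_flex:.3f}")
```

The linear-age specification attributes several percentage points of mortality to prejudice purely because the two nonlinear age trends get tangled together; with cohort dummies the coefficient shrinks toward zero. The exact numbers are artifacts of this made-up setup, but the sensitivity to how age is modeled is the point.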
3. Somebody else wrote:
My colleague sent me this ridiculous article. It might not pass the smell test, and it might be silly for you to write about an obviously crap paper. But it was published in a peer-reviewed journal and mentioned in a reputable newspaper, and it is right along the lines of what you’ve been writing about recently. So what do you think?
The news article is entitled, “Physicists are more intelligent than social scientists, paper says,” and continues: “The difference is statistically significant only for physics and political science. But the paper’s co-author, Edward Dutton, adjunct professor (docent) in anthropology at the University of Oulu in Finland, said that the smaller differences between other subjects ‘went the same way’ . . . Dr Dutton admitted that a ‘niggle’ of doubt remained, which required replication with a larger sample to eliminate. However, many data problems that he had anticipated ‘didn’t seem to be that problematic’ when the paper was peer-reviewed.”
A “niggle,” huh? Seems a bit early to break out the N-word, no?
Also this weird bit, “‘Intelligence and religious and political differences among members of the US academic elite’, published in the Interdisciplinary Journal of Research on Religion, draws primarily on a 1967 study of 148 male academics at the University of Cambridge. . . .” Huh?
In any case, sure, I agree that physicists are more intelligent than social scientists, on average. That’s obvious. You don’t need to analyze a 47-year-old survey to tell me that! But the part I really loved was when the author of the study said he didn’t trust his statistics until they were peer-reviewed! That’s pretty scary.
4. I received the following press release in the inbox:
Converge Consulting Group Inc. Partner Robert Gerst will be presenting What Matters: How statistical significance demolishes productivity, competitiveness, and effective public policy at the 20th Annual International Deming Research Seminar . . .
This year marks the twentieth anniversary of The Bell Curve, a publishing phenomenon claiming to provide scientific evidence of significant differences in intelligence among human races. . . . Gerst details the statistical confidence trick of The Bell Curve that so successfully duped the public, leading media and news outlets . . .
Gerst will show how this same confidence trick, misusing statistical significance to mean practical importance, is corrupting: (i) academic research in psychology, biology, ecology, education, health, economics, medicine, (ii) business analysis, including market surveys, customer research, process improvement efforts, operational & organizational analyses, employee engagement research, and (iii) government studies, including public accountability reporting, policy research, and program evaluation.
“Executives and HR departments, for example, use the same junk science as The Bell Curve to measure employee engagement and improve productivity, and then wonder why productivity and engagement drop,” said Gerst. “Billions have been spent on Value Added Assessments in education, Six Sigma programs in business, performance evaluation and accountability reporting in government. They’re playing the same statistical con as The Bell Curve and getting the same quality of results.”
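Gerst’s core complaint, that statistical significance gets misread as practical importance, is easy to demonstrate. Here’s a quick sketch of my own (not from his talk): with a big enough sample, a difference far too small to matter for any practical purpose still yields a vanishingly small p-value.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(1)
n = 1_000_000

# Two groups whose true means differ by 0.02 standard deviations --
# a difference of essentially no practical importance.
a = rng.normal(0.00, 1.0, n)
b = rng.normal(0.02, 1.0, n)

diff = b.mean() - a.mean()
se = sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
z = diff / se
p = erfc(abs(z) / sqrt(2))   # two-sided p-value for a z-test

print(f"difference = {diff:.4f} SD, z = {z:.1f}, p = {p:.2g}")
```

The difference is “statistically significant” at any conventional threshold, yet it would be invisible in any real-world decision. Significance answers “is the difference distinguishable from zero given this sample size,” not “does the difference matter.”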