Jeff Lax points us to this news article by Carolyn Johnson discussing a research paper, “Firearm legislation and firearm mortality in the USA: a cross-sectional, state-level study,” by Bindu Kalesan, Matthew Mobily, Olivia Keiser, Jeffrey Fagan, and Sandro Galea, that just appeared in the medical journal The Lancet.
Here are the findings from Kalesan et al.’s article:
31 672 firearm-related deaths occurred in 2010 in the USA (10·1 per 100 000 people; mean state-specific count 631·5 [SD 629·1]). Of 25 firearm laws, nine were associated with reduced firearm mortality, nine were associated with increased firearm mortality, and seven had an inconclusive association. After adjustment for relevant covariates, the three state laws most strongly associated with reduced overall firearm mortality were universal background checks for firearm purchase (multivariable IRR 0·39 [95% CI 0·23–0·67]; p=0·001), ammunition background checks (0·18 [0·09–0·36]; p<0·0001), and identification requirement for firearms (0·16 [0·09–0·29]; p<0·0001). Projected federal-level implementation of universal background checks for firearm purchase could reduce national firearm mortality from 10·35 to 4·46 deaths per 100 000 people, background checks for ammunition purchase could reduce it to 1·99 per 100 000, and firearm identification to 1·81 per 100 000.
And here’s their key chart:
Johnson queried a couple of gun control experts who expressed skepticism:
“That’s too big — I don’t believe that,” said David Hemenway, a professor of health policy at the Harvard T.H. Chan School of Public Health. “These laws are not that strong . . .”
“Briefly, this is not a credible study and no cause and effect inferences should be made from it,” Daniel Webster, director of the Johns Hopkins Center for Gun Policy & Research wrote in an e-mail.
I credit Johnson for writing a skeptical story and her editor for giving a skeptical headline.
She also got this quote:
“What I find both puzzling and troubling is this very flawed piece of research is published in one of the most prestigious scientific journals around,” Webster said in an interview.
I too find it troubling when a very flawed piece of research is published in a prestigious scientific journal, but at that point I no longer find it puzzling. I accept that scientific journals—especially the most prestigious scientific journals—loooove the publicity. Here’s the Lancet’s home page right now:
As long as the politics are right (which will depend on the publication; what is suitable for the Lancet might be much different from what works in an econ journal) and there’s “p less than .05,” it’s all systems go.
But wait one second. Is the Kalesan et al. paper really a “very flawed piece of research”?
Based on the news article and a quick glance at the paper, I’d say yeah, it’s a joke. A regression with 25 predictors and 50 data points? I mean, sure, nothing wrong with looking at the data but please don’t take this as anything more than vaguely suggestive. Just by putting it in a high-profile journal you’re giving the analysis more weight than it can bear.
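To see why a regression with 25 predictors and only 50 state-level observations is so fragile, here's a minimal simulation. This is hypothetical data and plain least squares, not the paper's Poisson models: 25 binary "law" indicators with no true effect at all still soak up a large share of the variance in a pure-noise outcome, and the estimated coefficients swing around from one noise draw to the next.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_laws = 50, 25  # roughly the setup in Kalesan et al.

# Hypothetical data: 25 binary "law" indicators with NO true effect
X = rng.integers(0, 2, size=(n_states, n_laws)).astype(float)
y = rng.normal(size=n_states)  # outcome is pure noise

# OLS with an intercept: 26 parameters for 50 data points
X1 = np.column_stack([np.ones(n_states), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

# In-sample R^2: typically around 0.5 here, despite zero real signal
resid = y - X1 @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(f"in-sample R^2 on pure noise: {r2:.2f}")

# Refit on a fresh noise draw: the "effects" change completely
y2 = rng.normal(size=n_states)
beta2, *_ = np.linalg.lstsq(X1, y2, rcond=None)
print("largest |coef|, fit 1:", np.abs(beta[1:]).max().round(2))
print("largest |coef|, fit 2:", np.abs(beta2[1:]).max().round(2))
```

The point isn't that the paper did exactly this, only that with 26 parameters eating half the degrees of freedom, apparently strong associations are easy to get from nothing.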
But then I looked at that author list more carefully . . . Jeffrey Fagan! Sandro Galea! I know those guys! Jeff is a collaborator of mine and, while I’ve never worked with Sandro, he always struck me as a legitimate researcher. Did they really sign off on this? Maybe there’s something I’m missing here. I’ll send them an email and see what they say.
P.S. I could’ve emailed Jeff and Sandro first before posting, but in this case I think it’s fairest for me to post first and ask questions later. Then if it turns out I really did miss something important, my earlier mistake will be out in the open.