No, I don’t believe that “Reduction in Firearm Injuries during NRA Annual Conventions” story

David Palmer writes:

If you need yet another study to look at, check this out: “Reduction in Firearm Injuries during NRA Annual Conventions.”

<<1% of gun owners attend a convention, and gun injuries drop 20%.

Sounds fishy to me. It was published in the prestigious, practically-impossible-to-publish-in New England Journal of Medicine . . . but (a) these super-selective medical journals are so selective that whatever they do publish is a bit of a crapshoot, (b) medical and public health journals seem to have a soft spot for research that has a (U.S.-style) liberal story (for example here and here), or (c) maybe just anything that can grab a headline (for example here).

Here’s the summary data from this gun injury paper:

And here’s the adjusted version:

The standard errors in the two plots are different but the point estimates appear to be identical. I assume there was some mistake in the preparation of one or both of these graphs. In any case, the standard errors hardly matter as you can see the variation from point to point. I’d just like to see each year and each day rather than these aggregates.

The real problem, though, is what was identified by Palmer, above: the idea that a sequestering of less than 1% of gun owners would lead to a 20% drop in gun injuries. Much more direct to observe that the rate of gun injuries varies a lot from day to day, and from week to week, and from year to year, for all sorts of reasons—and those sources of variation don’t happen to cancel out, over the 9-year period of this study.

What I’d really like to see are the data for all 365 or 366 days of every year, and then we could do this sort of analysis. In the case of the gun injuries data, I don’t know how hard it would be to get these data. Once they went to the trouble of getting the numbers used in the article, do the data for all the other days of the year just come for free? Or would you need to do an equal amount of effort for every new day in the dataset? I guess it would help to get a sense of how much it would cost to put together the data for every day of the year.
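
To make “this sort of analysis” concrete, here’s a minimal sketch in Python of what I have in mind, using purely made-up daily counts and made-up convention dates as placeholders for the real data: compute the drop for the convention windows relative to nearby days, then compare it to the same statistic computed for every other possible window.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up daily national injury counts for 9 years; the real daily data would go here.
n_days = 9 * 365
daily = rng.poisson(lam=250, size=n_days)

# Made-up convention dates: one 3-day window per year.
convention_starts = [100 + 365 * y for y in range(9)]

def drop_vs_neighbors(counts, start, length=3, pad=21):
    """Relative drop in a window compared with the surrounding three weeks."""
    window = counts[start:start + length].mean()
    baseline = np.concatenate([counts[start - pad:start],
                               counts[start + length:start + length + pad]]).mean()
    return (baseline - window) / baseline

observed = np.mean([drop_vs_neighbors(daily, s) for s in convention_starts])

# Reference distribution: the same statistic for every other possible window.
placebo_starts = [s for s in range(21, n_days - 24)
                  if all(abs(s - c) > 21 for c in convention_starts)]
placebo = np.array([drop_vs_neighbors(daily, s) for s in placebo_starts])

print(f"average convention-window drop: {observed:.3f}")
print(f"fraction of other windows with a drop at least as large: "
      f"{np.mean(placebo >= observed):.3f}")
```

If the convention-window drop sits well inside the distribution of placebo-window drops, then the ordinary day-to-day variation swamps the claimed effect.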

In the meantime it would be helpful for the authors to share all their raw data. I know this isn’t standard practice—I don’t share the raw data for most of my papers, not because I’m trying to hide anything but because it would just seem like too much trouble. We should all just be moving toward a norm of full data sharing. In this particular case, the key data are something like 189 numbers (9 years x 7 weeks x 3 days during the week); that would be a start.

The journal article (and also the news reports) picks up some interactions, but given that the main effects are iffy, I don’t think we need to even really think about the interactions at all. For that matter, I don’t believe any of the standard errors either (there’s some super-complicated regression procedure with many different knobs to be set, but really there are only 7 independent data points per year, as can be seen in the above graph).

A quick search turned up this news article from Scientific American which said, “The design of this study only identifies associations, not precise cause-and-effect relationships, and so is unable to ascertain that the observed injury drop on convention days came about because NRA members are not using their weapons. But several study details support this explanation. . . .” So I think I should clarify something. Yes, correlation does not imply causation in this sort of observational study. But, beyond that, the correlations themselves are not so clear, and the effect sizes just don’t line up.

You can look at this another way. Suppose that you might expect the NRA convention to actually be causing a 1% decline in gun injuries. Then you can work through a design analysis and find that you’re in the notorious “power = .06” situation, i.e. the kangaroo problem. Your study is dead on arrival.
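
For those who want to see this design analysis spelled out, here’s a minimal sketch in Python, in the spirit of the retrodesign calculations that John Carlin and I have written about. The true effect (a 1% decline) and the standard error are illustrative assumptions, not numbers taken from the paper.

```python
import numpy as np
from scipy.stats import norm

def retrodesign(true_effect, se, alpha=0.05, n_sims=100_000, seed=1):
    """Power, Type S error rate, and exaggeration ratio for a simple z-test."""
    z = norm.ppf(1 - alpha / 2)
    delta = true_effect / se
    p_right_sign = 1 - norm.cdf(z - delta)   # significant with the correct sign
    p_wrong_sign = norm.cdf(-z - delta)      # significant with the wrong sign
    power = p_right_sign + p_wrong_sign
    type_s = p_wrong_sign / power
    # Expected exaggeration among statistically significant estimates, by simulation.
    draws = true_effect + se * np.random.default_rng(seed).standard_normal(n_sims)
    significant = np.abs(draws / se) > z
    exaggeration = np.mean(np.abs(draws[significant])) / true_effect
    return power, type_s, exaggeration

# Illustrative assumptions: true effect is a 1% decline, standard error is 3 points.
power, type_s, exaggeration = retrodesign(true_effect=0.01, se=0.03)
print(f"power = {power:.2f}, Type S = {type_s:.2f}, exaggeration factor = {exaggeration:.1f}")
```

With numbers like these, the power comes out around 0.06, any statistically significant estimate will be a large overestimate, and even its sign is not reliable.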

That said, I could be wrong. The research article does give some reasons why such a huge effect could be possible. And there’s nothing wrong with opening up some data and publishing what you’ve found. The analysis has a bunch of iffy steps, but that’s ok too, as long as you share your raw data so other people can do their own analyses. In this day and age, there’s no real excuse for not sharing the data. Again, I haven’t been always so good at making my own raw data accessible, so I say all this not as a criticism of the authors of the above paper but rather as a suggested step forward.

To return to a common theme: Publication of interesting data is a great thing to do. The mistake is to take a data pattern too seriously, just because:

(1) Some researcher somewhere did some analysis resulting in “p less than 0.05,” and

(2) Some legitimate journal somewhere happened to publish this result.

We know now that (1) there are lots and lots of ways to get “p less than 0.05” from noise alone, or, more precisely, from patterns that are so highly variable and context dependent as to be essentially uninterpretable given available data, and (2) journals publish all sorts of iffy claims all the time.

So let’s try to move on, already. Please. My point is not to slam the authors of the above article, who put in some hard work in data collection and analysis and were surely doing their best. My problem is with the system of scientific publication and publicity, in which these sorts of speculative analyses are uncritically hyped.

86 thoughts on “No, I don’t believe that ‘Reduction in Firearm Injuries during NRA Annual Conventions’ story”

  1. Hi Andy,

    Here’s my hesitation on publishing raw data. Sometimes it takes me an enormous amount of effort to get the raw data and I’m planning multiple articles out of it. If I publish the raw data for the first article then I’m giving all that effort away for free to others who could then scoop me on the next articles. I get that that would be good for science but the fact that it would be bad for me would give me hesitation, especially being pre-tenure.

    – Sarah

    • Sarah:

      I agree this is a problem, not just the concern of losing future publications but, even simpler, the effort it takes to put together a clean dataset. My proposed solution is to change the incentives by changing the conventions of citations. The idea is that you could publish just a dataset, or just an experimental design, then other people who use your dataset or your design can cite you. Currently journals like to publish papers with splashy findings. If instead they’d publish interesting designs, or interesting datasets, on their own, we could have the best of all worlds, I’d hope. Less incentive to exaggerate claims, more incentive to share data.

      • I love this idea! Especially since researchers choose projects based on the importance of the question, the quality of the data and design and the feasibility of the project – not on the results. Maybe it’s time to start your own journal, Andy.

      • I like this idea a lot, but are citations a sufficient incentive compared with extra publications? If someone scoops you on an idea, you trade one publication (and potentially all the existing work that went into it) for one citation. If you really want to change things, you would want to somehow upweight the data citations (you just need to convince Google Scholar).

        • I think he means that the design+data would itself be a separate publication. This is actually what happens when data is archived at e.g. ICPSR; people cite the data.

    • Well, first of all you should want other people to replicate your work. Besides being a crucial step of the scientific method, it should give you confidence you are onto something.

      Second, what reason does anyone have to pay attention to you? Apparently you want to prevent them from double checking your analysis… Did you describe the data collection/curation process well enough so that someone else can repeat the entire thing and get the same final dataset? Did you make some prediction about the future they can check?

      If no one cares about any of that, are you sure you are in a scientific field at all?

      Also, who funded all this work you are hesitant to give away “for free”? Did you fund it yourself or did someone else pay you to do it?

        • Thanks, Corey! The answers are obvious, especially if you know me. I appreciate you recognizing that the incentives, and not my character, are the problem. I also appreciate you using your real name.

        • I have found myself in similar places regarding public release of data. I intensely dislike the practice of holding data “proprietary,” as it sounds a lot like the “national security” label that is too often applied. In the case of this particular study, I don’t really see the defense of the data as proprietary (there are appropriate circumstances, but they are rare in my opinion).

          I have toyed with the idea of declaring a personal policy that I will not work on a consulting or research project unless the sponsor agrees to make the data publicly available. Alas, it is hard to stick to such a policy when the risks are clear and the upside potential obscure. So, I understand your position, Sarah, but as I hope you realize, it is not that simple. We still need to live with ourselves.

          Citing the poor incentives does not excuse us from having to take responsibility for our own actions. I am not comfortable with some of the work I have done that has withheld data from public scrutiny. And, I think I should, at the very least, remain uncomfortable with that fact. On the other hand, when I have insisted that data be made publicly available, I have felt good about myself. The tension is a personal ethical choice between the incentives we live under and our own beliefs. That is as it should be – but Corey’s dismissal of responsibility for people following the incentives is too easy for me. It should be harder than that.

        • If you recognize the effort is corrupted, why keep partaking? All your efforts should then go towards fixing the incentives, which isn’t what science-minded people like doing at all. Also, if you do happen to do something useful despite the system, it will still take credit for it. I don’t see anything improving that way.

          Afaict, the best response is to figure out a way to fund and disseminate your research outside of this apparently anti-science ecosystem. Eventually it will be starved of talent and funds… I don’t think it will die out though, I’d expect it to live on in a less-influential state like the current catholic church.

      • Dear Anoneuoid,

        I am disinclined to engage with someone who immediately comes out swinging and does so anonymously. If you would at least identify yourself, I’d be happy to respond to your questions. In the meantime, I assure you that you are winning no admirers by assuming the worst of me and/or my field, which I suspect you know, hence your not identifying yourself.

        I hope you have a good day.

        Best,
        Sarah

        • I’m not trying to troll. I just think the logical conclusion for someone who cares about science, after answering those questions for themselves, is to either

          1) Fight to change the incentives so science can get done (boring)
          2) Go do something else (hopefully allowing you to do science)

          I don’t see how making the choice to participate in a little “white pseudoscience” helps anyone.

          Anyway, those are the conclusions I came to. Luckily for me it was quite early and I had managed to develop other skills, so I didn’t get attached to academia to the point that I was too scared to leave (which I could have seen happening after only a few years more).

      • I frequently search around to see if I can find already-published data that reflects a design that will answer research questions I’m working on. It’s great for such data to be available. With that said, if the data were collected for that specific purpose and the designers of the data collection are actively working on analyzing the data for that purpose, I don’t think they should be made to publish those data prior to having a reasonable shot at publishing based on their own data.

    • N of 1, but I have been publicly posting datasets for about 5 years now, and no-one has scooped me. I agree it is possible, I’m just not sure how prevalent a problem that really is.

      I agree with Gelman it is a pain though. Right now there is no incentive really, so although I think it is the right thing to do I debate whether I should bother or not. I’m skeptical that citations to open data repositories (or specific data sharing journals/articles) will change this in the long run, but I agree it is the easiest step given the current system.

      Commonly with complicated analyses I want to see the code just to figure out exactly what the authors are talking about. Often that can be accomplished via simulated data.

        • Not all of the analyses I conduct are posted — most often it is sensitive data that I cannot post, and this is one of those cases. (As I said though, if you wanted to just see exactly what analysis I did, I can create a simulated dataset and share that and the code.) Other times I am not the proprietor of the data, so I cannot post it (like when a police department gifted me crime data), but I can often forward to the appropriate people if another researcher is interested in obtaining the same information.

          It is easier to send me an email about specifics; I’m concerned that if I post links to all the papers here my comment will go to the spam filter. If I counted correctly, I have about 10 different datasets from both current and published projects for whoever is interested. Most of the time I post a link in the introduction of the paper on SSRN with the data and code, if I can share the information. I should probably put that on my CV under the specific papers though…

        • I think this underscores some of the impediments to making data more available. I applaud your openness to trying to do it and willingness to respond to people’s requests. Clearly these are not universal responses. However, the proprietary, sensitive, and ownership issues of data are more the rule than the exception. Similarly, many journals now have policies requiring the data to be made available – but, as they say, “what the big print promises, the small print gives away.” Many times (too many) I have clicked on the links to the data from publications in these journals only to see a pdf informing me that the data is proprietary and cannot be released.

      • In my field (invertebrate paleontology), it has become required that you post your raw data for many journals in the field. As far as I know, I don’t think this has led to many instances of people scooping each other because (1) it’s not terribly likely that you’d drop your research program and spend months analyzing someone else’s dataset, and (2) even if you were inclined to do so, the field is small and everyone knows each other, so it is more likely that you’d collaborate with someone who has posted interesting data rather than trying to scoop them.

    • I think the obvious compromise position is you publish/share only the aspects of the data that are needed to recreate the published analyses. If the entire next article is encompassed in that subset of the data, I would have to think the data are being recycled too much anyway.

      • I do not agree with this. It was a matter of some contention in the SPRINT Challenge held last year by the NEJM. I asked for some data related to something I discovered but was told that it would not be released since it was not used in the original study. To make matters worse, I found another publication by the same authors (in another journal) using the very data I was asking for. To top it off, I wrote the editors of the NEJM a letter complaining about that practice – particularly in the context of a Challenge meant to explore the benefits and costs of open clinical trial data. My letter was never answered.

        It is true that reusing data can lead to misuse. It is also true that collecting high quality data is expensive and should be rewarded. But to circumscribe who can access what data for what purposes and at what times is to focus on the symptoms of a dysfunctional system. Fix the incentives to begin with and these problems disappear. If we don’t fix the incentives then there will be no satisfying compromise between those that want the data and those that have it.

        • +1. Huge quantities of data are collected under government grants. In my opinion all the data should be released to archives *or the grant should be repaid with substantial penalties and interest* just like if you hadn’t paid your taxes.

          Even data collected under university start-up type funding… most university funding’s real source is the overheads they’re getting on grants. The administrative definitions are harder to determine but still this should have a similar requirement.

          Only funding 100% entirely from private donations and private foundations should give you the right to treat your data as proprietary.

        • I agree, with the caveat of strong protections for sensitive data that identifies individual people.

          It is also actually happening, at least in some contexts:

          https://www.usaid.gov/sites/default/files/documents/1868/ADS579FactSheet%202015-02-13.pdf

          “The policy notes that all USAID operating units, including its worldwide missions, must ensure that USAID-funded data is centrally cataloged and made available to the public by default, with limited exceptions.”

        • I’m not sure of the full context of the SPRINT Challenge you’re mentioning, but let me clarify the kind of situation I’m thinking of. Let’s say I am a new assistant professor and use my startup research funds to run a survey that covers several parts of my research program that I intend to publish on. There’s no doubt, if I worked for a public university, that the funds are public and the resulting research products should become accessible to the public. At the same time, the funds were granted to me because the steward of those public funds thought I was uniquely qualified to carry out this research. In my view, I should be able to take the first whack at it over someone who doesn’t see the forest for the trees as they look at the open data release for the first of 2 or 3 planned publications.

          I think a somewhat similar argument could be made in the case of government grants: The grant was given to a specific person/group not just for the design of the data collection, but the ability to write up the results by placing it in a particular theoretical/historical context, analyzing the data in a particular way, etc. You could argue that maybe some of our grants should be issued to researchers who are essentially experts at data collection rather than analysis, but that’s not how we do it.

          I do of course think all the data should become public at some point, but I just think we should consider — in appropriate situations — giving researchers the right to the first shot at publishing about the data they collected. The devil, of course, will always be in the details.

        • Again, you’re just focusing on the reality of today’s misaligned incentives. Why should you have “first shot” at analysis? It’s because implicitly you’re considering the situation we have now, where the “second shot” doesn’t ever get published or does so in a way that is considered hardly worth the electrons it’s printed on, and in that way you lose out on promotion, tenure, and the ability to garner additional grants.

          But the real problem is that academic promotion and granting situation, which is broken in the first place. In an academic situation where whatever you say is considered valuable and fodder for promotion and further grants, so long as you’ve done a good job at it and said it well… whether or not it’s the first, second, or 13th independent look at the issue… provided it has some added value that moves the field forward… that would completely eliminate the issue you are focusing on, and enable an environment where simply collecting the data and putting it in an archive would be a viable economic choice.

          Trying to reserve data for use by people “first” so as to retain the economic value to the collector… is a broken hack on a broken system.

        • I don’t think it’s such a crazy idea that people who are most familiar with the dataset should be the first to write an analysis on it.

          Should follow-up analyses be considered? Sure.

        • It’s actually by their writing and documenting what they do that they make the data interesting to other researchers. All data should be publicly archived eventually, and the data in your analysis for any specific paper in question, yes. If you are planning a book or 3-4 articles coming out of a research project, frankly, publishing the data too early can be the opposite of helpful to everyone. Some of the differences in this discussion are, I think, disciplinary, because the research process varies so widely.

        • I hear this term a lot in survey research and I suspect it applies in many other areas as well: “fit for purpose.” Are the data fit for purpose? Many times, data that were not collected with our research problem in mind are still fit for our purpose. Still, the best way to get data that are fit for purpose is to design and collect the data yourself. If I collect data that is meant to be fit for my purpose, but a marginally faster researcher publishes about a slightly different purpose from mine, I’ll have little hope of publishing for the data’s original purpose unless there is a very clear contrast that I can draw (faster researcher did something important wrong!).

          This says something about the scholarly communication, but it’s not all bad: We discourage “salami slicing” because it confuses the literature. This isn’t quite that, but I’m not sure how helpful it is to have many people doing a similar analysis of the same data (which I think is the result of a system where being “scooped” doesn’t mean anything) and trying to publicize these similar but not same findings.

          But at any rate, I think you may be overestimating our differences of opinion, here. I’ve been deliberately vague on this point, but I think there should be a time limit on this “first crack” idea I have. I don’t like the idea of data sitting on a lone researcher’s computer indefinitely, never reaching the public unless the researcher publishes from it and then shares it. I’m not sure what “enough” time is, but I would advocate for agreements that require the data to be made public after a reasonably short time (say, definitely not longer than 5 years under most circumstances that I can think of and perhaps substantially less time than that).

          And while I’m not fully on the pre-registration bandwagon, the original “owner” of the data can perhaps pre-register whichever papers are anticipated to result from the data as part of the approval for the funding. It can then be understood that, even if the data are shared in advance of publication, that the researcher who has pre-registered a given set of analyses will have unofficial “dibs” on publishing them…if for no other reason than because this researcher will be the only one who can wield the credibility that comes along with pre-registration.

        • I’ll give an example of how wrong your attitude is in terms of destroying the real value of scientific data collection. A while back I discovered that some people had been instrumenting elephant seals and recording their travels, including their dives. Now, they wanted to understand feeding behavior, mating behavior, migration behavior, etc.

          *I*, on the other hand, wanted to understand the physiology of decompression sickness, and so I saw that their publication was published in a journal that “requires” open data, and I sent an email asking for the data. Well of course they just refused to give it, because they planned a whole bunch more analyses on migration patterns and feeding behavior and whatnot and they didn’t want to get “scooped,” even though of course their original publication in whatever journal it was (maybe a PLoS journal, I can’t remember) specifically required them to release the data. Furthermore they wanted their names on any publications coming out of their data, whether they were doing any of the modeling and analysis or not, and they wanted basically the opportunity to “shop” their data to other researchers in the field who would offer them better “terms”.

          I didn’t want to fight it because I don’t have that kind of time, or money, and they stopped answering my emails, but in the end *we don’t have any research on decompression sickness using that data* and it might well be that such research would have hundreds of times more important consequences than research on the feeding patterns of elephant seals. It’s the provincial guild-like nature of their financial calculus that is keeping this data out of the hands of the public *WHO PAID FOR IT*.

          That’s only one example; I’ve had several like it. The fact is that science is broken in many ways, and progress is stalled due to huge silos of proprietary data paid for by the public and locked away so no-one can get ahold of it without the permission of the gatekeepers.

        • Why all the fuss about who gets to analyze/publish first? Clearly it matters – but that is because the system is broken to begin with. We should be giving grants out for collecting high quality data and we should be rewarding researchers who do that. It is the inadequate recognition of this work that leads to the belief that there is value in giving the data collectors the “first” crack at analyzing it. Sure they can analyze it, but why not anyone else? I thought it was pursuit of knowledge that was the purpose, not building careers by excluding others.

        • I find this attitude a bit strange.

          Giving the data-collectors the “first crack” seems like a very reasonable compromise. It opens up other researchers to look at the data, while allowing for the fact that young researchers *do* need to make sure they get a job instead of being forced to leave academia for trying to fix the system.

          If you think the system’s so broken that no one will ever care about the follow-up analysis, then surely you also believe the system is so broken that next to no credit will go to the team that decided data A would be important and thus just collected it… without trying to demonstrate *why* they thought it was important.

        • Knitter: Hi I’ve got this great idea for a knit sweater, would you pay me $100 to knit it?

          Sponsor: Sure, it sounds cool.

          …Later…

          Sponsor: So, where’s that sweater?

          Knitter: Well, I’ll give it to you eventually, but you know, first I had to take it around to the knitting conventions and try to get a lot of praise from people, and then maybe use it to get a job designing patterns for further knitting, and it started to get a little tattered, and then at one point I packed it up with my other convention stuff, and now I don’t know where it is and it’s a lot of work to go find it and so sorry you can’t have it.

          Sponsor: Dude, I paid you to knit me a damn sweater.

        • Sorry, I think that’s going a bit far from just saying “let the data collectors get the first analysis”.

          *If it can be*, data should be released with paper.

        • It’s easier to imagine the analogy when you’ve experienced it multiple times while trying to pry data out of locked up “effectively proprietary” sources where public funds are treated as if they were a gift to the researcher who can do whatever they like with the data.

        • I mean, I’m sympathetic to the frustrations over publicly funded data not being shared. On one project I’m on, the Program Manager didn’t want to share the data with the *sponsor* until we got enough papers out (we’ve already got 2). Both the PI and I thought this was nuts, so we just shared against the PM’s wishes. Similarly, on another project I’m on, the PI wants to publish a paper on a statistical method… without releasing the code. This has nothing to do with wanting to hide the code, but rather the mountain of paperwork required in releasing the code. I’m fighting pretty hard to release the code (are you really contributing to science if you don’t share your code? Or just bragging?), but we’ll see how that pans out.

          With that said, trying to restrict data collectors from analyzing their data before releasing a paper still seems like overkill to me.

        • The fact is that a policy that says “as soon as a publicly funded dataset is collected it needs to be archived publicly” is just *way less gameable* than other policies. In my experience “mandatory data sharing policies” from journals aren’t worth the toilet-paper they’re printed on unless in fact the journal refuses to publish the article until the data is in fact verifiably placed onto a public archive location.

          So, my basic principle is *make this impossible to game* because it’s too important to let any gaming go on and we already see LOTS of gaming going on.

        • Let me be clear (and I wrote about some of this in another comment). While I think credit is important because of how the system works, my argument that a data collector should have — under certain circumstances — the right to first publication is not solely for the sake of credit. It’s because my own view is that in many if not most cases (at least in the social sciences), the best scientific analysis is likely to come from the person who produced the specific data collection/study design, along with the interpretations thereof. I want to make sure the researcher, if he/she wants, has the chance to control the motivation, design, and analysis of their research problem before others get their chance to contest it or find other insights.

          I see it as a communication problem. I think doing the data collection should give you the right of first refusal (with serious limitations, especially on timing) to make your case about what the data say about your research area.

        • This seems to ascribe special status to “first in time”. As Andrew likes to point out in the “time reversal heuristic” what matters isn’t what comes *first* it’s *who does the most convincing job of analysis*

          So, from that perspective, the person who designed the experiment should have the opportunity to comprehensively understand it and give a very convincing analysis. Whether it comes “first” or not is totally irrelevant from a scientific perspective… But of course *not* from an academic “chits” perspective. So I still think the basic assumption is that the brokenness of academic “chits” is inherent in science, and you’re giving it preferential status rather than considering the possibility that we need to comprehensively change how academic hiring and promotion work, so that even if you’re giving the 13th analysis of some issue, if yours is the best one done yet, it “counts” in your favor.

        • I think you are overlooking the fact that the collector of data is likely to have the first opportunity to publish analysis of that data just logistically. I’m not advocating that they be forced to release the data as soon as it is collected. But I am saying that they should have to release it as soon as it is published. Perhaps sooner – as a referee, I have at times given editors a referee report that said I would prefer to see the data since I had some concerns that could not be answered without that. The whole refereeing process – and whether referees should have access to the data (and whether they would actually use it) – is a somewhat different, but related, issue. So, operationally, I would advocate for data release certainly upon publication, but also upon circulation in any form that might influence decisions.

          Since I am an economist, I am most familiar with public policy work. Much research is aimed at influencing public policy – either on behalf of an interested party, or just as an expert attempting to influence what they believe to be the “right” course of action. In that event, I think they should be obliged to release the data when they circulate their work, whether it be working papers or publications.

    • One way around this is to make your data available with an embargo: you cannot use variables X, Y, Z as outcome variables until I have published my papers (etc.), but you can use the data for other purposes.

  2. My understanding is that conventions don’t just reduce the time that attendees have access to guns (since the conventions are gun free), but that local shooting ranges close down so managers can attend. This does suggest a natural prior/structure for the size of the effect, if they could measure how many closures there are. I don’t see any reason to deviate from a simple proportional model (every person-hour devoted to shooting comes with a constant risk of injury).

    • Rjb:

      Yes, the authors discuss this hypothesis in their supplementary material. And, you’re right, if you want to look at the effects of closing shooting ranges, you’d want to look not just at the total number of accidents but at the number of accidents at shooting ranges. Anything like 20% of total accidents seems way too large to me, but I’m no expert on shooting ranges. Perhaps, for example, 40% of all gun accidents occur at shooting ranges, and half the shooting ranges are closed during those days. Then I guess the numbers could work out.

      This is consistent with the general theme that, rather than look for “reduced form” or aggregate estimates, we’re better off getting granular with our questions and our data.
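
      To put rough numbers on that mechanism: the implied national drop is roughly the share of all gun injuries that happen at ranges, times the fraction of range activity that shuts down during the convention. Here’s that arithmetic as a tiny sketch, with both inputs being pure guesses:

      ```python
      # Both inputs are guesses, just to see what it would take to reach a 20% national drop.
      for share_at_ranges in (0.10, 0.20, 0.40):
          for fraction_closed in (0.10, 0.25, 0.50):
              implied_drop = share_at_ranges * fraction_closed
              print(f"share at ranges {share_at_ranges:.0%}, "
                    f"ranges closed {fraction_closed:.0%} -> national drop {implied_drop:.0%}")
      ```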

      • You would imagine that this mechanism would be largely in the “unintentional firearm injury” category. The authors could at least test it by disaggregating the intentional from unintentional injuries and comparing those trends. Not saying that would be better, but given they have the data (and they discuss that mechanism), this would be one way to test the mechanism.

  3. I went down a rabbit hole with this paper last weekend.

    The paper reports that around 81,000 people attend NRA annual conferences.

    The paper reports that the U.S. saw 33,594 firearm deaths, 65,106 intentional non-fatal firearm injuries, 461 unintentional firearm deaths, and 15,928 unintentional non-fatal injuries. Adding that all together, we get 115,089 firearm injuries or deaths. If we assign 20% of that to NRA conference attendees, it implies that they are responsible for 23,000 firearm injuries/deaths per year. On that view, 28% of NRA conference attendees cause a firearm injury every year.

    If we charitably assume that they are responsible for all the unintentional non-fatal injuries, it leaves them responsible for a minimum of 7,000 intentional injuries (9% of NRA attendees) or a minimum of 500 deaths (0.5% of attendees). If we more “realistically” assume that their firearm injuries were representative of all firearm injuries, it would imply that 4,000 NRA attendees commit suicide, 2,400 of them commit homicide, and 13,000 of them intentionally shoot someone non-fatally. In other words, 24% of NRA conference attendees commit homicide, suicide, or assault with a deadly weapon every year. If we assume attendees come back every year until one of these things happens, the majority of attendees will be dead or in jail after three years.
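
    Here’s that back-of-the-envelope arithmetic as a small sketch (the inputs are the figures quoted above, and the 20% attribution is the paper’s implied claim):

    ```python
    attendees = 81_000

    # Figures quoted above.
    deaths = 33_594
    intentional_nonfatal = 65_106
    unintentional_deaths = 461
    unintentional_nonfatal = 15_928
    total = deaths + intentional_nonfatal + unintentional_deaths + unintentional_nonfatal

    attributed = 0.20 * total   # the paper's implied 20% attribution to attendees
    print(f"total injuries/deaths: {total:,}")
    print(f"implied injuries/deaths per attendee per year: {attributed / attendees:.0%}")

    # Using the "representative mix" figures above (suicides, homicides, assaults):
    serious_rate = (4_000 + 2_400 + 13_000) / attendees
    print(f"attendees with no homicide/suicide/assault after three years: "
          f"{(1 - serious_rate) ** 3:.0%}")
    ```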

    Even these lower estimates are rather incompatible with the paper’s other finding that firearm crimes are not affected by NRA conference attendance.

    If we assume that that finding is correct, that implies that the NRA attendees are responsible for 59% of suicides and unintentional firearm injuries. It also implies that 13,300 NRA attendees commit suicide every year, or 16% of the total. This implies that after four years a majority of NRA meeting attendees will have killed themselves. If this were the case, I think someone would have noticed.

    Now the authors may fall back on the claim that all the reduction is second-hand and driven by gun owners closing their ranges for the day while they go to the annual conference.

    This is slightly more plausible, but even so, these numbers just don’t make sense.

    There aren’t reliable data on the numbers of suicides at gun ranges, but the OC Register reports that LA, Orange and San Diego counties logged more than 64 suicides between 2000 and 2012 across 20 shooting ranges, out of 17,800 total suicides in that time period (0.3%). That’s nowhere near 59%.
    https://www.ocregister.com/2014/01/24/suicidal-customers-see-a-way-out-at-gun-ranges/

    After doing this I realized two biases in their population that might skew the results in the opposite direction: 1) the data only look at hospital admittances, which means they entirely miss any gun injuries that result in death prior to reaching the hospital (the vast majority of suicides and many homicides), and 2) if you multiply out the number of injuries in their data to the total population size, it only equals about 24,000 a year, which is well below the true gun injury rate (suggesting that the population is unrepresentative). Taking these together makes the NRA attendees only responsible for 4-5% of gun injuries a year (still totally unreasonable, but a fairly big difference).

    So ignoring (which we shouldn’t) the statistical issues, the weakest form of the claim is that 4% of non-fatal gun injuries are causally attributable (primarily indirectly) to NRA attendees.

    • Also note that if they do fall on the indirect responsibility claim, then their entire policy and health focus (that experienced gun users are as likely or more likely to injure someone with a gun) makes no sense. By the way if it turns out this is false, then this paper is doing a huge disservice to public debate by potentially arguing against a policy that might actually reduce the harm of guns. I suspect the NRA will turn around and happily cite this study next time someone suggests mandatory training before you can get a gun license.

    • One last thing: 1 percentage point of their effect is driven by changes in their denominator (the number of admittances to hospital), which happens to be slightly lower on the conference days and thus pushes up the percentage of admittances that are gun related.

    • I haven’t unpacked your analysis entirely, but I suspect based on the wording of their abstract etc that they’re looking *only* at unintentional injuries, so not suicides, not homicides, and not crime. I’m not sure of course, but it’s certainly in line with their narrative about decline in injury with reduced firearm use (hunting, shooting ranges etc).

      So, based on your quoted numbers we’d be talking about 461 unintentional deaths and 15,928 unintentional non-fatal injuries. For ease of calculation, I’ll focus on just the non-fatal injuries. If we spread them evenly by week, that’s 306/wk, and if we spread them evenly by location across say 50 states, that’s 6 per week in each state (or those orders of magnitude… big states maybe 12 or 15, small states maybe 1 or 3). If they see a 20% reduction in rate, what they’re talking about is 1 fewer unintentional injury in the week of the NRA meeting in the state where the meeting is taking place…

      Or at least those orders of magnitude. Since injuries come in integer numbers, the noise is going to be incredible (if there are 6 per week on average, yet in some weeks maybe 3 people are injured in one extra incident, and in some weeks maybe no injuries occur at all… )

      This is utter noise chasing with political motivation. Note how in their graph, there’s a spike before the convention and a dip during the convention. My guess is that conventions are weekend activities, or overlap weekends, so perhaps most of their effect is from shifting one injury from the weekend of the event to the Friday before or something like that.
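
      To see how big the integer noise is relative to a 20% effect at that scale, here’s a quick simulation with made-up Poisson counts:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # ~15,928 unintentional non-fatal injuries / 52 weeks / 50 states ~ 6 per state-week.
      mean_per_state_week = 15_928 / 52 / 50
      one_year = rng.poisson(lam=mean_per_state_week, size=52)   # one made-up state, one year

      print(f"mean per state-week: {mean_per_state_week:.1f}")
      print(f"first twelve simulated weekly counts: {one_year[:12]}")
      print(f"week-to-week standard deviation: {one_year.std():.1f}")
      print(f"size of a 20% effect at this scale: {0.2 * mean_per_state_week:.1f}")
      ```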

      • Agree about the noise chasing.

        Their definition is simply whether a patient who turns up at a hospital has a gun wound according to ICD codes, so that mixes in intentional and unintentional.

        • If they mix in intentional, which includes suicide and homicide or crime, then the noise and non-causality of the analysis becomes even bigger. People simply don’t go to shooting ranges in order to commit, say, violence against members of an opposing drug cartel. Last time I looked it was something like 10k crime-related gun deaths and 20k suicides, and if you assume the intentional injuries follow a similar pattern, you’re talking about crime contributing 10k deaths and 20k injuries. Per week and per state, that’s 30000/52/50 = 11.5 per week per state.

          Now that we’ve got intentional issues involved, causally you have to unpack the idea that perhaps because there are gun owners all concentrating into a certain state/region the active criminals might just take a hiatus from some of their activities. 20% reduction means 11.5 * 0.2 ~ 2 fewer crimes in the week of the NRA meeting (or thereabouts). This is consistent not with “gun owners being away from their guns” but rather “more citizen gun owners = bad environment for committing crimes”. For perhaps several reasons. For example when large conventions come into town regardless of the topic, the police may be out in force more… could be anything.

          In any case, doing a good job of this requires unpacking it all in a serious way, and that doesn’t seem to be anything like what is going on here. Of course, as soon as it’s published in NEJM it becomes *peer reviewed fact* that can support political arguments.

          To do this right, you’d want to get data on

          1) Unintentional injury/death due to accident (ie. hunting/shooting range accidents, potentially reduced by alternate activities)

          2) Intentional suicide, both all causes and by firearm (perhaps social interaction of members reduces suicide rate overall and by firearm?)

          3) Intentional injury/death due to drug related crime (has very little plausible connection to NRA members, but presence of convention might reduce crime or increase policing, you’d want control conventions as well, perhaps ComiCon or something, perhaps just having a convention reduces violent crime activity nearby because it’s not the profitable time to be committing crimes)

          4) Intentional injury/death due to non-drug-related personal crime (plausibly people who are at NRA conventions are not getting in fights with their cousin or wife or coworkers or etc, though plausibly perhaps they go to NRA conventions to purchase firearms for use in crimes… you need to consider all the possible mechanisms)

          5) Non firearms related violence: perhaps conventions simply alter behavior in such a way as to either increase or decrease opportunities for conflict. You’d again want a few “control” conventions, ComiCon, Electronics tradeshows, whatever.

          I think this plays into the idea that statistics has sold itself, especially among social sciences and medicine, as a means to answer real hard questions without considering any mechanism, by just pointing and clicking with standard software analysis. It just doesn’t work that way.

        • If I read the appendix correctly, their primary analysis is national rates and it’s only in their subgroup analysis that they break it down by whether the convention was in the same state.

        • The bit that frustrated me was this quote from the appendix, which comes right after they have outlined (in the previous paragraph) that there are around 100,000 gun injuries a year:

          “We hypothesized that such a relationship could be plausibly identified in large, national data given that attendance at NRA meetings is substantial – approximately 81,000 NRA members attended the 2017 annual meeting held in Atlanta, GA13”.

        • Jon:

          I know I keep pounding on this one issue over and over again, but . . . I do feel that a big problem here is the attitude that rigor is found in statistically significant p-values.

          The (typically implicit) reasoning goes as follows:
          A. If there’s nothing going on (or if the signal is overwhelmed by noise), then it should be very hard to attain statistical significance.
          B. A particular comparison is statistically significant.
          C. Therefore, there is almost certainly a consistent and real underlying pattern being detected.

          The slithy thing about this reasoning is that it covers all bases: No need to worry about measurement issues, because the statistical significance retroactively bestows rigor on the entire study, design and data collection included.

          I’d’ve thought that people would stop thinking that way after Simmons, Nelson, and Simonsohn (2011), but I guess not. And of course it doesn’t help when PNAS, New England Journal of Medicine, Lancet, etc., publish and promote this sort of work.

        • “Of course, as soon as it’s published in NEJM it becomes *peer reviewed fact* that can support political arguments.”

          Well, if you look carefully, you will see that this was published in the Correspondence section. It is a letter to the editor, not an article. And the NEJM does *not* peer review letters. Letter selections are made by the editors.

          Now, it is possible that the material in the letter and the Appendix were first submitted in article form, and perhaps the response either by an associate editor, or after a round of peer review was “we won’t publish this as an article, but would reconsider it if submitted as a letter.” So it may have undergone some peer review in that way, but if so, it was rejected after peer review.

          I think that even relatively naive journalists understand that letters are different from articles and do not carry the same imprimatur as article publication, even in a very prestigious journal.

          To be clear, I’m not saying that this would deserve greater credibility if it had undergone peer review. I’m just pointing out that it very well may not have, and if it did, it was shot down. And I doubt that the media and press will treat this as if it were a NEJM _article_.

        • I think you have a lot more confidence in journalists and the public, even scientifically literate public who are outside the field of medicine, than I do. Considering that I didn’t have any idea about what you just said (and I definitely appreciate you mentioning it), I suspect basically everyone who isn’t in medicine will make the same assumption I did: it’s in NEJM it’s a peer reviewed “fact”. I think that will be particularly true of people who are politically motivated to “find facts that support their argument”. After all, how many “opinion” type letters to the editor come along with original research on proprietary datasets involving multiple regressions addressing what is clearly a research question?

      • “I suspect based on the wording of their abstract etc that they’re looking *only* at unintentional injuries, so not suicides, not homicides, and not crime.”

        That is a reasonable expectation, but that’s not what they did. In their own words: “We defined a firearm injury as either an emergency department visit or hospitalization for firearm injury, identified according to International Classification of Disease, Ninth Edition (ICD-9) injury diagnosis codes E922.X, E955.X, E965.X, E970.X, E985.X, or E979.4.” Here are the relevant descriptors for the ICD-9 codes:
        – Unintentional: E922.0-.3,.8,.9
        – Intentional self-inflicted: E955.0-.4
        – Assault: E965.0-4, E979.4
        – Undetermined: E985.0-.4
        – Other (injury due to legal intervention by firearms): E970

        While most of the motivation and potential mechanisms are related to the unintentional injury category, they sum across all firearm injury categories for the studied measure. There’s more at the online appendix: http://www.nejm.org/doi/suppl/10.1056/NEJMc1712773/suppl_file/nejmc1712773_appendix.pdf

        • Thanks for clarifying. So they had the opportunity to use E922.0-… for unintentional in their analysis, but I bet the numbers are too small and they couldn’t get anything from that… so they aggregated more info etc… until they got something, anything, statistically significant to pop out, and then voila it has the desired sign so ship it to the editor…

          sigh

  4. They could have also highlighted their finding that the percent of crimes involving firearms does not change during the week of an NRA convention nor the 3 weeks prior to or following the convention (it doesn’t budge much from 3 percent). Maybe gun crimes are much safer during NRA conventions?

    • Actually, I note that one of the authors is affiliated with Harvard Medical School. Anecdotally, some in health care have the distinct impression that the NEJM is not that difficult to publish in when there is a Harvard affiliation involved. I have no data to support or refute this impression, but it is not an uncommon view. Of course it may be wrong and just reflect envy, or reasoning based on observed numerators and imaginary denominators.

      • By “some in health care” do you mean “some in health care who are not affiliated with Harvard Medical School and who therefore have a distorted sense for how publishing works at NEJM”?
        When it comes to original research, no, it is not true.
        It is true when it comes to those Perspectives pieces, in that you might be able to get them to not triage out your manuscript immediately. But there’s nothing new about that. All places have a family discount.

        • “By “some in health care” do you mean “some in health care who are not affiliated with Harvard Medical School and who therefore have a distorted sense for how publishing works at NEJM”?”

          As I said in my comment “Of course it may be wrong and just reflect envy, or reasoning based on observed numerators and imaginary denominators.” So yes.

  5. I skimmed through a similar (in spirit) article a few weeks back

    https://www.aeaweb.org/articles?id=10.1257/app.20160031

    College Party Culture and Sexual Assault
    Abstract
    This paper considers the degree to which events that intensify partying increase sexual assault. Estimates are based on panel data from campus and local law enforcement agencies and an identification strategy that exploits plausibly random variation in the timing of Division 1 football games. The estimates indicate that these events increase daily reports of rape with 17–24-year-old victims by 28 percent. The effects are driven largely by 17–24-year-old offenders and by offenders unknown to the victim, but we also find significant effects on incidents involving offenders of other ages and on incidents involving offenders known to the victim.

    I was wondering if similar critiques could be applied to this paper as well. If the linked paper does a better job at analyzing the underlying causal claims, I would be curious to know why. Any thoughts or insights, regular commentators?

  6. “(a) these super-selective medical journals are so selective, that whatever they do publish, is a bit of a crapshoot”

    Can this kind of damned if you do, damned if you don’t mentality really make sense? Seems cynical to hold high and low acceptance rates against a journal.

    • Jackson:

      I’m not damning them either way; I’m just trying to be descriptive. An outsider to the system might naively think: “Hey, it was published in Lancet/NEJM/PNAS/etc., therefore it must be a solid piece of research, no?” These journals all publish lots of great stuff, but they also make mistakes in certain systematic ways, and I’m explaining how this can happen.

  7. How can 0.1% of shooters (80,000) going to an NRA convention reduce accidental shootings by 20%?

    JAMA: “if some venues of firearm use (e.g., ranges or hunting grounds) are closed during dates of NRA annual meetings, reductions in overall firearm injuries during meeting dates could also be observed.”

    — Sure, it “could” be the case that more than 20% of the gun ranges close down, nationally. But I have never seen a gun club closed because of the NRA convention, and an online check of several Event Calendars in NH showed none that are closing this year (May 4-6). The authors of the JAMA article didn’t bother to do any such research, easy though that would be, or they disliked the results too much to add even to their Supplementary Appendix.

    JAMA: “Similarly, if individuals are more likely to engage in recreational firearm use in groups, then the absence of some group members due to NRA meeting attendance may reduce the likelihood of remaining group members to use firearms during the dates of NRA meetings.”

    — This phenomenon couldn’t bring the 0.1% of shooters attending up to even 1% of shooting groups affected, let alone anywhere near 20%.

  8. This isn’t really a criticism of the researchers because you want researchers to study domains that they don’t necessarily have first-hand experience with, but these results are *obviously* wrong to anyone with even a bit of familiarity with firearms and firearm “culture”. I would bet my lifetime earnings that this is wrong.
