Criminology corner: Type M error might explain Weisburd’s Paradox

[silly cartoon found by googling *cat burglar*]

Torbjørn Skardhamar, Mikko Aaltonen, and I wrote this article to appear in the Journal of Quantitative Criminology:

Simple calculations seem to show that larger studies should have higher statistical power, but empirical meta-analyses of published work in criminology have found zero or weak correlations between sample size and estimated statistical power. This is “Weisburd’s paradox” and has been attributed by Weisburd, Petrosino, and Mason (1993) to a difficulty in maintaining quality control as studies get larger, and attributed by Nelson, Wooditch, and Dario (2014) to a negative correlation between sample sizes and the underlying sizes of the effects being measured. We argue against the necessity of both these explanations, instead suggesting that the apparent Weisburd paradox might be explainable as an artifact of systematic overestimation inherent in post-hoc power calculations, a bias that is large with small N. Speaking more generally, we recommend abandoning the use of statistical power as a measure of the strength of a study, because implicit in the definition of power is the bad idea of statistical significance as a research goal.
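To see the mechanism, here's a minimal simulation sketch (not from the paper), assuming a stylized setup where the standard error scales as 1/sqrt(N), the true effect is small, and significance is a two-sided test at alpha = 0.05. It computes post-hoc power the usual way, by plugging each study's observed estimate back into the power formula:

```python
# Illustrative simulation: post-hoc power is inflated when N is small.
# The true effect, SE scaling, and alpha are assumptions for this sketch.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

true_effect = 0.1                # assumed small true effect
z_crit = stats.norm.ppf(0.975)   # two-sided test at alpha = 0.05

for n in (50, 500, 5000):
    se = 1 / np.sqrt(n)                          # stylized SE ~ 1/sqrt(N)
    est = rng.normal(true_effect, se, 100_000)   # simulated study estimates
    sig = np.abs(est) > z_crit * se              # studies reaching significance

    # True power: probability a study of this size detects the true effect.
    true_power = (stats.norm.sf(z_crit - true_effect / se)
                  + stats.norm.cdf(-z_crit - true_effect / se))
    # Post-hoc power: plug each observed estimate into the same formula.
    post_hoc = (stats.norm.sf(z_crit - np.abs(est) / se)
                + stats.norm.cdf(-z_crit - np.abs(est) / se))

    print(f"N={n:5d}  true power={true_power:.2f}  "
          f"mean post-hoc power={post_hoc.mean():.2f}  "
          f"exaggeration among significant="
          f"{np.abs(est[sig]).mean() / true_effect:.1f}x")
```

With small N, the average post-hoc power sits well above the true power, and the estimates that happen to reach significance overstate the true effect by a large factor (the Type M error); both biases fade as N grows, which is what flattens the apparent relationship between sample size and estimated power.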

I’d never heard of Weisburd’s paradox before writing this article. What happened is that the journal editors contacted me to suggest the topic; I read some of the literature and wrote my article; then some other journal editors didn’t think it was clear enough, so we found a couple of criminologists to coauthor the paper and add some context, eventually producing the final version linked here. I hope it’s helpful to researchers in that field and more generally. I expect that similar patterns hold with published data in other social science fields and in medical research too.

12 thoughts on “Criminology corner: Type M error might explain Weisburd’s Paradox”

  1. Andrew (Torbjørn and Mikko),

    I’m really glad that your article will be appearing in the special issue. I know the earlier friction was pretty frustrating (it probably felt similar to what happened to you at AJS).

    Best,
    Justin

  2. And this is nuts: “We wanted to reanalyze the dataset of Nelson et al. However, when we asked them for the data, they said they would only share the data if we were willing to include them as coauthors.”

    WTF?

      • We did not include such a comment at first, but then again: not using the original data needed an explanation, and that was how it was. (We could have asked yet again, of course, but I got the feeling it would not have changed the outcome.)

      • Let me play devil’s advocate for a moment and make the argument that a request for co-authorship in exchange for data sharing is maybe not such an awful thing. The people who collected the data have skin in the game: they invested substantial effort in collecting it, and they have a vested interest in monitoring any subsequent analyses to ensure their validity, particularly in the context of potential criticism. Co-authorship costs little and provides reassurance to the data owner. While I haven’t done anything about it, I have previously contemplated offering co-authorship to data owners in conjunction with requesting a copy of their data, simply because it seemed like a nice thing to do in exchange for their going to the effort of finding and providing the data to me.

        All that being said, an argument can be made that the data holder will be compensated by having their paper(s) referenced in conjunction with publication of the results of the external analyses. Also, we would of course like to see more data published openly, so that a formal request is not needed.

        • Clark:

          One concern is that with coauthorship comes partial control of the product. What happens if someone sends me their data in exchange for coauthorship, and then I want to write something that they disagree with? I’m not saying such problems can’t be resolved, but it’s a concern.

        • I agree, it is a concern. And I don’t have a solution, other than negotiation with the data owner, and disclosure of the relationship upon publication. At the same time, I find it hard to believe that a follow-on study which includes the data owner as co-author would be rejected by reviewers for that reason. I’m just making the point that this is not a black-and-white issue.

  3. I tried to explain this exact thing to a colleague who was surprised when the very large, statistically significant effect she observed in a small (n = 30) pilot study diminished as more data were collected. Her thinking was that, having observed a statistically significant effect in a low-power situation, she had evidence that an even bigger effect would be observable once more data were collected.

    This is a pretty general problem in a good deal of criminology, and I’m happy to see this showing up in one of the journals.
