The newly emerging field of Social Neuroscience has drawn much attention in recent years, with high-profile studies frequently reporting extremely high (e.g., >.8) correlations between behavioral and self-report measures of personality or emotion and measures of brain activation obtained using fMRI. We show that these correlations often exceed what is statistically possible assuming the (evidently rather limited) reliability of both fMRI and personality/emotion measures. The implausibly high correlations are all the more puzzling because social-neuroscience method sections rarely contain sufficient detail to ascertain how these correlations were obtained. We surveyed authors of 54 articles that reported findings of this kind to determine the details of their analyses. More than half acknowledged using a strategy that computes separate correlations for individual voxels, and reports means of just the subset of voxels exceeding chosen thresholds. We show how this non-independent analysis grossly inflates correlations, while yielding reassuring-looking scattergrams. This analysis technique was used to obtain the vast majority of the implausibly high correlations in our survey sample. In addition, we argue that other analysis problems likely created entirely spurious correlations in some cases.
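The non-independent analysis the abstract describes is easy to demonstrate with a short simulation (a hypothetical sketch, not the authors' actual code; all numbers here are illustrative): generate pure noise for both the behavioral measure and every voxel, correlate each voxel with behavior, keep only the voxels that pass a significance threshold, and report the mean correlation of the survivors. Even with a true correlation of exactly zero, the "observed" correlation comes out strikingly high.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson correlation between two equal-length samples."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

random.seed(1)
n_subjects = 16    # a typical small fMRI sample (illustrative choice)
n_voxels = 10000   # many voxels, each tested separately

# Behavior and every voxel are PURE NOISE: the true correlation is zero.
behavior = [random.gauss(0, 1) for _ in range(n_subjects)]
voxel_rs = []
for _ in range(n_voxels):
    voxel = [random.gauss(0, 1) for _ in range(n_subjects)]
    voxel_rs.append(pearson_r(behavior, voxel))

# Non-independent step: keep only voxels whose correlation clears a
# threshold (|r| > ~0.74 is roughly p < .001 two-tailed for n = 16),
# then report the mean correlation of the survivors.
survivors = [r for r in voxel_rs if abs(r) > 0.74]
reported_r = statistics.mean(abs(r) for r in survivors)
print(f"{len(survivors)} voxels pass; 'observed' correlation = {reported_r:.2f}")
```

By construction the reported value can never fall below the selection threshold, which is why the resulting scatterplots look so reassuring: the very voxels that would pull the correlation down were discarded before the plot was made.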
This is cool statistical detective work. I love this sort of thing. I also appreciate that the article has graphs but no tables. I have only two very minor comments:
1. As Seth points out, the authors write that many of the mistakes appear in “such prominent journals as Science, Nature, and Nature Neuroscience.” My impression is that these hypercompetitive journals have a pretty random reviewing process, at least for articles outside of their core competence of laboratory biology. Publication in such journals is taken as much more of a seal of approval than it should be, I think. The authors of this article are doing a useful service by pointing this out.
2. I think it’s a little tacky to use “voodoo” in the title of the article.