Another entry in the growing body of work on systematic flaws in the scientific research literature.
This time the bad tidings come from Marjan Bakker and Jelte Wicherts, who write:
Around 18% of statistical results in the psychological literature are incorrectly reported. Inconsistencies were more common in low-impact journals than in high-impact journals. Moreover, around 15% of the articles contained at least one statistical conclusion that proved, upon recalculation, to be incorrect; that is, recalculation rendered the previously significant result insignificant, or vice versa. These errors were often in line with researchers’ expectations.
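The recalculation they describe is mechanical: take the reported test statistic and degrees of freedom, recompute the p-value, and see whether it agrees with the reported p and with the claimed significance. Here's a minimal sketch of that check for a chi-square test with 1 degree of freedom (which has a closed-form tail probability via `erfc`); the reported numbers are made up for illustration, not taken from the paper.

```python
import math

def chi2_1df_pvalue(statistic):
    """Upper-tail p-value for a chi-square statistic with 1 df:
    p = erfc(sqrt(x / 2))."""
    return math.erfc(math.sqrt(statistic / 2.0))

# Hypothetical reported result: chi2(1) = 3.61, p = .048, called significant.
reported_stat = 3.61
reported_p = 0.048

recomputed_p = chi2_1df_pvalue(reported_stat)

# The recomputed p is about .057, so the "significant" result does not
# survive recalculation -- the kind of inconsistency the paper counts.
print(f"recomputed p = {recomputed_p:.3f}")
print("significant as reported:    ", reported_p < 0.05)
print("significant on recalculation:", recomputed_p < 0.05)
```

The same idea extends to t- and F-tests with the appropriate distribution functions; the point is only that a reported triple (statistic, df, p) is internally checkable without access to the raw data.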
Their research also had a qualitative component:
To obtain a better understanding of the origins of the errors made in the reporting of statistics, we contacted the authors of the articles with errors in the second study and asked them to send us the raw data. Regrettably, only 24% of the authors shared their data, despite our request being quite specific and our assurances that the authors would remain anonymous. . . .
The paper by Bakker and Wicherts features a truly ugly graph (Figure 2) and also breaks a rule by reporting percentages to inappropriate precision (no, you don’t have to categorize 33/113 as “29.2%”), but I’ll forgive them because I like this sort of work. It’s important and represents a lot of effort. Personally, I think Jelte Wicherts, E. J. Wagenmakers, and John Ioannidis are much more deserving of the ASA Founders Award than is, say, I dunno, Ed Wegman?