Duality between multilevel models and multiple comparisons adjustments, and the relevance of this to some discussions of replication failures

Pascal Jordan writes: I stumbled upon a paper that I think you might find interesting to read or discuss: “Ignored evident multiplicity harms replicability—adjusting for it offers a remedy,” by Yoav Zeevi, Sofi Astashenko, and Yoav Benjamini. It deals with the …

The garden of forking paths: Why multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time

Kevin Lewis points us to this article by Joachim Vosgerau, Uri Simonsohn, Leif Nelson, and Joseph Simmons, which begins: Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper …

Analyze all your comparisons. That’s better than looking at the max difference and trying to do a multiple comparisons correction.

[cat picture] The following email came in: I’m in a PhD program (poli sci) with a heavy emphasis on methods. One thing that my statistics courses emphasize, but that doesn’t get much attention in my poli sci courses, is the …
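The contrast in that post title can be sketched in a few lines. Below is a minimal illustration (all numbers are made up, and the variances are assumed known for simplicity): one approach singles out the largest observed difference and pays a Bonferroni-style penalty for having looked at many comparisons; the multilevel alternative keeps all the estimates and partially pools them toward the grand mean, with the shrinkage factor coming from the standard normal-normal model.

```python
# Hypothetical sketch: "max difference + Bonferroni" vs. partial pooling
# of all comparisons. Toy numbers; variances assumed known.
import random
import statistics

random.seed(1)

J = 8                     # number of groups (illustrative)
sigma_y = 1.0             # standard error of each group estimate
tau = 0.5                 # between-group sd (assumed known here)
true_effects = [random.gauss(0, tau) for _ in range(J)]
estimates = [theta + random.gauss(0, sigma_y) for theta in true_effects]

# Approach 1: focus on the largest estimate and apply a Bonferroni-style
# threshold for having examined J comparisons (roughly the two-sided
# 0.05/8 normal quantile).
z_crit_bonferroni = 2.73
max_est = max(estimates, key=abs)
max_is_significant = abs(max_est) / sigma_y > z_crit_bonferroni

# Approach 2: keep every comparison and partially pool toward the grand
# mean; shrinkage factor from the normal-normal model.
shrink = tau**2 / (tau**2 + sigma_y**2)
grand_mean = statistics.mean(estimates)
pooled = [grand_mean + shrink * (y - grand_mean) for y in estimates]
```

The multilevel route reports all J shrunken estimates with their uncertainty rather than a single thresholded winner, which is the spirit of the post title: the pooling itself does the work that the multiplicity correction was trying to do.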

In one of life’s horrible ironies, I wrote a paper “Why we (usually) don’t have to worry about multiple comparisons” but now I spend lots of time worrying about multiple comparisons

Exhibit A: [2012] Why we (usually) don’t have to worry about multiple comparisons. Journal of Research on Educational Effectiveness 5, 189–211. (Andrew Gelman, Jennifer Hill, and Masanao Yajima). Exhibit B: The garden of forking paths: Why multiple comparisons can be …