Stuart Hurlbert writes:
A colleague recently forwarded to me your 2012 paper with Hill and Yajima on the multiple comparison “non-problem”, as I call it.
You and your colleagues might find of interest a 2012 paper that Celia Lombardi and I wrote (attached), which reaches similar conclusions. Similar but not identical, as we are a bit Bayesian-shy after seeing so many exaggerated claims made for Bayesian approaches over recent decades.
I take pride in having for a few decades defended many colleagues against editors (and many graduate students against faculty members) who demanded “corrections” for multiple comparisons. We’ve gotten no small number of editors and professors to back off their unreasonable demands. Paper tigers all!
I replied:

I agree that those lopsided tests are too clever by half. I think a lot of statistical methods have this flavor: they are solutions to a mathematical problem that has been posed without a careful enough sense of whether the problem is worth solving in the first place. Another example is the testing of contingency tables with fixed margins. People have written many, many papers on the topic, sometimes with much technical sophistication, but from an applied perspective it's almost never the right question, given that it's extremely rare to have an experiment or observational study in which it would make sense for the margins of a table to be fixed.
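To make the fixed-margins point concrete, here is a minimal sketch in Python (using scipy's fisher_exact and chi2_contingency; the group sizes and success probabilities are made up for illustration). In the usual design, the row totals (group sizes) are fixed in advance but the column totals (overall successes and failures) are random, yet Fisher's exact test conditions on both margins of whatever table happens to be observed.

```python
import numpy as np
from scipy.stats import fisher_exact, chi2_contingency

rng = np.random.default_rng(0)

n_per_group = 20                   # row totals: fixed by the design
p_control, p_treatment = 0.3, 0.3  # same true success probability in both groups

# Simulate one hypothetical experiment: individual outcomes are random,
# so the column totals (total successes and failures) are not fixed in advance.
successes = rng.binomial(n_per_group, [p_control, p_treatment])
table = np.array([[successes[0], n_per_group - successes[0]],
                  [successes[1], n_per_group - successes[1]]])

# Fisher's exact test conditions on both margins of the observed table ...
_, p_fisher = fisher_exact(table)

# ... while the chi-square test does not condition on the column totals.
chi2, p_chi2, dof, expected = chi2_contingency(table, correction=False)

print(table)
print(f"Fisher exact p = {p_fisher:.3f}, chi-square p = {p_chi2:.3f}")
```

The contrast in assumed sampling models, not the particular p-values from this toy example, is the point: conditioning on both margins answers a question about a design that almost nobody actually runs.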