Joshua Vogelstein points me to this paper by Gerd Gigerenzer and Julian Marewski, who write:
The idol of a universal method for scientific inference has been worshipped since the “inference revolution” of the 1950s. Because no such method has ever been found, surrogates have been created, most notably the quest for significant p values. This form of surrogate science fosters delusions and borderline cheating and has done much harm, creating, for one, a flood of irreproducible results. Proponents of the “Bayesian revolution” should be wary of chasing yet another chimera: an apparently universal inference procedure. A better path would be to promote both an understanding of the various devices in the “statistical toolbox” and informed judgment to select among these.
I agree, although I might change “select among” to “combine” in the final sentence.
I think your readers might like this paper. Most statisticians I know prefer (and recommend) a very small set of tools from the toolbox, much as most doctors use a very small subset of the diagnostic codes available in the DSM-5 and ICD-10. This makes sense, because we know certain tools better than others. Perhaps we could do better at explicitly acknowledging that the reason we use our preferred method is familiarity rather than superiority, possibly referencing Hoadley's remark:
I coined a phrase called the “Ping-Pong theorem.” This theorem says that if we revealed to Professor Breiman the performance of our best model and gave him our data, then he could develop an algorithmic model using random forests, which would outperform our model. But if he revealed to us the performance of his model, then we could develop a segmented scorecard, which would outperform his model.
Regarding the toolbox, yes, that’s the topic of my paper, “How do we choose our default methods?”, which I recommend to all statistics students.
Regarding the “ping-pong theorem,” I prefer the term “leapfrog,” which I think better characterizes the forward progress that comes from building upon and improving the ideas of others. See footnote 1 of this paper:
Progress in statistical methods is uneven. In some areas the currently most effective methods happen to be Bayesian, while in other realms other approaches might be in the lead. The openness of research communication allows each side to catch up: any given Bayesian method can be interpreted as a classical estimator or testing procedure and its frequency properties evaluated; conversely, non-Bayesian procedures can typically be reformulated as approximate Bayesian inferences under suitable choices of model. These processes of translation are valuable for their own sake and not just for communication purposes. Understanding the frequency properties of a Bayesian method can suggest guidelines for its effective application, and understanding the equivalent model corresponding to a classical procedure can motivate improvements or criticisms of the model which can be translated back into better understanding of the procedures. From this perspective, then, a pure Bayesian or pure non-Bayesian is not forever doomed to use out-of-date methods, but at any given time the purist will be missing some of the most effective current techniques.
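To make the translation in that footnote concrete, the textbook example is ridge regression: the classical penalized least-squares estimator coincides with the Bayesian posterior mean (and MAP estimate) under a Gaussian prior on the coefficients. Here is a minimal NumPy sketch; the simulated data, noise scale, and penalty value are all made up for illustration:

```python
import numpy as np

# Simulated data (purely illustrative)
rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
sigma = 1.0  # noise scale, assumed known for simplicity
y = X @ beta_true + rng.normal(scale=sigma, size=n)

# Classical view: ridge regression with penalty lam,
#   argmin_b ||y - X b||^2 + lam ||b||^2
lam = 2.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Bayesian view: prior b ~ N(0, tau^2 I) with tau^2 = sigma^2 / lam;
# with a Gaussian likelihood, the posterior mean (which is also the
# MAP estimate, since everything is Gaussian) has the same closed form.
tau2 = sigma**2 / lam
beta_post_mean = np.linalg.solve(X.T @ X + (sigma**2 / tau2) * np.eye(p), X.T @ y)

print(np.allclose(beta_ridge, beta_post_mean))  # prints True
```

Running the translation in both directions is what the footnote is getting at: the frequency properties of the Bayesian posterior mean can be studied through the ridge formula, and the implicit prior behind a given penalty can be criticized and improved as a model.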
Again, note the idea of seeking to understand and improve methods that come from alternative perspectives, not merely choosing between static alternatives.