Eric Archer forwarded this document by Nick Freemantle, “The Reverend Bayes—was he really a prophet?”, in the Journal of the Royal Society of Medicine:
Does [Bayes’s] contribution merit the enthusiasms of his followers? Or is his legacy overhyped? . . .
First, Bayesians appear to have an absolute right to disapprove of any conventional approach in statistics without offering a workable alternative—for example, a colleague recently stated at a meeting that ‘. . . it is OK to have multiple comparisons because Bayesians don’t believe in alpha spending’. . . .
Second, Bayesians appear to build an army of straw men—everything it seems is different and better from a Bayesian perspective, although many of the concepts seem remarkably familiar. For example, a very well known Bayesian statistician recently surprised the audience with his discovery of the P value as a useful Bayesian statistic at a meeting in Birmingham.
Third, Bayesians possess enormous enthusiasm for the Gibbs sampler—a form of statistical analysis which simulates distributions based on the data rather than solving them directly through numerical simulation, which they declare to be inherently Bayesian—requiring starting values (priors) and providing posterior distributions (updated priors). However, rather than being of universal application, the Gibbs sampler is really only advantageous in a limited number of situations for complex nonlinear mixed models—and even in those circumstances it frequently sadly just does not work (being capable of producing quite impossible results, or none at all, with depressing regularity). . . .
This looks negative, but if you read it carefully, it’s an extremely pro-Bayesian article! The key phrase is “complex nonlinear mixed models.” Not too long ago, anti-Bayesians used to say that Bayesian inference was worthless because it only worked on simple linear models. Now their last resort is to say that it only works for complex nonlinear models!
OK, it’s a deal. I’ll let the non-Bayesians use their methods for linear regression (as long as there aren’t too many predictors; then you need a “complex mixed model”), and the Bayesians can handle everything complex, nonlinear, and mixed. Actually, I think that’s about right. For many simple problems, the Bayesian and classical methods give similar answers. But when things start to get complex and nonlinear, it’s simpler to go Bayesian.
(As a minor point: the starting distribution for the Gibbs sampler is not the same as the prior distribution, and Freemantle appears to be conflating a computational tool with an approach to inference. No big deal—statistical computation does not seem to be his area of expertise—it’s just funny that he didn’t run it by an expert before submitting to the journal.)
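To see the distinction concretely, here is a minimal Gibbs sampler for a toy conjugate normal model (an example of my own, not from Freemantle’s article): the prior hyperparameters are baked into the full conditional distributions and stay fixed, while the starting values merely initialize the Markov chain and are forgotten after burn-in.

```python
import random
import math

random.seed(1)

# Toy data and model: y_i ~ Normal(mu, 1/tau),
# priors mu ~ Normal(mu0, 1/prec0), tau ~ Gamma(a0, rate=b0).
y = [2.1, 1.9, 2.5, 2.3, 1.8]
n = len(y)
ybar = sum(y) / n

# Prior hyperparameters: part of the MODEL, used in every iteration.
mu0, prec0 = 0.0, 0.01
a0, b0 = 1.0, 1.0

# Starting values: arbitrary initializations of the CHAIN, not priors.
# Any reasonable choice converges to the same posterior.
mu, tau = 0.0, 1.0

mu_draws = []
for _ in range(5000):
    # Full conditional for mu: Normal, combining prior and likelihood.
    prec = prec0 + n * tau
    mean = (prec0 * mu0 + tau * n * ybar) / prec
    mu = random.gauss(mean, math.sqrt(1.0 / prec))
    # Full conditional for tau: Gamma(a0 + n/2, rate = b0 + SS/2).
    ss = sum((yi - mu) ** 2 for yi in y)
    tau = random.gammavariate(a0 + n / 2, 1.0 / (b0 + ss / 2))
    mu_draws.append(mu)

# Discard burn-in; the posterior mean of mu is near the sample mean.
burned = mu_draws[1000:]
post_mean = sum(burned) / len(burned)
```

The point of the sketch: `mu0`, `prec0`, `a0`, `b0` (the prior) appear inside the conditionals at every step, whereas the initial `mu, tau = 0.0, 1.0` (the starting values) influence only the early, discarded iterations.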
Also, I’m wondering about this “absolute right to disapprove” business. Perhaps Bayesians could file their applications for disapproval through some sort of institutional review board? Maybe someone in the medical school could tell us when we’re allowed to disapprove and when we can’t.
Yes, yes, I see that the article is satirical. But, in all seriousness, I do think it’s a step forward that Bayesian methods are associated with “complex nonlinear mixed models.” That’s not a bad association to have, since I think complex models are more realistic. To go back to the medical context, complex models can allow treatments to have different effects in different subpopulations, and can help control for imbalance in observational studies.
There’s something that fascinates me about these aggressive anti-Bayesians: it’s not enough for them to simply restrict their own practice to non-Bayesian methods; they have to go the next step and put down Bayesian methods that they don’t even understand. This topic comes up from time to time on this blog, for example in discussing the uninformed rants of David Hendry (“I don’t know why he did this, but maybe it’s part of some fraternity initiation thing, like TP-ing the dean’s house on Halloween”), John DiNardo (“if philosophy is outlawed, only outlaws will do philosophy”), and various others (the Foxhole Fallacy).
I was also inspired to write an anti-Bayesian rant of my own (with discussion), and Christian Robert and I considered anti-Bayesianism in the classic probability text of Feller and elsewhere (see the article with discussion).
Misinformed anti-Bayesianism doesn’t look like it’s going away anytime soon, but on the plus side it seems to be moving toward the fringes. Bayes is here to stay, and I’m happy to see that non-Bayesian regularization is very popular too.
P.S. Just to remove any ambiguity here: I have no problem with non-Bayesians: those statisticians who for whatever combination of theoretical or applied reasons prefer not to use Bayesian methods in their own work. My problem is with anti-Bayesians who denigrate the Bayesian approach from a position of lack of understanding. As noted above, I’m happy to see that anti-Bayesianism has moved to the fringes, where it belongs. There’s always room in any discourse for a few extremists; it’s just not good if they have a lot of power.