Responses to my criticisms of Bayesian statistics

On April Fool’s Day I posted my article, “Why I don’t like Bayesian statistics.” At the time, some commenters asked for my responses to the criticisms that I’d raised.

My original article will appear, in slightly altered form, in the journal Bayesian Analysis, with discussion and rejoinder. Here’s the article, which begins as follows:

Bayesian inference is one of the more controversial approaches to statistics. The fundamental objections to Bayesian methods are twofold: on one hand, Bayesian methods are presented as an automatic inference engine, and this raises suspicion in anyone with applied experience. The second objection to Bayes comes from the opposite direction and addresses the subjective strand of Bayesian inference. This article presents a series of objections to Bayesian inference, written in the voice of a hypothetical anti-Bayesian statistician. The article is intended to elicit elaborations and extensions of these and other arguments from non-Bayesians and responses from Bayesians who might have different perspectives on these issues.

And here’s the rejoinder, which begins:

In the main article I presented a series of objections to Bayesian inference, written in the voice of a hypothetical anti-Bayesian statistician. Here I respond to these objections along with some other comments made by four discussants.

You’ll have to wait until the journal issue comes out to read the discussions, by Jose Bernardo, Joe Kadane, Larry Wasserman, and Stephen Senn. And thanks to Bayesian Analysis editor Brad Carlin for putting this all together.

12 thoughts on “Responses to my criticisms of Bayesian statistics”

  1. I'm sure real life is the best crucible, but is there no way to design an Axelrod tournament where different worlds are thrown at these two predictors (and all their cousins) to see which comes out on top?

  2. Let me define "frequentist" to mean "correct" and Bayesian to mean "incorrect." Using this definition, I defy you to find a single example of a correct Bayesian analysis. Moreover, I defy you to find a single example of an incorrect frequentist analysis.

    What's more, physicists really want answers like "given that the speed of light is 299,792,458 m/s, what is the probability that I would get the measurements I got?" What physicist would possibly be interested in "given the measurements that I got, what is the probability that the speed of light is 299,792,458 m/s?" Only an incorrect (Bayesian) physicist would be interested in the latter question. It's not like we care about the speed of light.

    I think I've adequately demonstrated that Bayesians are incorrect and Frequentists are correct. Please don't bother me with any more of this Bayesian nonsense.
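
The likelihood-versus-posterior distinction being parodied above can be made concrete with a toy computation (a sketch with entirely hypothetical numbers, not anything from the article or the comment): simulate noisy measurements of a constant, evaluate the likelihood p(data | c) at a fixed c, and form the posterior p(c | data) on a grid under a flat prior.

```python
import numpy as np

# Toy setup (all numbers hypothetical): noisy measurements of a constant c,
# modeled as Normal(c, sigma) with known measurement noise sigma.
rng = np.random.default_rng(0)
c_true = 299_792_458.0  # m/s, used only to simulate data
sigma = 50.0            # assumed measurement noise, m/s
data = rng.normal(c_true, sigma, size=10)

# The "frequentist" question: the (log) likelihood p(data | c) at a fixed c.
def log_likelihood(c):
    return -0.5 * np.sum((data - c) ** 2) / sigma**2

# The "Bayesian" question: the posterior p(c | data) on a grid,
# here with a flat prior over the grid.
grid = np.linspace(c_true - 200, c_true + 200, 2001)
log_post = np.array([log_likelihood(c) for c in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum() * (grid[1] - grid[0])  # normalize to a density

# Posterior mean: an estimate of c given the measurements.
print(np.sum(grid * post) * (grid[1] - grid[0]))
```

Under a flat prior the posterior is proportional to the likelihood, so the two questions involve the same function of the data, read in opposite directions.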

  3. I don't claim to know more than the basic premises (and promises) of Bayesian statistics, but I am well aware of Bayesianism in epistemology. A good critique of that school may be found in Miller's book "Critical Rationalism" (especially section 6.5). In my research, I TRY to produce point estimates of causal parameters, which conjecturally come from an unbiased process. I show a SE, which gives some idea about the random properties of the process that generated the estimate. I don't engage in questions about anyone's psychological beliefs. Anyone is welcome to test again and produce new estimates. Anyone may criticize any estimate on account of bias or randomness. And anyone, including me of course, may be severely wrong, or at least mildly wrong. The road has no natural end.

    No doubt Bayesian statistics offers a coherent system of logical inference, but its problem is irrelevance to the business of science (and the example above about the speed of light is a good one). Bayesian inference is not the kind of inference we should seek in science (although it might be useful in other domains). No argument–serious or funny–can convince some of us that scientific knowledge amounts to the shaping of beliefs about reality (in the form of posterior probabilities or credible intervals). The very idea of distributing probability mass over a parameter is foreign to my grasp of science.

    Moreover, if I understand the story correctly, Bayesians justify their need to follow the axioms of probability by playing the betting game (i.e., the axioms are needed to avoid being a constant loser; the Dutch Book argument). Otherwise, they can't apply the axioms to "probability of statements". So, is this the foundation of scientific knowledge? A system of inference that is founded on axioms we should accept because otherwise, we are sure to lose a bet? I hope we are not just betting right or wrong against Nature.

  4. Eyal,

    Please read my writings on the subject, for example the rejoinder linked to above and chapter 1 of Bayesian Data Analysis. I do not think that Bayesian statistics needs to have anything to do with subjectivity, and I don't motivate its use by betting games. You are objecting to a thing called "Bayesian statistics" that is not the Bayesian statistics that I do.

    I do lots of science, and I don't think anything is gained by statements such as "The Bayesian inference is not the kind of inference we should seek in science." You can do science however you want; I'd like my own science to be judged on its results, not on the fact that there are some other people out there called Bayesians who like to think in terms of subjective probability.

  5. David Miller wrote about “46,656 varieties of Bayesianism catalogued by Good”; maybe he was joking–just like your April 1 posting. Although the exchange we are having might quickly deteriorate into claims of misunderstanding and semantics, I now understand that you are not engaged in specifying prior probabilities of something, computing posterior probabilities of something, and presenting credible intervals for parameters.

    RE: “I'd like my own science to be judged on its results”. I don’t know of a method to judge your science (or my science) on your (or my) results. As we all know, results from the best rationalized method could be plainly wrong, or severely wrong; there is no “method” to judge any particular result, although a lot of rhetoric may be used instead.

    Although often forgotten, there is not much more to science than its method (and critical arguments about the method a scientist is using). Therefore, I cannot accept your wish to exclude the argument "The Bayesian inference is not the kind of inference we should seek in science" from the discussion of the scientific method. In fact, what prompted my writing was your criticism of criticism of Bayesian statistics. You argued for your method, which I might have misunderstood, by arguing against arguments against your method. Why do you deny that right to others by dismissing my statement? Instead of my posting I could have posted one sentence: “I don’t think there was anything to be gained by your attempt to rebut criticism of Bayesianism: You can do science however you want.”

    I can only repeat what I said: The Dutch Book argument is required for anyone who wants to use the axioms of probabilities for probabilities that do not reflect physical probabilities (i.e., probabilities that do not apply to physical events or states.) It was clearly explained by Greenland in his paper on probabilistic induction. Maybe your version of Bayesian statistics is indeed restricted to probabilities that do not reflect “constructs of the mind”.

  6. Eyal,

    Intonation is notoriously difficult to convey in written exchanges, so I will play this completely straight.

    1. The April Fool's article was a joke but it had some serious aspects. I think some of the criticisms of Bayesian statistics are silly but some are interesting and worth responding to.

    2. My rejoinder (also linked to above) is entirely serious.

    3. You write that I am "not engaged in specifying prior probabilities of something, computing posterior probabilities of something, and presenting credible intervals for parameters." You are mistaken. I am engaged in all the above. In most of the applications I work on, my prior probabilities do not represent my subjective belief or my betting odds or anything like that, but they are prior probabilities nonetheless, in the mathematical sense of being part of the models that I use to fit data and make predictions.

    4. You write, "I don’t know of a method to judge your science (or my science) on your (or my) results." There are various ways of doing this: in the physical world, "results" could be that the bridge stands up, the airplane flies, the bearing lasts longer before wearing out, etc. From a statistical perspective, there's external validation: checking predictions on new data.

    5. You write: "You argued for your method, which I might have misunderstood, by arguing against arguments against your method." I would like to understand my methods better and to improve these methods. Understanding outside criticisms can be a step toward this improvement. I didn't write these articles in order to talk people out of using non-Bayesian methods; I wrote them in order to help others (and myself) understand Bayesian methods better. That's why I published the articles in the journal "Bayesian Analysis."

    6. I have no objection to the Dutch Book argument in the context of betting. But I've never been so convinced by the Dutch Book argument in the context of scientific inference. I rarely bet on my inferences. I see inferences as a way of increasing my understanding of the world.

    7. My view of Bayesian statistics is best expressed in Bayesian Data Analysis. We discuss the foundations of probability in chapter 1 of that book. Right here I'll just emphasize that we are happy to assign probabilities to everything, including physical events such as a die landing on a "6" and also nonphysical events such as a person voting for John McCain in November. I view these probability models as mathematical constructs that, if done well, can provide useful approximations to the world. I do not in general view them as subjective or justified by betting.
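
Point 3 above, a prior as a component of a model used to fit data and make predictions rather than a statement of belief, can be illustrated with a standard conjugate update (a minimal sketch; the prior and data values are hypothetical, not taken from the discussion):

```python
# Beta-binomial model: the Beta(a, b) prior is simply part of the model,
# chosen for its regularizing properties, not as anyone's betting odds.
a, b = 2.0, 2.0            # weakly informative prior (hypothetical choice)
successes, trials = 7, 10  # observed data (hypothetical)

# Conjugacy makes the update closed-form:
# the posterior is Beta(a + successes, b + trials - successes).
post_a = a + successes
post_b = b + (trials - successes)

# Posterior mean, usable directly to predict the next outcome.
post_mean = post_a / (post_a + post_b)
print(post_mean)  # shrunk slightly toward 0.5 relative to the raw 7/10
```

Model checking can then proceed as in point 4: compare predictions from this posterior against new data, with no appeal to subjective belief.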

  7. Thank you. I now fully understand our diverging points of view.

    RE: "I have no objection to the Dutch Book argument in the context of betting. But I've never been so convinced by the Dutch Book argument in the context of scientific inference. I rarely bet on my inferences. I see inferences as a way of increasing my understanding of the world."

    I know that you don't bet on inference, but you need a good reason for using the axioms of probability whenever you apply them to "prior probabilities nonetheless, in the mathematical sense of being part of the models". If that sense is not the sense of physical probabilities, then the only rationalized sense is the Dutch Book argument. You have no other reason to claim that Pr(A or B) must equal Pr(A) + Pr(B) for mutually exclusive A and B–setting aside the principal principle, which deserves a separate discussion. I am sorry, but there is no way out of the corner here. Greenland said it loud and clear, and he is much smarter and more knowledgeable than I am. Read his article on probability logic and probabilistic induction. (Induction and Bayesianism have been companions for many years. Read Popper.)

    Mathematics often helps us understand the complex world in which we live. But sometimes it remains just a wonderful intellectual activity. If the math of superstring theory turns out to be divorced from reality, then it is part of a wrong model we fit. Maybe there is some analogy here to prior probabilities. Not an exact analogy, because I have no objection to describing Bayesian probabilities as realities of the human mind.

    By the way, bridges collapse, too — despite great predictions and testing. Too bad that we can't be certain of anything, even in the testing of predictions.

    This is your blog, so you get to post the last word. I will have to diverge to other pressing business. Thanks for the exchange.
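
For readers unfamiliar with the Dutch Book argument invoked in this exchange, its sure-loss mechanics can be sketched numerically (a toy illustration with hypothetical prices, not an endorsement of either side of the debate):

```python
# Dutch Book sketch: an agent whose prices for complementary events sum to
# more than 1 can be sold both bets and is guaranteed a loss either way.
p_A, p_not_A = 0.6, 0.5  # hypothetical, incoherent prices: they sum to 1.1
stake = 1.0              # each bet pays `stake` if its event occurs

profits = []
for A_occurs in (True, False):
    payout_A = stake if A_occurs else 0.0
    payout_not_A = 0.0 if A_occurs else stake
    # The agent buys both bets at its own prices and collects the payouts.
    profits.append(payout_A + payout_not_A - (p_A + p_not_A) * stake)

print(profits)  # the same guaranteed loss in both outcomes
```

With coherent prices, i.e. prices satisfying the additivity axiom for mutually exclusive events, no such guaranteed-loss combination of bets exists; that impossibility is the content of the argument.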

  8. Eyal,

    Thanks for the comments. I know that bridges collapse and am not asking for certainty. However, many bridges do not collapse and I think this is a sign of successful science and engineering efforts. Similarly with applied statistics. Beyond that, I hate to keep recommending chapter 1 of Bayesian Data Analysis to you, but I think you'll have a lot better sense of what I'm talking about if you look at that. I think you're hung up on that subjective probability thing–it confuses a lot of people who think about Bayesian statistics–and I recommend you take a look at some other perspectives.

  9. "I know that bridges collapse and am not asking for certainty. However, many bridges do not collapse and I think this is a sign of successful science and engineering efforts."

    You make a very good point with the above statement, and I agree completely. I wish more people looked at science this way.

  10. "It's unclear if everyone here knows this, but for the record, the speed of light in SI units is defined as 299,792,458 m/sec. It is not a measured quantity."


Comments are closed.