“Two Dogmas of Strong Objective Bayesianism”

Prasanta Bandyopadhyay and Gordon Brittan write:

We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs.

I have not read their paper in detail but I think I pretty much agree with their criticism of classical or strong Bayesian philosophies of the objective or subjective variety. In particular, I agree with them that (a) the traditional Bayesian philosophy (which culminates in the posterior probability of a model being true) is not a good model for the evaluation and replacement of scientific theories, but (b) a fuller, falsificationist Bayesian philosophy can do the job.

I’d just like to add a few remarks:

1. As always, I find it misleading to focus on the prior distribution as the locus of subjective uncertainty. The data model is just as subjective. Or, I should say, it depends on context. In some problems, there is more reasonable agreement on the population model; in others, there is more agreement on the data model. It’s just that, for historical reasons, “likelihood methods” have been grandfathered in as classical methods and thus don’t suffer the Bayesian taint.

It’s kind of like the Bible. All sorts of goofy stories that happened to have been placed in time before 100 BC become canonical, whereas everything that happened after is evaluated in the category of “history” rather than “religion.” This cuts both ways: on one side, you have people who believe anything that happens to be in the official collection of biblical stories; on the other, historical stories get the benefit of being revisable by evidence. Something similar happens in many statistical problems, where all sorts of critical thinking gets applied to the prior distribution while conventional likelihoods just get accepted.

Before going on to the next issue, let me qualify the above by recognizing Deborah Mayo’s point that, in typical cases, the data model differs from the prior distribution by being more accessible to checking. In practice, though, statisticians (including those of the classical or Bayesian variety who complain or rejoice about the subjectivity of prior distributions) often don’t take the opportunity to check the fit of their data models.
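To make that concrete, here is a minimal sketch of the kind of data-model check I have in mind: a simulation-based posterior predictive check, with hypothetical data, a deliberately shaky normal data model, and a test statistic chosen purely for illustration (nothing here is anyone’s actual analysis):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical observed data (heavy-tailed), standing in for real measurements.
    n = 100
    y = rng.standard_t(df=3, size=n)

    # Assumed data model: y_i ~ Normal(mu, sigma), with the usual noninformative
    # prior p(mu, sigma^2) proportional to 1/sigma^2, which has the standard
    # closed-form posterior: sigma^2 is scaled-inverse-chi^2(n-1, s^2), and
    # mu given sigma is Normal(ybar, sigma^2/n).
    n_sims = 1000
    s2 = y.var(ddof=1)
    sigma_draws = np.sqrt((n - 1) * s2 / rng.chisquare(n - 1, size=n_sims))
    mu_draws = y.mean() + sigma_draws / np.sqrt(n) * rng.standard_normal(n_sims)

    # Posterior predictive replications, scored by a test statistic aimed at the
    # questionable part of the data model (tail behavior): the largest |y|.
    T_obs = np.max(np.abs(y))
    T_rep = np.array([np.max(np.abs(rng.normal(m, s, size=n)))
                      for m, s in zip(mu_draws, sigma_draws)])

    print("posterior predictive p-value:", np.mean(T_rep >= T_obs))

The point of the sketch is simply that the likelihood (here, the normality assumption) is as checkable, and as much in need of checking, as any prior.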

2. I don’t like the example on page 50. In the problems I’ve worked on, it’s never seemed to make any sense to talk about the posterior probability that a continuous parameter equals zero or that a particular model is true. As I’ve written on various occasions, I can see how such procedures can be useful but I don’t see them making any logical sense.
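To spell out the logical issue (this is just standard probability, not anything specific to their example): if the prior on a continuous parameter θ is a density p(θ), then so is the posterior, and for any single point θ₀,

    Pr(θ = θ₀ | y) = lim(ε→0) ∫ from θ₀−ε to θ₀+ε of p(θ | y) dθ = 0.

So a nonzero posterior probability for a point null can only come from putting a discrete lump of prior mass at θ₀, and it is exactly that modeling choice that I find hard to justify for continuous parameters.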

3. Statistical reasoning often seems to lend itself to a two-level expression of belief. For example, in evaluating a research paper, a reviewer might express some uncertainty about whether a result is truly statistically significant. This sort of logic seems odd from a scientific perspective (it’s sort of like evaluating the weight of an object by assessing how heavy it looks), but, in the context of the sociology of science, the evaluation of evidence is clearly an important thing that we do.

As I learned from Thomas Basbøll, Plato characterizes knowledge as “justified, true belief.” I like this definition, and it gives a clue as to the scientific relevance of statements such as, “I don’t think this finding is actually statistically significant,” even in completely Bayesian settings.

4. Bandyopadhyay and Brittan write, “Personalist Bayesians like Bruno de Finetti and Leonard J. Savage claim that there is only one such condition, coherence.” I’ve written about this before, but let me briefly say again that I see coherence as a structuring property rather than an attribute of inference. In nontrivial settings, our inferences won’t be coherent; if they were, we could just skip Bayesian inference, posterior integration, Stan, etc., and simply look at data and write down our subjective posterior distributions. When our inferences aren’t even close to coherent, though, that is a problem worth noticing. Thus, I think coherence is a valuable concept: not because Bayesian inferences are coherent (they’re not), but because Bayesian inference provides a mechanism for finding and resolving incoherences.

5. Finally, I’ll link to my five papers on the philosophy of Bayesian inference:

Philosophy and the practice of Bayesian statistics (with Cosma Shalizi)

Rejoinder to discussion of that article

Philosophy and the practice of Bayesian statistics in the social sciences (with Cosma Shalizi)

Induction and deduction in Bayesian data analysis

Rejoinder to discussion of “Objections to Bayesian statistics”

22 thoughts on ““Two Dogmas of Strong Objective Bayesianism””

  1. Objective Bayesians and Christians who believe in evolution always make me think, “Just one more step, brother, just one more, you’re right there.”

      • Oh, really? Challenge: let’s do a little survey on the degree of religious belief among Bayesians and non-Bayesians. Aren’t you curious?

        PS: I have a strong a priori on this one but I only bow to data.

        • I have no idea of empirical correlations between statistical and religious attitudes. I just liked the analogy, which I’ll repeat here for convenience:

          . . . A defender of Aitkin (and of classical hypothesis testing) might respond at this point that, yes, everybody knows that changes are never exactly zero and that we should take a more “grown-up” view of the null hypothesis, not that the change is zero but that it is nearly zero. Unfortunately, the metaphorical interpretation of hypothesis tests has problems similar to the theological doctrines of the Unitarian church. Once you have abandoned literal belief in the Bible, the question soon arises: why follow it at all? Similarly, once one recognizes the inappropriateness of the point null hypothesis, it makes more sense not to try to rehabilitate it or treat it as treasured metaphor but rather to attack our statistical problems directly, in this case by performing inference on the change in opinion in the population.

          To be clear: we are not denying the value of hypothesis testing. In this example, we find it completely reasonable to ask whether observed changes are statistically significant, i.e. whether the data are consistent with a null hypothesis of zero change. What we do not find reasonable is the statement that “the question of interest is whether there has been a change in support.”

        • “Once you have abandoned literal belief in the Bible, the question soon arises: why follow it at all?” This is a very simplistic (and in this case irrelevant) example. Some parts of the Bible are literal and others are allegorical. Truth (or verisimilitude if you prefer) can be revealed both ways.

          You are objecting to the inappropriate use of classical hypothesis testing. I think everyone agrees with that.

        • George:

          That paragraph is a joke, of course. I realize that millions of people around the world are religious without having literal belief in the Bible, while others hold that every part of the Bible is divinely inspired. Nonetheless, I believe that my deeper point (as illustrated by the analogy to Unitarians) has validity.

        • OK. You are comparing a collection of stories (some of them allegories/parables, others literal/historical facts) with the use/misuse of a hypothesis test. I don’t see your deeper point, other than the obvious one that context and interpretation are important in hypothesis testing.

    • I have the exact same view about Frequentists. They already see how to encode frequency information in probability distributions; if they only took one more step, they’d see how to encode other kinds of information as well.

  2. Much damage has been done by the philosophical holdovers of logical positivism. When Prasanta says “some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical” he appears to contradict himself. The whole idea of the existence of “evidential relations” stems from the supposition that there is a (formal) logic for inference. Hanging on to a Bayesian logic of evidential relations in 2013 is a logical positivist holdover that blocks philosophers of science from getting beyond stale research programs.

  3. Off-topic: the results of the 2012 Putnam Exam are out; this is an extra data point for the Unz debate. Three of the Putnam fellows are non-Asian. Perhaps not surprisingly, all 3 of them have been on the US Math Olympiad team: Larson in 2007/2009 and Gunby/O’Dorney in 2010/2011. So in a sense this new data may not be terribly informative, since all 3 have already been accounted for in the previous analyses.

  4. Let me congratulate you for your point #1. Exactly right! We get around subjectivity wherever agreement is achieved. Objectivity just is consensus, there being no other criterion. It is precisely why we O-Bayesians have advocated a single system for reference priors as far back as Jeffreys. With a unanimous pledge of concordance by all statistical practitioners, disagreement disappears, and with it the source of the most recalcitrant conflicts in data analysis. At my institution, we have a preferred system for “reference” priors, but we are prepared to comply with a single consensus Universal Prior System. Papers that violated the UPS accord, or failed to employ any priors, would not be published, unless some extremely rational argument could be given.

  5. “let me qualify the above by recognizing Deborah Mayo’s point that, in typical cases, the data model differs from the prior distribution by being more accessible to checking”

    I would love to see Mayo check that the probability of a given sequence of heads and tails in 100 coin flips is really (1/2)^100.

  6. Pingback: Friday links: transparency in research, and more | Dynamic Ecology

  7. I’ve now made it up to the end of section 2, and already there is an overwhelming impression that these authors really don’t understand the points of view they are critiquing. It’s not at all clear that their different types of objective Bayesians are really different (either in the sense that one cannot simultaneously subscribe to more than one of these positions, or in the sense that adherents of these positions would have substantive disagreement on any issue). Table 2 (summarising the differences) is clearly just made up (only one camp uses decision theory? only one camp uses likelihood functions???).

    Perhaps the most obvious misunderstanding is that they are treating priors as if they are a different sort of thing from likelihood models. There is no indication that it has even occurred to the authors that, whatever a Bayesian modeller’s epistemic attitude is toward the likelihood function, their epistemic attitude toward the prior will typically be the same.

    The authors also seem to take for granted that everyone is agreed on which of the activities engaged in by scientists do or do not qualify to be called “scientific inference”. Thus they seem to think differences on questions such as whether scientific inference should produce unique answers are necessarily indicative of substantive philosophical differences rather than merely semantic differences. They seem unaware of the notion of well-posed questions and the idea that some accounts of scientific practice are restricted to these while others are not.

    I hope this is not typical of the quality of scholarship in the philosophical literature?

  8. Everybody apparently wants to stick the term “objective” on what they are doing because it’s good marketing, especially in science, but if people get serious about defining and enforcing objectivity they are likely to end up in a mess.
    Certainly the “strongly objective Bayesians” referred to above have a totally different concept of “objectivity” from both Gelman and Mayo. It would probably be more helpful to replace this term with several carefully explained terms for the different kinds of attitude involved than to say that “we can still be objective without playing by the strong rules (given above)” without unpacking the objectivity concept in question.
