Deborah Mayo sent me this quote from Jim Berger:
Too often I see people pretending to be subjectivists, and then using “weakly informative” priors that the objective Bayesian community knows are terrible and will give ridiculous answers; subjectivism is then being used as a shield to hide ignorance. . . . In my own more provocative moments, I claim that the only true subjectivists are the objective Bayesians, because they refuse to use subjectivism as a shield against criticism of sloppy pseudo-Bayesian practice.
This caught my attention because I’ve become more and more convinced that weakly informative priors are the right way to go in many different situations. I don’t think Berger was talking about me, though, as the above quote came from a publication in 2006, at which time I’d only started writing about weakly informative priors.
Tony O’Hagan responds:
When prior information is weak, and the evidence from the data is relatively much stronger, then the data will dominate and . . . a weakly informative prior can be expected to give essentially the same posterior distribution as a more carefully considered prior distribution. The role of weakly informative priors is thus to provide approximations to a more meticulous Bayesian analysis.
This role is important, for two reasons. First, fully thought-through Bayesian analysis is a demanding task, so a quick and simple approximation is always welcome . . . The second reason why this is important is that the situation of weak prior information is one where it is particularly difficult to formulate a genuine prior distribution carefully. . . .
For this reason, I [O’Hagan] use weakly informative priors liberally in my own Bayesian analyses. . . . But let me emphasise that I would never give to such analyses any of the interpretations of objectivity that Berger would apparently wish them to have. They are approximations to the analyses that I might be able to perform given more time and resources. . . . Everything we do in practice is an approximation in exactly this sense: there is nothing special about using weakly informative priors in this way.
I pretty much agree with O’Hagan here except that I’d go even further and say that in many cases it’s not clear what the correct fully informative model would be. Given the information available in any given problem, I think I would in many cases prefer a weakly informative prior to a full subjective prior even if I were able to construct such a thing.
In any case, Mayo asked for my comments on Berger’s paragraph, and here’s what I have to say:
The statistics literature is big enough that I assume there really is some bad stuff out there that Berger is reacting to. But when he talks about weakly informative priors, I don’t think he’s referring to the work in this area that I like: I think of weakly informative priors as specifically designed to give answers that are not “ridiculous.”
Keeping things unridiculous is what regularization’s all about, and one challenge of regularization (as compared to pure subjective priors) is that the answer to the question “What is a good regularizing prior?” will depend on the likelihood. There’s a lot of interesting theory and practice relating to weakly informative priors for regularization, a lot out there that goes beyond the idea of noninformativity.
To put it another way: We all know that there’s no such thing as a purely noninformative prior: any model conveys some information. But, more and more, I’m coming across applied problems where I wouldn’t want to be noninformative even if I could, problems where some weak prior information regularizes my inferences and keeps them sane and under control.
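To make the regularization point concrete, here is a minimal sketch (not from the post; the toy data, the optimizer, and the normal prior with scale 2.5 are all my illustrative assumptions). With completely separated data, the maximum likelihood estimate for a logistic regression slope does not exist — gradient ascent on the flat-prior log-likelihood just keeps climbing — while a weak normal prior pulls the estimate back to a finite, sane value:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data with complete separation: y is 0 for every negative x, 1 for every positive x.
x = [-2.0, -1.0, 1.0, 2.0]
y = [0, 0, 1, 1]

def fit_slope(prior_sd=None, steps=2000, lr=0.1):
    """Gradient ascent on the log-likelihood of a one-parameter logistic
    regression, optionally penalized by a normal(0, prior_sd) log-prior."""
    b = 0.0
    for _ in range(steps):
        # Gradient of the log-likelihood: sum of (y_i - p_i) * x_i.
        grad = sum((yi - sigmoid(b * xi)) * xi for xi, yi in zip(x, y))
        if prior_sd is not None:
            grad -= b / prior_sd**2  # gradient of the normal log-prior
        b += lr * grad
    return b

b_mle = fit_slope()              # flat prior: the estimate drifts toward infinity
b_map = fit_slope(prior_sd=2.5)  # weak prior: the estimate stays finite (around 2)

print(f"flat prior: b = {b_mle:.2f}")
print(f"weak prior: b = {b_map:.2f}")
```

Run the flat-prior fit for more iterations and the slope just keeps growing, since the likelihood is monotone in b under separation; the weak-prior fit converges to a stable value. The answer from the weak prior is not "noninformative," but it is the one a reasonable analyst would want.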
Finally, I think subjectivity and objectivity both are necessary parts of research. Science is objective in that it aims for reproducible findings that exist independent of the observer, and it’s subjective in that the process of science involves many individual choices. And I think the statistics I do (mostly, but not always, using Bayesian methods) is both objective and subjective in that way. That said, I think I see where Berger is coming from: objectivity is a goal we are aiming for, whereas subjectivity is an unavoidable weakness that we try to minimize. I think weakly informative priors are, or can be, as objective as many other statistical choices, such as assumptions of additivity, linearity, and symmetry, choices of functional forms such as in logistic regression, and so forth. I see no particular purity in fitting a model with unconstrained parameter space: to me, it is just as scientifically objective, if not more so, to restrict the space to reasonable values. It often turns out that soft constraints work better than hard constraints, hence the value of continuous and proper priors. I agree with Berger that objectivity is a desirable goal, and I think we can get closer to that goal by stating our assumptions clearly enough that they can be defended or contradicted by scientific theory and data—a position to which I expect Deborah Mayo would agree as well.
(More from Mayo and others at her blog.)