Bayes and doomsday

Ben O’Neill writes:

I am a fellow Bayesian statistician at the University of New South Wales (Australia).  I have enjoyed reading your various books and articles, and enjoyed reading your recent article on The Perceived Absurdity of Bayesian Inference.  However, I disagree with your assertion that the “doomsday argument” is non-Bayesian; I think if you read how it is presented by Leslie you will see that it is at least an attempt at a Bayesian argument.  In any case, although it has enough prima facie plausibility to trick people, the argument is badly flawed, and not a correct application of Bayesian reasoning.  I don’t think it is a noose around the Bayesian neck.

Anyway, I’m just writing because I thought you might be interested in a recent paper on this topic in the Journal of Philosophy.  The paper is essentially a Bayesian refutation of the doomsday argument, pointing out how it goes wrong, and how it is an incorrect application of Bayesian inference.  (And also how a correct application of Bayesian inference leads to sensible conclusions.)  Essentially, the argument confuses total series length with remaining series length, and sneaks information from the data into the prior in a way which is invalid.  Once this is corrected the absurd conclusions of the doomsday argument evaporate.
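To see concretely how much work the prior does in doomsday-style reasoning, here is a toy calculation (my own sketch, not taken from the paper): put a distribution on the total series length N, observe a birth rank r assumed uniform on {1, …, N}, and compare posteriors under two different priors. The specific numbers (r = 60, the cutoff n_max) are arbitrary choices for illustration.

```python
# Toy doomsday-style calculation: posterior over the TOTAL series
# length N given an observed birth rank r, where the likelihood is
# P(r | N) = 1/N for N >= r and 0 otherwise.
#
# With the "vague" prior P(N) proportional to 1/N, the posterior is
# proportional to 1/N^2, which concentrates near N = r and yields
# the "doom soon" conclusion.  A flat prior gives a very different
# answer -- the conclusion is driven by the prior, not the data.

def posterior(r, n_max, prior):
    """Normalized posterior over total length N in {r, ..., n_max}."""
    post = {n: prior(n) * (1.0 / n) for n in range(r, n_max + 1)}
    z = sum(post.values())
    return {n: p / z for n, p in post.items()}

def median(post):
    """Posterior median of N."""
    cum = 0.0
    for n in sorted(post):
        cum += post[n]
        if cum >= 0.5:
            return n

r = 60            # hypothetical birth rank (units arbitrary)
n_max = 100_000   # truncation point, needed to make the flat prior proper

vague = posterior(r, n_max, lambda n: 1.0 / n)  # P(N) ∝ 1/N
flat = posterior(r, n_max, lambda n: 1.0)       # P(N) ∝ 1

print("median N, 1/N prior:", median(vague))  # close to r: "doom soon"
print("median N, flat prior:", median(flat))  # far larger
```

The same likelihood produces a "doom soon" median under one prior and a much larger median under another, which is one way of seeing that the argument's alarming conclusion is smuggled in rather than extracted from the data.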

I don’t really have anything more to say on this topic (here’s my argument from 2005 as to why I think the doomsday argument is clearly frequentist and not particularly Bayesian) but I thought some of you might be interested, hence the pointer.

2 thoughts on “Bayes and doomsday”

  1. It’s Bayesian to think probabilities are always conditional on something. Depending on what that something is, the probability can be large or small. Initial calculations by physicists indicated the Sun would probably burn itself out in a few hundred years. Then they discovered fusion, and now physicists say it probably won’t burn out for a few billion years.

    If we don’t go Bayes we’re probably doomed.

  2. I’m late to the party, but wanted to pipe up on an unrelated point from that paper. You wrote: “It does not help that many Bayesians over the years have muddied the waters by describing parameters as random rather than fixed.” This strikes me as a very surprising thing to say.

    The term “random” is best avoided entirely, because it is often unclear whether it is used to mean nondeterministic/varying (the usual frequentist sense) or unpredictable/unknown (the usual Bayesian sense). The wide range of usage and the lack of clarity about what this word refers to are evident from the Wikipedia entry: http://en.wikipedia.org/wiki/Randomness

    One reason the word has different meanings in different contexts is presumably the fact that many Bayesian analyses can be “translated” into acceptable frequentist form by simply invoking the phrase “treat the parameter as a random variable”. The usefulness of this “translation”, and the irrelevance of the frequentist notion of randomness in the Bayesian context, justifies the different usage. For the same reason, we also have different usages of the word “parameter”.

    So in a Bayesian context, “random” does not mean the opposite of “fixed”. Parameters are (usually) random _and_ (often) fixed. And the sentence “It is the knowledge about these unknowns that Bayesians model as random” strikes me as completely meaningless.
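The "random _and_ fixed" point can be made concrete with a minimal sketch (my example, not the commenter's): a coin's bias is a fixed physical quantity, yet a Bayesian represents uncertainty about it with a distribution. The specific prior (Beta(1, 1)) and the data below are illustrative choices.

```python
# A parameter that is fixed (one true value) yet "random" in the
# Bayesian sense: our knowledge of it is described by a distribution.

import random

random.seed(1)

theta_true = 0.7  # the coin's bias: fixed, but unknown to the analyst

# Observe n flips of the (fixed) coin.
n = 100
heads = sum(random.random() < theta_true for _ in range(n))

# Beta(1, 1) prior on theta; Beta-Binomial conjugacy gives the
# posterior Beta(1 + heads, 1 + tails).  This posterior describes
# our knowledge about theta, not physical variation in theta itself.
a, b = 1 + heads, 1 + (n - heads)
post_mean = a / (a + b)
post_sd = ((a * b) / ((a + b) ** 2 * (a + b + 1))) ** 0.5

print(f"posterior mean {post_mean:.3f}, posterior sd {post_sd:.3f}")
```

Nothing about theta varies; only the analyst's uncertainty shrinks as data accumulate, which is the sense in which a parameter can be both random and fixed.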
