
Getting confidence into the scaffolding – whether or not Bayes intended that.

After noticing an event for my first stats prof, I made the mistake of downloading one of his recent papers.

After suggesting that Bayes might actually have been aiming at confidence intervals, the paper suggests that “Bayes posterior calculations can appropriately be called quick and dirty” means of obtaining confidence intervals.

It grants obvious points of agreement: “There are of course contexts where the true value of the parameter has come from a source with known distribution; in such cases the prior is real, it is objective, and could reasonably be considered to be a part of an enlarged model.”

It uses an intuitive way of explaining Bayes theorem that I think is helpful (at least in teaching): “The clear answer is in terms of what might have occurred given the same observational information: the picture is of many repetitions from the joint distribution giving pairs (y1, y2), followed by selection of pairs that have exact or approximate agreement y2 = y2.obs, and then followed by examining the pattern in the y1 values in the selected pairs. The pattern records what would have occurred for y1 among cases where y2 = y2.obs; the probabilities arise both from the density f(y1) and from the density f(y2|y1). Thus the initial pattern f(y1) when restricted to instances where y2 = y2.obs becomes modified to the pattern f(y1|y2.obs) = cf(y1)f(y2.obs|y1)”
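That two-stage picture can be run directly as a simulation. Here is a toy sketch of my own (a uniform prior on a proportion and binomial data – my choice of example, not the paper's): draw pairs from the joint distribution, keep only those where the data part exactly matches the observed value, and look at the retained parameter values.

```python
import random

random.seed(1)

n, y2_obs = 10, 7       # observed data: 7 successes in 10 trials
draws = 200_000

accepted = []
for _ in range(draws):
    y1 = random.random()                               # y1 ~ f(y1): uniform prior
    y2 = sum(random.random() < y1 for _ in range(n))   # y2 | y1 ~ Binomial(n, y1)
    if y2 == y2_obs:                                   # select pairs with exact agreement
        accepted.append(y1)

# The pattern in the selected y1 values is f(y1 | y2.obs) = c f(y1) f(y2.obs | y1).
# Under a uniform prior the exact posterior is Beta(1 + 7, 1 + 3), with mean 8/12.
post_mean = sum(accepted) / len(accepted)
print(round(post_mean, 2))
```

The accepted y1 values approximate the posterior without any explicit use of Bayes theorem, which is exactly what makes this framing useful in teaching.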

And (with added brackets) it makes a point I can’t disagree with: “conditional calculations does not produce [relevant] probabilities from no [relevant] probabilities.”

Perhaps this is very relevant to me, as I am just wrapping up a consultation where 1,000-plus intervals were calculated and the confidence ones were almost identical to the credible ones – except for a few with really sparse data, where the credible intervals were obviously more sensible.
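The sparse-data breakdown is easy to illustrate with a toy cell (the numbers here are hypothetical, not from the consultation): with 0 successes in 5 trials, the Wald confidence interval collapses to a point, while a credible interval (here under a Jeffreys Beta(1/2, 1/2) prior, simulated with the standard library's beta sampler) stays informative.

```python
import random

random.seed(2)

x, n = 0, 5   # hypothetical sparse cell: 0 successes in 5 trials

# Wald 95% confidence interval: p_hat +/- 1.96 * sqrt(p_hat (1 - p_hat) / n)
p_hat = x / n
half = 1.96 * (p_hat * (1 - p_hat) / n) ** 0.5
wald = (p_hat - half, p_hat + half)   # degenerates to (0.0, 0.0)

# Credible interval: posterior is Beta(x + 1/2, n - x + 1/2) under the Jeffreys prior;
# take empirical 2.5% and 97.5% quantiles of posterior draws
draws = sorted(random.betavariate(x + 0.5, n - x + 0.5) for _ in range(100_000))
cred = (draws[2_500], draws[97_500])

print(wald)   # a single point: clearly not a sensible interval
print(cred)   # a nondegenerate interval with an upper limit well below 1
```

With adequate data the two kinds of interval here are nearly identical; it is exactly in cells like this one that they part company.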

But the concordance bought me something – if only not having to worry about MCMC convergence. (By the way, these computations were made almost easy and fully automated by Andrew’s R2WinBUGS package.)

The devil is in the details (nothing gets things totally right) – or so I am confident.



  1. K? O'Rourke says:

Thanks Andrew – had I been more careful, other obvious points of agreement would have been:

There are simple examples where confidence coverage is mathematically proven to be impossible (e.g. the difference in two normal means with different unknown variances)

In most practical applications confidence coverage is only approximate (e.g. the difference in two proportions, where the plot of coverage by p1=p2=p is not constant)
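    That non-constant coverage can be checked by exact enumeration over the binomial outcome grid – a sketch with a hypothetical n = 10 per group (my numbers, not from the paper), using the usual Wald interval for p1 - p2:

    ```python
    from math import comb, sqrt

    def wald_covers(x1, x2, n, delta=0.0):
        """Does the 95% Wald CI for p1 - p2 cover delta?"""
        p1, p2 = x1 / n, x2 / n
        se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
        d = p1 - p2
        return d - 1.96 * se <= delta <= d + 1.96 * se

    def coverage(p, n=10):
        """Exact coverage at p1 = p2 = p, summing binomial probabilities."""
        total = 0.0
        for x1 in range(n + 1):
            for x2 in range(n + 1):
                if wald_covers(x1, x2, n):
                    total += (comb(n, x1) * p**x1 * (1 - p) ** (n - x1)
                              * comb(n, x2) * p**x2 * (1 - p) ** (n - x2))
        return total

    for p in (0.05, 0.3, 0.5):
        print(p, round(coverage(p, n=10), 3))   # coverage drifts with p
    ```

    The printed coverages differ across p, so the nominal 95% is only approximate and the coverage plot against p1=p2=p is not flat.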

With very sparse data confidence coverage is hopeless (p.s. the paper in your link above was already in my pile of papers to read for PK/PD)

But when approximate confidence coverage is obtainable, should it not be the default (i.e. back to the Gelman/Greenland debate)?

    And is it important to know when the Bayes machinery will not give coverage?