The Efron transition? And the wit and wisdom of our statistical elders

Stephen Martin writes:

Brad Efron seems to have transitioned from “Bayes just isn’t as practical” to “Bayes can be useful, but EB is easier” to “Yes, Bayes should be used in the modern day” pretty continuously across three decades.

http://www2.stat.duke.edu/courses/Spring10/sta122/Handouts/EfronWhyEveryone.pdf
http://projecteuclid.org/download/pdf_1/euclid.ss/1028905930
http://statweb.stanford.edu/~ckirby/brad/other/2009Future.pdf

Also, Lindley’s comment in the first article is just GOLD:
“The last example with [λ = θ₁θ₂] is typical of a sampling theorist’s impractical discussions. It is full of Greek letters, as if this unusual alphabet was a repository of truth.” To which Efron responded: “Finally, I must warn Professor Lindley that his brutal, and largely unprovoked, attack has been reported to FOGA (Friends of the Greek Alphabet). He will be in for a very nasty time indeed if he wishes to use as much as an epsilon or an iota in any future manuscript.”

“Perhaps the author has been falling over all those bootstraps lying around.”

“What most statisticians have is a parody of the Bayesian argument, a simplistic view that just adds a woolly prior to the sampling-theory paraphernalia. They look at the parody, see how absurd it is, and thus dismiss the coherent approach as well.”

I pointed Stephen to this post and this article (in particular the bottom of page 295). Also this, I suppose.

4 Comments

  1. conjugateprior says:

    Even when he’s mistaken, Lindley burns are the best burns. From Lindley (2000): “Efron worries about the likelihood principle, which is not surprising when the bootstrap has no likelihood about which to have a principle”.

  2. Tim says:

    What would be a good link to the “coherent argument” that the final comment is referencing? It seems like something more specific than Bayes in general (too big a category), so I was wondering what the context was.

    • Stephen Martin says:

      Generally when I see someone write about the coherence of the Bayesian approach, they are referring to the fact that Bayesian modeling, estimation, and inference all come from one thing: Bayes’ theorem. It’s all glued together by Bayes’ theorem and the posterior probability distribution. Or, more simply, it’s all based in probability.

      In contrast, NHST is generally not coherent in this way. Estimation may be performed using maximum likelihood (or one of its variants), minimization of some loss function, etc. Inference is then performed separately, using analytically derived sampling distributions, bootstrapped sampling distributions, model comparison with some loss function, and so on.

      With Bayes, you specify one thing: the posterior distribution. From that, you get your estimates and your inference. You specify the model probabilistically; you obtain expected values or modal estimates of the posterior; you make your inference from the posterior. Parameter estimates? Bayes’ theorem. Model comparison? Bayes’ theorem. Inference? Bayes’ theorem.
      This can be good or bad. On one hand, it means you can’t hot-swap any one of these components for something else, whereas within the frequentist paradigm you can swap out how you estimate and how you make inferences pretty willy-nilly. On the other hand, it’s all driven by one statistical kernel: probability theory.
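
      To make that concrete, here is a minimal sketch in Python (my own toy example, not from any of the papers above: a conjugate beta-binomial model with made-up counts) of how the point estimate, the interval, and the inference all read off the same posterior object:

        from scipy import stats

        # Hypothetical data: 7 successes in 20 trials (numbers are made up).
        successes, trials = 7, 20

        # Beta(2, 2) prior + binomial likelihood => conjugate Beta posterior.
        posterior = stats.beta(2 + successes, 2 + (trials - successes))

        # Estimation and inference are both just functionals of that one object:
        print(posterior.mean())           # point estimate: posterior mean, 9/24 = 0.375
        print(posterior.interval(0.95))   # 95% credible interval
        print(1 - posterior.cdf(0.5))     # Pr(theta > 0.5 | data)

      Nothing above is hot-swapped in from a separate estimation or testing machinery; every quantity is computed from the one posterior.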

      With regard to this particular paper, I believe Efron was saying that the Bayesian estimate would be very different from a bootstrapped estimate, largely due to overfitting. But he didn’t really discuss priors. Lindley was just saying, I think, that if you think some estimate is ridiculous, then your prior should express that beforehand. Because the prior wasn’t even mentioned, Efron was effectively using an incoherent version of Bayes (likelihood-only, with no real prior) and then calling it incoherent. “Coherent Bayes,” in this case, would have and use a prior.
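
      As a toy illustration of that last point (my own numbers; nothing here is from Efron or Lindley): with an extreme sample, the likelihood-only estimate sits on the boundary, and it is the prior’s job to register skepticism about that in advance:

        from scipy import stats

        # Hypothetical extreme sample: 20 successes out of 20 trials.
        successes, trials = 20, 20

        mle = successes / trials  # likelihood-only estimate: exactly 1.0

        # A prior says up front how seriously to take such an estimate:
        flat      = stats.beta(1 + successes, 1 + trials - successes)  # Beta(1, 1) prior
        skeptical = stats.beta(2 + successes, 2 + trials - successes)  # Beta(2, 2) prior

        print(mle)               # 1.0 -- the "ridiculous" boundary estimate
        print(flat.mean())       # 21/22 ~ 0.955
        print(skeptical.mean())  # 22/24 ~ 0.917, pulled back toward the prior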

  3. Charlie Williams says:

    Ironically*, Efron’s most pro-Bayes essay might have been improved with more discussion of his priors on Bayes, Fisher, NPW…

    *usage correct? Alanis Morissette and the subsequent brouhaha have made me permanently paranoid about this.
