Jaynes is no guru

E. T. Jaynes was a physicist who applied Bayesian inference to problems in statistical mechanics and signal processing. He was an excellent writer with a dramatic style, and some of his work inspired me greatly. In particular, I like his approach of assuming a strong model and then fixing it when it does not fit the data. (This sounds obvious, but the standard Bayesian methodology of 20 years ago did not allow for this.) I don’t think Jaynes ever stated this principle explicitly, but he followed it in his examples. I remember one example on the probabilities of getting 1, 2, 3, 4, 5, or 6 on a roll of a die, where he discussed how various imperfections of the die would move you away from a uniform distribution. It was an interesting example because he didn’t just try to fit the data; rather, he used model misfit as information to learn more about the physical system under study.
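
For readers who want the flavor of that calculation: Jaynes’s maximum-entropy treatment of dice (the Brandeis problem) asks for the most noncommittal distribution over the six faces consistent with a constraint such as the average face value. Here is a minimal sketch along those lines; the 4.5 target mean, the function name, and the numpy/scipy machinery are my own illustrative choices, not anything taken from Jaynes’s text.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_die(target_mean):
    """Maximum-entropy probabilities for a six-sided die whose
    expected face value is constrained to equal target_mean."""
    if np.isclose(target_mean, 3.5):
        return np.full(6, 1 / 6)  # only normalization binds: uniform

    def gap(lam):
        # difference between the mean implied by exponential tilt lam
        # and the target mean
        w = np.exp(lam * faces)
        return (faces * w).sum() / w.sum() - target_mean

    lam = brentq(gap, -50, 50)  # solve for the tilt parameter
    w = np.exp(lam * faces)
    return w / w.sum()

print(np.round(maxent_die(3.5), 3))  # fair die: 1/6 for each face
print(np.round(maxent_die(4.5), 3))  # tilted toward the higher faces
```

With the mean pinned at 3.5 you recover the uniform distribution; pushing it to 4.5 tilts the probabilities exponentially toward the higher faces, which is one precise sense in which an imperfection moves you away from uniform.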

That said, I think there’s an unfortunate tendency among some physicists and others to think of Jaynes as a guru and to think his pronouncements are always correct. (See the offhand mentions here, for example.) I’d draw an analogy to another Ed: I’m thinking here of Tufte, who made huge contributions in statistical graphics and also has a charismatic, oracular style of writing. Anyway, back to Jaynes: I firmly believe that much of one’s statistical taste is formed by exposure to particular applications, and I could imagine that Jaynes’s methods worked particularly well for his problems but wouldn’t directly apply, for example, to data analyses in economics and political science. The general principles still hold—certainly, our modeling advice starting on page 3 of Bayesian Data Analysis is inspired by Jaynes as well as other predecessors—but I wouldn’t treat his specific words (or anyone else’s, including ours) as gospel.

3 thoughts on “Jaynes is no guru”

  1. I have the collected volume of Jaynes' papers (edited by Rosenkrantz). He was certainly a stimulating and entertaining writer. The whole objective Bayesian approach he developed is based on recognizing when a statistical problem has a lot of symmetry, and then respecting that symmetry in your model. In certain fields (statistical physics, deconvolution of blurred images) this approach seems to be enough to allow valid statistical inference from a very parsimonious model. I can't see it as a universal principle, however, because there are a lot of problems that don't have symmetry.

  2. If a problem doesn't have symmetry, then don't use a symmetrical prior distribution. There is nothing in Jaynes' work, or in Bayesian probability theory, that requires symmetry. Bayesian probability theory does require you to quantitatively encode what you do know about the problem in prior probability distributions. If you know those distributions should be asymmetrical, then that's the prior information you'd better use (garbage in, garbage out).
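
To put a number on that point, here is a minimal sketch, with made-up data, of how a symmetric versus an asymmetric prior plays out in the simplest conjugate setting: a Beta prior on a Binomial proportion, say a defect rate we have real reason to believe is low.

```python
# Made-up data: 3 "successes" observed in 20 trials.
observed_successes, trials = 3, 20

def posterior_mean(a, b, successes, n):
    """Posterior mean of a Binomial proportion under a Beta(a, b) prior
    (the posterior is Beta(a + successes, b + n - successes))."""
    return (a + successes) / (a + b + n)

print(posterior_mean(2, 2, observed_successes, trials))  # symmetric prior centered at 0.5
print(posterior_mean(2, 8, observed_successes, trials))  # asymmetric prior favoring low rates
```

Same data, different prior knowledge, different answer: the asymmetric Beta(2, 8) prior, which says in advance that large rates are implausible, pulls the estimate lower than the symmetric Beta(2, 2) prior does.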
