Jessy, Aki, and I write:
We review the Akaike, deviance, and Watanabe-Akaike information criteria from a Bayesian perspective, where the goal is to estimate expected out-of-sample prediction error using a bias-corrected adjustment of within-sample error. We focus on the choices involved in setting up these measures, and we compare them in three simple examples, one theoretical and two applied. The contribution of this review is to put all these information criteria into a Bayesian predictive context and to better understand, through small examples, how these methods can apply in practice.
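To make the "bias-corrected adjustment of within-sample error" idea concrete, here is a minimal sketch (my own illustration, not code from the paper) of the standard WAIC computation: the within-sample fit (lppd) is penalized by an effective-number-of-parameters term built from the pointwise posterior variance of the log-likelihood. The toy normal model and variable names are assumptions for the example.

```python
# Illustrative sketch: WAIC from a matrix of pointwise log-likelihoods,
# one row per posterior draw, one column per data point.
import numpy as np

def waic(log_lik):
    """log_lik: array of shape (n_draws, n_points)."""
    # lppd: log pointwise predictive density (within-sample fit),
    # averaging the likelihood over posterior draws at each data point
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    # p_waic: bias correction (effective number of parameters),
    # the summed posterior variance of the pointwise log-likelihood
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    # report on the deviance scale, -2 * (estimated elpd)
    return -2 * (lppd - p_waic)

# Toy example: normal(mu, 1) model with a fake posterior for mu
rng = np.random.default_rng(0)
y = rng.normal(0.0, 1.0, size=20)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=1000)
# pointwise log-likelihood, shape (1000, 20)
log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2
print(round(waic(log_lik), 2))
```

With one unknown parameter, the penalty term `p_waic` should come out close to 1, which is the sort of sanity check the paper's small examples are built around.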
I like this paper. It came about as a result of preparing Chapter 7 for the new BDA. I had difficulty understanding AIC, DIC, WAIC, etc., but I recognized that these methods served a need. My first plan was to just apply DIC and WAIC to a couple of simple examples (a linear regression and the 8 schools) and leave it at that. But when I did the calculations, I couldn't understand the results. Hence more effort went into working all these out in some simple examples, and into further thought about the ultimate motivations for all these methods.
P.S. When introducing AIC, Akaike called it An Information Criterion. When introducing WAIC, Watanabe called it the Widely Applicable Information Criterion. Aki and I are hoping to come up with something called the Very Good Information Criterion.