Aki Vehtari and Janne Ojanen just published a long paper that begins:
To date, several methods exist in the statistical literature for model assessment, which purport themselves specifically as Bayesian predictive methods. The decision theoretic assumptions on which these methods are based are not always clearly stated in the original articles, however. The aim of this survey is to provide a unified review of Bayesian predictive model assessment and selection methods, and of methods closely related to them. We review the various assumptions that are made in this context and discuss the connections between different approaches, with an emphasis on how each method approximates the expected utility of using a Bayesian model for the purpose of predicting future data.
AIC (which Akaike called “An Information Criterion”) is the starting point for all these methods. More recently, Watanabe came up with WAIC (which he called the “Widely Applicable Information Criterion”). In between there was DIC, which has some Bayesian aspects but does not fully average over the posterior distribution.
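The contrast can be made concrete with a toy sketch: AIC penalizes the maximized log-likelihood by the number of parameters, while WAIC averages the log predictive density over posterior draws and penalizes by its pointwise posterior variance. The example below uses a simple normal model with simulated data and simulated stand-in “posterior” draws (all values here are illustrative assumptions, not from any real analysis; in practice the draws would come from MCMC):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data from a normal model (purely illustrative).
y = rng.normal(loc=1.0, scale=2.0, size=50)
n = len(y)

# --- AIC: -2 * (maximized log-likelihood) + 2 * (number of parameters) ---
mu_hat, sigma_hat = y.mean(), y.std()  # MLEs for the normal model
loglik_hat = np.sum(-0.5 * np.log(2 * np.pi * sigma_hat**2)
                    - (y - mu_hat)**2 / (2 * sigma_hat**2))
aic = -2 * loglik_hat + 2 * 2  # two parameters: mu and sigma

# --- WAIC: averages over the posterior instead of plugging in a point estimate ---
# Stand-in "posterior" draws for mu; sigma held fixed for simplicity.
S = 4000
mu_draws = rng.normal(mu_hat, sigma_hat / np.sqrt(n), size=S)

# Log predictive density of each observation under each draw: shape (S, n).
logp = (-0.5 * np.log(2 * np.pi * sigma_hat**2)
        - (y[None, :] - mu_draws[:, None])**2 / (2 * sigma_hat**2))

# lppd: log pointwise predictive density, averaging over posterior draws.
lppd = np.sum(np.log(np.mean(np.exp(logp), axis=0)))
# Effective number of parameters: posterior variance of the log density.
p_waic = np.sum(np.var(logp, axis=0, ddof=1))
waic = -2 * (lppd - p_waic)

print(f"AIC = {aic:.1f}, WAIC = {waic:.1f}, p_waic = {p_waic:.2f}")
```

For this simple model the two criteria land close together; the point of the sketch is only that WAIC is computed from the whole posterior, which is what makes it “fully Bayesian” where AIC and DIC are not.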
I still dream of coming up with something with Vehtari and calling it the Very Good Information Criterion. But I don’t think it’s gonna happen. The tradition in this area has been to come up with a clever, computable formula and then hope it does everything we want. Vehtari and Ojanen do it slightly differently by asking more clearly what the goals are. If the goal is some sort of predictive error, then it turns out that there is no magic formula. In fact, it’s not even clear what the goal is. It’s easy to come up with examples where the relevant out-of-sample predictive error can be defined in different, incompatible ways. One valuable aspect of the Vehtari and Ojanen paper is that they explicitly discuss these different goals rather than assuming or implying that a single measure will tell the whole story.