At the end of this article you wonder about consistency. Have you ever considered the possibility that utility might resolve some of the problems? I have no idea whether it would—I am not advocating that position—but I get some intuition from phrases like “Judgment is required to decide…”. Perhaps there is a coherent and objective description of what is—or could be—done under a coherent “utility” model (that is, a utility that could be objectively agreed upon and computed). Utilities are usually subjective, true—but priors are usually subjective too.
I’m happy to think about utility—for some particular problem or class of problems, going to the effort of assigning costs and benefits to different outcomes. I agree that a utility analysis, even if (necessarily) imperfect, can usefully focus discussion. For example, if a statistical method for selecting variables is justified on the basis of cost, I like the idea of attempting to quantify the costs of gathering and handling predictors, as compared to the costs of errors in predictions for new data.
But the problem of incoherence as discussed at the end of my article—that’s something different. Here I’m referring to two fundamental problems with Bayesian data analysis as I practice it:
1. I prefer continuous model expansion to discrete model averaging—but the former can be seen as just a limiting case of the latter. So really I need a better understanding of what sorts of model expansions work well and what sorts run into trouble. From a Bayesian perspective, the trouble typically arises from the joint prior distribution over the larger, expanded space. Default choices such as prior independence often create problems that were not so obvious when the model was set up.
2. My procedure of model building, inference, and model checking requires outside human intervention. If you wanted to program a computer to do Bayesian data analysis, how could it carry out these steps? How can our brains do anything approximating Bayesian data analysis? Neither the computer nor the brain has a “homunculus” that can sit outside, make graphs, and do posterior predictive checks. I don’t have a great answer to this right now, but I suspect that a natural or artificial intelligence would actually need some external module to check model fit. This connects to the familiar “aha” feeling and to the fractal nature of scientific revolutions.
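The limiting relationship in point 1 can be sketched in formulas, under the simplifying assumption of a single scalar parameter θ (the notation here is mine, not the article’s). Discrete model averaging over submodels M_k, each fixing θ = θ_k with prior weight π_k, gives a finite mixture; as the grid of submodels is refined, that mixture tends to the usual continuous-model-expansion integral:

```latex
% Discrete model averaging over submodels M_k, each fixing theta = theta_k:
p(y) = \sum_k \pi_k \, p(y \mid \theta_k).
% Refining the grid \{\theta_k\}, with weights \pi_k \approx p(\theta_k)\,\Delta\theta,
% the sum tends to the continuous-expansion form:
p(y) = \int p(y \mid \theta)\, p(\theta)\, d\theta.
```

The difficulty flagged in point 1 then lives in the choice of p(θ) over the expanded space: with several expansion parameters, a default independent prior on them can behave badly even when each marginal prior looks reasonable.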
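To make the “external module” in point 2 concrete, here is a minimal sketch of a posterior predictive check—not the procedure from any particular example in the article. All the specifics (the normal model, the deliberately misspecified fixed σ = 1, the sample standard deviation as test statistic, the simulated data) are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data (stand-in for real data).
y = rng.normal(loc=1.0, scale=2.0, size=100)

# Assumed model: y_i ~ Normal(mu, 1), flat prior on mu. The model
# deliberately fixes sigma = 1, so the check should flag misfit in spread.
# The posterior for mu is then Normal(ybar, 1/n).
n = len(y)
mu_draws = rng.normal(y.mean(), 1 / np.sqrt(n), size=1000)

# Test statistic: sample standard deviation.
def T(data):
    return data.std(ddof=1)

# Replicated datasets y_rep drawn from the posterior predictive distribution.
T_rep = np.array([T(rng.normal(mu, 1.0, size=n)) for mu in mu_draws])

# Posterior predictive p-value: fraction of replications with
# T(y_rep) >= T(y). Values near 0 or 1 signal misfit.
p_value = (T_rep >= T(y)).mean()
print(p_value)
```

The point of the sketch is that the check sits outside the model: the posterior over mu is internally coherent, yet comparing T(y) with the T(y_rep) distribution (here the p-value comes out near zero, since the data’s spread exceeds the model’s fixed σ) is a separate act of criticism that the fitting machinery itself never performs.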