Bayes and parsimony

Check out Peter Grunwald’s long comment on this entry on Bayes and parsimony. He has some interesting things to say, most notably in cautioning that Bayesian inference will not necessarily get you to the true model, or even close to the true model, even with large sample sizes. I don’t have much to add to his thoughts, except to note that he seems to be working with discrete models, and I usually work with continuous models. I know that the distinction between discrete and continuous is somewhat arbitrary (for example, I use logistic regression for binary data, thus using a continuous model for discrete data). Nonetheless, I suspect that some of his coding ideas are particularly appropriate for discrete models, and some of my hierarchical modeling ideas make the most sense for continuous models. In particular, when fitting a continuous model, I see no advantage (beyond the very real issues of cost, computation, and storage) to zeroing out a variable rather than just shrinking it. But in a discrete model, I can imagine that such “quantum” phenomena occur in fitting. I’m curious what Radford thinks.
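To make the shrinking-versus-zeroing distinction concrete, here's a toy sketch (my own illustration, not code from Grunwald or anyone discussed above; the regularization constant, cutoff, and simulated data are all arbitrary choices). Continuous shrinkage, here a ridge-style estimate, pulls every coefficient toward zero but keeps them all nonzero, while a hard threshold zeroes some of them out entirely, the "quantum" behavior in question:

```python
# Hypothetical illustration: shrinkage vs. zeroing out coefficients
# in a continuous regression model.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
beta_true = np.array([2.0, 0.5, 0.0, 0.0, 0.0])
X = rng.standard_normal((n, p))
y = X @ beta_true + rng.standard_normal(n)

# Ordinary least squares fit.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Continuous shrinkage: a ridge estimate pulls all coefficients
# toward zero but leaves every one of them nonzero.
# (lam = 10 is an arbitrary choice for illustration.)
lam = 10.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Discrete selection: hard-threshold the OLS estimates, setting
# anything below an arbitrary cutoff exactly to zero.
cutoff = 0.3
beta_hard = np.where(np.abs(beta_ols) > cutoff, beta_ols, 0.0)

print("OLS:  ", np.round(beta_ols, 3))
print("ridge:", np.round(beta_ridge, 3))
print("hard: ", np.round(beta_hard, 3))
```

The ridge estimates shrink smoothly, so nothing is ever "switched off"; the thresholded estimates jump discontinuously to exact zeros, which is the kind of discrete behavior that may matter more when the model itself is discrete.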

(I like Peter Grunwald, though I've never met him, because according to his webpage he has a daughter named Wiske, which reminds me of Suske and Wiske, names whose sound I like.)