“The difference between a fact and an opinion for purposes of decision making and inference is that when I use opinions, I get uncomfortable. I am not too uncomfortable with the opinion that error terms are normally distributed because most econometricians make use of that assumption. This observation has deluded me into thinking that the opinion that error terms are normal may be a fact, when I know deep inside that normal distributions are actually used only for convenience. In contrast, I am quite uncomfortable using a prior distribution, mostly I suspect because hardly anyone uses them. If convenient prior distributions were used as often as convenient sampling distributions, I suspect that I could be as easily deluded into thinking that prior distributions are facts as I have been into thinking that sampling distributions are facts.”

“In this paper, we focus on quantifying the sensitivity of posterior means to perturbations of the prior.” I don’t think this indicates a seriously incorrect understanding on their part.

The reason is that the next sentence says: “One hopes that the posterior is robust to reasonable variation in the choice of prior, since this choice is made by the modeler and is often somewhat subjective.”

The phrase “this choice is made by the modeler,” referring to the choice of prior, implies pretty strongly that the author understands that the choice of likelihood *isn’t* made by the modeler.

That is a seriously incorrect understanding.

Every part of the Bayesian model is chosen by the modeler to express the modeler’s understanding of what’s going on. The likelihood is a choice, whether you leave that choice to a default or you actually consciously think about it. There *is no* objectively “correct” likelihood (unless you’re talking about a simulated data situation with a random number generator).
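To make that concrete, here is a minimal sketch (with made-up data and an arbitrary grid) showing that the likelihood is just as consequential a choice as the prior: the same data and the same flat prior give noticeably different posterior means depending on whether you assume a Normal or a Student-t likelihood for a location parameter.

```python
# Sketch: the likelihood is a modeling choice, and the choice matters.
# Grid-approximate posterior for a location parameter mu under two
# likelihoods (Normal vs. Student-t) with the same flat prior.
# The data, grid, and scale below are invented for illustration.
import numpy as np
from scipy import stats

y = np.array([-0.5, 0.1, 0.3, 0.7, 1.0, 8.0])  # note the outlier at 8
grid = np.linspace(-5, 10, 2001)

def posterior_mean(logpdf):
    # Log-likelihood of the data at each grid point; flat prior means
    # the posterior is proportional to the likelihood.
    loglik = np.array([logpdf(y, mu).sum() for mu in grid])
    w = np.exp(loglik - loglik.max())   # stabilize before exponentiating
    w /= w.sum()
    return float((grid * w).sum())

mean_normal = posterior_mean(lambda y, mu: stats.norm.logpdf(y, loc=mu, scale=1.0))
mean_t = posterior_mean(lambda y, mu: stats.t.logpdf(y, df=3, loc=mu, scale=1.0))

print(mean_normal)  # pulled toward the outlier
print(mean_t)       # heavy tails discount the outlier
```

With a flat prior the Normal-likelihood posterior mean is essentially the sample mean, while the t likelihood largely ignores the outlier; neither answer is objectively “correct.”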

There are lots of checks for models; I’ve written several papers on the topic too! But for the models that I’ve worked on, “plotting the empirical data distribution” doesn’t do much: my data are typically binary!

> think that we already have good tools for checking the data model but not so the prior

That was my view but Mike Evans raised enough doubt about that for me to no longer be so sure.

But I have fallen behind on reading his work: “Checking for prior-data conflict using prior to posterior divergences,” https://arxiv.org/pdf/1611.00113v3.pdf
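For readers who want a feel for what a prior-data conflict check looks like, here is a much simpler stand-in for Evans’ divergence-based method: a prior-predictive tail check in the same spirit. All numbers (the Beta prior, the observed counts) are invented for illustration; if the observed data land far in the tails of the prior predictive distribution, the prior and the data conflict.

```python
# A minimal prior-predictive check for prior-data conflict. This is a
# simplified sketch in the spirit of (not identical to) Evans'
# divergence-based checks. The prior and data below are made up.
import numpy as np

rng = np.random.default_rng(0)
n, k_obs = 50, 45      # observed: 45 successes out of 50 trials
a, b = 2.0, 8.0        # prior Beta(2, 8): expects a low success rate

# Simulate the prior predictive: theta ~ Beta(a, b), k ~ Binomial(n, theta)
theta = rng.beta(a, b, size=100_000)
k_rep = rng.binomial(n, theta)

# Two-sided tail probability of the observed count under the prior predictive
p_tail = min(float((k_rep >= k_obs).mean()), float((k_rep <= k_obs).mean()))
print(p_tail)  # a tiny value signals prior-data conflict
```

Here the Beta(2, 8) prior expects roughly 10 successes in 50, so observing 45 puts the data deep in the prior-predictive tail and the check flags a conflict.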

If there’s any equivalent way to check the prior, I’m not aware of it. My understanding of the situation is that people think that we already have good tools for checking the data model but not so the prior, and thus they treat model-checking as a given and focus on the unsolved problem.

The authors write: “One hopes that the posterior is robust to reasonable variation in the choice of prior, since this choice is made by the modeler and is often somewhat subjective.” My point is that the choice of data model is also important and also subjective. I do not think it makes sense to single out the “prior” part of the model for robustness checking. This is a mistake I’ve seen a lot in the Bayesian literature, to see the prior as something to be concerned about and to accept the data model without question.
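For concreteness, the kind of prior-sensitivity check the quoted paper describes can be sketched in a conjugate model (the data below are made up); the same exercise could just as well be run over alternative data models:

```python
# Sketch of prior sensitivity of a posterior mean: how far does it
# move under "reasonable" perturbations of the prior? Conjugate
# Beta-Binomial model with invented data: 14 successes in 20 trials.
n, k = 20, 14

def post_mean(a, b):
    # Beta(a, b) prior + Binomial(n, theta) data
    # -> Beta(a + k, b + n - k) posterior, whose mean is:
    return (a + k) / (a + b + n)

m_uniform = post_mean(1.0, 1.0)     # flat Beta(1, 1) prior
m_jeffreys = post_mean(0.5, 0.5)    # Jeffreys Beta(1/2, 1/2) prior
m_skeptic = post_mean(5.0, 5.0)     # prior concentrated near 0.5

print(m_uniform, m_jeffreys, m_skeptic)
```

The three posterior means differ only modestly here, but exactly the same sensitivity question applies to swapping the Binomial data model for, say, a beta-binomial with overdispersion, which is the point: the prior is not the only subjective ingredient.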

The alternative interpretation “posterior follows from (the data) and (a choice of a prior) and (a likelihood)” doesn’t sound right to me (but I’m not a native English speaker, so I might be wrong).

Yes!
