^*Footnote: In the ideal case we would use KL(p||q), as it measures the information lost by approximating p with q, but in practice we may use other divergences, either because they are easier to compute or because they are less sensitive to misspecification.
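To make the footnote concrete, here is a minimal sketch of KL(p||q) for two discrete distributions on the same support (the function name and the example distributions are illustrative, not from the source):

```python
from math import log

# Kullback-Leibler divergence KL(p||q) for discrete distributions
# p and q over the same support, in nats.
def kl_divergence(p, q):
    # Terms with p_i == 0 contribute 0 by the usual convention.
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]
q = [0.9, 0.1]
print(kl_divergence(p, q))  # information lost when q approximates p
print(kl_divergence(q, p))  # a different value: KL is not symmetric
```

Note that KL is asymmetric, which is one reason other divergences are sometimes preferred when the direction of approximation is not the natural one.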

But somewhat more assurance than, say, feeling financially secure given two lottery tickets in separate lotteries ;-)

On the other hand, one could speculate that if direct sampling from posteriors became possible in wide generality, future statisticians would not need to learn (much) more than high school math.

Figure_2_linear_reg.R line 57: `I=length(tel_vec)` –> `I=length(tol_vec)`