We have no fireworks-related posts for July 4th but at least we have an item that’s appropriate for the summer weather. It comes from Daniel Lakeland, who writes: Recently in one of your blog posts (“priors I don’t believe”) there was a discussion in which I was advocating the use of dimensional analysis and dimensionless […]

**Bayesian Statistics** category.

## “The great advantage of the model-based over the ad hoc approach, it seems to me, is that at any given time we know what we are doing.”

The quote is from George Box, 1979. And this: Please can Data Analysts get themselves together again and become whole Statisticians before it is too late? Before they, their employers, and their clients forget the other equally important parts of the job statisticians should be doing, such as designing investigations and building models? I actually […]

## “Being an informed Bayesian: Assessing prior informativeness and prior–likelihood conflict”

Xiao-Li Meng sends along this paper (coauthored with Matthew Reimherr and Dan Nicolae), which begins: Dramatically expanded routine adoption of the Bayesian approach has substantially increased the need to assess both the confirmatory and contradictory information in our prior distribution with regard to the information provided by our likelihood function. We propose a diagnostic approach […]

## Useless Algebra, Inefficient Computation, and Opaque Model Specifications

I (Bob, not Andrew) doubt anyone sets out to do algebra for the fun of it, implement an inefficient algorithm, or write a paper where it’s not clear what the model is. But… Why not write it in BUGS or Stan? Over on the Stan users group, Robert Grant wrote Hello everybody, I’ve just been […]

## Comment of the week

This one, from DominikM: Really great, the simple random intercept – random slope mixed model I did yesterday now runs at least an order of magnitude faster after installing RStan 2.3 this morning. You are doing an awesome job, thanks a lot!

## Quantifying luck vs. skill in sports

Trey Causey writes: If you’ll permit a bit of a diversion, I was wondering if you’d mind sharing your thoughts on how sabermetrics approaches the measurement of luck vs. skill. Phil Birnbaum and Tom Tango use the following method (which I’ve quoted below). It seems to embody the innovative but often non-intuitive way that sabermetrics […]

## (Py, R, Cmd) Stan 2.3 Released

We’re happy to announce RStan, PyStan and CmdStan 2.3. Instructions on how to install at: http://mc-stan.org/ As always, let us know if you’re having problems or have comments or suggestions. We’re hoping to roll out the next release a bit quicker this time, because we have lots of good new features that are almost ready […]

## Combining forecasts: Evidence on the relative accuracy of the simple average and Bayesian model averaging for predicting social science problems

Andreas Graefe sends along this paper (with Helmut Kuchenhoff, Veronika Stierle, and Bernhard Riedl) and writes: We summarize prior evidence from the field of economic forecasting and find that the simple average was more accurate than Bayesian model averaging in three of four studies; on average, the error of BMA was 6% higher than the […]
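The two combination methods being compared can be sketched in a few lines. This is a minimal illustration, not code from the paper: the forecast values and the BMA-style posterior weights below are invented for the example.

```python
# Combining point forecasts from several models.
# Simple average: every model gets equal weight.
# BMA-style average: models are weighted by (here, hypothetical)
# posterior model probabilities that sum to 1.

forecasts = {
    "model_a": 2.1,
    "model_b": 2.7,
    "model_c": 1.8,
}

# Hypothetical posterior weights for illustration only.
weights = {"model_a": 0.5, "model_b": 0.3, "model_c": 0.2}

simple_average = sum(forecasts.values()) / len(forecasts)
bma_style = sum(weights[m] * forecasts[m] for m in forecasts)

print(round(simple_average, 3))
print(round(bma_style, 3))
```

The paper's finding is that the equal-weight combination was often the more accurate of the two in practice, despite its simplicity.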

## Judicious Bayesian Analysis to Get Frequentist Confidence Intervals

Christian Bartels has a new paper, “Efficient generic integration algorithm to determine confidence intervals and p-values for hypothesis testing,” of which he writes: The paper proposes to do an analysis of observed data which may be characterized as doing a judicious Bayesian analysis of the data resulting in the determination of exact frequentist p-values and […]

## Average predictive comparisons in R: David Chudzicki writes a package!

Here it is: An R Package for Understanding Arbitrary Complex Models As complex models become widely used, it’s more important than ever to have ways of understanding them. Even when a model is built primarily for prediction (rather than primarily as an aid to understanding), we still need to know what it’s telling us. For […]