Archive of posts filed under the Bayesian Statistics category.

Quantifying luck vs. skill in sports

Trey Causey writes: If you’ll permit a bit of a diversion, I was wondering if you’d mind sharing your thoughts on how sabermetrics approaches the measurement of luck vs. skill. Phil Birnbaum and Tom Tango use the following method (which I’ve quoted below). It seems to embody the innovative but often non-intuitive way that sabermetrics […]
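
The method rests on a simple variance decomposition: the variance of observed records is the variance of true skill plus the variance of binomial luck, so skill variance can be estimated by subtraction. Here is a minimal sketch of that decomposition in R, using simulated team records rather than real data (the league size, schedule length, and skill spread are invented for illustration):

    # Variance decomposition var(observed) = var(skill) + var(luck),
    # with binomial luck variance p * (1 - p) / n. Simulated records.
    set.seed(1)
    n_teams <- 30
    n_games <- 162
    true_skill <- rnorm(n_teams, mean = 0.5, sd = 0.06)  # true win probabilities
    wins <- rbinom(n_teams, n_games, true_skill)
    obs_pct <- wins / n_games

    var_obs   <- var(obs_pct)
    var_luck  <- mean(obs_pct * (1 - obs_pct)) / n_games  # binomial noise
    var_skill <- max(var_obs - var_luck, 0)               # skill by subtraction
    c(sd_obs = sqrt(var_obs), sd_luck = sqrt(var_luck), sd_skill = sqrt(var_skill))

Over a 162-game schedule the luck standard deviation is roughly 0.04 in winning percentage, which is why the observed spread overstates the spread in skill.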

(Py, R, Cmd) Stan 2.3 Released

We’re happy to announce RStan, PyStan and CmdStan 2.3. Instructions on how to install are at http://mc-stan.org/. As always, let us know if you’re having problems or have comments or suggestions. We’re hoping to roll out the next release a bit quicker this time, because we have lots of good new features that are almost ready […]
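
For readers who want to kick the tires from R, here is a minimal smoke test of an RStan installation. The install line reflects the current CRAN route, which may differ from the release-era instructions at mc-stan.org, and the model and data are placeholders, not part of the announcement:

    # install.packages("rstan")  # current CRAN route; see mc-stan.org for specifics
    library(rstan)

    # A trivial model, just to confirm the toolchain compiles and samples.
    code <- "
    data { int<lower=0> N; vector[N] y; }
    parameters { real mu; }
    model { y ~ normal(mu, 1); }
    "
    fit <- stan(model_code = code,
                data = list(N = 5, y = c(0.1, -0.2, 0.4, 0.0, 0.3)),
                iter = 1000, chains = 2)
    print(fit)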

Combining forecasts: Evidence on the relative accuracy of the simple average and Bayesian model averaging for predicting social science problems

Andreas Graefe sends along this paper (with Helmut Kuchenhoff, Veronika Stierle, and Bernhard Riedl) and writes: We summarize prior evidence from the field of economic forecasting and find that the simple average was more accurate than Bayesian model averaging in three of four studies; on average, the error of BMA was 6% higher than the […]
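
To make the contrast concrete: the simple average weights each model's forecast equally, while BMA-style weighting favors models with higher (approximate) posterior probability. A toy illustration in R, using Akaike weights as a rough stand-in for posterior model probabilities (simulated data, not the paper's analysis):

    # Simple average vs. AIC-weighted average of two forecasting models.
    set.seed(2)
    n <- 200
    d <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
    d$y <- 0.5 * d$x1 + 0.1 * d$x2 + rnorm(n)
    train <- d[1:150, ]; test <- d[151:200, ]

    m1 <- lm(y ~ x1, data = train)
    m2 <- lm(y ~ x1 + x2, data = train)
    f1 <- predict(m1, test)
    f2 <- predict(m2, test)

    a <- c(AIC(m1), AIC(m2))
    w <- exp(-(a - min(a)) / 2); w <- w / sum(w)  # Akaike weights
    rmse <- function(f) sqrt(mean((test$y - f)^2))
    c(simple = rmse((f1 + f2) / 2), weighted = rmse(w[1] * f1 + w[2] * f2))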

Judicious Bayesian Analysis to Get Frequentist Confidence Intervals

Christian Bartels has a new paper, “Efficient generic integration algorithm to determine confidence intervals and p-values for hypothesis testing,” of which he writes: The paper proposes an analysis of observed data that may be characterized as a judicious Bayesian analysis, resulting in the determination of exact frequentist p-values and […]
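
The excerpt is thin on detail, but one way to read the goal is calibration: intervals derived from a Bayesian analysis should, under repeated sampling from a fixed truth, cover at their nominal frequentist rate. The following is a generic coverage-check sketch in R, a conjugate normal example of the idea rather than the integration algorithm the paper proposes:

    # Frequentist coverage of a 95% credible interval from a conjugate
    # normal model with a normal(0, 10) prior; truth and sample size invented.
    set.seed(3)
    theta_true <- 1; n <- 20; reps <- 2000
    covered <- replicate(reps, {
      y <- rnorm(n, theta_true, 1)
      post_prec <- 1 / 10^2 + n          # posterior precision
      post_mean <- sum(y) / post_prec
      post_sd   <- sqrt(1 / post_prec)
      ci <- qnorm(c(0.025, 0.975), post_mean, post_sd)
      ci[1] < theta_true && theta_true < ci[2]
    })
    mean(covered)  # should be close to 0.95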

Average predictive comparisons in R: David Chudzicki writes a package!

Here it is: An R Package for Understanding Arbitrary Complex Models As complex models become widely used, it’s more important than ever to have ways of understanding them. Even when a model is built primarily for prediction (rather than primarily as an aid to understanding), we still need to know what it’s telling us. For […]
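
The underlying quantity, Gelman and Pardoe's average predictive comparison, can be computed by hand without guessing at the package's API: perturb one input, hold the others at their observed values, and average the change in the model's prediction. A hedged sketch in R with a simulated logistic regression:

    # Hand-rolled average predictive comparison for one input; a sketch of
    # the idea, not the package's interface. Data are simulated.
    set.seed(4)
    d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
    d$y <- rbinom(200, 1, plogis(1.5 * d$x1 - 0.8 * d$x2))
    fit <- glm(y ~ x1 + x2, family = binomial, data = d)

    delta <- 1                      # perturbation of x1
    hi <- lo <- d
    hi$x1 <- d$x1 + delta / 2
    lo$x1 <- d$x1 - delta / 2
    apc <- mean(predict(fit, hi, type = "response") -
                predict(fit, lo, type = "response")) / delta
    apc  # average change in Pr(y = 1) per unit of x1, other inputs as observed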

Comparing the full model to the partial model

Pat Lawlor writes: We are writing with a question about model comparison and fitting. We work in a group at Northwestern that does neural data analysis and modeling, and often would like to compare full models (e.g. neurons care about movement and vision) with various partial models (e.g. they only care about movement). We often […]
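
For nested models like these, a standard first pass is a likelihood-ratio test or a penalized comparison, with cross-validation as the out-of-sample alternative. A toy example in R, with an invented Poisson spike-count setup standing in for the neural data:

    # Full vs. partial model via likelihood-ratio test and AIC.
    set.seed(5)
    n <- 500
    movement <- rnorm(n); vision <- rnorm(n)
    spikes <- rpois(n, exp(0.2 + 0.6 * movement + 0.3 * vision))

    partial <- glm(spikes ~ movement, family = poisson)
    full    <- glm(spikes ~ movement + vision, family = poisson)
    anova(partial, full, test = "Chisq")  # does vision add anything?
    AIC(partial, full)                    # penalized comparison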

Stan is Turing Complete. So what?

This post is by Bob Carpenter. Stan is Turing complete! There seems to be a persistent misconception that Stan isn't Turing complete [1, 2]. My guess is that it stems from Stan's (not coincidental) superficial similarity to BUGS and JAGS, which provide directed graphical model specification languages. Stan's Turing completeness follows from its support of array data […]
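
The point is that Stan programs can contain data-dependent loops and conditionals, not just acyclic graphical-model declarations. A small illustration via RStan, embedding a Stan program whose transformed data block runs a while loop of data-dependent length (written in current Stan syntax, which differs in places from the 2.x syntax of the original post):

    library(rstan)

    # A while loop (the Collatz iteration) inside a Stan program; the
    # trivial parameter block is there only so the program samples.
    code <- '
    data { int<lower=1> n0; }
    transformed data {
      int n = n0;
      int steps = 0;
      while (n > 1) {               // loop length depends on the data
        if (n % 2 == 0) n = n %/% 2;
        else n = 3 * n + 1;
        steps += 1;
      }
      print("Collatz steps: ", steps);
    }
    parameters { real mu; }
    model { mu ~ normal(0, 1); }
    '
    fit <- stan(model_code = code, data = list(n0 = 27), iter = 10, chains = 1)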

Bayes in the research conversation

Charlie Williams writes: As I get interested in Bayesian approaches to statistics, I have one question I wondered if you would find interesting to address at some point on the blog. What does Bayesian work look like in action across a field? From experience, I have some feeling for how ongoing debates evolve (or not) […]

Regression and causality and variable ordering

Bill Harris wrote in with a question: David Hogg points out in one of his general articles on data modeling that regression assumptions require one to put the variable with the highest variance in the ‘y’ position and the variable you know best (lowest variance) in the ‘x’ position. As he points out, others speak […]
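
Hogg's point is easy to demonstrate: ordinary least squares treats all the scatter as noise in the y-direction, so regressing y on x and inverting the fit of x on y give different slopes unless the correlation is perfect. A quick simulation in R:

    # OLS asymmetry: slope of y ~ x is r * sd(y) / sd(x); inverting the
    # x ~ y fit gives sd(y) / (r * sd(x)). Simulated data.
    set.seed(6)
    x <- rnorm(500)                     # known precisely
    y <- 2 * x + rnorm(500)             # all the noise lives in y

    b_yx <- coef(lm(y ~ x))["x"]        # noisy variable on the left: about 2
    b_xy <- 1 / coef(lm(x ~ y))["y"]    # reversed fit, inverted: about 2.5
    c(y_on_x = unname(b_yx), inverted_x_on_y = unname(b_xy))

The reversed fit overstates the slope because it attributes all the scatter to noise in x, contradicting how the data were generated.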

Identifying pathways for managing multiple disturbances to limit plant invasions

Andrew Tanentzap, William Lee, Adrian Monks, Kate Ladley, Peter Johnson, Geoffrey Rogers, Joy Comrie, Dean Clarke, and Ella Hayman write: We tested a multivariate hypothesis about the causal mechanisms underlying plant invasions in an ephemeral wetland in South Island, New Zealand, to inform management of this biodiverse but globally imperilled habitat. . . . We […]