Statisticians take tours in other people’s data. All methods of statistical inference rest on statistical models. Experiments typically have problems with compliance, measurement error, generalizability to the real world, and representativeness of the sample. Surveys typically have problems of undercoverage, nonresponse, and measurement error. Real surveys are done to learn about the general population. But […]

Posts from the **Bayesian Statistics** category.

## Why we hate stepwise regression

Haynes Goddard writes: I have been slowly working my way through the grad program in stats here, and the latest course was a biostats course on categorical and survival analysis. I noticed in the semi-parametric and parametric material (Wang and Lee is the text) that they use stepwise regression a lot. I learned in econometrics […]
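To make concrete what the procedure under discussion does, here is a minimal sketch of forward stepwise selection by AIC for a linear model. The synthetic data, variable names, and the choice of AIC as the stepping criterion are my own illustration, not from the course or the Wang and Lee text.

```python
# Forward stepwise regression sketch: greedily add the predictor that
# most improves AIC, stopping when no addition helps. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)  # only x0 and x2 matter

def aic(X_sub, y):
    """AIC for an OLS fit with an intercept, assuming Gaussian errors."""
    Z = np.column_stack([np.ones(len(y)), X_sub]) if X_sub.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + 2 * Z.shape[1]

selected, remaining = [], list(range(p))
current_aic = aic(X[:, selected], y)
while remaining:
    # Try each remaining predictor; keep the one with the lowest AIC.
    best_aic, best_j = min((aic(X[:, selected + [j]], y), j) for j in remaining)
    if best_aic >= current_aic:
        break
    selected.append(best_j)
    remaining.remove(best_j)
    current_aic = best_aic

print(sorted(selected))
```

The greedy search illustrates the usual complaint: each step conditions on the previous choices, so the reported fit and standard errors ignore the selection process.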

## Bayesian nonparametric weighted sampling inference

Yajuan Si, Natesh Pillai, and I write: It has historically been a challenge to perform Bayesian inference in a design-based survey context. The present paper develops a Bayesian model for sampling inference using inverse-probability weights. We use a hierarchical approach in which we model the distribution of the weights of the nonsampled units in the […]
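As numerical background for the design-based setting the paper starts from, here is a small sketch of classical inverse-probability weighting. The population, the inclusion-probability rule, and all names are made up for illustration; they are not from the paper.

```python
# Inverse-probability weighting sketch: units with larger y are more
# likely to be sampled, so the unweighted mean is biased and the
# weighted (Hajek) estimator corrects it. Illustrative toy example.
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
y_pop = rng.normal(loc=50, scale=10, size=N)

# Inclusion probability increases with y (a made-up selection rule).
pi = np.clip(0.01 + 0.002 * (y_pop - y_pop.min()), 0.01, 0.5)
sampled = rng.random(N) < pi

y, w = y_pop[sampled], 1.0 / pi[sampled]   # weight = 1 / inclusion probability
naive = y.mean()                           # biased upward by the selection
hajek = np.sum(w * y) / np.sum(w)          # weighted estimator of the mean

print(f"true {y_pop.mean():.2f}  naive {naive:.2f}  weighted {hajek:.2f}")
```

The hierarchical approach in the paper goes further by modeling the weights of the nonsampled units rather than plugging them in as fixed constants.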

## WAIC and cross-validation in Stan!

Aki and I write: The Watanabe-Akaike information criterion (WAIC) and cross-validation are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model. WAIC is based on the series expansion of leave-one-out cross-validation (LOO), and asymptotically they are equal. With finite data, WAIC and cross-validation address different predictive questions and thus it is useful […]
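The pointwise WAIC computation the abstract refers to can be sketched directly from a matrix of log-likelihood draws (posterior draws by data points). The toy normal model used to generate that matrix below is my own illustration, not from the paper.

```python
# WAIC sketch: lppd minus the effective number of parameters p_waic,
# both computed pointwise from an (S draws x n points) log-likelihood
# matrix, reported on the deviance scale. Toy model for illustration.
import numpy as np

def waic(log_lik):
    """log_lik: array of shape (S, n) of pointwise log-likelihoods."""
    # lppd: log pointwise predictive density, computed stably.
    m = log_lik.max(axis=0)
    lppd = np.sum(m + np.log(np.mean(np.exp(log_lik - m), axis=0)))
    # p_waic: sum of posterior variances of the pointwise log-likelihood.
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2 * (lppd - p_waic), p_waic

# Fake posterior for the mean of a normal model with known sd = 1.
rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, size=100)
mu_draws = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=1000)
log_lik = -0.5 * np.log(2 * np.pi) - 0.5 * (y[None, :] - mu_draws[:, None]) ** 2

waic_val, p_waic_hat = waic(log_lik)
print(f"WAIC {waic_val:.1f}, p_waic {p_waic_hat:.1f}")
```

With one free parameter, p_waic comes out near 1, which matches its reading as an effective number of parameters; leave-one-out cross-validation would instead refit (or reweight) with each point held out.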

## Models with constraints

I had an interesting conversation with Aki about monotonicity constraints. We were discussing a particular set of Gaussian processes that we were fitting to the arsenic well-switching data (the example from the logistic regression chapter in my book with Jennifer) but some more general issues arose that I thought might interest you. The idea was […]

## Thermodynamic Monte Carlo: Michael Betancourt’s new method for simulating from difficult distributions and evaluating normalizing constants

I hate to keep bumping our scheduled posts but this is just too important and too exciting to wait. So it’s time to jump the queue. The news is a paper from Michael Betancourt that presents a super-cool new way to compute normalizing constants: A common strategy for inference in complex models is the relaxation […]

## Forum in *Ecology* on p-values and model selection

There’s a special issue of the journal (vol. 95, no. 3) featuring several papers on p-values. There’s also a discussion that I wrote, which does not appear in the journal (for reasons explained below) but which I extract and link to below. First, the papers in the special section: P values, hypothesis testing, and model […]

## “The results (not shown) . . .”

Pro tip: Don’t believe any claims about results not shown in a paper. Even if the paper has been published. Even if it’s been cited hundreds of times. If the results aren’t shown, they haven’t been checked. I learned this the hard way after receiving this note from Bin Liu, who wrote: Today I saw […]

## Priors I don’t believe

Biostatistician Jeff Leek writes: Think about this headline: “Hospital checklist cut infections, saved lives.” I [Leek] am a pretty skeptical person, so I’m a little surprised that a checklist could really save lives. I say the odds of this being true are 1 in 4. I’m actually surprised that he’s surprised, since over the years […]

## Stan (& JAGS) Tutorial on Linear Mixed Models

Shravan Vasishth sent me an earlier draft of this tutorial he co-authored with Tanner Sorensen. I liked it, asked if I could blog about it, and in response, they’ve put together a convenient web page with links to the tutorial PDF, JAGS and Stan programs, and data: Fitting linear mixed models using JAGS and Stan: […]