Archive of posts filed under the Bayesian Statistics category.

Pro Publica’s new Surgeon Scorecards

Skyler Johnson writes: You should definitely weigh in on this… Pro Publica created “Surgeon Scorecards” based upon risk-adjusted surgery complication rates. They used hierarchical modeling via the lmer package in R. For the detailed methodology, click the “how we calculated complications” link, then at the top of the next page click on the detailed methodology to download […]
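Without digging into the ProPublica write-up itself, here is only a rough sketch, on entirely made-up data and variable names, of the kind of hierarchical model that phrase suggests: a logistic regression with surgeon-level random intercepts, fit with lme4 (the package behind lmer):

```r
# Hypothetical data: one row per operation; all columns are invented here.
library(lme4)

set.seed(42)
d <- data.frame(
  complication = rbinom(500, 1, 0.05),
  age          = rnorm(500, 60, 10),
  surgeon      = factor(sample(1:25, 500, replace = TRUE))
)

# Risk adjustment via a patient-level covariate, plus partial pooling
# of surgeon effects through a random intercept.
fit <- glmer(complication ~ age + (1 | surgeon),
             family = binomial, data = d)

ranef(fit)$surgeon   # surgeon-level adjustments, shrunk toward zero
```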

BREAKING . . . Kit Harrington’s height

Rasmus “ticket to” Bååth writes: I heeded your call to construct a Stan model of the height of Kit “Snow” Harrington. The response on Gawker has been poor, unfortunately, but here it is, anyway. Yeah, I think the people at Gawker have bigger things to worry about this week. . . . Here’s Rasmus’s inference […]

Measurement is part of design

The other day, in the context of a discussion of an article from 1972, I remarked that the great statistician William Cochran, when writing on observational studies, wrote almost nothing about causality, nor did he mention selection or meta-analysis. It was interesting that these topics, which are central to any modern discussion of observational studies, […]

New papers on LOO/WAIC and Stan

Aki, Jonah, and I have released the much-discussed paper on LOO and WAIC in Stan: Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models. We (that is, Aki) now recommend LOO rather than WAIC, especially now that we have an R function to quickly compute LOO using Pareto smoothed importance sampling. In […]
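As a quick illustration of the workflow the paper describes (not code from the paper itself), here is a sketch using the loo R package. In real use the log-likelihood matrix would come from rstan::extract_log_lik() applied to a fitted Stan model whose generated quantities block computes the pointwise log likelihood; a simulated matrix stands in here so the snippet runs on its own:

```r
library(loo)

# Fake pointwise log-likelihood matrix (rows = posterior draws, cols = observations),
# standing in for what rstan::extract_log_lik(fit) would return.
set.seed(1)
log_lik <- matrix(rnorm(1000 * 50, mean = -1.5, sd = 0.3),
                  nrow = 1000, ncol = 50)

loo_result <- loo(log_lik)   # PSIS-LOO: Pareto smoothed importance sampling
print(loo_result)
```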

Prior information, not prior belief

The prior distribution p(theta) in a Bayesian analysis is often presented as a researcher’s beliefs about theta. I prefer to think of p(theta) as an expression of information about theta. Consider this sort of question that a classically-trained statistician asked me the other day: If two Bayesians are given the same data, they will come […]

Don’t do the Wilcoxon

The Wilcoxon test is a nonparametric rank-based test for comparing two groups. It’s a cool idea because, if data are continuous and there is no possibility of a tie, the reference distribution depends only on the sample size. There are no nuisance parameters, and the distribution can be tabulated. From a Bayesian point of view, […]
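For concreteness, here is a toy comparison on simulated data: the classical Wilcoxon test alongside the crude alternative of rank-transforming the data and then running an ordinary parametric comparison (a stand-in for the modeling approach discussed in the post, not the full recommendation):

```r
set.seed(123)
y1 <- rnorm(20, 0.0, 1)
y2 <- rnorm(20, 0.5, 1)

wilcox.test(y1, y2)            # classical rank-based test

r <- rank(c(y1, y2))           # rank-transform the pooled data...
t.test(r[1:20], r[21:40])      # ...then model the ranks directly
```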

Short course on Bayesian data analysis and Stan 19-21 July in NYC!

Bob Carpenter, Daniel Lee, and I are giving a 3-day short course in two weeks. Before class everyone should install R, RStudio and RStan on their computers. If problems occur please join the stan-users group and post any questions. It’s important that all participants get Stan running and bring their laptops to the course. Class […]
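If it helps, one generic way to do the pre-class setup from within R is sketched below; the course announcement and the RStan getting-started page are the authoritative instructions, and platform-specific C++ toolchain steps may also be needed:

```r
install.packages("rstan", dependencies = TRUE)
library(rstan)
# quick check that Stan models compile and sample on this machine
example(stan_model, package = "rstan", run.dontrun = TRUE)
```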

“Why should anyone believe that? Why does it make sense to model a series of astronomical events as though they were spins of a roulette wheel in Vegas?”

Deborah Mayo points us to a post by Stephen Senn discussing various aspects of induction and statistics, including the famous example of estimating the probability the sun will rise tomorrow. Senn correctly slams a journalistic account of the math problem: The canonical example is to imagine that a precocious newborn observes his first sunset, and […]
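For readers who want the canonical math behind the example (this is Laplace's rule of succession, not necessarily Senn's own treatment): with a uniform prior on the probability p of a sunrise and n sunrises observed in n days,

```latex
% Posterior of p is Beta(n+1, 1), with density (n+1) p^n on [0,1].
P(\text{sunrise on day } n+1 \mid n \text{ sunrises})
  = \int_0^1 p \,(n+1)\,p^{\,n}\, dp
  = \frac{n+1}{n+2}.
```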

Where does Mister P draw the line?

Bill Harris writes: Mr. P is pretty impressive, but I’m not sure how far to push him in particular and MLM [multilevel modeling] in general. Mr. P and MLM certainly seem to do well with problems such as eight schools, radon, or the Xbox survey. In those cases, one can make reasonable claims that the […]
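As a reminder of what Mr. P actually does, here is a bare-bones sketch on fabricated data (all names and numbers hypothetical): fit a multilevel model to the survey, predict for every poststratification cell, and average the cell predictions weighted by census counts:

```r
library(lme4)

# Fake survey data: binary outcome, respondents grouped by state.
set.seed(7)
svy <- data.frame(
  y     = rbinom(1000, 1, 0.5),
  state = factor(sample(state.abb, 1000, replace = TRUE), levels = state.abb)
)
fit <- glmer(y ~ (1 | state), family = binomial, data = svy)

# Fake poststratification table: population count per cell (here, per state).
ps <- data.frame(state = factor(state.abb, levels = state.abb),
                 N     = round(runif(50, 1e5, 1e7)))
ps$pred <- predict(fit, newdata = ps, type = "response")

# Population estimate: cell predictions weighted by cell sizes.
sum(ps$pred * ps$N) / sum(ps$N)
```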

Interpreting posterior probabilities in the context of weakly informative priors

Nathan Lemoine writes: I’m an ecologist, and I typically work with small sample sizes from field experiments, which have highly variable data. I analyze almost all of my data now using hierarchical models, but I’ve been wondering about my interpretation of the posterior distributions. I’ve read your blog, several of your papers (Gelman and Weakliem, […]

How tall is Kit Harrington? Stan wants to know.

We interrupt our regularly scheduled programming for a special announcement. Madeleine Davies writes: “Here are some photos of Kit Harington. Do you know how tall he is?” I’m reminded, of course, of our discussion of the height of professional tall person Jon Lee Anderson: Full Bayes, please. I can’t promise publication on Gawker, but I’ll […]

“Best Linear Unbiased Prediction” is exactly like the Holy Roman Empire

Dan Gianola pointed me to this article, “One Hundred Years of Statistical Developments in Animal Breeding,” coauthored with Guilherme Rosa, which begins: Statistical methodology has played a key role in scientific animal breeding. Approximately one hundred years of statistical developments in animal breeding are reviewed. Some of the scientific foundations of the field are discussed, […]

The posterior distribution of the likelihood ratio as a summary of evidence

Gabriel Marinello writes: I am a PhD student in Astrophysics and am writing this email to you because of an enquiry about point null hypothesis testing (H0: Theta = Theta0 and H1: Theta != Theta0) in a Bayesian context, and I think that your pragmatic stance would be helpful. In Astrophysics it is not rare to find […]

A quick one

Fabio Rojas asks: Should I do Bonferroni adjustments? Pros? Cons? Do you have a blog post on this? Most social scientists don’t seem to be aware of this issue. My short answer is that if you’re fitting multilevel models, I don’t think you need multiple comparisons adjustments; see here.
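For reference, the classical adjustment Fabio is asking about tests each of m comparisons at level alpha/m, or equivalently multiplies each p-value by m; in R, with made-up p-values:

```r
p <- c(0.001, 0.02, 0.04, 0.20)
p.adjust(p, method = "bonferroni")   # pmin(1, p * length(p))
```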

Cross-validation != magic

In a post entitled “A subtle way to over-fit,” John Cook writes: If you train a model on a set of data, it should fit that data well. The hope, however, is that it will fit a new set of data well. So in machine learning and statistics, people split their data into two parts. […]
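Here is the basic split John describes, as a tiny sketch on simulated data; the subtle over-fitting he warns about enters when the held-out half is reused many times to choose among models, at which point it stops measuring generalization:

```r
set.seed(1)
n <- 200
d <- data.frame(x = rnorm(n))
d$y <- 2 * d$x + rnorm(n)

train <- d[1:150, ]
test  <- d[151:200, ]

fit <- lm(y ~ x, data = train)
mean((test$y - predict(fit, newdata = test))^2)   # held-out mean squared error
```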

Bayesian inference: The advantages and the risks

This came up in an email exchange regarding a plan to develop and evaluate Bayesian prediction algorithms for a medical application: I would not refer to the existing prediction algorithm as frequentist. Frequentist refers to the evaluation of statistical procedures but it doesn’t really say where the estimate or prediction comes from. Rather, […]

New Alan Turing preprint on Arxiv!

Dan Kahan writes: I know you are on a 30-day delay, but since the blog version of you will be talking about Bayesian inference in a couple of hours, you might like to look at a paper by Turing, who is on a 70-year delay thanks to the British declassification system, and who addresses the utility of using likelihood ratios for […]
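The basic identity behind this kind of likelihood-ratio bookkeeping (assuming the pieces of evidence are conditionally independent given the hypotheses) is that the likelihood ratios multiply the prior odds, so their logarithms, Turing's "bans," simply add:

```latex
\frac{P(H_1 \mid y_1,\ldots,y_k)}{P(H_0 \mid y_1,\ldots,y_k)}
  = \frac{P(H_1)}{P(H_0)} \prod_{i=1}^{k} \frac{p(y_i \mid H_1)}{p(y_i \mid H_0)},
\qquad
\log(\text{posterior odds}) = \log(\text{prior odds})
  + \sum_{i=1}^{k} \log \frac{p(y_i \mid H_1)}{p(y_i \mid H_0)}.
```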

“Do we have any recommendations for priors for student_t’s degrees of freedom parameter?”

In response to the above question, Aki writes: I recommend as an easy default option real nu; nu ~ gamma(2,0.1); This was proposed and analysed by Juárez and Steel (2010) (Model-based clustering of non-Gaussian panel data based on skew-t distributions. Journal of Business & Economic Statistics 28, 52–66). Juárez and Steel compare this to Jeffreys […]
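To show where that prior sits in a full program, here is a minimal sketch with fake data (not Aki's code; the lower bound on nu is my own assumption) of a Student-t model fit from R via rstan:

```r
library(rstan)

model_code <- "
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
  real<lower=1> nu;        // lower bound of 1 is an assumption, not from the post
}
model {
  nu ~ gamma(2, 0.1);      // the recommended default prior on the degrees of freedom
  y ~ student_t(nu, mu, sigma);
}
"

y <- rt(100, df = 5)       # fake data for illustration
fit <- stan(model_code = model_code, data = list(N = length(y), y = y))
print(fit, pars = c("mu", "sigma", "nu"))
```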

My talk at MIT this Thursday

When I was a student at MIT, there was no statistics department. I took a statistics course from Stephan Morgenthaler and liked it. (I’d already taken probability and stochastic processes back at the University of Maryland; my instructor in the latter class was Prof. Grace Yang, who was super-nice. I couldn’t follow half of what […]

There’s something about humans

An interesting point came up recently. In the abstract to my psychology talk, I’d raised the question: If we can’t trust p-values, does experimental science involving human variation just have to start over? In the comments, Rahul wrote: Isn’t the qualifier about human variation redundant? If we cannot trust p-values we cannot trust p-values. My […]