Archive of posts filed under the Bayesian Statistics category.

How to interpret “p = .06” in situations where you really really want the treatment to work?

We’ve spent a lot of time during the past few years discussing the difficulty of interpreting “p less than .05” results from noisy studies. Standard practice is just to take the point estimate and confidence interval, but this is in general wrong in that it overestimates the effect size (type M error) and can get the […]
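
A quick way to see the type M problem is by simulation: generate repeated noisy estimates around an assumed small true effect, keep only the ones that reach p < .05, and look at how inflated the survivors are. The numbers below are invented for illustration; this is a sketch of the idea, not the published calculation.

    # Type M (exaggeration) simulation in R. True effect and standard
    # error are made-up numbers chosen so the study is underpowered.
    set.seed(123)
    true_effect <- 2      # assumed true effect
    se          <- 8      # assumed standard error
    est <- rnorm(1e5, mean = true_effect, sd = se)  # repeated noisy estimates
    sig <- abs(est / se) > 1.96                     # which reach p < .05
    mean(sig)                          # power: chance of "significance"
    mean(abs(est[sig])) / true_effect  # exaggeration ratio (type M error)

With these made-up numbers, the estimates that reach significance average several times the true effect, which is the sense in which taking the point estimate at face value overestimates the effect size.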

A completely reasonable-sounding statement with which I strongly disagree

From a couple years ago: In the context of a listserv discussion about replication in psychology experiments, someone wrote: The current best estimate of the effect size is somewhere in between the original study and the replication’s reported value. This conciliatory, split-the-difference statement sounds reasonable, and it might well represent good politics in the context […]
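
One way to see why the midpoint is not automatically the best estimate: if the two studies have different precisions, the standard inverse-variance-weighted combination can sit far from the simple average, and that is before accounting for the selection bias that typically inflates the original study. A sketch with invented numbers:

    # Combining an original study and a replication by inverse-variance
    # weighting. Estimates and standard errors are invented for illustration.
    est <- c(original = 0.50, replication = 0.05)
    se  <- c(original = 0.20, replication = 0.05)
    w <- 1 / se^2                 # precision weights
    sum(w * est) / sum(w)         # precision-weighted estimate (about 0.08)
    mean(est)                     # split-the-difference estimate (0.275)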

“This is why FDA doesn’t like Bayes—strong prior and few data points and you can get anything”

[cat picture] In the context of a statistical application, someone wrote: Since data is retrospective I had to use informative prior. The fit of urine improved significantly (very good) without really affecting concentration. This is why FDA doesn’t like Bayes—strong prior and few data points and you can get anything. Hopefully in this case I […]
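
The concern is easy to make concrete in the conjugate normal case, where the posterior mean is a precision-weighted average of the prior mean and the data mean: with a tight prior and only a few observations, the prior term dominates. All numbers below are hypothetical.

    # Normal-normal conjugate posterior: a strong prior plus few data
    # points yields a posterior that hugs the prior. Hypothetical numbers.
    prior_mean <- 5; prior_sd <- 0.1   # a very strong prior
    y     <- c(2.1, 2.4, 1.8)          # a few data points
    sigma <- 1                         # assumed known data sd
    w0 <- 1 / prior_sd^2               # prior precision
    w1 <- length(y) / sigma^2          # data precision
    post_mean <- (w0 * prior_mean + w1 * mean(y)) / (w0 + w1)
    post_sd   <- sqrt(1 / (w0 + w1))
    c(post_mean, post_sd)              # roughly 4.9: the prior wins

Whether that is a bug or a feature depends on whether the prior really encodes defensible information, which is the point at issue.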

Prior information, not prior belief

From a couple years ago: The prior distribution p(theta) in a Bayesian analysis is often presented as a researcher’s beliefs about theta. I prefer to think of p(theta) as an expression of information about theta. Consider this sort of question that a classically-trained statistician asked me the other day: If two Bayesians are given the […]

Update rstanarm to version 2.15.3

Ben Goodrich writes: We just released rstanarm 2.15.3, which fixed a major bug that was introduced back in January with the 2.14.1 release where models of the form stan_glmer(y ~ … + (1 | group1) + (1 | group2), family = binomial()) would produce WRONG RESULTS. This only applies to Bernoulli models with multiple group-specific […]
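
If you have fit models of that form, the fix is to upgrade and refit. A minimal sketch (the data frame and variable names here are hypothetical):

    # Upgrade rstanarm to a release containing the fix, then refit any
    # Bernoulli model with multiple group-specific intercepts.
    install.packages("rstanarm")   # pulls the current CRAN release

    library(rstanarm)
    # hypothetical data frame 'dat' with outcome y, predictor x,
    # and grouping factors group1, group2:
    fit <- stan_glmer(y ~ x + (1 | group1) + (1 | group2),
                      data = dat, family = binomial())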

Prior choice recommendations wiki!

Here’s the wiki, and here’s the background: Our statistical models are imperfect compared to the true data generating process and our complete state of knowledge (from an informational-Bayesian perspective) or the set of problems over which we wish to average our inferences (from a population-Bayesian or frequentist perspective). The practical question here is what model […]
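
For a concrete starting point before reading the wiki: in rstanarm the prior is passed directly as an argument to the fitting function. The scales below are made-up placeholders, not the wiki’s recommendations.

    # A weakly informative prior in rstanarm; 'dat' and the prior scales
    # are hypothetical. See the wiki for actual recommendations.
    library(rstanarm)
    fit <- stan_glm(y ~ x, data = dat,
                    prior = normal(0, 1),            # slope prior
                    prior_intercept = normal(0, 5))  # intercept prior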

Using prior knowledge in frequentist tests

Christian Bartels sent along this paper, which he described as an attempt to use informative priors for frequentist test statistics. I replied: I’ve not tried to follow the details, but this reminds me of our paper on posterior predictive checks. People think of this as very Bayesian, but my original idea when doing this research […]

Stan in St. Louis this Friday

This Friday afternoon I (Jonah) will be speaking about Stan at Washington University in St. Louis. The talk is open to the public, so anyone in the St. Louis area who is interested in Stan is welcome to attend. Here are the details: Title: Stan: A Software Ecosystem for Modern Bayesian Inference Jonah Sol Gabry, […]

Stan without frontiers, Bayes without tears

[cat picture] This recent comment thread reminds me of a question that comes up from time to time, which is how to teach Bayesian statistics to students who aren’t comfortable with calculus. For continuous models, probabilities are integrals. And in just about every example except the one at 47:16 of this video, there are multiple […]
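
One way around the calculus is to work entirely with posterior draws, so that every probability becomes a proportion of simulations rather than an integral. A sketch using made-up draws in place of real MCMC output:

    # Simulation in place of integration: with posterior draws in hand,
    # probabilities are proportions. Draws are made up for illustration.
    set.seed(1)
    theta <- rnorm(4000, mean = 0.3, sd = 0.2)  # stand-in posterior draws
    mean(theta > 0)                   # P(theta > 0), no integral needed
    quantile(theta, c(.025, .975))    # 95% posterior interval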

Representists versus Propertyists: RabbitDucks – being good for what?

It is not that unusual in statistics to get the same statistical output (uncertainty interval, estimate, tail probability, etc.) for every sample, or for some samples, or the same distribution of outputs, or the same expectations of outputs, or just close-enough expectations of outputs. Then, I would argue, one has a variation on a DuckRabbit. In […]

The Efron transition? And the wit and wisdom of our statistical elders

[cat picture] Stephen Martin writes: Brad Efron seems to have transitioned from “Bayes just isn’t as practical” to “Bayes can be useful, but EB is easier” to “Yes, Bayes should be used in the modern day” pretty continuously across three decades. http://www2.stat.duke.edu/courses/Spring10/sta122/Handouts/EfronWhyEveryone.pdf http://projecteuclid.org/download/pdf_1/euclid.ss/1028905930 http://statweb.stanford.edu/~ckirby/brad/other/2009Future.pdf Also, Lindley’s comment in the first article is just GOLD: “The […]

Causal inference conference at Columbia University on Sat 6 May: Varying Treatment Effects

Hey! We’re throwing a conference: Varying Treatment Effects The literature on causal inference focuses on estimating average effects, but the very notion of an “average effect” acknowledges variation. Relevant buzzwords are treatment interactions, situational effects, and personalized medicine. In this one-day conference we shall focus on varying effects in social science and policy research, with […]

Bayesian Posteriors are Calibrated by Definition

Time to get positive. I was asking Andrew whether it’s true that I have the right coverage in Bayesian posterior intervals if I generate the parameters from the prior and the data from the parameters. He replied that yes indeed that is true, and directed me to: Cook, S.R., Gelman, A. and Rubin, D.B. 2006. […]
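
The claim is easy to check by simulation in a conjugate example: draw the parameter from the prior, the data from the parameter, and count how often the posterior interval covers the parameter. A sketch with arbitrary normal-normal settings:

    # Calibration check, Cook-Gelman-Rubin style, in a conjugate
    # normal-normal model where the posterior has a closed form.
    set.seed(42)
    n_reps <- 1e4; n <- 5; sigma <- 1
    prior_mean <- 0; prior_sd <- 2
    covered <- replicate(n_reps, {
      theta <- rnorm(1, prior_mean, prior_sd)   # parameter from the prior
      y     <- rnorm(n, theta, sigma)           # data from the parameter
      w0 <- 1 / prior_sd^2; w1 <- n / sigma^2
      post_mean <- (w0 * prior_mean + w1 * mean(y)) / (w0 + w1)
      post_sd   <- sqrt(1 / (w0 + w1))
      ci <- qnorm(c(.05, .95), post_mean, post_sd)  # central 90% interval
      ci[1] < theta && theta < ci[2]
    })
    mean(covered)   # should be close to 0.90 by construction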

Stacking, pseudo-BMA, and AIC type weights for combining Bayesian predictive distributions

This post is by Aki. We have often been asked in the Stan user forum how to do model combination for Stan models. Bayesian model averaging (BMA) by computing marginal likelihoods is challenging in theory and even more challenging in practice using only the MCMC samples obtained from the full model posteriors. Some users have […]
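
As a sketch of what this looks like in practice, assuming the loo R package’s model-weighting interface and two hypothetical Stan fits to the same data:

    # Stacking and pseudo-BMA weights from leave-one-out predictive
    # performance; fit1 and fit2 are hypothetical fitted Stan models.
    library(loo)
    loo1 <- loo(fit1)
    loo2 <- loo(fit2)
    loo_model_weights(list(loo1, loo2), method = "stacking")
    loo_model_weights(list(loo1, loo2), method = "pseudobma")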

Beyond subjective and objective in statistics: my talk with Christian Hennig tomorrow (Wed) 5pm in London

Christian Hennig and I write: Decisions in statistical data analysis are often justified, criticized, or avoided using concepts of objectivity and subjectivity. We argue that the words “objective” and “subjective” in statistics discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of attributes, with objectivity […]

Combining independent evidence using a Bayesian approach but without standard Bayesian updating?

Nic Lewis writes: I have made some progress with my work on combining independent evidence using a Bayesian approach but eschewing standard Bayesian updating. I found a neat analytical way of doing this, to a very good approximation, in cases where each estimate of a parameter corresponds to the ratio of two variables each determined […]

Tech company wants to hire Stan programmers!

Ittai Kan writes: I started life as an academic mathematician (chaos theory) but have long since moved into industry. I am currently Chief Scientist at Afiniti, a contact center routing technology company that connects agents and callers on the basis of various factors in order to globally optimize contact center performance. We have 17 […]

It’s not so hard to move away from hypothesis testing and toward a Bayesian approach of “embracing variation and accepting uncertainty.”

There’s been a lot of discussion, here and elsewhere, of the problems with null hypothesis significance testing, p-values, deterministic decisions, type 1 error rates, and all the rest. And I’ve recommended that people switch to a Bayesian approach, “embracing variation and accepting uncertainty,” as demonstrated (I hope) in my published applied work. But we recently […]

“Scalable Bayesian Inference with Hamiltonian Monte Carlo” (Michael Betancourt’s talk this Thurs at Columbia)

Scalable Bayesian Inference with Hamiltonian Monte Carlo Despite the promise of big data, inferences are often limited not by sample size but rather by systematic effects. Only by carefully modeling these effects can we take full advantage of the data—big data must be complemented with big models and the algorithms that can fit them. One […]

Gilovich doubles down on hot hand denial

[cat picture] A correspondent pointed me to this Freakonomics radio interview with Thomas Gilovich, one of the authors of that famous “hot hand” paper from 1985, “Misperception of Chance Processes in Basketball.” Here’s the key bit from the Freakonomics interview: DUBNER: Right. The “hot-hand notion” or maybe the “hot-hand fallacy.” GILOVICH: Well, everyone who’s ever […]