After three years, we finally have an updated version of our “EP as a way of life” paper. Authors are Andrew Gelman, Aki Vehtari, Pasi Jylänki, Tuomas Sivula, Dustin Tran, Swupnil Sahai, Paul Blomstedt, John Cunningham, David Schiminovich, and Christian Robert. Aki deserves credit for putting this all together into a coherent whole. Here’s the […]


## A fistful of Stan case studies: divergences and bias, identifying mixtures, and weakly informative priors

Following on from his talk at StanCon, Michael Betancourt just wrote three Stan case studies, all of which are must-reads: Diagnosing Biased Inference with Divergences: This case study discusses the subtleties of accurate Markov chain Monte Carlo estimation and how divergences can be used to identify biased estimation in practice. Identifying Bayesian Mixture […]

## How to interpret confidence intervals?

Jason Yamada-Hanff writes: I’m a Neuroscience PhD reforming my statistics education. I am a little confused about how you treat confidence intervals in the book and was hoping you could clear things up for me. Through your blog, I found Richard Morey’s paper (and further readings) about confidence interval interpretations. If I understand correctly, the […]
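The frequentist reading that Morey's paper defends — the "95%" describes the long-run coverage of the interval-constructing procedure, not the probability that any single realized interval contains the truth — is easy to check by simulation. A minimal Python sketch (my own illustration, not from the post; the population parameters and sample size are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
true_mu, sigma, n = 5.0, 2.0, 25   # hypothetical population and sample size
n_reps = 10_000
covered = 0
for _ in range(n_reps):
    sample = rng.normal(true_mu, sigma, size=n)
    se = sigma / np.sqrt(n)        # known-variance case, for simplicity
    lo, hi = sample.mean() - 1.96 * se, sample.mean() + 1.96 * se
    covered += int(lo <= true_mu <= hi)
coverage = covered / n_reps        # long-run coverage of the procedure
```

Roughly 95% of intervals constructed this way contain the true mean; any one realized interval either does or does not, which is the distinction the confidence-interval literature keeps hammering on.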

## Yes, it makes sense to do design analysis (“power calculations”) after the data have been collected

This one has come up before but it’s worth a reminder. Stephen Senn is a thoughtful statistician and I generally agree with his advice but I think he was kinda wrong on this one. Wrong in an interesting way. Senn’s article is from 2002 and it is called “Power is indeed irrelevant in interpreting completed […]
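The kind of after-the-data design analysis being defended here, in the spirit of Gelman and Carlin's retrodesign calculations, is straightforward to sketch by simulation. The Python below is my own illustration, not code from the post; the effect size and standard error are hypothetical inputs:

```python
import numpy as np

def retrodesign(true_effect, se, n_sims=100_000, seed=0):
    """Design analysis in the spirit of Gelman and Carlin (2014):
    given a guess at the true effect and the standard error of the
    study's estimate, simulate (1) power, (2) the Type S error rate
    (probability a significant estimate has the wrong sign), and
    (3) the exaggeration ratio (Type M error)."""
    rng = np.random.default_rng(seed)
    z_crit = 1.96  # two-sided test at the 5% level, normal approximation
    estimates = rng.normal(true_effect, se, size=n_sims)
    significant = np.abs(estimates) > z_crit * se
    power = significant.mean()
    sig_est = estimates[significant]
    type_s = (np.sign(sig_est) != np.sign(true_effect)).mean()
    exaggeration = np.abs(sig_est).mean() / abs(true_effect)
    return power, type_s, exaggeration

# A small true effect measured noisily: significance is rare, and the
# estimates that do reach significance badly overstate the effect.
power, type_s, exaggeration = retrodesign(true_effect=0.1, se=0.3)
```

This is exactly the calculation that remains meaningful after the data are collected: you plug in an externally justified effect size, not the noisy estimate from the study itself.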

## Facebook’s Prophet uses Stan

Sean Taylor, a research scientist at Facebook and Stan user, writes: I wanted to tell you about an open source forecasting package we just released called Prophet: I thought the readers of your blog might be interested in both the package and the fact that we built it on top of Stan. Under the hood, […]

## Theoretical statistics is the theory of applied statistics: how to think about what we do (My talk Wednesday—today!—4:15pm at the Harvard statistics dept)

Theoretical statistics is the theory of applied statistics: how to think about what we do Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Working scientists and engineers commonly feel that philosophy is a waste of time. But theoretical and philosophical principles can guide practice, so it makes sense for us to […]

## Is Rigor Contagious? (my talk next Monday 4:15pm at Columbia)

Is Rigor Contagious? Much of the theory and practice of statistics and econometrics is characterized by a toxic mixture of rigor and sloppiness. Methods are justified based on seemingly pure principles that can’t survive reality. Examples of these principles include random sampling, unbiased estimation, hypothesis testing, Bayesian inference, and causal identification. Examples of uncomfortable reality […]

## Looking for rigor in all the wrong places (my talk this Thursday in the Columbia economics department)

Looking for Rigor in All the Wrong Places What do the following ideas and practices have in common: unbiased estimation, statistical significance, insistence on random sampling, and avoidance of prior information? All have been embraced as ways of enforcing rigor but all have backfired and led to sloppy analyses and erroneous inferences. We discuss these […]

## Blind Spot

X pointed me to this news article reporting an increase in the death rate among young adults in the United States: According to a study published on January 26 by the scientific journal The Lancet, the mortality rate of young Americans aged 25 to 35 rose between 1999 and 2014, while […]

## Vine regression?

Jeremy Neufeld writes: I’m an undergraduate student at the University of Maryland and I was recently referred to this paper (Vine Regression, by Roger Cooke, Harry Joe, and Bo Chang, along with an accompanying summary blog post by the main author) as potentially useful in policy analysis. With the big claims it makes, I am not […]

## Combining results from multiply imputed datasets

Aaron Haslam writes: I have a question regarding combining the estimates from multiply imputed datasets. In the third edition of BDA, at the top of page 452, you mention that with Bayesian analyses all you have to do is mix together the simulations. I want to clarify that this means you simply combine the posteriors […]
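The BDA recipe mentioned here — mix together the simulations — amounts to pooling the posterior draws across the completed datasets. A minimal Python sketch (my own illustration; the draws are hypothetical stand-ins for real per-dataset posterior simulations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior draws for one parameter from three separately imputed
# datasets (hypothetical values): each array holds simulations from
# the posterior conditional on one completed dataset.
draws_per_dataset = [rng.normal(loc, 1.0, size=4000) for loc in (2.0, 2.2, 1.9)]

# The combined posterior is the mixture of the per-dataset posteriors,
# so just stack the simulations into one pool.
combined = np.concatenate(draws_per_dataset)

# Summaries come from the pooled draws; the mixture automatically
# propagates between-imputation uncertainty into the posterior sd.
post_mean = combined.mean()
post_sd = combined.std()
```

Note that the pooled standard deviation is slightly larger than any single dataset's posterior sd — the between-imputation spread of the means is carried along for free, with no Rubin-style combining rules needed.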

## Lasso regression etc in Stan

Someone on the users list asked about lasso regression in Stan, and Ben replied: In the rstanarm package we have stan_lm(), which is sort of like ridge regression, and stan_glm() with family = gaussian and prior = laplace() or prior = lasso(). The latter estimates the shrinkage as a hyperparameter while the former fixes it […]
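To see what a lasso-type prior does to coefficients, consider the orthonormal-design special case, where the lasso estimate (equivalently, the posterior mode under independent Laplace priors with a fixed scale) reduces to soft-thresholding the least-squares coefficients. A minimal numpy sketch (my own illustration, not rstanarm code; the coefficients and penalty are hypothetical):

```python
import numpy as np

def soft_threshold(b_ols, lam):
    """Lasso solution for an orthonormal design: each least-squares
    coefficient is shrunk toward zero by lam, and set exactly to zero
    when its magnitude is below lam.  This is the MAP estimate under
    a Laplace prior with fixed scale."""
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam, 0.0)

b_ols = np.array([3.0, -0.4, 1.2, 0.1])
b_lasso = soft_threshold(b_ols, lam=0.5)
# → [2.5, 0.0, 0.7, 0.0]: small coefficients zeroed, large ones shrunk by 0.5
```

The hard zeros only appear at the mode; a full Bayesian analysis that averages over the posterior (as Stan does) shrinks small coefficients without setting them exactly to zero, which is part of why fixing versus estimating the shrinkage matters.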

## Stan and BDA on actuarial syllabus!

Avi Adler writes: I am pleased to let you know that the Casualty Actuarial Society has announced two new exams and released their initial syllabi yesterday. Specifically, 50%–70% of the Modern Actuarial Statistics II exam covers Bayesian Analysis and Markov Chain Monte Carlo. The official text we will be using is BDA3 and while we […]

## HMMs in Stan? Absolutely!

Yesterday I was having a conversation with Andrew that went like this: Andrew: Hey, someone’s giving a talk today on HMMs (that someone was Yang Chen, who was giving a talk based on her JASA paper Analyzing single-molecule protein transportation experiments via hierarchical hidden Markov models). Maybe we should add some specialized discrete modules to […]
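The reason HMMs pose no problem for Stan is that the discrete hidden states can be marginalized out with the forward algorithm, leaving a smooth log likelihood that HMC can sample. A minimal Python version of that marginalization (my own sketch; the toy probabilities are hypothetical):

```python
import numpy as np

def hmm_log_likelihood(log_init, log_trans, log_emit):
    """Forward algorithm: marginalizes out the discrete hidden states,
    the same trick Stan models use since Stan cannot sample discrete
    parameters directly.  log_emit[t, k] is the log density of
    observation t under hidden state k."""
    log_alpha = log_init + log_emit[0]          # time 0
    for t in range(1, log_emit.shape[0]):
        m = log_alpha.max()                     # stabilize the log-sum-exp
        log_alpha = m + np.log(np.exp(log_alpha - m) @ np.exp(log_trans)) + log_emit[t]
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

# Toy 2-state chain with 3 observations (hypothetical numbers):
init = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.2, 0.8]])
emit = np.array([[0.5, 0.1], [0.3, 0.6], [0.2, 0.3]])
ll = hmm_log_likelihood(np.log(init), np.log(trans), np.log(emit))
```

The loop costs O(T K²) rather than the O(Kᵀ) of enumerating state paths, which is what makes the marginalization practical inside a Stan model block.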

## Theoretical statistics is the theory of applied statistics: how to think about what we do (My talk at the University of Michigan this Friday 3pm)

Theoretical statistics is the theory of applied statistics: how to think about what we do Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Working scientists and engineers commonly feel that philosophy is a waste of time. But theoretical and philosophical principles can guide practice, so it makes sense for us to […]

## Long Shot

Frank Harrell doesn’t like p-values: In my [Frank’s] opinion, null hypothesis testing and p-values have done significant harm to science. The purpose of this note is to catalog the many problems caused by p-values. As readers post new problems in their comments, more will be incorporated into the list, so this is a work in […]

## No guru, no method, no teacher, Just you and I and nature . . . in the garden. Of forking paths.

Here’s a quote: Instead of focusing on theory, the focus is on asking and answering practical research questions. It sounds eminently reasonable, yet in context I think it’s completely wrong. I will explain. But first some background. Junk science and statistics They say that hard cases make bad law. But bad research can make good […]