Archive of posts filed under the Decision Theory category.

Using prior knowledge in frequentist tests

Christian Bartels sent along this paper, which he described as an attempt to use informative priors for frequentist test statistics. I replied: I’ve not tried to follow the details but this reminds me of our paper on posterior predictive checks. People think of this as very Bayesian but my original idea when doing this research […]

Would you prefer three N=300 studies or one N=900 study?

Stephen Martin started off with a question: I’ve been thinking about this thought experiment: — Imagine you’re given two papers. Both papers explore the same topic and use the same methodology. Both were preregistered. Paper A has a novel study (n1=300) with confirmed hypotheses, followed by two successful direct replications (n2=300, n3=300). Paper B has […]
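The thought experiment invites a quick calculation. Here is a minimal simulation sketch, under purely hypothetical assumptions (a true effect of 0.2 standard deviations and a known unit standard deviation, neither of which is stated in the post), comparing the chance that all three n=300 studies reach significance against the chance that one pooled n=900 study does:

```python
import numpy as np

rng = np.random.default_rng(0)
true_effect, sd = 0.2, 1.0  # hypothetical values, not from the post
n_sims = 10_000

def one_study(n):
    """Simulate one one-sample study; True if the effect is
    significant in the correct (positive) direction at two-sided 0.05."""
    x = rng.normal(true_effect, sd, n)
    z = x.mean() / (sd / np.sqrt(n))
    return z > 1.96

# Paper A: three independent n=300 studies must all be significant.
paper_a = np.mean([all(one_study(300) for _ in range(3)) for _ in range(n_sims)])
# Paper B: one n=900 study must be significant.
paper_b = np.mean([one_study(900) for _ in range(n_sims)])
print(f"P(all three n=300 significant) = {paper_a:.3f}")
print(f"P(one n=900 significant)       = {paper_b:.3f}")
```

Under these assumed numbers the single large study clears the significance bar more often than three smaller studies all clear it, which is one ingredient in the trade-off the post discusses; the comparison would look different for other effect sizes.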

Reputational incentives and post-publication review: two (partial) solutions to the misinformation problem

So. There are erroneous analyses published in scientific journals and in the news. Here I’m not talking about outright propaganda, but about mistakes that happen to coincide with the preconceptions of their authors. We’ve seen lots of examples. Here are just a few: – Political scientist Larry Bartels is committed to a model of […]

Stacking, pseudo-BMA, and AIC type weights for combining Bayesian predictive distributions

This post is by Aki. We have often been asked in the Stan user forum how to do model combination for Stan models. Bayesian model averaging (BMA) by computing marginal likelihoods is challenging in theory and even more challenging in practice using only the MCMC samples obtained from the full model posteriors. Some users have […]
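As a rough illustration of the simplest of the weighting schemes in the post's title: a pseudo-BMA weight for each model is a softmax of its estimated expected log pointwise predictive density. This is only a sketch with made-up elpd numbers; in practice the elpd values come from cross-validation (e.g., PSIS-LOO), and the paper's recommendation of stacking involves an optimization rather than this closed form:

```python
import numpy as np

# Hypothetical elpd_loo values for three candidate models
# (in real use these come from PSIS-LOO, not invented numbers).
elpd = np.array([-250.3, -252.1, -260.8])

# Pseudo-BMA weights: softmax of the elpd values,
# shifted by the max for numerical stability.
w = np.exp(elpd - elpd.max())
w /= w.sum()
print(w)  # the model with the highest elpd gets the largest weight
```

The combined predictive distribution is then the weighted mixture of the individual models' predictive distributions.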

Beyond subjective and objective in statistics: my talk with Christian Hennig tomorrow (Wed) 5pm in London

Christian Hennig and I write: Decisions in statistical data analysis are often justified, criticized, or avoided using concepts of objectivity and subjectivity. We argue that the words “objective” and “subjective” in statistics discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of attributes, with objectivity […]

Tech company wants to hire Stan programmers!

Ittai Kan writes: I started life as an academic mathematician (chaos theory) but have long since moved into industry. I am currently Chief Scientist at Afiniti, a contact center routing technology company that connects agents and callers on the basis of various factors in order to globally optimize the contact center performance. We have 17 […]

Gilovich doubles down on hot hand denial

A correspondent pointed me to this Freakonomics radio interview with Thomas Gilovich, one of the authors of that famous “hot hand” paper from 1985, “Misperception of Chance Processes in Basketball.” Here’s the key bit from the Freakonomics interview: DUBNER: Right. The “hot-hand notion” or maybe the “hot-hand fallacy.” GILOVICH: Well, everyone who’s ever played the […]

Let’s accept the idea that treatment effects vary—not as something special but just as a matter of course

Tyler Cowen writes: Does knowing the price lower your enjoyment of goods and services? I [Cowen] don’t quite agree with this as stated, as the experience of enjoying a bargain can make it more pleasurable, or at least I have seen this for many people. Some in fact enjoy the bargain only, not the actual […]

This could be a big deal: the overuse of psychotropic medications for advanced Alzheimer’s patients

I received the following email, entitled “A research lead (potentially bigger than the opioid epidemic),” from someone who wishes to remain anonymous: My research lead is related to the use of psychotropic medications in Alzheimer’s patients. I should note that strong cautions have already been issued with respect to the use of these medications in […]

Some natural solutions to the p-value communication problem—and why they won’t work

Blake McShane and David Gal recently wrote two articles (“Blinding us to the obvious? The effect of statistical training on the evaluation of evidence” and “Statistical significance and the dichotomization of evidence”) on the misunderstandings of p-values that are common even among supposed experts in statistics and applied social research. The key misconception has nothing […]

Yes, it makes sense to do design analysis (“power calculations”) after the data have been collected

This one has come up before but it’s worth a reminder. Stephen Senn is a thoughtful statistician and I generally agree with his advice but I think he was kinda wrong on this one. Wrong in an interesting way. Senn’s article is from 2002 and it is called “Power is indeed irrelevant in interpreting completed […]
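The kind of retrospective design analysis at issue here can be sketched by simulation: given a hypothesized true effect size and the standard error of the estimate, compute the power, the Type S (wrong sign) rate, and the Type M (exaggeration) factor among statistically significant results. The specific numbers below are hypothetical, chosen only to illustrate a low-power setting:

```python
import numpy as np

rng = np.random.default_rng(1)

def retro_design(true_effect, se, n_sims=100_000):
    """Simulation sketch of a retrospective design analysis:
    power, Type S rate, and Type M (exaggeration) factor among
    estimates significant at two-sided 0.05."""
    est = rng.normal(true_effect, se, n_sims)
    sig = np.abs(est) > 1.96 * se
    power = sig.mean()
    # Type S: significant estimate with the wrong sign.
    type_s = (est[sig] * np.sign(true_effect) < 0).mean()
    # Type M: average magnitude of significant estimates vs. the truth.
    type_m = np.abs(est[sig]).mean() / abs(true_effect)
    return power, type_s, type_m

# Hypothetical low-power study: true effect equal to its standard error.
power, type_s, type_m = retro_design(true_effect=0.1, se=0.1)
print(f"power = {power:.2f}, Type S = {type_s:.3f}, Type M = {type_m:.1f}")
```

The point such a calculation makes: when power is low, a statistically significant estimate will on average exaggerate the true effect severely, and that remains informative even after the data are in.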

Facebook’s Prophet uses Stan

Sean Taylor, a research scientist at Facebook and Stan user, writes: I wanted to tell you about an open source forecasting package we just released called Prophet. I thought the readers of your blog might be interested in both the package and the fact that we built it on top of Stan. Under the hood, […]

Theoretical statistics is the theory of applied statistics: how to think about what we do (My talk Wednesday—today!—4:15pm at the Harvard statistics dept)

Theoretical statistics is the theory of applied statistics: how to think about what we do Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Working scientists and engineers commonly feel that philosophy is a waste of time. But theoretical and philosophical principles can guide practice, so it makes sense for us to […]

Looking for rigor in all the wrong places (my talk this Thursday in the Columbia economics department)

Looking for Rigor in All the Wrong Places What do the following ideas and practices have in common: unbiased estimation, statistical significance, insistence on random sampling, and avoidance of prior information? All have been embraced as ways of enforcing rigor but all have backfired and led to sloppy analyses and erroneous inferences. We discuss these […]

“Luckily, medicine is a practice that ignores the requirements of science in favor of patient care.”

Javier Benitez writes: This is a paragraph from Kathryn Montgomery’s book, How Doctors Think: If medicine were practiced as if it were a science, even a probabilistic science, my daughter’s breast cancer might never have been diagnosed in time. At 28, she was quite literally off the charts, far too young, an unlikely patient who […]

Pizzagate and Kahneman, two great flavors etc.

1. The pizzagate story (of Brian Wansink, the Cornell University business school professor and self-described “world-renowned eating behavior expert for over 25 years”) keeps developing. Last week someone forwarded me an email from the deputy dean of the Cornell business school regarding concerns about some of Wansink’s work. This person asked me to post the […]

Measurement error and the replication crisis

Alison McCook from Retraction Watch interviewed Eric Loken and me regarding our recent article, “Measurement error and the replication crisis.” We talked about why traditional statistics are often counterproductive to research in the human sciences. Here’s the interview: Retraction Watch: Your article focuses on the “noise” that’s present in research studies. What is “noise” and […]
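A small simulation can illustrate the two faces of measurement error discussed in the article. The setup below is entirely hypothetical (sample sizes, effect size, and error scale are invented for illustration): on average, noise in the predictor attenuates a regression slope toward zero, yet among the estimates that happen to reach statistical significance, the slope is exaggerated rather than attenuated:

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_slope = 50, 0.15  # hypothetical study size and effect

# 10,000 simulated datasets: latent predictor x, outcome y.
x = rng.normal(size=(10_000, n))
y = true_slope * x + rng.normal(size=(10_000, n))
# Observed predictor is measured with error (reliability 0.5 here).
x_obs = x + rng.normal(size=x.shape)

def slopes(xm, ym):
    """OLS slope for each simulated dataset (vectorized)."""
    xc = xm - xm.mean(axis=1, keepdims=True)
    yc = ym - ym.mean(axis=1, keepdims=True)
    return (xc * yc).sum(axis=1) / (xc ** 2).sum(axis=1)

def slope_se(xm, ym, b):
    """Standard error of each OLS slope from the residuals."""
    xc = xm - xm.mean(axis=1, keepdims=True)
    resid = (ym - ym.mean(axis=1, keepdims=True)) - b[:, None] * xc
    s2 = (resid ** 2).sum(axis=1) / (n - 2)
    return np.sqrt(s2 / (xc ** 2).sum(axis=1))

b_clean = slopes(x, y)
b_noisy = slopes(x_obs, y)
se = slope_se(x_obs, y, b_noisy)
sig = np.abs(b_noisy) > 1.96 * se

print(f"mean slope, clean x: {b_clean.mean():.3f}")
print(f"mean slope, noisy x: {b_noisy.mean():.3f}  (attenuated)")
print(f"mean |slope| among significant noisy fits: "
      f"{np.abs(b_noisy[sig]).mean():.3f}  (exaggerated)")
```

This is the core of the argument: measurement error does not make significant findings conservative; conditional on significance, noisy estimates overstate the truth.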

Theoretical statistics is the theory of applied statistics: how to think about what we do (My talk at the University of Michigan this Friday 3pm)

Theoretical statistics is the theory of applied statistics: how to think about what we do Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Working scientists and engineers commonly feel that philosophy is a waste of time. But theoretical and philosophical principles can guide practice, so it makes sense for us to […]

The “What does not kill my statistical significance makes it stronger” fallacy

As anyone who’s designed a study and gathered data can tell you, getting statistical significance is difficult. Lots of our best ideas don’t pan out, and even if a hypothesis seems to be supported by the data, the magic “p less than .05” can be elusive. And we also know that noisy data and small […]

Long Shot

Frank Harrell doesn’t like p-values: In my [Frank’s] opinion, null hypothesis testing and p-values have done significant harm to science. The purpose of this note is to catalog the many problems caused by p-values. As readers post new problems in their comments, more will be incorporated into the list, so this is a work in […]