Archive of posts filed under the Decision Theory category.

“Luckily, medicine is a practice that ignores the requirements of science in favor of patient care.”

Javier Benitez writes: This is a paragraph from Kathryn Montgomery’s book, How Doctors Think: If medicine were practiced as if it were a science, even a probabilistic science, my daughter’s breast cancer might never have been diagnosed in time. At 28, she was quite literally off the charts, far too young, an unlikely patient who […]

Pizzagate and Kahneman, two great flavors etc.

1. The pizzagate story (of Brian Wansink, the Cornell University business school professor and self-described “world-renowned eating behavior expert for over 25 years”) keeps developing. Last week someone forwarded me an email from the deputy dean of the Cornell business school regarding concerns about some of Wansink’s work. This person asked me to post the […]

Measurement error and the replication crisis

Alison McCook from Retraction Watch interviewed Eric Loken and me regarding our recent article, “Measurement error and the replication crisis.” We talked about why traditional statistics are often counterproductive to research in the human sciences. Here’s the interview: Retraction Watch: Your article focuses on the “noise” that’s present in research studies. What is “noise” and […]

Theoretical statistics is the theory of applied statistics: how to think about what we do (My talk at the University of Michigan this Friday 3pm)

Theoretical statistics is the theory of applied statistics: how to think about what we do Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Working scientists and engineers commonly feel that philosophy is a waste of time. But theoretical and philosophical principles can guide practice, so it makes sense for us to […]

The “What does not kill my statistical significance makes it stronger” fallacy

As anyone who’s designed a study and gathered data can tell you, getting statistical significance is difficult. Lots of our best ideas don’t pan out, and even if a hypothesis seems to be supported by the data, the magic “p less than .05” can be elusive. And we also know that noisy data and small […]
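The significance-filter point is easy to demonstrate with a short simulation (a minimal Python sketch with hypothetical numbers, not taken from the post): when the true effect is small relative to the noise, the estimates that happen to clear p < .05 must be exaggerated.

```python
# Sketch: the "significance filter" exaggerates effects in noisy studies.
# Assumes a true effect of 0.1 measured with standard error 1.0 (hypothetical numbers).
import numpy as np

rng = np.random.default_rng(0)
true_effect, se, n_sims = 0.1, 1.0, 100_000

estimates = rng.normal(true_effect, se, n_sims)    # one noisy estimate per simulated study
z = estimates / se
significant = np.abs(z) > 1.96                     # the p < .05 filter

print("share of studies reaching significance:", significant.mean())
print("mean estimate, all studies:            ", estimates.mean())
print("mean |estimate|, significant studies:  ", np.abs(estimates[significant]).mean())
# The estimates that survive the filter are many times larger than the true effect of 0.1.
```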

Long Shot

Frank Harrell doesn’t like p-values: In my [Frank’s] opinion, null hypothesis testing and p-values have done significant harm to science. The purpose of this note is to catalog the many problems caused by p-values. As readers post new problems in their comments, more will be incorporated into the list, so this is a work in […]

Pizzagate, or the curious incident of the researcher in response to people pointing out 150 errors in four of his papers

There are a bunch of things about this story that just don’t make a lot of sense to me. For those who haven’t been following the blog recently, here’s the quick backstory: Brian Wansink is a Cornell University business school professor and self-described “world-renowned eating behavior expert for over 25 years.” It’s come out that […]

Criticism of bad research: More harm than good?

We’ve had some recent posts (here and here) about the research of Brian Wansink, a Cornell University business professor who’s found fame and fortune from doing empirical research on eating behaviors. It’s come out that four of his recent papers—all of them derived from a single experiment which Wansink himself described as a “failed study […]

No guru, no method, no teacher, Just you and I and nature . . . in the garden. Of forking paths.

Here’s a quote: Instead of focusing on theory, the focus is on asking and answering practical research questions. It sounds eminently reasonable, yet in context I think it’s completely wrong. I will explain. But first some background. Junk science and statistics They say that hard cases make bad law. But bad research can make good […]

How to attack human rights and the U.S. economy at the same time

I received this email from a postdoc in a technical field: As you might have heard, Trump signed an executive order today issuing a 30-day total suspension of visas and other immigration benefits for the citizens of Iran and six other countries. For my wife and me, this means that our visas are suspended; we […]

Looking for rigor in all the wrong places

My talk in the upcoming conference on Inference from Non Probability Samples, 16-17 Mar in Paris: Looking for rigor in all the wrong places What do the following ideas and practices have in common: unbiased estimation, statistical significance, insistence on random sampling, and avoidance of prior information? All have been embraced as ways of enforcing […]

Stan is hiring! hiring! hiring! hiring!

[insert picture of adorable cat entwined with Stan logo] We’re hiring postdocs to do Bayesian inference. We’re hiring programmers for Stan. We’re hiring a project manager. How many people we hire depends on what gets funded. But we’re hiring a few people for sure. We want the best people who love to collaborate, who […]

To know the past, one must first know the future: The relevance of decision-based thinking to statistical analysis

We can break up any statistical problem into three steps: 1. Design and data collection. 2. Data analysis. 3. Decision making. It’s well known that step 1 typically requires some thought of steps 2 and 3: It is only when you have a sense of what you will do with your data, that you can […]

Time Inc. stoops to the level of the American Society of Human Genetics and PPNAS?

We fiddle while Rome burns: p-value edition

Raghu Parthasarathy presents a wonderfully clear example of disastrous p-value-based reasoning that he saw in a conference presentation. Here’s Raghu: Consider, for example, some tumorous cells that we can treat with drugs 1 and 2, either alone or in combination. We can make measurements of growth under our various drug treatment conditions. Suppose our measurements […]
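Without the rest of Raghu’s example, here is a hedged sketch in Python of the general failure mode (all effect sizes, noise levels, and sample sizes are made up): if every comparison is run through a p < .05 threshold, noisy measurements of identical drug effects can come out looking like an arbitrary mix of “effective” and “ineffective” treatments.

```python
# Sketch: identical true drug effects, noisy measurements, and a p < .05 cutoff.
# All numbers (effect size, noise, sample size) are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 10                      # cells per condition
true_effect = -0.5          # every treatment slows growth by the same amount
noise = 1.0

control = rng.normal(0.0, noise, n)
for label in ["drug 1", "drug 2", "drugs 1+2"]:
    treated = rng.normal(true_effect, noise, n)
    t_stat, p = stats.ttest_ind(treated, control)
    verdict = "works" if p < 0.05 else "no effect"
    print(f"{label}: p = {p:.3f} -> declared '{verdict}'")
# Rerunning with a different seed flips which drugs "work,"
# even though the underlying effects are identical by construction.
```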

“Which curve fitting model should I use?”

Oswaldo Melo writes: I have learned many curve-fitting models in the past, including their technical and mathematical details. Now I have been working on real-world problems and I face a great shortcoming: which method to use. As an example, I have to predict the demand of a product. I have a time series […]
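One generic way to approach the question, not necessarily the answer given in the post, is to compare candidate models on held-out data rather than on in-sample fit. A minimal Python sketch with a made-up demand series and polynomial candidates:

```python
# Sketch: pick among curve-fitting models by held-out error, not in-sample fit.
# The demand series and the candidate models here are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(60.0)                                   # 60 time periods
demand = 50 + 0.8 * t + 5 * np.sin(t / 6) + rng.normal(0, 3, t.size)

x = t / t.max()                                       # rescale time for stable polynomial fits
train, test = slice(0, 48), slice(48, 60)             # hold out the last 12 periods

for degree in (1, 2, 3, 6):
    coefs = np.polyfit(x[train], demand[train], degree)
    pred = np.polyval(coefs, x)
    rmse_in = np.sqrt(np.mean((pred[train] - demand[train]) ** 2))
    rmse_out = np.sqrt(np.mean((pred[test] - demand[test]) ** 2))
    print(f"degree {degree}: in-sample RMSE {rmse_in:.2f}, held-out RMSE {rmse_out:.2f}")

# More flexible polynomials always fit the training window at least as well,
# but the held-out error shows where the extra flexibility stops helping.
```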

Two unrelated topics in one post: (1) Teaching useful algebra classes, and (2) doing more careful psychological measurements

Kevin Lewis and Paul Alper send me so much material, I think they need their own blogs. In the meantime, I keep posting the stuff they send me, as part of my desperate effort to empty my inbox. 1. From Lewis: “Should Students Assessed as Needing Remedial Mathematics Take College-Level Quantitative Courses Instead? A Randomized […]

Sethi on Schelling

Interesting appreciation from an economist.

“Dirty Money: The Role of Moral History in Economic Judgments”

Recently in the sister blog . . . Arber Tasimi and his coauthor write: Although traditional economic models posit that money is fungible, psychological research abounds with examples that deviate from this assumption. Across eight experiments, we provide evidence that people construe physical currency as carrying traces of its moral history. In Experiments 1 and […]

Steve Fienberg

I did not know Steve Fienberg well, but I met him several times and encountered his work on various occasions, which makes sense considering his research area was statistical modeling as applied to social science. Fienberg’s most influential work must have been his books on the analysis of categorical data, work that was ahead of […]