Archive of posts filed under the Decision Theory category.

No tradeoff between regularization and discovery

We had a couple recent discussions regarding questionable claims based on p-values extracted from forking paths, and in both cases (a study “trying large numbers of combinations of otherwise-unused drugs against a large number of untreatable illnesses,” and a salami-slicing exercise looking for public opinion changes in subgroups of the population), I recommended fitting a […]

Freelance orphans: “33 comparisons, 4 are statistically significant: much more than the 1.65 that would be expected by chance alone, so what’s the problem??”

From someone who would prefer to remain anonymous: As you may know, the relatively recent “orphan drug” laws allow (basically) companies that can prove an off-patent drug treats an otherwise untreatable illness, to obtain intellectual property protection for otherwise generic or dead drugs. This has led to a new business of trying large numbers of […]

Mick Cooney: case study on modeling loss curves in insurance with RStan

This is great. Thanks, Mick! All the Stan case studies are here.

When do we want evidence-based change? Not “after peer review”

Jonathan Falk sent me the above image in an email with subject line, “If this isn’t the picture for some future blog entry I’ll never forgive you.” This was a credible threat so here’s the post. But I don’t agree with that placard at all! Waiting for peer review is a bad idea for two […]

Workshop on Interpretable Machine Learning

Andrew Gordon Wilson sends along this conference announcement: NIPS 2017 Symposium Interpretable Machine Learning Long Beach, California, USA December 7, 2017 Call for Papers: We invite researchers to submit their recent work on interpretable machine learning from a wide range of approaches, including (1) methods that are designed to be more interpretable from the start, […]

I respond to E. J.’s response to our response to his comment on our paper responding to his paper

In response to my response and X’s response to his comment on our paper responding to his paper, E. J. writes: Empirical claims often concern the presence of a phenomenon. In such situations, any reasonable skeptic will remain unconvinced when the data fail to discredit the point-null. . . . When your goal is to […]

I disagree with Tyler Cowen regarding a so-called lack of Bayesianism in religious belief

Tyler Cowen writes: I am frustrated by the lack of Bayesianism in most of the religious belief I observe. I’ve never met a believer who asserted: “I’m really not sure here. But I think Lutheranism is true with p = .018, and the next strongest contender comes in only at .014, so call me Lutheran.” […]

I’m not on twitter

This blog auto-posts. But I’m not on twitter. You can tweet at me all you want; I won’t hear it (unless someone happens to tell me about it). So if there’s anything buggin ya, put it in a blog comment.

Should we worry about rigged priors? A long discussion.

Today’s discussion starts with Stuart Buck, who came across a post by John Cook linking to my post, “Bayesian statistics: What’s it all about?”. Cook wrote about the benefit of prior distributions in making assumptions explicit. Buck shared Cook’s post with Jon Baron, who wrote: My concern is that if researchers are systematically too optimistic […]

BREAKING . . . . . . . PNAS updates its slogan!

I’m so happy about this, no joke. Here’s the story. For a while I’ve been getting annoyed by the junk science papers (for example, here, here, and here) that have been published by the Proceedings of the National Academy of Sciences under the editorship of Susan T. Fiske. I’ve taken to calling it PPNAS (“Prestigious proceedings […]

When considering proposals for redefining or abandoning statistical significance, remember that their effects on science will only be indirect!

John Schwenkler organized a discussion on this hot topic, featuring posts by – Dan Benjamin, Jim Berger, Magnus Johannesson, Valen Johnson, Brian Nosek, and E. J. Wagenmakers – Felipe De Brigard – Kenny Easwaran – Andrew Gelman and Blake McShane – Kiley Hamlin – Edouard Machery – Deborah Mayo – “Neuroskeptic” – Michael Strevens – […]

Alan Sokal’s comments on “Abandon Statistical Significance”

The physicist and science critic writes: I just came across your paper “Abandon statistical significance”. I basically agree with your point of view, but I think you could have done more to *distinguish* clearly between several different issues: 1) In most problems in the biomedical and social sciences, the possible hypotheses are parametrized by a […]

2 quick calls

Kevin Lewis asks what I think of these: Study 1: Using footage from body-worn cameras, we analyze the respectfulness of police officer language toward white and black community members during routine traffic stops. We develop computational linguistic methods that extract levels of respect automatically from transcripts, informed by a thin-slicing study of participant ratings of […]

Response to some comments on “Abandon Statistical Significance”

The other day, Blake McShane, David Gal, Christian Robert, Jennifer Tackett, and I wrote a paper, Abandon Statistical Significance, that began: In science publishing and many areas of research, the status quo is a lexicographic decision rule in which any result is first required to have a p-value that surpasses the 0.05 threshold and only […]

“5 minutes? Really?”

Bob writes: Daniel says this issue is an easy 5-minute fix. In my ongoing role as wet blanket, let’s be realistic. It’s sort of like saying it’s an hour from here to Detroit because that’s how long the plane’s in the air. Nothing is a 5 minute fix (door to door) for Stan and […]

“From ‘What If?’ To ‘What Next?’ : Causal Inference and Machine Learning for Intelligent Decision Making”

Panos Toulis writes in to announce this conference: NIPS 2017 Workshop on Causal Inference and Machine Learning (WhatIF2017) “From ‘What If?’ To ‘What Next?’ : Causal Inference and Machine Learning for Intelligent Decision Making” — December 8th 2017, Long Beach, USA. Submission deadline for abstracts and papers: October 31, 2017 Acceptance decisions: November 7, 2017 […]

I am (somewhat) in agreement with Fritz Strack regarding replications

Fritz Strack read the recent paper of McShane, Gal, Robert, Tackett, and myself and pointed out that our message—abandon statistical significance, consider null hypothesis testing as just one among many pieces of evidence, recognize that all null hypotheses are false (at least in the fields where Strack and I do our research) and don’t use […]

Abandon Statistical Significance

Blake McShane, David Gal, Christian Robert, Jennifer Tackett, and I wrote a short paper arguing for the removal of null hypothesis significance testing from its current gatekeeper role in much of science. We begin: In science publishing and many areas of research, the status quo is a lexicographic decision rule in which any result is […]

Using black-box machine learning predictions as inputs to a Bayesian analysis

Following up on this discussion [Designing an animal-like brain: black-box “deep learning algorithms” to solve problems, with an (approximately) Bayesian “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences], Mike Betancourt writes: I’m not sure AI (or machine learning) + Bayesian wrapper would address the points raised in the paper. […]

Causal inference using data from a non-representative sample

Dan Gibbons writes: I have been looking at using synthetic control estimates for estimating the effects of healthcare policies, particularly because for say county-level data the nontreated comparison units one would use in say a difference-in-differences estimator or quantile DID estimator (if one didn’t want to use the mean) are not especially clear. However, given […]