Archive of posts filed under the Decision Theory category.

What if I were to stop publishing in journals?

In our recent discussion of modes of publication, Joseph Wilson wrote, “The single best reform science can make right now is to decouple publication from career advancement, thereby reducing the number of publications by an order of magnitude and then move to an entirely disjointed, informal, online free-for-all communication system for research results.” My first […]

On deck this week: Things people sent me

Mon: Preregistration: what’s in it for you?
Tues: What if I were to stop publishing in journals?
Wed: Empirical implications of Empirical Implications of Theoretical Models
Thurs: An Economist’s Guide to Visualizing Data
Fri: The maximal information coefficient
Sat: Problematic interpretations of confidence intervals
Sun: The more you look, the more you find

Hipmunk worked

In the past I’ve categorized Hipmunk as a really cool flight-finder that doesn’t actually work, as worse than Expedia, and as graphics without content. So, I thought it would be only fair to tell you that I bought a flight the other day using Hipmunk and it gave me the same flight as Expedia but […]

Selection bias in the reporting of shaky research

I’ll reorder this week’s posts a bit in order to continue on a topic that came up yesterday. A couple days ago a reporter wrote to me asking what I thought of this paper on Money, Status, and the Ovulatory Cycle. I responded: Given the quality of the earlier paper by these researchers, I’m not […]

How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?

I had a brief email exchange with Jeff Leek regarding our recent discussions of replication, criticism, and the self-correcting process of science. Jeff writes: (1) I can see the problem with serious, evidence-based criticisms not being published in the same journal (and linked to) studies that are shown to be incorrect. I have been mostly […]

“What Can we Learn from the Many Labs Replication Project?”

Aki points us to this discussion from Rolf Zwaan: The first massive replication project in psychology has just reached completion (several others are to follow). . . . What can we learn from the ManyLabs project? The results here show the effect sizes for the replication efforts (in green and grey) as well as the […]

Econometrics, political science, epidemiology, etc.: Don’t model the probability of a discrete outcome, model the underlying continuous variable

This is an echo of yesterday’s post, Basketball Stats: Don’t model the probability of win, model the expected score differential. As with basketball, so with baseball: as the great Bill James wrote, if you want to predict a pitcher’s win-loss record, it’s better to use last year’s ERA than last year’s W-L. As with basketball […]
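The point can be illustrated with a small simulation (my sketch, not from the post): a hypothetical team's "strength" is its true mean score differential per game, and we compare estimating that strength directly from the continuous differentials versus from the binary win/loss record alone. The specific numbers (`true_strength`, `noise_sd`, 20 games) are arbitrary assumptions for illustration.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

# Hypothetical setup: true strength = mean score differential per game,
# with normally distributed game-to-game noise.
true_strength = 2.0   # points better than an average opponent, on average
noise_sd = 10.0       # game-to-game variability in score differential
n_games = 20
n_sims = 5000

err_continuous = []
err_binary = []
for _ in range(n_sims):
    diffs = rng.normal(true_strength, noise_sd, n_games)

    # Continuous model: estimate strength from the score differentials.
    est_cont = diffs.mean()

    # Binary model: throw away the margins, keep only win/loss, then
    # invert the win rate back to a strength estimate (probit-style,
    # since the differentials are normal). Clip to avoid 0% / 100%.
    win_rate = float(np.clip((diffs > 0).mean(),
                             1 / (2 * n_games), 1 - 1 / (2 * n_games)))
    est_bin = noise_sd * NormalDist().inv_cdf(win_rate)

    err_continuous.append((est_cont - true_strength) ** 2)
    err_binary.append((est_bin - true_strength) ** 2)

mse_cont = float(np.mean(err_continuous))
mse_bin = float(np.mean(err_binary))
print(f"MSE from score differentials: {mse_cont:.2f}")
print(f"MSE from win/loss only:       {mse_bin:.2f}")
```

Discretizing to win/loss discards the information in the margins, so the win/loss-based estimate of strength comes out noticeably noisier, which is the ERA-versus-W-L point in miniature.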

“Edlin’s rule” for routinely scaling down published estimates

A few months ago I reacted (see further discussion in comments here) to a recent study on early childhood intervention, in which researchers Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor estimated that a particular intervention on young children had raised incomes of young adults […]

On deck this week

Mon: “Edlin’s rule” for routinely scaling down published estimates
Tues: Basketball Stats: Don’t model the probability of win, model the expected score differential
Wed: A good comment on one of my papers
Thurs: “What Can we Learn from the Many Labs Replication Project?”
Fri: God/leaf/tree
Sat: “We are moving from an era of private data […]

The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential […]