Archive of posts filed under the Decision Theory category.

Comments on Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection

There is a recent pre-print Limitations of Bayesian Leave-One-Out Cross-Validation for Model Selection by Quentin Gronau and Eric-Jan Wagenmakers. Wagenmakers asked for comments and so here are my comments. Short version: They report a known limitation of LOO when it’s used in a non-recommended way for model selection. They report that their experiments show that […]

Ambiguities with the supposed non-replication of ego depletion

Baruch Eitam writes: I am teaching a seminar for graduate students in the social track and I decided to dedicate the first 4-6 classes to understanding the methodological crisis in psychology, its reasons, and some proposed solutions. In one of the classes I had the students read this paper which reports an attempt to reproduce […]

Against Screening

Matthew Simonson writes: I have a question that may be of interest to your readers (and even if not, I’d love to hear your response). I’ve been analyzing a dataset of over 100 Middle Eastern political groups (MAROB) to see how these groups react to government repression. Observations are at the group-year level and include […]

Some experiments are just too noisy to tell us much of anything at all: Political science edition

Sointu Leikas pointed us to this published research article, “Exposure to inequality affects support for redistribution.” Leikas writes that it “seems to be a really apt example of ‘researcher degrees of freedom.’” Here’s the abstract of the paper: As the world’s population grows more urban, encounters between members of different socioeconomic groups occur with greater […]

Garden of forking paths – poker analogy

[image of cats playing poker] Someone who wishes to remain anonymous writes: Just wanted to point out an analogy I noticed between the “garden of forking paths” concept as it relates to statistical significance testing and poker strategy (a game I’ve played as a hobby). A big part of constructing a winning poker strategy nowadays […]

Slow to update

This post is a placeholder to remind Josh Miller and me to write our paper on slow updating in decision analysis, with the paradigmatic examples being pundits who were slow to update their low probabilities of Leicester City and Donald Trump winning in 2016. We have competing titles for this paper. Josh wants to call it, “The […]

Should Berk Özler spend $2 million to test a “5 minute patience training”?

Berk Özler writes: Background: You receive a fictional proposal from a major foundation to review. The proposal wants to look at the impact of 5 minute “patience” training on all kinds of behaviors. This is a poor country, so there are no admin data. They make the following points: A. If successful, this is really […]

“We continuously increased the number of animals until statistical significance was reached to support our conclusions” . . . I think this is not so bad, actually!

Jordan Anaya pointed me to this post, in which Casper Albers shared a snippet from a recently published article in Nature Communications. The subsequent Twitter discussion is all about “false discovery rate” and statistical significance, which I think completely misses the point. The problems: Before I get to why I think the quoted […]

A model for scientific research programmes that include both “exploratory phenomenon-driven research” and “theory-testing science”

John Christie points us to an article by Klaus Fiedler, What Constitutes Strong Psychological Science? The (Neglected) Role of Diagnosticity and A Priori Theorizing, which begins: A Bayesian perspective on Ioannidis’s (2005) memorable statement that “Most Published Research Findings Are False” suggests a seemingly inescapable trade-off: It appears as if research hypotheses are based either […]

Economic growth -> healthy kids?

Joe Cummins writes: Anaka Aiyar and I have a new working paper on economic growth and child health. Any comments from you or your readers would be much appreciated. In terms of subject matter, it fits in pretty nicely with the Demography discussions on the blog (Deaton/Case, age adjustment, interpreting population level changes in meaningful […]

A quick rule of thumb is that when someone seems to be acting like a jerk, an economist will defend the behavior as being the essence of morality, but when someone seems to be doing something nice, an economist will raise the bar and argue that he’s not being nice at all.

Like Pee Wee Herman, act like a jerk / And get on the dance floor, let your body work. I wanted to follow up on a remark from a few years ago about the two modes of pop-economics reasoning: You take some fact (or stylized fact) about the world, and then you either (1) use people-are-rational-and-who-are-we-to-judge-others […]

Proposed new EPA rules requiring open data and reproducibility

Tom Daula points to this news article by Heidi Vogt, “EPA Wants New Rules to Rely Solely on Public Data,” with subtitle, “Agency says proposal means transparency; scientists see public-health risk.” Vogt writes: The Environmental Protection Agency plans to restrict research used in developing regulations, the agency said Tuesday . . . The new proposal […]

A few words on a few words on Twitter’s 280 experiment.

Gur Huberman points us to this post by Joshua Gans, “A few words on Twitter’s 280 experiment.” I hate twitter but I took a look anyway, and I’m glad I did, as Gans makes some good points and some bad points, and it’s all interesting. Gans starts with some intriguing background: Twitter have decided to […]

Using partial pooling when preparing data for machine learning applications

Geoffrey Simmons writes: I reached out to John Mount/Nina Zumel over at Win Vector with a suggestion for their vtreat package, which automates many common challenges in preparing data for machine learning applications. The default behavior for impact coding high-cardinality variables had been a naive Bayes approach, which I found to be problematic due to its multi-modal output (assigning […]
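As a rough illustration of what the post's title points to (and not vtreat's actual implementation), here is a minimal Python sketch of partial-pooled impact coding: each level of a high-cardinality categorical variable gets coded by its mean outcome, shrunk toward the grand mean. The helper name impact_code_partial_pooling, the prior_strength pseudo-count, and the toy data are all invented for this example.

import pandas as pd

def impact_code_partial_pooling(df, cat_col, y_col, prior_strength=10.0):
    # Hypothetical helper (not part of vtreat): impact-code a high-cardinality
    # categorical column by shrinking each level's mean outcome toward the
    # grand mean; prior_strength acts as a pseudo-count of prior observations.
    grand_mean = df[y_col].mean()
    stats = df.groupby(cat_col)[y_col].agg(['mean', 'count'])
    pooled = (stats['count'] * stats['mean'] + prior_strength * grand_mean) \
             / (stats['count'] + prior_strength)
    # The impact code is the partially pooled level mean, centered at the grand mean.
    return (pooled - grand_mean).rename('impact')

# Toy example: levels with few observations get pulled strongly toward zero impact.
df = pd.DataFrame({
    'zip_code': ['10001', '10001', '10002', '10003', '10003', '10003'],
    'y':        [1.0,      0.0,     1.0,     0.0,     0.0,     1.0],
})
codes = impact_code_partial_pooling(df, 'zip_code', 'y', prior_strength=5.0)
df['zip_impact'] = df['zip_code'].map(codes)
print(df)

Levels seen only a few times end up with impact codes near zero rather than extreme values, which is the usual motivation for partial pooling in this setting.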

It’s all about Hurricane Andrew: Do patterns in post-disaster donations demonstrate egotism?

Jim Windle points to this post discussing a paper by Jesse Chandler, Tiffany M. Griffin, and Nicholas Sorensen, “In the ‘I’ of the Storm: Shared Initials Increase Disaster Donations.” I took a quick look and didn’t notice anything clearly wrong with the paper, but there did seem to be some opportunities for forking paths, in […]

The Millennium Villages Project: a retrospective, observational, endline evaluation

Shira Mitchell et al. write (preprint version here if that link doesn’t work): The Millennium Villages Project (MVP) was a 10-year, multisector, rural development project, initiated in 2005, operating across ten sites in ten sub-Saharan African countries to achieve the Millennium Development Goals (MDGs). . . . In this endline evaluation of the MVP, […]

Don’t define reproducibility based on p-values

Lizzie Wolkovich writes: I just got asked to comment on this article [“Genotypic variability enhances the reproducibility of an ecological study,” by Alexandru Milcu et al.]. I have not yet had time to fully sort out their stats, but the first thing that hit me about it was that they seem to be suggesting a way […]

A possible defense of cargo cult science?

Someone writes: I’ve been a follower of your blog and your continual coverage of “cargo cult science”. Since this type of science tends to be more influential and common than the (idealized) non-“cargo cult” stuff, I’ve been trying to find ways of reassuring myself that this type of science isn’t a bad thing (because if […]

This one’s important: How to better analyze cancer drug trials using multilevel models.

Paul Alper points us to this news article, “Cancer Conundrum—Too Many Drug Trials, Too Few Patients,” by Gina Kolata, who writes: With the arrival of two revolutionary treatment strategies, immunotherapy and personalized medicine, cancer researchers have found new hope — and a problem that is perhaps unprecedented in medical research. There are too many experimental […]

Heuristics and Biases? Laplace was there, 200 years ago.

In an article entitled Laplace’s Theories of Cognitive Illusions, Heuristics, and Biases, Josh “hot hand” Miller and I write: In his book from the early 1800s, Essai Philosophique sur les Probabilités, the mathematician Pierre-Simon de Laplace anticipated many ideas developed in the 1970s in cognitive psychology and behavioral economics, explaining human tendencies to deviate from […]