Archive of posts filed under the Decision Theory category.

On deck this week

Mon: Cognitive skills rising and falling Tues: Anti-cheating robots Wed: Mindset interventions are a scalable treatment for academic underachievement — or not? Thurs: Most successful blog post ever Fri: Political advertising update Sat: Doomed to fail: A pre-registration site for parapsychology Sun: Mars Missions are a Scam Also, don’t forget what’s on deck for the […]

Flamebait: “Mathiness” in economics and political science

Political scientist Brian Silver points me to this post by economist Paul Romer, who writes: The style that I [Romer] am calling mathiness lets academic politics masquerade as science. Like mathematical theory, mathiness uses a mixture of words and symbols, but instead of making tight links, it leaves ample room for slippage between statements in […]

How to use lasso etc. in political science?

Tom Swartz writes: I am a graduate student at Oxford with a background in economics and on the side am teaching myself more statistics and machine learning. I’ve been following your blog for some time and recently came across this post on lasso. In particular, the more I read about the machine learning community, the […]
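For readers who haven’t met the lasso: in the special case of an orthonormal design, the lasso estimate is just the least-squares estimate soft-thresholded toward zero, which is why it sets some coefficients exactly to zero and thereby does variable selection. A minimal pure-Python sketch; the coefficients and penalty below are invented for illustration:

```python
import math

def soft_threshold(beta_ols, lam):
    """Lasso estimate for one coefficient under an orthonormal design:
    shrink the OLS estimate toward zero by lam, snapping it to exactly
    zero when |beta_ols| <= lam. The snap-to-zero is what makes the
    lasso a variable-selection method."""
    if abs(beta_ols) <= lam:
        return 0.0
    return math.copysign(abs(beta_ols) - lam, beta_ols)

# Hypothetical OLS coefficients for three predictors
ols = [0.9, -0.15, 0.4]
lam = 0.3  # penalty level, chosen in practice by cross-validation
lasso = [soft_threshold(b, lam) for b in ols]
# small coefficients are zeroed out entirely; large ones are shrunk by lam
```

In real political-science applications the design is not orthonormal and one would fit the full penalized regression (e.g. by coordinate descent), but the shrink-and-select behavior is the same.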

On deck through the rest of 2015

There’s something for everyone! I had a lot of fun just copying the titles to make this list, as I’d already forgotten about a lot of this stuff. Here are the scheduled posts, in order through 31 Dec: Fitting models with discrete parameters in Stan How to use lasso etc. in political science? An unconvincing […]

Annals of Spam

OK, explain to me this email: God day, How are you? My name is **. I came across your contact email at the University of Cyprus, Department of Economics. I seek for a private Economics teacher for my Daughter. I would like to know if you would be available for job. If you would be […]

“I do not agree with the view that being convinced an effect is real relieves a researcher from statistically testing it.”

Florian Wickelmaier writes: I’m writing to tell you about my experiences with another instance of “the difference between significant and not significant.” In a lab course, I came across a paper by Costa et al. [Cognition 130 (2) (2014) 236-254]. In several experiments, they compare the effects in two two-by-two tables by comparing the […]

Have weak data. But need to make decision. What to do?

Vlad Malik writes: I just re-read your article “Of Beauty, Sex and Power”. In my line of work (online analytics), low power is a recurring, existential problem. Do we act on this data or not? If not, why are we even in this business? That’s our daily struggle. Low power seems to create a sort […]

On deck this week

Mon: Have weak data. But need to make decision. What to do? Tues: “I do not agree with the view that being convinced an effect is real relieves a researcher from statistically testing it.” Wed: Optimistic or pessimistic priors Thurs: Draw your own graph! Fri: Low-power pose Sat: Annals of Spam Sun: The Final Bug, […]

“The frequentist case against the significance test”

Richard Morey writes: I suspect that like me, many people didn’t get a whole lot of detail about Neyman’s objections to the significance test in their statistical education besides “Neyman thought power is important”. Given the recent debate about significance testing, I have gone back to Neyman’s papers and tried to summarize, for the modern […]


Unreplicable

Leonid Schneider writes: I am a cell biologist turned science journalist after 13 years in academia. Despite my many years of experience as a scientist, I shamefully admit to being largely incompetent in statistics. My request to you is as follows: A soon-to-be-published psychology study set out to reproduce 100 randomly picked earlier studies and […]

Medical decision making under uncertainty

Gur Huberman writes: The following crossed my mind following a recent panel discussion on evidence-based medicine in which David Madigan participated. The panel—especially John Ioannidis—sang the praises of clinical trials. You may have nothing wise to say about it—or pose the question to your blog followers. Suppose there’s a standard clinical procedure to address a […]

The aching desire for regular scientific breakthroughs

This post didn’t come out the way I planned. Here’s what happened. I cruised over to the British Psychological Society Research Digest (formerly on our blogroll) and came across a press release entitled “Background positive music increases people’s willingness to do others harm.” Uh oh, I thought. This sounds like one of those flaky studies, […]

On deck this week

Mon: Review of The Martian Tues: Even though it’s published in a top psychology journal, she still doesn’t believe it Wed: Turbulent Studies, Rocky Statistics: Publicational Consequences of Experiencing Inferential Instability Thurs: Medical decision making under uncertainty Fri: Unreplicable Sat: “The frequentist case against the significance test” Sun: Erdos bio for kids

On deck this week

Mon: Comments on Imbens and Rubin causal inference book Tues: “Dow 36,000” guy offers an opinion on Tom Brady’s balls. The rest of us are supposed to listen? Wed: Irwin Shaw: “I might mistrust intellectuals, but I’d mistrust nonintellectuals even more.” Thurs: Death of a statistician Fri: Being polite vs. saying what we really think […]

P-values and statistical practice

What is a p-value in practice? The p-value is a measure of discrepancy of the fit of a model or “null hypothesis” H to data y. In theory the p-value is a continuous measure of evidence, but in practice it is typically trichotomized approximately into strong evidence, weak evidence, and no evidence (these can also […]
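The trichotomy described above can be sketched in a few lines; the cutoffs used here are only the conventional ones, chosen for illustration, and the post’s point is precisely that collapsing a continuous evidence measure this way is common practice, not that these thresholds are right:

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a z-statistic under a standard normal null."""
    return math.erfc(abs(z) / math.sqrt(2))

def trichotomize(p, strong=0.01, weak=0.05):
    """Collapse a continuous p-value into the three informal categories
    mentioned in the text (illustrative cutoffs only)."""
    if p < strong:
        return "strong evidence"
    if p < weak:
        return "weak evidence"
    return "no evidence"

print(trichotomize(two_sided_p(2.8)))  # z = 2.8 gives p around 0.005
```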

To understand the replication crisis, imagine a world in which everything was published.

John Snow points me to this post by psychology researcher Lisa Feldman Barrett who reacted to the recent news on the non-replication of many psychology studies with a contrarian, upbeat take, entitled “Psychology Is Not in Crisis.” Here’s Barrett: An initiative called the Reproducibility Project at the University of Virginia recently reran 100 psychology experiments […]

Uri Simonsohn warns us not to be falsely reassured

I agree with Uri Simonsohn that you don’t learn much by looking at the distribution of all the p-values that have appeared in some literature. Uri explains: Most p-values reported in most papers are irrelevant for the strategic behavior of interest. Covariates, manipulation checks, main effects in studies testing interactions, etc. Including them we underestimate […]

On deck this week

Mon: Constructing an informative prior using meta-analysis Tues: Stan attribution Wed: Cannabis/IQ follow-up: Same old story Thurs: Defining conditional probability Fri: In defense of endless arguments Sat: Emails I never finished reading Sun: BREAKING . . . Sepp Blatter accepted $2M payoff from Dennis Hastert

Performing design calculations (type M and type S errors) on a routine basis?

Somebody writes: I am conducting a survival analysis (median follow up ~10 years) of subjects who enrolled on a prospective, non-randomized clinical trial for newly diagnosed multiple myeloma. The data were originally collected for research purposes and specifically to determine PFS and OS of the investigational regimen versus historic controls. The trial has been […]
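The design calculations in the post title refer to Gelman and Carlin’s type S (wrong sign) and type M (magnitude exaggeration) errors. A minimal Monte Carlo sketch of the idea, with a made-up true effect and standard error standing in for whatever the study at hand implies:

```python
import random

def retrodesign(true_effect, se, n_sims=100_000, seed=1):
    """Monte Carlo sketch of type S / type M design calculations:
    simulate estimates around an assumed true effect, keep only the
    'statistically significant' ones, and ask how often they have the
    wrong sign (type S) and how exaggerated they are on average (type M)."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% threshold for a normal test statistic
    sig, wrong_sign, total_mag = 0, 0, 0.0
    for _ in range(n_sims):
        est = rng.gauss(true_effect, se)
        if abs(est) > z_crit * se:  # estimate reaches significance
            sig += 1
            total_mag += abs(est)
            if est * true_effect < 0:
                wrong_sign += 1
    power = sig / n_sims
    type_s = wrong_sign / sig if sig else float("nan")
    type_m = (total_mag / sig) / abs(true_effect) if sig else float("nan")
    return power, type_s, type_m

# A small true effect measured noisily: power is low, and the estimates
# that do reach significance greatly exaggerate the true effect.
power, type_s, type_m = retrodesign(true_effect=0.1, se=0.3)
```

Running this for the survival setting in the question would mean plugging in a plausible effect size and the standard error implied by the trial’s sample size, rather than these illustrative numbers.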

“The belief was so strong that it trumped the evidence before them.”

I was reading Palko on the 5-cent cup of coffee and spotted this: We’ve previously talked about bloggers trying to live on a food stamp budget for a week (yeah, that’s a thing). One of the many odd recurring elements of these posts is a litany of complaints about life without caffeine because… I had already understood […]