Archive of posts filed under the Decision Theory category.

Frustration with published results that can’t be reproduced, and journals that don’t seem to care

Thomas Heister writes: Your recent post about Per Pettersson-Lidbom's frustrations in reproducing study results reminded me of our own recent experience replicating a paper in PLOS ONE. We found numerous substantial errors but eventually gave up as, frustratingly, the time and effort didn’t seem to change anything and the journal’s editors quite […]

“A bug in fMRI software could invalidate 15 years of brain research”

About 50 people pointed me to this press release or the underlying PPNAS research article, “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates,” by Anders Eklund, Thomas Nichols, and Hans Knutsson, who write: Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated […]

OK, sometimes the concept of “false positive” makes sense.

Paul Alper writes: I know by searching your blog that you hold the position, “I’m negative on the expression ‘false positives.’” Nevertheless, I came across this. In the medical/police/judicial world, false positive is a very serious issue: $2 Cost of a typical roadside drug test kit used by police departments. Namely, is that white powder […]

On deck this week

The other day someone asked me why we stopped running our On Deck This Week post every Monday morning. I replied that On Deck is not needed because a few months ago I announced all our posts, in order, through mid-January. See here: My next 170 blog posts (inbox zero and a change of pace). […]

Unfinished (so far) draft blog posts

Most of the time when I start writing a blog post, I continue till it’s finished. As of this writing this blog has 7128 posts published, 137 scheduled, and only 434 unpublished drafts sitting in the folder. 434 might sound like a lot, but we’ve been blogging for over 10 years, and a bunch of […]

Josh Miller hot hand talks in NYC and Pittsburgh this week

Joshua Miller (the person who, with Adam Sanjurjo, discovered why the so-called “hot hand fallacy” is not really a fallacy) will be speaking on the topic this week. In New York, Thurs 17 Nov, 12:30pm, 19 W 4th St, room 517, Center for Experimental Social Science seminar. In Pittsburgh, Fri 18 Nov, 12pm, 4716 Posvar […]

Should scientists be allowed to continue to play in the sandbox after they’ve pooped in it?

This picture is on topic (click to see the link), but I’d like it even if it weren’t! I think I’ll be illustrating all my posts for a while with adorable cat images. This is a lot more fun than those Buzzfeed-style headlines we were doing a few months ago. . . . Anyway, back to […]

How effective (or counterproductive) is universal child care? Part 2

This is the second of a series of two posts. Yesterday we discussed the difficulties of learning from a small, noisy experiment, in the context of a longitudinal study conducted in Jamaica where researchers reported that an early-childhood intervention program caused a 42%, or 25%, gain in later earnings. I expressed skepticism. Today I want […]

How effective (or counterproductive) is universal child care? Part 1

This is the first of a series of two posts. We’ve talked before about various empirically-based claims of the effectiveness of early childhood intervention. In a much-publicized 2013 paper based on a study of 130 four-year-old children in Jamaica, Paul Gertler et al. claimed that a particular program caused a 42% increase in the participants’ […]

What is the chance that your vote will decide the election? Ask Stan!

I was impressed by Pierre-Antoine Kremp’s open-source poll aggregator and election forecaster (all in R and Stan with an automatic data feed!) so I wrote to Kremp: I was thinking it could be fun to compute probability of decisive vote by state, as in this paper. This can be done with some not difficult but […]
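The calculation mentioned above, the probability that a single vote is decisive, can be approximated from posterior simulations of a state's vote share: the chance of an exact tie in the state, times the chance that the state's electoral votes are pivotal. Below is a minimal sketch of that arithmetic in Python; the posterior draws, turnout figure, and pivotality probability are all illustrative assumptions, not Kremp's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative posterior draws of the Democratic two-party vote share
# in one state (a real forecast would supply these from Stan).
n_sims = 100_000
n_voters = 4_000_000            # assumed turnout in the state
vote_share = rng.normal(0.50, 0.02, n_sims)

# Pr(exact tie in the state) ~= density of vote_share at 0.5,
# divided by the number of voters. Estimate the density from the
# fraction of draws falling in a narrow window around 0.5.
window = 0.001
p_near_tie = np.mean(np.abs(vote_share - 0.5) < window)
density_at_half = p_near_tie / (2 * window)
p_tie = density_at_half / n_voters

# Multiply by Pr(the state's electoral votes are pivotal),
# here an assumed placeholder value.
p_state_pivotal = 0.10
p_decisive = p_tie * p_state_pivotal
print(f"Pr(your vote decides the election) ~ {p_decisive:.1e}")
```

With these made-up numbers the answer comes out on the order of one in a few million, which is the general magnitude such calculations tend to produce for competitive states.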

Different election forecasts not so different

Yeah, I know, I need to work some on the clickbait titles . . . Anyway, people keep asking me why different election forecasts are so different. At the time of this writing, Nate Silver gives Clinton a 66.2% [ugh! See Pedants Corner below] chance of winning the election while Drew Linzer, for example, gives […]

How to improve science reporting? Dan Vergano sez: It’s not about reality, it’s all about a salary

I happened to be looking up some things on cat-owner Dan Kahan’s blog and I came across this interesting comment from 2013 that I’d not noticed before. The comment came from science journalist Dan Vergano, and it was in response to a post of Kahan’s that discussed an article of mine that had given advice […]

Science and personal decision making

Maryna Raskin interviews me on the replication crisis, power posing, and the role of speculative scientific theories in decision making.

Updating fast and slow

Paul Campos pointed me to this post from a couple of days ago in which he wrote: I think it’s fair to say that right now the consensus among elite observers across the ideological spectrum . . . is that the presidential race is over because Donald Trump has no chance of winning — or […]

Is it fair to use Bayesian reasoning to convict someone of a crime?

Ethan Bolker sends along this news article from the Boston Globe: If it doesn’t acquit, it must fit Judges and juries are only human, and as such, their brains tend to see patterns, even if the evidence isn’t all there. In a new study, researchers first presented people with pieces of evidence (a confession, an […]

It’s ok to criticize

I got a little bit of pushback on my recent post, “The difference between ‘significant’ and ‘not significant’ is not itself statistically significant: Education edition”—some commenters felt I was being too hard on the research paper I was discussing, because the research wasn’t all that bad, and the conclusions weren’t clearly wrong, and the authors […]

Don’t move Penn Station

I agree 100% with Henry Grabar on this one. Ever since I heard many years ago about the plan to blow a few billion dollars moving NYC’s Penn Station to a prettier but less convenient location, I’ve grimaced. Big shots really love to spend our money on fancy architecture, don’t they? As I wrote a […]

“Find the best algorithm (program) for your dataset.”

Piero Foscari writes: Maybe you know about this already, but I found it amazingly brutal; while looking for some reproducible research resources I stumbled onto the following at mlcomp.org (which would be nice if done properly, at least as a standardization attempt): Find the best algorithm (program) for your dataset. Upload your dataset and run existing programs on it to […]

Redemption

I’ve spent a lot of time mocking Marc Hauser on this blog, and I still find it annoying that, according to the accounts I’ve seen, he behaved unethically toward his graduate students and lab assistants, he never apologized for manipulating data, and, perhaps most unconscionably, he wasted the lives of who knows how many monkeys […]

An auto-mechanic-style sign for data sharing

Yesterday’s story reminds me of that sign you used to see at the car repair shop: Maybe we need something similar for data access rules: DATA RATES PER HOUR If you want to write a press release for us $ 50.00 If you want to write a new paper using our data $ 90.00 If […]