Archive of posts filed under the Decision Theory category.

On deck very soon

A bunch of the 170 are still in the queue. I haven’t been adding to the scheduled posts for a while; instead I’ve been inserting topical items from time to time—I even got some vicious hate mail for my article on the electoral college—and then I’ve been shoving material for new posts into a big file […]

An efficiency argument for post-publication review

This came up in a discussion last week: We were talking about problems with the review process in scientific journals, and a commenter suggested that prepublication review should be more rigorous: There are a lot of statistical missteps you just can’t catch until you actually have the replication data in front of you to work with […]

Hark, hark! the p-value at heaven’s gate sings

Three different people pointed me to this post, in which food researcher and business school professor Brian Wansink advises Ph.D. students to “never say no”: When a research idea comes up, check it out, put some time into it and you might get some success. I like that advice and I agree with it. Or, […]

Designing an animal-like brain: black-box “deep learning algorithms” to solve problems, with an (approximately) Bayesian “consciousness” or “executive functioning organ” that attempts to make sense of all these inferences

The journal Behavioral and Brain Sciences will be publishing this paper, “Building Machines That Learn and Think Like People,” by Brenden Lake, Tomer Ullman, Joshua Tenenbaum, and Samuel Gershman. Here’s the abstract: Recent progress in artificial intelligence (AI) has renewed interest in building systems that learn and think like people. Many advances have come from […]

The social world is (in many ways) continuous but people’s mental models of the world are Boolean

Raghu Parthasarathy points me to this post and writes: I wrote this after seeing one too many talks in which someone bases Boolean statements about effects “existing” or “not existing” (infuriating in itself) on “p < 0.05” or “p > 0.05”. Of course, you’ve written tons of great things on the pitfalls, errors, and general […]

How to think about the p-value from a randomized test?

Roahn Wynart asks: Scenario: I collect a lot of data for a complex psychology experiment. I put all the raw data into a computer. I program the computer to do 100 statistical tests. I assign each statistical test to a key on my keyboard. However, I do NOT execute the statistical test. Each key will […]
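The excerpt cuts off mid-setup, but the multiple-comparisons logic behind the question can be checked directly by simulation: if only one pre-chosen key is ever pressed, the nominal false-positive rate survives, while picking the most favorable of all 100 keys after the fact inflates it. A minimal sketch under the null, with the setup entirely my own rather than Wynart’s:

    import numpy as np

    rng = np.random.default_rng(0)
    n_sims, n_tests = 10_000, 100

    one_key = best_key = 0
    for _ in range(n_sims):
        z = rng.standard_normal(n_tests)   # 100 null test statistics
        sig = np.abs(z) > 1.96             # "p < 0.05", two-sided
        one_key += sig[0]                  # press one pre-chosen key
        best_key += sig.any()              # hunt across all 100 keys

    print(one_key / n_sims)    # ~0.05, the advertised rate
    print(best_key / n_sims)   # ~0.99, since 1 - 0.95**100 is about 0.994

The gap between the two counters is the whole puzzle: the p-value’s meaning depends on the selection rule, not just on the single test that ends up being run.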

“So such markets were, and perhaps are, subject to bias from deep pocketed people who may be expressing preference more than actual expectation”

Geoff Buchan writes in with another theory about how prediction markets can go wrong: I did want to mention one fascinating datum on Brexit: one UK bookmaker said they received about twice as many bets on leave as on remain, but the average bet on remain was *five* times what was bet on leave, meaning […]
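The arithmetic behind that datum is worth making explicit: twice the bets at one-fifth the average stake means 2.5 times as much money on remain. With illustrative stand-ins for the unreported totals:

    # Hypothetical numbers, chosen only to match the reported ratios:
    bets_leave,  avg_leave  = 2000, 100   # twice as many bets on leave...
    bets_remain, avg_remain = 1000, 500   # ...but 5x the average stake on remain

    print(bets_leave * avg_leave)    # 200000 staked on leave
    print(bets_remain * avg_remain)  # 500000 staked on remain, 2.5x the leave total

Since a bookmaker’s odds track money rather than headcounts, this is how a market can lean toward remain even while most individual bettors back leave.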

Using Stan in an agent-based model: Simulation suggests that a market could be useful for building public consensus on climate change

Jonathan Gilligan writes: I’m writing to let you know about a preprint that uses Stan in what I think is a novel manner: Two graduate students and I developed an agent-based simulation of a prediction market for climate, in which traders buy and sell securities that are essentially bets on what the global average temperature […]

Frustration with published results that can’t be reproduced, and journals that don’t seem to care

Thomas Heister writes: Your recent post about Per Pettersson-Lidbom’s frustrations in reproducing study results reminded me of our own recent experience replicating a paper in PLOS ONE. We found numerous substantial errors but eventually gave up as, frustratingly, the time and effort didn’t seem to change anything and the journal’s editors quite […]

“A bug in fMRI software could invalidate 15 years of brain research”

About 50 people pointed me to this press release or the underlying PPNAS research article, “Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates,” by Anders Eklund, Thomas Nichols, and Hans Knutsson, who write: Functional MRI (fMRI) is 25 years old, yet surprisingly its most common statistical methods have not been validated […]

OK, sometimes the concept of “false positive” makes sense.

Paul Alper writes: I know by searching your blog that you hold the position, “I’m negative on the expression ‘false positives.’” Nevertheless, I came across this. In the medical/police/judicial world, a false positive is a very serious issue: “$2: Cost of a typical roadside drug test kit used by police departments.” Namely, is that white powder […]

On deck this week

The other day someone asked me why we stopped running our On Deck This Week post every Monday morning. I replied that On Deck is not needed because a few months ago I announced all our posts, in order, through mid-January. See here: My next 170 blog posts (inbox zero and a change of pace). […]

Unfinished (so far) draft blog posts

Most of the time when I start writing a blog post, I continue till it’s finished. As of this writing, this blog has 7128 posts published, 137 scheduled, and only 434 unpublished drafts sitting in the folder. 434 might sound like a lot, but we’ve been blogging for over 10 years, and a bunch of […]

Josh Miller hot hand talks in NYC and Pittsburgh this week

Joshua Miller (the person who, with Adam Sanjurjo, discovered why the so-called “hot hand fallacy” is not really a fallacy) will be speaking on the topic this week. In New York, Thurs 17 Nov, 12:30pm, 19 W 4th St, room 517, Center for Experimental Social Science seminar. In Pittsburgh, Fri 18 Nov, 12pm, 4716 Posvar […]

Should scientists be allowed to continue to play in the sandbox after they’ve pooped in it?

This picture is on topic (click to see the link), but I’d like it even if it weren’t! I think I’ll be illustrating all my posts for a while with adorable cat images. This is a lot more fun than those Buzzfeed-style headlines we were doing a few months ago. . . . Anyway, back to […]

How effective (or counterproductive) is universal child care? Part 2

This is the second of a series of two posts. Yesterday we discussed the difficulties of learning from a small, noisy experiment, in the context of a longitudinal study conducted in Jamaica where researchers reported that an early-childhood intervention program caused a 42%, or 25%, gain in later earnings. I expressed skepticism. Today I want […]

How effective (or counterproductive) is universal child care? Part 1

This is the first of a series of two posts. We’ve talked before about various empirically-based claims of the effectiveness of early childhood intervention. In a much-publicized 2013 paper based on a study of 130 four-year-old children in Jamaica, Paul Gertler et al. claimed that a particular program caused a 42% increase in the participants’ […]

What is the chance that your vote will decide the election? Ask Stan!

I was impressed by Pierre-Antoine Kremp’s open-source poll aggregator and election forecaster (all in R and Stan with an automatic data feed!) so I wrote to Kremp: I was thinking it could be fun to compute probability of decisive vote by state, as in this paper. This can be done with some not difficult but […]
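For readers curious what that computation involves: the factorization in the linked paper puts Pr(your vote is decisive) at roughly Pr(your state is pivotal in the electoral college) times Pr(your state is exactly tied). Given simulation draws from a forecast like Kremp’s, the bookkeeping looks something like the sketch below; the function, array names, and inputs are my own illustration, not Kremp’s actual code:

    import numpy as np

    def pr_decisive(dem_share, ev, n_voters):
        # dem_share: (n_sims, n_states) simulated Democratic two-party vote shares
        # ev:        (n_states,) electoral votes per state
        # n_voters:  (n_states,) expected turnout per state
        dem_ev = (dem_share > 0.5).astype(int) @ ev  # Dem electoral votes, per draw
        dem_wins = dem_ev >= 270                     # ignores 269-269 ties for brevity
        out = np.empty(dem_share.shape[1])
        for s in range(dem_share.shape[1]):
            # Pivotal: flipping state s alone would change the winner.
            flipped = np.where(dem_share[:, s] > 0.5, dem_ev - ev[s], dem_ev + ev[s])
            pivotal = dem_wins != (flipped >= 270)
            # Pr(exact tie) ~ density of the state's vote share at 0.5,
            # scaled down to the one-vote window 1/n_voters.
            near_half = np.abs(dem_share[:, s] - 0.5) < 0.005
            density = near_half.mean() / 0.01
            # Treats pivotality and a tie as roughly independent; the paper
            # conditions more carefully, but the factorization is the same.
            out[s] = pivotal.mean() * density / n_voters[s]
        return out

The answers come out tiny, on the order of one in a few million at best, which is why the per-state breakdown rather than a single national number is the interesting output.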

Different election forecasts not so different

Yeah, I know, I need to work some on the clickbait titles . . . Anyway, people keep asking me why different election forecasts are so different. At the time of this writing, Nate Silver gives Clinton a 66.2% [ugh! See Pedants Corner below] chance of winning the election while Drew Linzer, for example, gives […]

How to improve science reporting? Dan Vergano sez: It’s not about reality, it’s all about a salary

I happened to be looking up some things on cat-owner Dan Kahan’s blog and I came across this interesting comment from 2013 that I’d not noticed before. The comment came from science journalist Dan Vergano, and it was in response to a post of Kahan’s that discussed an article of mine that had given advice […]