Archive of posts filed under the Decision Theory category.

On deck this week

Mon: The hype cycle starts again
Tues: I (almost and inadvertently) followed Dan Kahan’s principles in my class today, and that was a good thing (would’ve been even more of a good thing had I realized what I was doing and done it better, but I think I will do better in the future, which […]

Princeton Abandons Grade Deflation Plan . . .

. . . and Kaiser Fung is unhappy. In a post entitled “Princeton’s loss of nerve,” Kaiser writes: This development is highly regrettable, and a failure of leadership. (The new policy leaves it to individual departments to do whatever they want.) The recent Alumni publication has two articles about this topic, one penned by President […]

“If you’re not using a proper, informative prior, you’re leaving money on the table.”

Well put, Rob Weiss. This is not to say that one must always use an informative prior; oftentimes it can make sense to throw away some information for reasons of convenience. But it’s good to remember that, if you do use a noninformative prior, you’re doing less than you could.
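To make the “money on the table” point concrete, here is a minimal sketch of a conjugate normal-normal update (all numbers are made up for illustration; they are not from Weiss’s post). With a flat prior the posterior is just the likelihood; with an informative prior the posterior interval is strictly tighter:

    import numpy as np

    # One noisy measurement; values are illustrative assumptions.
    y, sigma = 1.5, 1.0          # observed estimate and its standard error
    mu0, tau = 0.0, 0.5          # informative prior: N(mu0, tau^2)

    # Flat (noninformative) prior: the posterior is just the likelihood.
    flat_mean, flat_sd = y, sigma

    # Informative prior: precision-weighted average, shrunk toward mu0.
    post_prec = 1 / tau**2 + 1 / sigma**2
    post_mean = (mu0 / tau**2 + y / sigma**2) / post_prec
    post_sd = np.sqrt(1 / post_prec)

    print(f"flat prior:        {flat_mean:.2f} +/- {flat_sd:.2f}")   # 1.50 +/- 1.00
    print(f"informative prior: {post_mean:.2f} +/- {post_sd:.2f}")   # 0.30 +/- 0.45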

Replication controversies

I don’t know what ATR is but I’m glad somebody is on the job of prohibiting replication catastrophe. Seriously, though, I’m on a list regarding a reproducibility project, and someone forwarded along this blog post by psychology researcher Simone Schnall, whose attitudes we discussed several months ago in the context of some controversies about attempted replications […]

This is what “power = .06” looks like. Get used to it.

I prepared the above image for this talk. The calculations come from the second column of page 6 of this article, and the psychology study that we’re referring to is discussed here.
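For anyone who wants to see how a power as low as .06 can arise, here is a minimal sketch of a two-sided z-test power calculation; the effect size and standard error below are illustrative assumptions in the spirit of that article, not its actual numbers:

    from scipy.stats import norm

    # Illustrative assumptions: a true effect of 2 units estimated with
    # standard error 8.1, tested two-sided at alpha = .05.
    effect, se, alpha = 2.0, 8.1, 0.05
    z_crit = norm.ppf(1 - alpha / 2)   # critical value, about 1.96
    d = effect / se                    # true effect in standard-error units

    # Power: probability the estimate lands beyond the critical value.
    power = norm.cdf(-z_crit + d) + norm.cdf(-z_crit - d)
    print(f"power = {power:.2f}")      # about 0.06

The point is that when the true effect is small relative to the standard error, a “significant” result is barely more likely than the nominal type 1 error rate, and any estimate that does clear the significance threshold must be a huge overestimate.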

On deck this week

Mon: “Why continue to teach and use hypothesis testing?”
Tues: In which I play amateur political scientist
Wed: Retrospective clinical trials?
Thurs: “If you’re not using a proper, informative prior, you’re leaving money on the table.”
Fri: Hey, NYT: Former editor Bill Keller said that any editor who fails to confront a writer about an […]

“The Statistical Crisis in Science”: My talk in the psychology department Monday 17 Nov at noon

Monday 17 Nov at 12:10pm in Schermerhorn room 200B, Columbia University: Top journals in psychology routinely publish ridiculous, scientifically implausible claims, justified based on “p < 0.05.” And this in turn calls into question all sorts of more plausible, but not necessarily true, claims that are supported by this same sort of evidence. To put […]

If you do an experiment with 700,000 participants, you’ll (a) have no problem with statistical significance, (b) get to call it “massive-scale,” (c) get a chance to publish it in a tabloid top journal. Cool!

David Hogg points me to this post by Thomas Lumley regarding a social experiment that was performed by randomly manipulating the content in the news feeds of Facebook customers. The shiny bit about the experiment is that it involved 700,000 participants (or, as the research article, by Adam Kramer, Jamie Guillory, and Jeffrey Hancock, quaintly […]
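The point in the title, that statistical significance is automatic at this scale, is easy to check by simulation; the effect size below is an assumption chosen to be substantively negligible:

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(0)
    n = 350_000            # two arms of a 700,000-participant experiment
    tiny_effect = 0.01     # assumed effect: 1% of a standard deviation

    control = rng.normal(0.0, 1.0, n)
    treated = rng.normal(tiny_effect, 1.0, n)

    t_stat, p_value = ttest_ind(treated, control)
    print(f"t = {t_stat:.1f}, p = {p_value:.1e}")  # p far below .05

With the standard error of the difference shrinking like 1/sqrt(n), even an effect of a hundredth of a standard deviation produces a t statistic around 4, so “p < .05” tells you essentially nothing about whether the effect matters.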

Crowdsourcing Data Analysis 2: Gender, Status, and Science

Emily Robinson writes: Brian Nosek, Eric Luis Uhlmann, Amy Sommer, Kaisa Snellman, David Robinson, Raphael Silberzahn, and I have just launched a second crowdsourcing data analysis project following the success of the first one. In the crowdsourcing analytics approach, multiple independent analysts are recruited to test the same hypothesis on the same data set in whatever […]

On deck this week

Mon: Illegal Business Controls America
Tues: The history of MRP highlights some differences between political science and epidemiology
Wed: “Patchwriting” is a Wegmanesque abomination but maybe there’s something similar that could be helpful?
Thurs: If you do an experiment with 700,000 participants, you’ll (a) have no problem with statistical significance, (b) get to call it […]