Archive of posts filed under the Decision Theory category.

Replication controversies

I don’t know what ATR is but I’m glad somebody is on the job of prohibiting replication catastrophe: Seriously, though, I’m on a list regarding a reproducibility project, and someone forwarded along this blog post by psychology researcher Simone Schnall, whose attitudes we discussed several months ago in the context of some controversies about attempted replications […]

This is what “power = .06” looks like. Get used to it.

I prepared the above image for this talk. The calculations come from the second column of page 6 of this article, and the psychology study that we’re referring to is discussed here.
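As a rough illustration of what a power of .06 means, here is a minimal sketch of a two-sided power calculation for a z-test. The effect size of 0.3 standard errors is an assumed value chosen for illustration, not taken from the cited article; the point is that when the true effect is tiny relative to the standard error, power barely exceeds the nominal .05 false-positive rate.

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sided(true_effect, se, crit=1.96):
    """Probability that a two-sided z-test at alpha = .05 rejects,
    given the true effect and the standard error of the estimate."""
    mu = true_effect / se
    return normal_cdf(-crit - mu) + (1 - normal_cdf(crit - mu))

# assumed scenario: true effect is only 0.3 standard errors
print(round(power_two_sided(0.3, 1.0), 3))  # close to .06
```

With a true effect of zero, the same function returns .05, the nominal type-1 error rate; .06 is barely above that, which is why a statistically significant result in such a study tells you almost nothing.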

On deck this week

Mon: “Why continue to teach and use hypothesis testing?” Tues: In which I play amateur political scientist Wed: Retrospective clinical trials? Thurs: “If you’re not using a proper, informative prior, you’re leaving money on the table.” Fri: Hey, NYT: Former editor Bill Keller said that any editor who fails to confront a writer about an […]

“The Statistical Crisis in Science”: My talk in the psychology department Monday 17 Nov at noon

Monday 17 Nov at 12:10pm in Schermerhorn room 200B, Columbia University: Top journals in psychology routinely publish ridiculous, scientifically implausible claims, justified based on “p < 0.05.” And this in turn calls into question all sorts of more plausible, but not necessarily true, claims that are supported by this same sort of evidence. To put […]

If you do an experiment with 700,000 participants, you’ll (a) have no problem with statistical significance, (b) get to call it “massive-scale,” (c) get a chance to publish it in a tabloid top journal. Cool!

David Hogg points me to this post by Thomas Lumley regarding a social experiment that was performed by randomly manipulating the content in the news feed of Facebook customers. The shiny bit about the experiment is that it involved 700,000 participants (or, as the research article, by Adam Kramer, Jamie Guillory, and Jeffrey Hancock, quaintly […]
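The point about having “no problem with statistical significance” at n = 700,000 can be sketched with a quick back-of-the-envelope calculation. The numbers below are assumed for illustration (an even split into two groups of 350,000 and a half-percentage-point difference in some binary outcome), not figures from the Facebook study:

```python
from math import sqrt

def z_two_proportions(p1, p2, n1, n2):
    """Pooled two-proportion z statistic."""
    p = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# assumed: a 0.5-percentage-point difference across 700,000 users
z = z_two_proportions(0.505, 0.500, 350_000, 350_000)
print(z > 1.96)  # comfortably past the p < .05 threshold
```

At that sample size the z statistic comes out above 4, so even a substantively trivial difference clears the significance bar; significance says little about whether the effect matters.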

Crowdsourcing Data Analysis 2: Gender, Status, and Science

Emily Robinson writes: Brian Nosek, Eric Luis Uhlmann, Amy Sommer, Kaisa Snellman, David Robinson, Raphael Silberzahn, and I have just launched a second crowdsourcing data analysis project following the success of the first one. In the crowdsourcing analytics approach, multiple independent analysts are recruited to test the same hypothesis on the same data set in whatever […]

On deck this week

Mon: Illegal Business Controls America Tues: The history of MRP highlights some differences between political science and epidemiology Wed: “Patchwriting” is a Wegmanesque abomination but maybe there’s something similar that could be helpful? Thurs: If you do an experiment with 700,000 participants, you’ll (a) have no problem with statistical significance, (b) get to call it […]

Scientists behaving badly

By “badly,” I don’t just mean unethically or immorally; I’m also including those examples of individual scientists who are not clearly violating any ethical rules but are acting in such a way as to degrade, rather than increase, our understanding of the world. In the latter case I include examples such as the senders of the […]

Debate over kidney transplant stats?

Dan Walter writes: A few years ago, in a post about Bayesian statistics, you referred to a book that I wrote about a study on catheter ablation for atrial fibrillation: The Chorus of Ablationists. I am writing a story on the transplant industry and am wondering about a widely cited article concerning the long-term health effects of […]

Just imagine if Ed Wegman got his hands on this program—it could do wonders for his research productivity!

Brendan Nyhan writes: I’d love to see you put some data in here that you know well and evaluate how the site handles it. The webpage in question says: Upload a data set, and the automatic statistician will attempt to describe the final column of your data in terms of the rest of the data. […]
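The task the webpage describes, “describe the final column of your data in terms of the rest of the data,” is, in its simplest form, a regression of the last column on the others. Here is a minimal sketch of that baseline on hypothetical toy data (the data-generating values below are made up for illustration; the actual automatic statistician searches over much richer model structures):

```python
import numpy as np

# hypothetical toy data: last column is roughly linear in the others
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=100)
data = np.column_stack([X, y])

# baseline "description" of the final column: least-squares fit
predictors = np.column_stack([np.ones(len(data)), data[:, :-1]])
coefs, *_ = np.linalg.lstsq(predictors, data[:, -1], rcond=None)
print(np.round(coefs, 2))  # intercept, then one slope per predictor
```

The recovered slopes come out near the assumed values (2, -1, 0), which is the kind of summary such a system would then have to translate into a verbal description of the data.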