Archive of entries posted by

Bad Statistics: Ignore or Call Out?

Evelyn Lamb adds to the conversation that Jeff Leek and I had a few months ago. It’s a topic that’s worth returning to, in light of our continuing discussions regarding the crisis of criticism in science.

On deck this week

Mon: Bad Statistics: Ignore or Call Out?
Tues: Questions about “Too Good to Be True”
Wed: I disagree with Alan Turing and Daniel Kahneman regarding the strength of statistical evidence
Thurs: Why isn’t replication required before publication in top journals?
Fri: Confirmationist and falsificationist paradigms of science
Sat: How does inference for next year’s data […]

On deck this month

Bad Statistics: Ignore or Call Out?
Questions about “Too Good to Be True”
I disagree with Alan Turing and Daniel Kahneman regarding the strength of statistical evidence
Why isn’t replication required before publication in top journals?
Confirmationist and falsificationist paradigms of science
How does inference for next year’s data differ from inference for unobserved data […]

Avoiding model selection in Bayesian social research

One of my favorites, from 1995. Don Rubin and I argue with Adrian Raftery. Here’s how we begin: Raftery’s paper addresses two important problems in the statistical analysis of social science data: (1) choosing an appropriate model when so much data are available that standard P-values reject all parsimonious models; and (2) making estimates and […]
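
To see why point (1) bites, here is a minimal simulation sketch (my own illustration, not from the paper; the tiny slope, sample sizes, and the simple z-test are all arbitrary choices for the example). With enough data, a classical significance test rejects the parsimonious "slope = 0" model even when the omitted effect is too small to matter in practice:

```python
# Minimal sketch: with huge n, p-values reject a practically adequate
# parsimonious model. The "full" model has a tiny, negligible slope.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tiny_effect = 0.01  # practically negligible slope

for n in [10_000, 100_000, 1_000_000]:
    x = rng.normal(size=n)
    y = 1.0 + tiny_effect * x + rng.normal(size=n)
    # z-test for the slope in simple linear regression
    beta_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    resid = y - (y.mean() + beta_hat * (x - x.mean()))
    se = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean()) ** 2))
    z = beta_hat / se
    p = 2 * stats.norm.sf(abs(z))
    print(f"n={n:>9,}  slope={beta_hat:+.4f}  p={p:.3g}")
# As n grows, p collapses toward zero and the parsimonious model is
# "rejected", even though the omitted effect is of no practical importance.
```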

When we talk about the “file drawer,” let’s not assume that an experiment can easily be characterized as producing strong, mixed, or weak results

Neil Malhotra: I thought you might be interested in our paper [the paper is by Annie Franco, Neil Malhotra, and Gabor Simonovits, and the link is to a news article by Jeffrey Mervis], forthcoming in Science, about publication bias in the social sciences given your interest and work on research transparency. Basic summary: We examined […]

Pre-election survey methodology: details from nine polling organizations, 1988 and 1992

This one from 1995 (with D. Stephen Voss and Gary King) was fun. For our “Why are American Presidential election campaign polls so variable when votes are so predictable?” project a few years earlier, Gary and I had analyzed individual-level survey responses from 60 pre-election polls that had been conducted by several different polling organizations. […]

Discussion of “A probabilistic model for the spatial distribution of party support in multiparty elections”

From 1994. I don’t have much to say about this one. The paper I was discussing (by Samuel Merrill) had already been accepted by the journal—I might even have been a referee, in which case the associate editor had decided to accept the paper over my objections—and the editor gave me the opportunity to publish […]

Dave Blei course on Foundations of Graphical Models

Dave Blei writes: This course is cross-listed in Computer Science and Statistics at Columbia University. It is a PhD-level course about applied probabilistic modeling. Loosely, it will be similar to this course. Students should have some background in probability, college-level mathematics (calculus, linear algebra), and be comfortable with computer programming. The course is […]

Review of “Forecasting Elections”

From 1993. The topic of election forecasting sure gets a lot more attention than it used to! Here are some quotes from my review of that book by Michael Lewis-Beck and Tom Rice: Political scientists are aware that most voters are consistent in their preferences, and one can make a good guess just looking at […]

Discussion of “Maximum entropy and the nearly black object”

From 1992. It’s a discussion of a paper by Donoho, Johnstone, Hoch, and Stern. As I summarize: Under the “nearly black” model, the normal prior is terrible, the entropy prior is better, and the exponential prior is slightly better still. (An even better prior distribution for the nearly black model would combine the threshold and […]
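
To make that comparison concrete, here is a minimal numerical sketch (my own illustration, not the paper’s analysis; the prior scales and the observation values are arbitrary choices). A “nearly black” signal is nonnegative and mostly zero with a few bright spikes; the sketch compares how a normal prior and an exponential prior, each with mean and scale 1 and restricted to θ ≥ 0, shrink a single noisy observation y ~ N(θ, 1):

```python
# Minimal sketch: posterior-mean shrinkage of one observation y ~ N(theta, 1)
# under a normal prior vs. an exponential prior on a nonnegative signal theta.
import numpy as np

grid = np.linspace(0, 20, 2000)            # nonnegative support for theta

def posterior_mean(y, log_prior):
    log_post = log_prior - 0.5 * (y - grid) ** 2
    w = np.exp(log_post - log_post.max())
    return float(np.sum(grid * w) / np.sum(w))

log_normal = -0.5 * (grid - 1.0) ** 2      # normal prior, mean 1, sd 1 (illustrative)
log_expo = -grid                            # exponential prior, mean 1 (illustrative)

for y in [0.0, 0.5, 3.0, 8.0]:
    print(f"y={y:4.1f}   normal-prior estimate={posterior_mean(y, log_normal):5.2f}   "
          f"exponential-prior estimate={posterior_mean(y, log_expo):5.2f}")
# The exponential prior pulls small observations (likely pure noise) closer to
# zero than the normal prior does, and it barely shrinks a large spike -- it
# roughly subtracts a constant -- whereas the normal prior drags the spike
# toward the prior mean. That shape is better suited to a signal that is
# mostly zeros with a few large values.
```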