Archive of posts filed under the Zombies category.

The syllogism that ate social science

I’ve been thinking about this one for a while and expressed it most recently in this blog comment: There’s the following reasoning, which I’ve not seen explicitly stated but which is, I think, how many people think. It goes like this: – Researcher does a study which he or she thinks is well designed. – Researcher obtains […]

Don’t do the Wilcoxon (reprise)

František Bartoš writes: I’ve read your statistical books and various others’, and from most of them I gained the impression that nonparametric tests aren’t very useful and are mostly a relic from the pre-computer age. However, this week I witnessed a discussion about this (in the Psych Methods discussion group on FB) and most of the responses […]
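
To make the contrast concrete, here is a minimal Python sketch (my illustration, not from the post or from Bartoš; the data are simulated) of the alternative recommended in the original “Don’t do the Wilcoxon” post: rather than running the rank-sum test, convert the data to ranks and then carry out the usual analysis, which keeps an interpretable estimate and standard error.

# Illustrative only: simulated data, comparing a rank-sum test with
# the "rank the data, then run the usual analysis" alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y0 = rng.lognormal(mean=0.0, sigma=1.0, size=50)   # control group, skewed outcome
y1 = rng.lognormal(mean=0.5, sigma=1.0, size=50)   # treatment group

# Classical nonparametric route: Mann-Whitney / Wilcoxon rank-sum test.
u_stat, p_mw = stats.mannwhitneyu(y1, y0, alternative="two-sided")

# Alternative: rank-transform all the data, then do the ordinary analysis
# (here a two-sample t-test on the ranks; more generally, a regression on ranks).
ranks = stats.rankdata(np.concatenate([y0, y1]))
r0, r1 = ranks[:50], ranks[50:]
t_stat, p_rank_t = stats.ttest_ind(r1, r0)

print(f"Mann-Whitney p = {p_mw:.3f}")
print(f"t-test on ranks p = {p_rank_t:.3f}")
print(f"mean rank difference = {r1.mean() - r0.mean():.1f}")  # an interpretable estimate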

The cargo cult continues

Juan Carlos Lopez writes: Here’s a news article: . . . Here’s the paper: . . . [Details removed to avoid embarrassing the authors of the article in question.] I [Lopez] am especially bothered by the abstract of this paper, which makes bold claims in the context of a small and noisy study whose measurements […]

An Upbeat Mood May Boost Your Paper’s Publicity

Gur Huberman points to this news article, An Upbeat Mood May Boost Your Flu Shot’s Effectiveness, which states: A new study suggests that older people who are in a good mood when they get the shot have a better immune response. British researchers followed 138 people ages 65 to 85 who got the 2014-15 vaccine. […]

Fixing the reproducibility crisis: Openness, Increasing sample size, and Preregistration ARE NOT ENUF!!!!

In a generally reasonable and thoughtful post, “Yes, Your Field Does Need to Worry About Replicability,” Rich Lucas writes: One of the most exciting things to happen during the years-long debate about the replicability of psychological research is the shift in focus from providing evidence that there is a problem to developing concrete plans for […]

Don’t define reproducibility based on p-values

Lizzie Wolkovich writes: I just got asked to comment on this article [“Genotypic variability enhances the reproducibility of an ecological study,” by Alexandru Milcu et al.]—I haven’t yet had time to fully sort out their stats, but the first thing that hit me about it was that they seem to be suggesting a way […]
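
As a quick illustration of why “both studies reach p < 0.05” is a poor working definition of reproducibility, here is a small simulation sketch (mine, with made-up numbers, not from the Milcu et al. paper): with a real but modest effect, two identically run studies frequently land on opposite sides of the 0.05 threshold even though their estimates agree to within sampling error.

# Illustrative simulation: how often do two identical studies of the same
# modest true effect "disagree" if agreement is defined by p < 0.05?
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect, sd, n, n_sims = 0.3, 1.0, 50, 10_000

disagree = 0
for _ in range(n_sims):
    p_vals = []
    for _ in range(2):  # two identically designed studies
        treat = rng.normal(true_effect, sd, n)
        ctrl = rng.normal(0.0, sd, n)
        p_vals.append(stats.ttest_ind(treat, ctrl).pvalue)
    sig = [p < 0.05 for p in p_vals]
    disagree += (sig[0] != sig[1])

print(f"Studies 'disagree' by the p<0.05 criterion {disagree / n_sims:.0%} of the time,")
print("even though both are unbiased estimates of the same effect.")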

The all-important distinction between truth and evidence

Yesterday we discussed a sad but all-too-familiar story of a little research project that got published and hyped beyond recognition. The published paper was called, “The more you play, the more aggressive you become: A long-term experimental study of cumulative violent video game effects on hostile expectations and aggressive behavior,” but actually that title was […]

More bad news in the scientific literature: A 3-day study is called “long term,” and nobody even seems to notice the problem. Whassup with that??

Someone pointed me to this article, “The more you play, the more aggressive you become: A long-term experimental study of cumulative violent video game effects on hostile expectations and aggressive behavior,” by Youssef Hasan, Laurent Bègue, Michael Scharkow, and Brad Bushman. My correspondent was suspicious of the error bars in Figure 1. I actually think […]

This April Fools post is dead serious

Usually for April 1st I schedule a joke post, something like: Why I don’t like Bayesian statistics, or Enough with the replication police, or Why tables are really much better than graphs, or Move along, nothing to see here, or A randomized trial of the set-point diet, etc. But today I have something so ridiculous […]

Replication is a good idea, but this particular replication is a bit too exact!

The following showed up in my email one day: From: Subject: Self-Plagiarism in Current Opinion in Psychology Date: March 9, 2018 at 4:06:25 PM EST To: “gelman@stat.columbia.edu” Hello, You might be interested in the tremendous amount of overlap between two recent articles by Benjamin & Bushman (2016 & 2018) in Current Opinion in Psychology. The […]

Yet another IRB horror story

The IRB (institutional review board) is this weird bureaucracy, often staffed by helpful and well-meaning people but generally out of control, as it operates on an if-it’s-not-allowed-it’s-forbidden principle. As an example, Jonathan Falk points us to this Kafkaesque story from Scott Alexander, which ends up like this: Faced with submitting twenty-seven new pieces of paperwork […]

The purpose of a pilot study is to demonstrate the feasibility of an experiment, not to estimate the treatment effect

David Allison sent this along: – Press release from original paper: “The dramatic decrease in BMI, although unexpected in this short time frame, demonstrated that the [Shaping Healthy Choices Program] SHCP was effective . . .” – Comment on paper and call for correction or retraction: “. . . these facts show that the analyses […]
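
A hedged numerical sketch of the underlying point (the numbers are invented, not from the SHCP study): with a pilot-sized sample, the interval around a treatment-effect estimate is typically so wide that the point estimate is nearly uninformative, which is why a pilot is for establishing feasibility rather than for estimating the effect.

# Illustrative only: how precise is a treatment-effect estimate from a pilot?
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_pilot = 15          # per arm, a typical pilot size (hypothetical)
true_effect = 0.2     # in outcome SD units (hypothetical)

treat = rng.normal(true_effect, 1.0, n_pilot)
ctrl = rng.normal(0.0, 1.0, n_pilot)

est = treat.mean() - ctrl.mean()
se = np.sqrt(treat.var(ddof=1) / n_pilot + ctrl.var(ddof=1) / n_pilot)
t_crit = stats.t.ppf(0.975, df=2 * n_pilot - 2)

print(f"estimated effect = {est:.2f}")
print(f"95% CI = [{est - t_crit * se:.2f}, {est + t_crit * se:.2f}]")
# The interval spans roughly +/- 0.7 SD around the estimate: consistent with
# anything from a large benefit to a comparable harm.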

Reasons for an optimistic take on science: there are not “growing problems with research and publication practices.” Rather, there have been, and continue to be, huge problems with research and publication practices, but we’ve made progress in recognizing these problems.

Javier Benitez points us to an article by Daniele Fanelli, “Is science really facing a reproducibility crisis, and do we need it to?”, published in the Proceedings of the National Academy of Sciences, which begins: Efforts to improve the reproducibility and integrity of science are typically justified by a narrative of crisis, according to which […]

I fear that many people are drawing the wrong lessons from the Wansink saga, focusing on procedural issues such as “p-hacking” rather than scientifically more important concerns about empty theory and hopelessly noisy data. If your theory is weak and your data are noisy, all the preregistration in the world won’t save you.

Someone pointed me to this news article by Tim Schwab, “Brian Wansink: Data Masseur, Media Villain, Emblem of a Thornier Problem.” Schwab writes: If you look into the archives of your favorite journalism outlet, there’s a good chance you’ll find stories about Cornell’s “Food Psychology and Consumer Behavior” lab, led by marketing researcher Brian Wansink. […]

“and, indeed, that my study is consistent with X having a negative effect on Y.”

David Allison shares this article: Pediatrics: letter to the editor – Metformin for Obesity in Prepubertal and Pubertal Children: A Randomized Controlled Trial – and the authors’ reply: RE: Clarification of statistical interpretation in metformin trial paper. The authors of the original paper were polite in their response, but they didn’t seem to get the point […]
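
A small numerical sketch of the statistical point at issue (hypothetical numbers, not the metformin trial’s): a non-significant estimate does not demonstrate the absence of an effect; its interval is typically consistent with effects in both directions, including a negative effect of X on Y.

# Illustrative only: a "non-significant" estimate is consistent with both
# positive and negative effects of the treatment.
import numpy as np
from scipy import stats

est, se, df = 1.1, 0.9, 80          # hypothetical effect estimate, SE, and d.f.
t = est / se
p = 2 * stats.t.sf(abs(t), df)
half_width = stats.t.ppf(0.975, df) * se

print(f"estimate = {est}, p = {p:.2f}  (not 'significant')")
print(f"95% CI = [{est - half_width:.2f}, {est + half_width:.2f}]")
# The interval runs from negative to clearly positive: the study is
# consistent with the treatment helping, doing nothing, or hurting.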

No, I don’t believe that “Reduction in Firearm Injuries during NRA Annual Conventions” story

David Palmer writes: If you need yet another study to look at, check this out: “Reduction in Firearm Injuries during NRA Annual Conventions.”

Concerns about Brian Wansink’s claims and research methods have been known for years

1. The king and his memory There’s this stunning passage near the end of Josephine Tey’s classic The Daughter of Time. Most of the book is taken up with the main characters laboriously discovering the evidence that Richard III was not really a bad guy and didn’t really kill those little princes, etc. Having made […]

I fear that many people are drawing the wrong lessons from the Wansink saga, focusing on procedural issues such as “p-hacking” rather than scientifically more important concerns about empty theory and hopelessly noisy data. If your theory is weak and your data are noisy, all the preregistration in the world won’t save you.

This came up in the discussion of yesterday’s post. We’ve discussed theory and measurement in this space before. And here’s a discussion of how the problems of selection bias are magnified when measurements are noisy. Forking paths and p-hacking do play a role in this story: forking paths (multiple potential analyses on a given experiment) […]
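
The “selection bias is magnified when measurements are noisy” point can be shown with a short simulation (my sketch of the type M / exaggeration-ratio idea discussed on this blog, with made-up numbers): if only estimates that clear p < 0.05 get reported, a noisy study’s reported effects end up a large multiple of the true effect.

# Illustrative simulation of the exaggeration ("type M") effect: conditioning
# on p < 0.05 when the study is noisy inflates the reported effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
true_effect, n_sims = 0.1, 100_000

def mean_significant_estimate(se):
    """Average |estimate| among simulations that reach p < 0.05."""
    ests = rng.normal(true_effect, se, n_sims)        # sampling distribution of the estimate
    z = ests / se
    p = 2 * stats.norm.sf(np.abs(z))
    return np.abs(ests[p < 0.05]).mean()

for se in (0.05, 0.2, 0.5):                           # precise -> noisy study
    print(f"SE = {se}: true effect = {true_effect}, "
          f"mean significant estimate = {mean_significant_estimate(se):.2f}")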

Big Oregano strikes again

Paul Alper writes: You recall the University of Maryland chocolate milk cure for concussion [Bigmilk Strikes Again]. A new version of the same sloppiness is discussed here. Alper is linking to a news article, “University of Iowa ignores questions about its oregano ‘cure’ for cancer-wasting syndrome,” by Eric Holland, who writes: At the beginning of […]

Anybody want a drink before the war?

Your lallies look like darts, and you’ve got nanti carts, but I love your bona eke – Lee Sutton (A near miss) I’ve been thinking about gayface again. I guess this is for a bunch of reasons, but one of the lesser ones is that this breathless article by JD Schramm popped up in the Washington Post the other […]