Archive of posts filed under the Zombies category.

Write your congressmember to require researchers to publicly post their code?

Stephen Cranney writes: For the past couple of years I have had an ongoing question/concern . . . In my fields (sociology and demography) much if not most of the published research is based on publicly available datasets; consequently, replicability is literally a simple matter of sending or uploading a few kilobytes of code text. […]

No, there is no epidemic of loneliness. (Or, Dog Bites Man: David Brooks runs another column based on fake stats)

[adorable image] Remember David Brooks? The NYT columnist, NPR darling, and former reporter who couldn’t correctly report the price of a meal at Red Lobster? The guy who got it wrong about where billionaires come from and who thought it was fun to use one of his columns to make fun of a urologist (ha […]

“Eureka bias”: When you think you made a discovery and then you don’t want to give it up, even if it turns out you interpreted your data wrong

This came in the email one day: I am writing to you with my own (very) small story of error-checking a published finding. If you end up posting any of this, please remove my name! A few years ago, a well-read business journal published an article by a senior-level employee at my company. One of […]

Another U.S. government advisor from Columbia University!

Cool! We’ve had Alexander Hamilton, John Jay, Dwight Eisenhower, Richard Clarida, Jeff Sachs, those guys from the movie Inside Job, and now . . . Dr. Oz. Government service at its finest. The pizzagate guy was from Cornell, though.

Doomsday! Problems with interpreting a confidence interval when there is no evidence for the assumed sampling model

Mark Brown pointed me to a credulous news article in the Washington Post, “We have a pretty good idea of when humans will go extinct,” which goes: A Princeton University astrophysicist named J. Richard Gott has a surprisingly precise answer to that question . . . to understand how he arrived at it and what […]
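For readers unfamiliar with the argument, Gott's "delta-t" reasoning can be sketched in a few lines (my illustration, not from the post; the 200,000-year figure for Homo sapiens is an assumption for the example): if the present moment is a uniform random draw from a phenomenon's total lifetime, then with 95% confidence the future duration lies between 1/39 and 39 times the past duration.

```python
# Gott's "delta-t" argument: if the fraction of the lifetime already
# elapsed is uniform on (0, 1), then with probability `conf` the future
# duration t_f satisfies  t_past * a <= t_f <= t_past / a,
# where a = (1 - conf) / (1 + conf)  (= 1/39 for conf = 0.95).
def gott_interval(t_past, conf=0.95):
    """Confidence interval for future duration under Gott's assumption."""
    a = (1 - conf) / (1 + conf)
    return t_past * a, t_past / a

# Assumed example: ~200,000 years of Homo sapiens so far.
low, high = gott_interval(200_000)
print(f"{low:,.0f} to {high:,.0f} more years")
```

The point of the post, of course, is that this interval is only as good as the uniform-sampling assumption behind it.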

“We continuously increased the number of animals until statistical significance was reached to support our conclusions” . . . I think this is not so bad, actually!

Jordan Anaya pointed me to this post, in which Casper Albers shared this snippet from a recently published article in Nature Communications: The subsequent twitter discussion is all about “false discovery rate” and statistical significance, which I think completely misses the point. The problems Before I get to why I think the quoted […]
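The statistical issue in the quoted design, optional stopping, is easy to demonstrate by simulation (a sketch of mine, not from the paper; all parameters are made up): testing after each new observation and stopping as soon as p < 0.05 inflates the false-positive rate well above the nominal 5% even when there is no effect at all.

```python
import math
import random
import statistics

def one_sample_p(xs):
    """Two-sided one-sample test p-value via a normal approximation."""
    n = len(xs)
    z = statistics.mean(xs) / (statistics.stdev(xs) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def stops_significant(n_start=10, n_max=50, alpha=0.05, rng=random):
    """Null is true; add one observation at a time, testing each time."""
    xs = [rng.gauss(0, 1) for _ in range(n_start)]
    while len(xs) < n_max:
        if one_sample_p(xs) < alpha:
            return True          # "significance reached" -- stop and publish
        xs.append(rng.gauss(0, 1))
    return one_sample_p(xs) < alpha

random.seed(1)
sims = 1000
rate = sum(stops_significant() for _ in range(sims)) / sims
print(f"false-positive rate with optional stopping: {rate:.2f}")  # well above 0.05
```

(The post's argument is that this is not necessarily damning if the analysis accounts for the sequential design; the simulation just shows why a naive fixed-alpha reading of the result is wrong.)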

Early p-hacking investments substantially boost adult publication record

In a post with the title “Overstated findings, published in Science, on long-term health effects of a well-known early childhood program,” Perry Wilson writes: In this paper [“Early Childhood Investments Substantially Boost Adult Health,” by Frances Campbell, Gabriella Conti, James Heckman, Seong Hyeok Moon, Rodrigo Pinto, Elizabeth Pungello, and Yi Pan], published in Science in […]

The syllogism that ate social science

I’ve been thinking about this one for a while and expressed it most recently in this blog comment: There’s the following reasoning, which I’ve not seen explicitly stated but which is, I think, how many people reason. It goes like this: – Researcher does a study which he or she thinks is well designed. – Researcher obtains […]

Don’t do the Wilcoxon (reprise)

František Bartoš writes: I’ve read your and various others’ statistical books, and from most of them I gained the perception that nonparametric tests aren’t very useful and are mostly a relic from pre-computer ages. However, this week I witnessed a discussion about this (in Psych. methods discussion group on FB) and most of the responses […]
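As I understand the "Don't do the Wilcoxon" argument, the recommendation is: if you want the robustness of a rank test, rank-transform the data and then run your usual parametric analysis on the ranks, which keeps you inside a modeling framework you can extend. A minimal sketch (mine, with a made-up toy dataset and a hand-rolled averaged-ranks helper):

```python
import math
import random
import statistics

def ranks(xs):
    """1-based ranks; tied values get the average of their rank positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over the tie group
        avg = (i + j) / 2 + 1           # average 1-based rank of the group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

# Assumed toy data: two groups, one with a gross outlier that would
# wreck a t-test on the raw values but barely moves the ranks.
random.seed(0)
a = [random.gauss(1.0, 1) for _ in range(30)]
b = [random.gauss(0.0, 1) for _ in range(30)]
a[0] = -100.0

r = ranks(a + b)
ra, rb = r[:30], r[30:]
t = (statistics.mean(ra) - statistics.mean(rb)) / math.sqrt(
    statistics.variance(ra) / 30 + statistics.variance(rb) / 30)
print(f"two-sample t statistic computed on ranks: {t:.2f}")
```

From here the same rank outcome can go into a regression with covariates, which a canned Wilcoxon test cannot easily do.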

The cargo cult continues

Juan Carlos Lopez writes: Here’s a news article: . . . Here’s the paper: . . . [Details removed to avoid embarrassing the authors of the article in question.] I [Lopez] am especially bothered by the abstract of this paper, which makes bold claims in the context of a small and noisy study whose measurements […]

An Upbeat Mood May Boost Your Paper’s Publicity

Gur Huberman points to this news article, An Upbeat Mood May Boost Your Flu Shot’s Effectiveness, which states: A new study suggests that older people who are in a good mood when they get the shot have a better immune response. British researchers followed 138 people ages 65 to 85 who got the 2014-15 vaccine. […]

Fixing the reproducibility crisis: Openness, Increasing sample size, and Preregistration ARE NOT ENUF!!!!

In a generally reasonable and thoughtful post, “Yes, Your Field Does Need to Worry About Replicability,” Rich Lucas writes: One of the most exciting things to happen during the years-long debate about the replicability of psychological research is the shift in focus from providing evidence that there is a problem to developing concrete plans for […]

Don’t define reproducibility based on p-values

Lizzie Wolkovich writes: I just got asked to comment on this article [“Genotypic variability enhances the reproducibility of an ecological study,” by Alexandru Milcu et al.]—I have yet to have time to fully sort out their stats but the first thing that hit me about it was they seem to be suggesting a way […]
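One way to see why agreement in statistical significance is a poor reproducibility criterion (a simulation sketch of mine, with made-up effect size and sample sizes, not from the article): even when an effect is real, a study with typical power reaches p < 0.05 only some of the time, so an original study and an exact replication will both be "significant" far less often than that.

```python
import math
import random
import statistics

def pvalue_two_sample(x, y):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(x) / len(x)
                   + statistics.variance(y) / len(y))
    z = abs((statistics.mean(x) - statistics.mean(y)) / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def significant_study(effect=0.3, n=50, rng=random):
    """One study of a *real* effect; returns True if p < 0.05."""
    x = [rng.gauss(effect, 1) for _ in range(n)]
    y = [rng.gauss(0.0, 1) for _ in range(n)]
    return pvalue_two_sample(x, y) < 0.05

random.seed(7)
sims = 4000
both = sum(significant_study() and significant_study()
           for _ in range(sims)) / sims
one = sum(significant_study() for _ in range(sims)) / sims
print(f"single study significant: {one:.2f}; "
      f"original AND replication both significant: {both:.2f}")
```

By a p-value-matching definition, this perfectly real effect would "fail to reproduce" most of the time.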

The all-important distinction between truth and evidence

Yesterday we discussed a sad but all-too-familiar story of a little research project that got published and hyped beyond recognition. The published paper was called, “The more you play, the more aggressive you become: A long-term experimental study of cumulative violent video game effects on hostile expectations and aggressive behavior,” but actually that title was […]

More bad news in the scientific literature: A 3-day study is called “long term,” and nobody even seems to notice the problem. Whassup with that??

Someone pointed me to this article, “The more you play, the more aggressive you become: A long-term experimental study of cumulative violent video game effects on hostile expectations and aggressive behavior,” by Youssef Hasan, Laurent Bègue, Michael Scharkow, and Brad Bushman. My correspondent was suspicious of the error bars in Figure 1. I actually think […]

This April Fools post is dead serious

Usually for April 1st I schedule a joke post, something like: Why I don’t like Bayesian statistics, or Enough with the replication police, or Why tables are really much better than graphs, or Move along, nothing to see here, or A randomized trial of the set-point diet, etc. But today I have something so ridiculous […]

Replication is a good idea, but this particular replication is a bit too exact!

The following showed up in my email one day: From: Subject: Self-Plagarism in Current Opinion in Psychology Date: March 9, 2018 at 4:06:25 PM EST To: “gelman@stat.columbia.edu” Hello, You might be interested in the tremendous amount of overlap between two recent articles by Benjamin & Bushman (2016 & 2018) in Current Opinion in Psychology. The […]

Yet another IRB horror story

The IRB (institutional review board) is this weird bureaucracy, often staffed by helpful and well-meaning people but generally out of control, as it operates on an if-it’s-not-allowed-it’s-forbidden principle. As an example, Jonathan Falk points us to this Kafkaesque story from Scott Alexander, which ends up like this: Faced with submitting twenty-seven new pieces of paperwork […]

The purpose of a pilot study is to demonstrate the feasibility of an experiment, not to estimate the treatment effect

David Allison sent this along: – Press release from original paper: “The dramatic decrease in BMI, although unexpected in this short time frame, demonstrated that the [Shaping Healthy Choices Program] SHCP was effective . . .” – Comment on paper and call for correction or retraction: “. . . these facts show that the analyses […]

Reasons for an optimistic take on science: there are not “growing problems with research and publication practices.” Rather, there have been, and continue to be, huge problems with research and publication practices, but we’ve made progress in recognizing these problems.

Javier Benitez points us to an article by Daniele Fanelli, “Is science really facing a reproducibility crisis, and do we need it to?”, published in the Proceedings of the National Academy of Sciences, which begins: Efforts to improve the reproducibility and integrity of science are typically justified by a narrative of crisis, according to which […]