Archive of posts filed under the Zombies category.

What happens to your career when you have to retract a paper?

In response to our recent post on retractions, Josh Krieger sends along two papers he worked on with Pierre Azoulay, Jeff Furman, Fiona Murray, and Alessandro Bonatti. Krieger writes, “Both papers are about the spillover effects of retractions on other work. Turns out retractions are great for identification!” Paper #1: “The career effects of scandal: […]

I think they use witchcraft

The following came in the email today: On Jul 7, 2018, at 12:58 PM, Submissions <submissions@**> wrote: Hello Dr. Andrew Gelman, I am Dr. ** [American-sounding name], Research Assistant for the ** Publishing Company contacting you with reference from our Editorial Board. Are you tired of publishing your Manuscript in useless journals and get no […]

Tutorial: The practical application of complicated statistical methods to fill up the scientific literature with confusing and irrelevant analyses

James Coyne pointed me with distress or annoyance to this new paper, “Tutorial: The Practical Application of Longitudinal Structural Equation Mediation Models in Clinical Trials,” by K. A. Goldsmith, D. P. MacKinnon, T. Chalder, P. D. White, M. Sharpe, and A. Pickles. This is the team behind the PACE trial for systemic exercise intolerance disease. […]

On this 4th of July, let’s declare independence from “95%”

Plan your experiment, gather your data, do your inference for all effects and interactions of interest. When all is said and done, accept some level of uncertainty in your conclusions: you might not be 97.5% sure that the treatment effect is positive, but that’s fine. For one thing, decisions need to be made. You were […]
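One way to read this advice (a minimal sketch, with made-up numbers): rather than a binary 95% verdict, report the approximate probability that the effect is positive, here under a flat prior and a normal approximation.

```python
from math import erf

# Hypothetical summary: estimated treatment effect 1.1, standard error 0.9.
est, se = 1.1, 0.9

# Approximate posterior Pr(effect > 0) = Phi(est / se), flat prior assumed.
p_positive = 0.5 * (1 + erf(est / se / 2 ** 0.5))
# Roughly 0.89: not "97.5% sure," but often enough to act on.
```

The point is that 0.89 is usable information for a decision, even though the conventional 95% threshold would label this result inconclusive.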

The “Psychological Science Accelerator”: it’s probably a good idea but I’m still skeptical

Asher Meir points us to this post by Christie Aschwanden entitled, “Can Teamwork Solve One Of Psychology’s Biggest Problems?”, which begins: Psychologist Christopher Chartier admits to a case of “physics envy.” That field boasts numerous projects on which international research teams come together to tackle big questions. Just think of CERN’s Large Hadron Collider or […]

Josh “hot hand” Miller speaks at Yale tomorrow (Wed) noon

Should be fun.

Nooooooooooooooooo! (dentists named Dennis, still appearing in 2018)

From today’s NYT: Another finding of note, published in the Journal of Personality and Social Psychology in 2002, is that people gravitate toward places of residence and occupations that resemble their own names. So, the researchers assert, a higher proportion of men named Louis live in St. Louis than would occur at random, and a […]

All Fools in a Circle

A graduate student in psychology writes: Grants do not fund you unless you have pilot data – and moreover – show some statistically significant finding in your N of 20 or 40 – in essence trying to convince the grant reviewers that there is “something there” worth them providing your lab lots of money to […]

“Not statistically significant” != 0, stents edition

Doug Helmreich writes: OK, I work at a company that is involved in stents, so I’m not unbiased, but . . . The research design is pretty cool—placebo participants got a sham surgery with no stent implanted. The results show that people with the stent did have better metrics than those with just the […]
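The statistical point in the title can be shown with made-up numbers (not from the stent trial): an estimate whose 95% interval crosses zero is “not statistically significant,” yet that is not at all the same as the effect being zero.

```python
# Hypothetical numbers: treatment improves a metric by 20 units on
# average, with standard error 12.
est, se = 20.0, 12.0
lo, hi = est - 1.96 * se, est + 1.96 * se  # 95% interval (-3.5, 43.5)
significant = lo > 0                        # False: interval crosses zero
# "Not significant" here is consistent both with a zero effect and with
# a large, meaningful improvement of 40+ units.
```

Treating the non-significant result as "no effect" would discard everything the interval actually says.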

Some experiments are just too noisy to tell us much of anything at all: Political science edition

Sointu Leikas pointed us to this published research article, “Exposure to inequality affects support for redistribution.” Leikas writes that “it seems to be a really apt example of ‘researcher degrees of freedom.’” Here’s the abstract of the paper: As the world’s population grows more urban, encounters between members of different socioeconomic groups occur with greater […]

Write your congressmember to require researchers to publicly post their code?

Stephen Cranney writes: For the past couple of years I have had an ongoing question/concern . . . In my fields (sociology and demography) much if not most of the published research is based on publicly available datasets; consequently, replicability is literally a simple matter of sending or uploading a few kilobytes of code text. […]

No, there is no epidemic of loneliness. (Or, Dog Bites Man: David Brooks runs another column based on fake stats)

[adorable image] Remember David Brooks? The NYT columnist, NPR darling, and former reporter who couldn’t correctly report the price of a meal at Red Lobster? The guy who got it wrong about where billionaires come from and who thought it was fun to use one of his columns to make fun of a urologist (ha […]

“Eureka bias”: When you think you made a discovery and then you don’t want to give it up, even if it turns out you interpreted your data wrong

This came in the email one day: I am writing to you with my own (very) small story of error-checking a published finding. If you end up posting any of this, please remove my name! A few years ago, a well-read business journal published an article by a senior-level employee at my company. One of […]

Another U.S. government advisor from Columbia University!

Cool! We’ve had Alexander Hamilton, John Jay, Dwight Eisenhower, Richard Clarida, Jeff Sachs, those guys from the movie Inside Job, and now . . . Dr. Oz. Government service at its finest. The pizzagate guy was from Cornell, though.

Doomsday! Problems with interpreting a confidence interval when there is no evidence for the assumed sampling model

Mark Brown pointed me to a credulous news article in the Washington Post, “We have a pretty good idea of when humans will go extinct,” which goes: A Princeton University astrophysicist named J. Richard Gott has a surprisingly precise answer to that question . . . to understand how he arrived at it and what […]

“We continuously increased the number of animals until statistical significance was reached to support our conclusions” . . . I think this is not so bad, actually!

Jordan Anaya pointed me to this post, in which Casper Albers shared this snippet from a recently published article in Nature Communications: The subsequent twitter discussion is all about “false discovery rate” and statistical significance, which I think completely misses the point. The problems Before I get to why I think the quoted […]
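The concern raised in that twitter discussion, that testing after every new batch of animals inflates the false-positive rate, can be sketched in a quick simulation (hypothetical setup: batches of 10 observations under a true null, an approximate two-sided z test at each look):

```python
import random
import statistics

random.seed(1)

def reaches_significance(max_n=100, batch=10):
    """Optional stopping under the null: add a batch, test, stop at p < .05."""
    data = []
    while len(data) < max_n:
        data += [random.gauss(0, 1) for _ in range(batch)]
        n = len(data)
        # Approximate one-sample test statistic (z critical value, sample sd).
        z = statistics.mean(data) * n ** 0.5 / statistics.stdev(data)
        if abs(z) > 1.96:
            return True
    return False

# With no true effect, the chance of eventually "finding" significance
# is well above the nominal 5% (roughly 15-25% with up to 10 looks).
false_pos = sum(reaches_significance() for _ in range(1000)) / 1000
```

That is the standard objection; the post’s argument is that this framing, focused on significance thresholds, is itself the wrong way to think about the design.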

Early p-hacking investments substantially boost adult publication record

In a post with the title “Overstated findings, published in Science, on long-term health effects of a well-known early childhood program,” Perry Wilson writes: In this paper [“Early Childhood Investments Substantially Boost Adult Health,” by Frances Campbell, Gabriella Conti, James Heckman, Seong Hyeok Moon, Rodrigo Pinto, Elizabeth Pungello, and Yi Pan], published in Science in […]

The syllogism that ate social science

I’ve been thinking about this one for a while and expressed it most recently in this blog comment: There’s a line of reasoning which I’ve not seen explicitly stated but which is, I think, how many people think. It goes like this: – Researcher does a study which he or she thinks is well designed. – Researcher obtains […]

Don’t do the Wilcoxon (reprise)

František Bartoš writes: I’ve read your and various other statistical books, and from most of them I gained the perception that nonparametric tests aren’t very useful and are mostly a relic from pre-computer ages. However, this week I witnessed a discussion about this (in the Psych. methods discussion group on FB) and most of the responses […]
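As I understand the recommendation in the original “Don’t do the Wilcoxon” post, the alternative is not to abandon ranks but to rank-transform the data and then run the usual normal-theory analysis (e.g., regression on the ranks), which extends naturally to multiple predictors. A minimal sketch of the rank transform on toy data:

```python
def ranks(xs):
    """1-based ranks with ties assigned their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

# Two hypothetical groups: rank the pooled data, then compare group
# means of the ranks with a t test or regression on a group indicator.
a = [1.2, 3.4, 2.2, 5.0]
b = [0.7, 1.1, 2.0]
r = ranks(a + b)  # [3.0, 6.0, 5.0, 7.0, 1.0, 2.0, 4.0]
```

A regression on these joint ranks gives essentially the Wilcoxon comparison while staying inside the familiar modeling framework.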

The cargo cult continues

Juan Carlos Lopez writes: Here’s a news article: . . . Here’s the paper: . . . [Details removed to avoid embarrassing the authors of the article in question.] I [Lopez] am especially bothered by the abstract of this paper, which makes bold claims in the context of a small and noisy study whose measurements […]