Archive of posts filed under the Zombies category.

How does a Nobel-prize-winning economist become a victim of bog-standard selection bias?

Someone who wishes to remain anonymous writes in with a story: Linking to a new paper by Jorge Luis García, James J. Heckman, and Anna L. Ziff, the economist Sue Dynarski makes this “joke” on Facebook—or maybe it’s not a joke: How does one adjust standard errors to account for the fact that N of […]
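The mechanism in the title is ordinary selection on statistical significance. Here is a minimal sketch, not from the García, Heckman, and Ziff paper; the true effect, sample size, and number of studies are illustrative assumptions. It shows how reporting only the estimates that cross p < .05 exaggerates the apparent effect:

```python
# Illustrative simulation (assumed numbers, not the García/Heckman/Ziff data):
# many noisy studies of a small true effect are run, but only the estimates
# reaching p < .05 get reported. The reported estimates are badly inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
true_effect = 0.1         # assumed small true effect, in sd units
n, n_studies = 50, 10000  # assumed sample size per study and number of studies

all_estimates, reported = [], []
for _ in range(n_studies):
    x = rng.normal(true_effect, 1.0, size=n)
    all_estimates.append(x.mean())
    if stats.ttest_1samp(x, 0.0).pvalue < 0.05:  # "significant" -> reported
        reported.append(x.mean())

print(f"true effect:                {true_effect:.2f}")
print(f"mean of all estimates:      {np.mean(all_estimates):.2f}")
print(f"mean of reported estimates: {np.mean(reported):.2f}")  # roughly 3x too big
```

With a small true effect and low power, the estimates that happen to reach significance are several times too large, which is the bog-standard selection bias the title refers to.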

Should we continue not to trust the Turk? Another reminder of the importance of measurement

From 2013: Don’t trust the Turk. From 2017 (link from Kevin Lewis), from Jesse Chandler and Gabriele Paolacci: The Internet has enabled recruitment of large samples with specific characteristics. However, when researchers rely on participant self-report to determine eligibility, data quality depends on participant honesty. Across four studies on Amazon Mechanical Turk, we show that […]

Daryl Bem and Arthur Conan Doyle

Daniel Engber wrote an excellent news article on the replication crisis, offering a historically-informed perspective similar to my take in last year’s post, “What has happened down here is the winds have changed.” The only thing I don’t like about Engber’s article is its title, “Daryl Bem Proved ESP Is Real. Which means science is […]

Further criticism of social scientists and journalists jumping to conclusions based on mortality trends

[cat picture] So. We’ve been having some discussion regarding reports of the purported increase in mortality rates among middle-aged white people in America. The news media have mostly spun a simple narrative of struggling working-class whites, but there’s more to the story. Some people have pointed me to some contributions from various sources: In “The […]

Bigshot psychologist, unhappy when his famous finding doesn’t replicate, won’t consider that he might have been wrong; instead he scrambles furiously to preserve his theories

Kimmo Eriksson writes: I am a Swedish math professor turned cultural evolutionist and psychologist (and a fan of your blog). I am currently working on a topic that might interest you (why public opinion moves on some issues but not on others), but that’s for another day. Hey—I’m very interested in why public opinion moves […]

Plan 9 from PPNAS

[cat picture] Asher Meir points to this breathless news article and sends me a message, subject line “Fruit juice leads to 0.003 unit (!) increase in BMI”: “the study results showed that one daily 6- to 8-ounce serving increment of 100% fruit juice was associated with a small .003 unit increase in body mass index […]

Again: Let’s stop talking about published research findings being true or false

Coincidentally, on the same day this post appeared, a couple of people pointed me to a news article by Paul Basken entitled, “A New Theory on How Researchers Can Solve the Reproducibility Crisis: Do the Math.” This is not good.

Best correction ever: “Unfortunately, the correct values are impossible to establish, since the raw data could not be retrieved.”

Commenter Erik Arnesen points to this: Several errors and omissions occurred in the reporting of research and data in our paper: “How Descriptive Food Names Bias Sensory Perceptions in Restaurants,” Food Quality and Preference (2005) . . . The dog ate my data. Damn gremlins. I hate when that happens. As the saying goes, “Each […]

After Peptidegate, a proposed new slogan for PPNAS. And, as a bonus, a fun little graphics project.

Someone pointed me to this post by “Neuroskeptic”: A new paper in the prestigious journal PNAS contains a rather glaring blooper. . . . right there in the abstract, which states that “three neuropeptides (β-endorphin, oxytocin, and dopamine) play particularly important roles” in human sociality. But dopamine is not a neuropeptide. Neither are serotonin or […]

Not everyone’s aware of falsificationist Bayes

Stephen Martin writes: Daniel Lakens recently blogged about philosophies of science and how they relate to statistical philosophies. I thought it may be of interest to you. In particular, this statement: From a scientific realism perspective, Bayes Factors or Bayesian posteriors do not provide an answer to the main question of interest, which is the […]

Pizzagate gets even more ridiculous: “Either they did not read their own previous pizza buffet study, or they do not consider it to be part of the literature . . . in the later study they again found the exact opposite, but did not comment on the discrepancy.”

Background Several months ago, Jordan Anaya, Tim van der Zee, and Nick Brown reported that they’d uncovered 150 errors in 4 papers published by Brian Wansink, a Cornell University business school professor who describes himself as a “world-renowned eating behavior expert for over 25 years.” 150 errors is pretty bad! I make mistakes myself […]

Why I’m not participating in the Transparent Psi Project

I received the following email from psychology researcher Zoltan Kekecs: I would like to ask you to participate in the establishment of the expert consensus design of a large scale fully transparent replication of Bem’s (2011) ‘Feeling the future’ Experiment 1. Our initiative is called the ‘Transparent Psi Project’. [https://osf.io/jk2zf/wiki/home/] Our aim is to develop […]

Financial anomalies are contingent on being unknown

Jonathan Falk points us to this article by Kewei Hou, Chen Xue, and Lu Zhang, who write: In retrospect, the anomalies literature is a prime target for p-hacking. First, for decades, the literature has been purely empirical in nature, with little theoretical guidance. Second, with trillions of dollars invested in anomalies-based strategies in the U.S. market alone, […]

The (Lance) Armstrong Principle

If you push people to promise more than they can deliver, they’re motivated to cheat.

Another serious error in my published work!

Uh oh, I’m starting to feel like that pizzagate guy . . . Here’s the background. When I talk about my serious published errors, I talk about my false theorem, I talk about my empirical analysis that was invalidated by miscoded data, I talk about my election maps whose flaws were pointed out by an angry […]

All the things we have to do that we don’t really need to do: The social cost of junk science

I’ve been thinking a lot about junk science lately. Some people have said it’s counterproductive or rude of me to keep talking about the same few examples (actually I think we have about 15 or so examples that come up again and again), so let me just speak generically about the sort of scientific claim […]

Some natural solutions to the p-value communication problem—and why they won’t work.

John Carlin and I write: It is well known that even experienced scientists routinely misinterpret p-values in all sorts of ways, including confusion of statistical and practical significance, treating non-rejection as acceptance of the null hypothesis, and interpreting the p-value as some sort of replication probability or as the posterior probability that the null hypothesis […]
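As a rough illustration of that last misinterpretation, here is a minimal sketch, not from Carlin and Gelman's paper; the base rate of true nulls, the effect size, and the sample size are made-up assumptions. It shows that among results with p < .05, the probability that the null hypothesis is true need not be anywhere near 5%:

```python
# Illustrative simulation (assumed numbers, not from the paper): in a world
# where 90% of tested effects are truly zero, the p-value of a "significant"
# result is not the probability that the null hypothesis is true.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n = 20000, 50   # assumed number of studies and sample size
prob_null = 0.9                # assumed base rate of true nulls
effect_size = 0.3              # assumed effect (in sd units) when non-null

null_true = rng.random(n_experiments) < prob_null
sig_total, sig_but_null = 0, 0

for is_null in null_true:
    mu = 0.0 if is_null else effect_size
    x = rng.normal(mu, 1.0, size=n)
    if stats.ttest_1samp(x, 0.0).pvalue < 0.05:
        sig_total += 1
        sig_but_null += int(is_null)

# Under these assumptions this comes out around 0.4-0.5, nowhere near 0.05.
print(f"P(null is true | p < .05) = {sig_but_null / sig_total:.2f}")
```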

What is needed to do good research (hint: it’s not just the avoidance of “too much weight given to small samples, a tendency to publish positive results and not negative results, and perhaps an unconscious bias from the researchers themselves”)

[cat picture] In a news article entitled, “No, Wearing Red Doesn’t Make You Hotter,” Dalmeet Singh Chawla recounts the story of yet another Psychological Science / PPNAS-style study (this one actually appeared back in 2008 in Journal of Personality and Social Psychology, the same prestigious journal which published Daryl Bem’s ESP study a couple years […]

Mockery is the best medicine

[cat picture] I’m usually not such a fan of twitter, but Jeff sent me this, from Andy Hall, and it’s just hilarious: The background is here. But Hall is missing a few key determinants of elections and political attitudes: subliminal smiley faces, college football, fat arms, and, of course, That Time of the Month. You […]

“P-hacking” and the intention-to-cheat effect

I’m a big fan of the work of Uri Simonsohn and his collaborators, but I don’t like the term “p-hacking” because it can be taken to imply an intention to cheat. The image of p-hacking is of a researcher trying test after test on the data until reaching the magic “p less than .05.” But, […]
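Whatever the intent, the statistical consequence of test-after-test analysis is easy to simulate. Here is a minimal sketch; the number of outcomes, group size, and number of simulations are illustrative assumptions, not values from Simonsohn and collaborators. It shows how reporting the first comparison that reaches p < .05 inflates the false-positive rate even when every null is true:

```python
# Illustrative simulation (assumed setup, not from any of the cited papers):
# an analyst measures several outcomes and reports whichever comparison
# first reaches p < .05. Even with no true effects and no intent to cheat,
# the chance of finding a "significant" result far exceeds the nominal 5%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_per_group, n_outcomes = 5000, 25, 5  # assumed values
alpha = 0.05
hits = 0

for _ in range(n_sims):
    # Null world: treatment and control are identical on every outcome.
    treat = rng.normal(size=(n_per_group, n_outcomes))
    ctrl = rng.normal(size=(n_per_group, n_outcomes))
    pvals = [stats.ttest_ind(treat[:, j], ctrl[:, j]).pvalue
             for j in range(n_outcomes)]
    if min(pvals) < alpha:  # report the best-looking comparison
        hits += 1

print(f"nominal alpha: {alpha:.2f}")
print(f"realized false-positive rate: {hits / n_sims:.2f}")  # roughly 0.2
```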