I fear that many people are drawing the wrong lessons from the Wansink saga, focusing on procedural issues such as “p-hacking” rather than scientifically more important concerns about empty theory and hopelessly noisy data. If your theory is weak and your data are noisy, all the preregistration in the world won’t save you.

Someone pointed me to this news article by Tim Schwab, “Brian Wansink: Data Masseur, Media Villain, Emblem of a Thornier Problem.” Schwab writes: If you look into the archives of your favorite journalism outlet, there’s a good chance you’ll find …

“From that perspective, power pose lies outside science entirely, and to criticize power pose would be a sort of category error, like criticizing The Lord of the Rings on the grounds that there’s no such thing as an invisibility ring, or criticizing The Rotter’s Club on the grounds that Jonathan Coe was just making it all up.”

From last year: One could make the argument that power pose is innocuous, maybe beneficial in that it is a way of encouraging people to take charge of their lives. And this may be so. Even if power pose itself …

“Statistics textbooks (including mine) are part of the problem, I think, in that we just set out ‘theta’ as a parameter to be estimated, without much reflection on the meaning of ‘theta’ in the real world.”

Carol Nickerson pointed me to a new article by Arie Kruglanski, Marina Chernikova, and Katarzyna Jasko, entitled “Social psychology circa 2016: A field on steroids.” I wrote: 1. I have no idea what is the meaning of the title of the …

“Bombshell” statistical evidence for research misconduct, and what to do about it?

Someone pointed me to this post by Nick Brown discussing a recent article by John Carlisle regarding scientific misconduct. Here’s Brown: [Carlisle] claims that he has found statistical evidence that a surprisingly high proportion of randomised controlled trials (RCTs) contain …