The connection between junk science and sloppy data handling: Why do they go together?

Nick Brown pointed me to a new paper, “The Impact of Incidental Environmental Factors on Vote Choice: Wind Speed is Related to More Prevention-Focused Voting,” to which his reaction was, “It makes himmicanes look plausible.” Indeed, one of the authors … Continue reading

What data to include in an analysis? Not always such an easy question. (Elliott Morris / Nate Silver / Rasmussen polls edition)

Someone pointed me to a recent post by Nate Silver, “Polling averages shouldn’t be political litmus tests, and they need consistent standards, not make-it-up-as-you-go,” where Nate wrote: The new Editorial Director of Data Analytics at ABC News, G. Elliott Morris, … Continue reading

They came in through the window: The migration of tech hype from the fringes to the media and academic mainstream

Palko points to a ten-year-old post on 3-D printing. Here he is back in 2013: We’re talking about people (particularly journalists) who have an emotional, gee-whiz reaction to technology without really thinking seriously about the functionality. [They] can be spotted … Continue reading

Joe Simmons, Leif Nelson, and Uri Simonsohn agree with us regarding the much-publicized but implausible and unsubstantiated claims of huge effects from nudge interventions

We wrote about this last year in our post, “PNAS GIGO QRP WTF: This meta-analysis of nudge experiments is approaching the platonic ideal of junk science,” and our follow-up PNAS article, “No reason to expect large and consistent effects of … Continue reading

Gaurav Sood’s review of the book Noise by Kahneman et al.: In writing about noise in human judgment, the authors didn’t wrestle with the problem of noise in behavioral-science research. But behavioral-science research is the product of human judgment.

Here it is. This should interest some of you. Gaurav makes a convincing case that: 1. The main topic of the book, capriciousness in human judgment, is important and worth a book, and the authors (Kahneman, Sibony, and Sunstein) have an interesting … Continue reading

The real problem of that nudge meta-analysis is not that it includes 12 papers by noted fraudsters; it’s the GIGO of it all

A few days ago we discussed a recently published meta-analysis of nudge interventions. The most obvious problem with that analysis was that it included 11 papers by Brian Wansink and 1 paper by Dan Ariely, and for good reasons we … Continue reading