Under the heading, “Yet another bad analysis making the rounds,” John Mount writes:
This won’t waste much of your time—because there really isn’t much there. But I thought you would be disturbed by this new paper. Here’s my (Mount’s) commentary on what we can surmise about the methods.
Mount is pretty scathing. He starts with a bang:
The following article is getting quite a lot of press right now: David Just and Brian Wansink (2015), “Fast Food, Soft Drink, and Candy Intake is Unrelated to Body Mass Index for 95% of American Adults”, Obesity Science & Practice, forthcoming (upcoming in a new pay-for-placement journal). Obviously it is a sensational contrary position (some coverage: here, here, and here).
I thought I would take a peek to learn about the statistical methodology (see here for some commentary). I would say the kindest thing you can say about the paper is: its problems are not statistical.
At this time the authors don’t seem to have supplied their data preparation or analysis scripts and the paper “isn’t published yet” (though they have had time for a press release), so we have to rely on their pre-print. Read on for excerpts from the work itself (with commentary).
He continues from there.
The media outlets that took the bait and ran with the press release are Fortune, MedicalXpress, and Pacific Standard. Fortune is a shell of its former self and will probably run just about any content that is supplied to it, MedicalXpress does not appear to be a real anything, and Pacific Standard we’ve already discussed as a serious media outlet with old-school science reporting of the heroic-researcher variety. Last time we noted a Pacific Standard series that described itself as follows:
Findings is a daily column by Pacific Standard staff writer Tom Jacobs, who scours the psychological-research journals to discover new insights into human behavior, ranging from the origins of our political beliefs to the cultivation of creativity.
And this latest hyped study appears in this column:
Quick Studies is an award-winning series that sheds light on new research and discoveries that change the way we look at the world.
That’s right: a magazine with two separate columns about new insights, research, and discoveries. At this point they can’t just mine Psychological Science and PPNAS, they have to dip a bit lower into the publication pool.
But at this point there are so many media outlets that I guess any junk study will get coverage. Given that the paper is appearing in a low-reputation journal, I guess the credibility is coming from Cornell’s reputation. I see that the senior author of this paper is “the John S. Dyson Professor of Marketing in the Charles H. Dyson School of Applied Economics & Management at Cornell University and is the Director of the Cornell Food and Brand Lab. He is the author of the best-selling book Mindless Eating: Why We Eat More Than We Think (Bantam Dell 2006). Between 2007 and 2009 he was the Executive Director of the Center for Nutrition Policy and Promotion in Washington DC, leading the development of the USDA 2010 Dietary Guidelines. He is the President Elect of the Society for Nutrition Education.” And the junior author is “a professor and Director of Graduate Studies in the Charles H. Dyson School of Applied Economics and Management at Cornell University. In addition he serves as co-director of the Cornell Center for Behavioral Economics in Child Nutrition Programs.” So I guess these guys are well respected.
Maybe the issue is that, once you’re an expert, you start to believe your own theories a bit too much, and evidence becomes just a way to support a theory rather than a way to learn. Or maybe they’re just bad at statistics and have ventured beyond their research competence. In any case it’s too bad they had to drag their public relations department and Cornell University into this mess. Dragging fine reputations into the dirt just for a publication in Obesity Science & Practice and a quick blurb in Pacific Standard. It’s hardly worth it.
Then again, Daryl Bem is a Cornell professor, so it’s not as if that institution upholds the highest standards in quantitative research.
P.S. Perhaps it’s worth emphasizing that I have no reason to think these researchers are doing anything unethical. I’d guess it’s simple incompetence. Statistics is hard, and it doesn’t help when you’re already the Mr. Big Professor and the Director of the Bigtime Lab—then it’s easy to believe your own hype. Maybe they should’ve taken a hint when all the good journals rejected their paper—you don’t think Obesity Science & Practice was their first choice, do you?—but then again I get papers rejected from good journals all the time, and my usual instinct is to blame the %$%^*%^* reviewers, not to question the quality of my own work. And of course the editors at Pacific Standard are busy trying to fill up their magazine, and the author of the news article wants to get published, and this is what he knows how to do. And we can hardly blame the P.R. professionals who helped with the press release; that’s their job. Each player in the hype cycle plays his role.
Nobody’s a bad guy here. But the result—that is bad. And if researchers get reputational bumps from publishing high-quality work, and if they get reputational bumps from public dissemination of high-quality work—and I think they should get such benefits—then damn straight their reputation should take a hit if they publish and promote crap. Fair is fair. To not slam people for low-quality work is implicitly to hurt all the serious researchers out there who don’t just publish anything, who don’t hype their work, who are careful maybe to run their statistics by an expert (yes, Cornell has many excellent statisticians) rather than trying to sneak substandard work into print. I show my respect for researchers who show care, by not going easy on those who don’t.
P.P.S. Just to clarify one more step: I have nothing personal against these researchers, neither of whom I’d ever heard of before. There’s no reason for one bad paper to count more than two illustrious careers. All of us have our bad days and even our bad research projects. Should we judge Isaac Newton based on his work on alchemy, or John Maynard Keynes for his advocacy of the gold standard, or Jean Piaget on his work on embodied cognition, or Niall Ferguson for whatever outrageous thing he said last week? No, of course not. I’m sure if you go through my published papers you’ll find a false theorem or two. In all seriousness, I have no reason to doubt that Brian Wansink’s service to the USDA was illustrious or that the students at Cornell are lucky to have him and David Just as instructors and research leaders. And why shouldn’t a couple of business-school marketing professors be giving out nutritional advice? Not every research project involves statistical inference, and if these people are subject-matter experts, that’s fine. No need to judge all their work based on one incident.