
Pizzagate update! Response from the Cornell University Media Relations Office

Hey! A few days ago I received an email from the Cornell University Media Relations Office. As I reported in this space, I responded as follows:

Dear Cornell University Media Relations Office:

Thank you for pointing me to these two statements. Unfortunately I fear that you are minimizing the problem.

You write, “while numerous instances of inappropriate data handling and statistical analysis in four published papers were alleged, such errors did not constitute scientific misconduct. However, given the number of errors cited and their repeated nature, we established a process in which Professor Wansink would engage external statistical experts to validate his review and reanalysis of the papers and attendant published errata. . . . Since the original critique of Professor Wansink’s articles, additional instances of self-duplication have come to light. Professor Wansink has acknowledged the repeated use of identical language and in some cases dual publication of materials.”

But there are many, many more problems in Wansink’s published work, beyond those four initially noticed papers and beyond self-duplication.

Your NIH link above defines research misconduct as “fabrication, falsification and plagiarism, and does not include honest error or differences of opinion. . .” and defines falsification as “Manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.”

This phrase, “changing or omitting data or results such that the research is not accurately represented in the research record,” is an apt description of much of Wansink’s work, going far beyond those four particular papers that got the ball rolling, and far beyond duplication of materials. For a thorough review, see this recent post by Tim van der Zee, who points to 37 papers by Wansink, many of which have serious data problems.

And all this doesn’t even get to criticism of Wansink having openly employed a hypotheses-after-results-are-known methodology which leaves his statistics meaningless, even setting aside data errors.

There’s also Wansink’s statement which refers to “the great work of the Food and Brand Lab,” which is an odd phrase to use to describe a group that has published papers with hundreds of errors and major massive data inconsistencies that represent, at worst, fraud, and, at best, some of the sloppiest empirical work—published or unpublished—that I have ever seen. In either case, I consider this pattern of errors to represent research misconduct.

I understand that it’s natural to think that nothing can ever be proven, Rashomon and all that. But in this case the evidence for research misconduct is all out in the open, in dozens of published papers.

I have no personal stake in this matter and I have no plans to file any sort of formal complaint. But as a scientist, this bothers me: Wansink’s misconduct, his continuing attempt to minimize it, and this occurring at a major university.

Andrew Gelman

Let me emphasize at this point that the Cornell University Media Relations Office has no obligation to respond to me. They’re already pretty busy, what with all the Fox News crews coming on campus, not to mention the various career-capping studies that happen to come through. Just cos the Cornell University Media Relations Office sent me an email, this implies no obligation on their part to reply to my response.

Anyway, that all said, I thought you might be interested in what the Cornell University Media Relations Office had to say.

So, below, here is their response, in its entirety:





  1. anon says:

    Response is missing

  2. Jordan Anaya says:

    I actually did get a response when I replied, albeit an automatic reply:


    “I am out of the office and on the road today (April 5, 2017) with limited access to email. If you have an urgent Media Relations need, please contact my colleague Melissa Osgood at …”

    It’s a little convenient for them to send out a mass email and be out of the office so they can’t reply, but at least I’m considered a friend of Cornell!

  3. Dale Lehman says:

    It’s not April 1 anymore, so I’m not sure what to make out of this. Either Andrew forgot to put the rest of the story, or this IS the story. I am assuming the latter, but if that is the case, what is the point of this post? Believe me, I empathize with the frustration about getting no response when I think there should be, but at this point this seems unnecessary. We already know that Cornell media relations just wishes this would all go away. So, isn’t a lack of response just more of the same?

    If you want a new story about lack of response, here’s one. I have some health issues that are personal but also professional. It concerns a highly publicized study, the ProtecT study, that has some new findings about prostate cancer treatment. The study randomized people into 3 groups – one surgical, one radiation therapy, and one active monitoring. There are very few deaths and the study did not find a significant difference between the 3 groups. However, buried in a supplementary table was the fact that most of the people randomized into the more aggressive treatments actually declined the treatment and resorted to active monitoring (the treatments are not fun). If they were analyzed according to the treatment they actually received rather than the group they were randomized into, then the results are somewhat different.

    I applied to get the data – to their credit, there was an online form to request the data from the NHS in the UK where the study was done. They denied my request, stating

    “I have been in contact with the ProtecT study’s principal investigators and unfortunately we are unable to comply with your request because our own research team are in the process of analysising this data.”

    It is certainly their right to deny my request, but the reason seems unsatisfactory to me. The data is a public good (in economic terms, not legal terms). Giving me access to the data in no way prevents their analysis of the same data. So, I appealed the decision. They said they would send my appeal to the study’s principal investigators. Their complete response follows:

    • The data appear to be publicly funded, so it is within your rights to press them further with your request. There may be an embargo period, as well as restricted access considerations, but I don’t see why you couldn’t review an anonymized dataset to check their work.

    • Keith O’Rourke says:

      The times might be changing but not fast enough

      “Patients, meanwhile, appeared baffled at the squabbling scientists.

      “When I came here, I had no idea there was any controversy about sharing the data,” said Moses Taylor, who spoke on a patients’ panel Tuesday, and was also a participant in a National Institutes of Health-funded trial on blood pressure treatment. “I also did not realize that people that start these trials and carry them through, the only way they get recognized is publications and that publications determine their career.”

      Moses said that it’s up for the scientists in the room to figure out a way to make the data available. The patients agreed that the data should be shared: early, often, and responsibly.”

      • Dale Lehman says:

        That story was from the recent competition held by the New England Journal of Medicine, followed by a two day conference and webinar about data sharing. I attended many of the online sessions and the disparity between patients (participants in clinical trials) and the trialists was striking. Patients just assumed the data was being shared – and that was part of their motivation (only part) in participating. The medical establishment was far more wary – for some legitimate reasons, but mostly in protection of their career paths. The impediments to data sharing are mostly human-created. In my view, they are mostly counterproductive, though there are some strong voices that believe clinical data must be kept out of public hands for a variety of reasons.

    • Andrew says:


      This post is the story. And, yeah, if they’re gonna send me their first email, I do think they should respond to my reply. They’re not obliged to, but they’d be doing their job better if they were to consider the possibility that there’s more to the story than they wanted to admit.

  4. Anonymous says:

    I don’t understand this. Did Andrew neglect to include the response from Cornell, or is it the case that there was no response?

  5. grumbler says:

    Nice clickbait.

  6. Jonathan says:

    Wansink’s work is not only sloppy by scientific standards but by its nature is a series of small-n trials in which a context is tweaked to see if that changes the behavior of the subjects. The results are thus highly context sensitive, and from them he pulls lessons or morals which rely flimsily on these low-powered, noisy trials. Example: he tested stale popcorn at the movies and found that people will eat more stale popcorn when given large containers than small, and from that he draws a lesson that large containers lead you to eat more, even if what you’re eating absolutely sucks. This fits what we know from many sources and thus plays to our priors, as in the food industry tests not to find what people like the most but rather for what people will eat more of, and that turns out to be something less than most flavorful because they believe people feel more satisfied faster when food is more flavorful. All of his work fits into this exact category: work that confirms what we already know but done in a reasonably clever manner. This means he’s working from a widely shared model. He may even be falsifying his data for all I know because the model is that widely shared: thin people don’t put as much on their plates, don’t buy Big Gulps at 7/11, don’t start at the dessert end of a buffet. We could easily pretend to have a room where we cleared the plates more often and handed out fresh ones to see if that increased consumption – and I can imagine this being yes or no depending on the context, like more chicken wings when there’s a pile of them near you, sure, but you clear that main course dinner plate from the table and most people simply wait for dessert (and say, but I wasn’t done!).

    Wansink’s books are really compendiums of sensible advice: use a smaller plate, put snacks away so they’re not in your face, don’t bring candy into the house and you won’t eat it, leave out a fruit bowl with fruit in it to eat more fruit. He dresses them up with a veneer of science but they’re just illustrations or anecdotes for the homilies he’s really delivering. This bothers me less than stuff dressed up in p-values and significance testing but which isn’t significant and has no significance if you cock your head a bit.

    • shravan says:

      This is a fascinating summary of Wansink-think.

    • Andrew says:


      I discussed some of those points here. One of the problems here is that Wansink and others (including government officials!) might well generalize from his apparent success with common-sense advice, to get his opinion on non-common-sense issues. Wansink’s success using pseudo-science to support common sense then gives him the credibility to promote corn syrup or whatever else he happens to feel like promoting. In short, if Wansink only were promoting unremarkable common sense, it’s no big deal, but when he moves to muddier waters, he could well be leading people in wrong directions. Also all the overstated effect sizes are a bunch of hype which could then lead to disappointment, etc.

  7. strangetruther says:

    Your situation with the sub-standard psychology lot and my situation with the sub-standard palaeontology lot, both remind me of a TV film about the Amazon jungle years ago, where someone fatally shot an agouti with a bow and arrow, but it still just stood there, at the water’s edge. Finally it just needed a sharp tap on the top of the head with the blunt edge of a machete.

    Many such institutions are zombies, still standing only by dint of inertia, years of caked paint and rust, and fear, often by everyone else, of change.

    Maybe a suitable coup de grâce would involve the firm attachment of shame to anyone endorsing or conniving with the problem institution, and also the offering of a comfortable, clearly adequate alternative… with the initial steps clearly marked out. And some really nice slogans.

  8. Brittany Alexander says:

    I wonder if there is some way to change the human research review process to require a statistician to review the designs of all studies and experiments. With Wansink it takes about two seconds to decide the study is bad. Argue that bad statistical analysis can cause harm and hence should not be tolerated in academic research. Stop the study before it starts.

  9. Anonymous says:

    Brittany Alexander: There aren’t enough statisticians to go around. Not to mention that many would not be interested in doing this, unless it paid well or unless they got teaching release or service credit or some such.

    • Martha (Smith) says:

      Agreed (which is not to say that Brittany’s suggestion is not a good one, which in an ideal world would be implemented).

      Some additional complications I’ve encountered:

      1. I once read a paper in an open-access journal which asks reviewers to check a box as to whether or not the paper needs to be checked by a statistician (and publishes the reviewers’ responses online with the article). Reviewers of the paper in question checked the “does not need to be seen by a statistician” option. Yet my reading of the paper was that there was a statistical problem involving the combination of design and analysis of the experiment.

      2. Universities may have a “statistical consultation” program which employs people who got their statistical training in a psychology department and picked up misconceptions that they then apply in their consulting work. (However, my experience when pointing out such misconceptions is fairly good — they may be taken aback at first that what I say is contrary to what they were taught, but do make the effort to understand the problem.)
