Pizzagate update! Response from the Cornell University Media Relations Office

[cat picture]

Hey! A few days ago I received an email from the Cornell University Media Relations Office. As I reported in this space, I responded as follows:

Dear Cornell University Media Relations Office:

Thank you for pointing me to these two statements. Unfortunately I fear that you are minimizing the problem.

You write, “while numerous instances of inappropriate data handling and statistical analysis in four published papers were alleged, such errors did not constitute scientific misconduct (https://grants.nih.gov/grants/research_integrity/research_misconduct.htm). However, given the number of errors cited and their repeated nature, we established a process in which Professor Wansink would engage external statistical experts to validate his review and reanalysis of the papers and attendant published errata. . . . Since the original critique of Professor Wansink’s articles, additional instances of self-duplication have come to light. Professor Wansink has acknowledged the repeated use of identical language and in some cases dual publication of materials.”

But there are many, many more problems in Wansink’s published work, beyond those four initially noticed papers and beyond self-duplication.

Your NIH link above defines research misconduct as “fabrication, falsification and plagiarism, and does not include honest error or differences of opinion. . .” and defines falsification as “Manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.”

This phrase, “changing or omitting data or results such that the research is not accurately represented in the research record,” is an apt description of much of Wansink’s work, going far beyond those four particular papers that got the ball rolling, and far beyond duplication of materials. For a thorough review, see this recent post by Tim van der Zee, who points to 37 papers by Wansink, many of which have serious data problems: http://www.timvanderzee.com/the-wansink-dossier-an-overview/

And all this doesn’t even get to the criticism that Wansink openly employed a hypotheses-after-results-are-known methodology, which leaves his statistics meaningless even setting aside the data errors.

There’s also Wansink’s statement that refers to “the great work of the Food and Brand Lab,” which is an odd phrase to use to describe a group that has published papers with hundreds of errors and massive data inconsistencies that represent, at worst, fraud, and, at best, some of the sloppiest empirical work—published or unpublished—that I have ever seen. In either case, I consider this pattern of errors to represent research misconduct.

I understand that it’s natural to think that nothing can ever be proven, Rashomon and all that. But in this case the evidence for research misconduct is all out in the open, in dozens of published papers.

I have no personal stake in this matter and I have no plans to file any sort of formal complaint. But as a scientist, this bothers me: Wansink’s misconduct, his continuing attempts to minimize it, and the fact that all this is happening at a major university.

Yours,
Andrew Gelman
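A quick technical aside on the hypotheses-after-results-are-known point in that letter: here is a toy simulation (the numbers are invented, nothing to do with Wansink’s actual data) of why picking the hypothesis after seeing the results can manufacture statistical significance out of pure noise:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy setup (all numbers made up): one experiment, 20 outcome measures,
# and NO true effect anywhere. A researcher who chooses the hypothesis
# after seeing the data is free to report whichever comparison "worked."
n_per_group, n_outcomes = 50, 20
control = rng.normal(size=(n_per_group, n_outcomes))
treated = rng.normal(size=(n_per_group, n_outcomes))

pvals = [stats.ttest_ind(treated[:, j], control[:, j]).pvalue
         for j in range(n_outcomes)]
print(f"smallest of {n_outcomes} p-values on pure noise: {min(pvals):.3f}")
# The chance of at least one p < 0.05 here is about 1 - 0.95**20, i.e. ~64%.
```

The reported p-value only means what it claims to mean if the comparison was fixed before the data were seen; once the analyst can roam over many possible comparisons, a “significant” result is close to guaranteed.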

Let me emphasize at this point that the Cornell University Media Relations Office has no obligation to respond to me. They’re already pretty busy, what with all the Fox News crews coming on campus, not to mention the various career-capping studies that happen to come through. Just cos the Cornell University Media Relations Office sent me an email, this implies no obligation on their part to reply to my response.

Anyway, that all said, I thought you might be interested in what the Cornell University Media Relations Office had to say.

So, below, here is their response, in its entirety:


25 thoughts on “Pizzagate update! Response from the Cornell University Media Relations Office”

  1. I actually did get a response when I replied, albeit an automatic reply:

    “Friends,

    I am out of the office and on the road today (April 5, 2017) with limited access to email. If you have an urgent Media Relations need, please contact my colleague Melissa Osgood at …”

    It’s a little convenient for them to send out a mass email and be out of the office so they can’t reply, but at least I’m considered a friend of Cornell!

  2. It’s not April 1 anymore, so I’m not sure what to make of this. Either Andrew forgot to include the rest of the story, or this IS the story. I am assuming the latter, but if that is the case, what is the point of this post? Believe me, I empathize with the frustration of getting no response when I think there should be one, but at this point this seems unnecessary. We already know that Cornell media relations just wishes this would all go away. So, isn’t a lack of response just more of the same?

    If you want a new story about lack of response, here’s one. I have some health issues that are personal but also professional. It concerns a highly publicized study, the ProtecT study (http://www.nejm.org/doi/full/10.1056/NEJMoa1606220#t=article), which has some new findings about prostate cancer treatment. The study randomized people into 3 groups – one surgical, one radiation therapy, and one active monitoring. There were very few deaths and the study did not find a significant difference between the 3 groups. However, buried in a supplementary table was the fact that most of the people randomized into the more aggressive treatments actually declined the treatment and opted for active monitoring (the treatments are not fun). If they were analyzed according to the treatment they actually received rather than the group they were randomized into, then the results are somewhat different.
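    To see why those two analyses can give different answers, here is a toy simulation (all numbers made up, nothing to do with the actual ProtecT data) in which the treatment has zero true effect but healthier patients are more likely to decline it:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Randomize half the patients to the aggressive treatment.
    assigned = rng.random(n) < 0.5
    health = rng.normal(size=n)            # latent prognosis
    # Nonrandom crossover: healthier assigned patients tend to decline.
    declined = assigned & (health > 0.5)
    received = assigned & ~declined

    # Outcome depends only on prognosis; the treatment itself does nothing.
    outcome = health + rng.normal(scale=0.5, size=n)

    itt = outcome[assigned].mean() - outcome[~assigned].mean()
    as_treated = outcome[received].mean() - outcome[~received].mean()
    print(f"intention-to-treat estimate: {itt:+.3f}")        # ~0, unbiased
    print(f"as-treated estimate:         {as_treated:+.3f}")  # biased by selection
    ```

    With selective crossover like this, the intention-to-treat and as-treated estimates diverge even when nothing is going on, which is exactly why a reanalysis by treatment received needs to be looked at carefully rather than assumed away.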

    I applied to get the data – to their credit, there was an online form to request the data from the NHS in the UK where the study was done. They denied my request, stating

    “I have been in contact with the ProtecT study’s principal investigators and unfortunately we are unable to comply with your request because our own research team are in the process of analysising [sic] this data.”

    It is certainly their right to deny my request, but the reason seems unsatisfactory to me. The data is a public good (in economic terms, not legal terms). Giving me access to the data in no way prevents their analysis of the same data. So, I appealed the decision. They said they would send my appeal to the study’s principal investigators. Their complete response follows:

    • The data appear to be publicly funded, so it is within your rights to press them further with your request. There may be an embargo period, as well as restricted access considerations, but I don’t see why you couldn’t review an anonymized dataset to check their work.

    • The times might be changing but not fast enough https://www.statnews.com/2017/04/04/clinical-trial-data-sharing/

      “Patients, meanwhile, appeared baffled at the squabbling scientists.

      “When I came here, I had no idea there was any controversy about sharing the data,” said Moses Taylor, who spoke on a patients’ panel Tuesday, and was also a participant in a National Institutes of Health-funded trial on blood pressure treatment. “I also did not realize that people that start these trials and carry them through, the only way they get recognized is publications and that publications determine their career.”

      Moses said that it’s up for the scientists in the room to figure out a way to make the data available. The patients agreed that the data should be shared: early, often, and responsibly.”

      • That story was from the recent competition held by the New England Journal of Medicine, followed by a two-day conference and webinar about data sharing. I attended many of the online sessions, and the disparity between patients (participants in clinical trials) and the trialists was striking. Patients just assumed the data was being shared – and that was part of their motivation (only part) for participating. The medical establishment was far more wary – for some legitimate reasons, but mostly in protection of their career paths. The impediments to data sharing are mostly human-created. In my view, they are mostly counterproductive, though there are some strong voices that believe clinical data must be kept out of public hands for a variety of reasons.

    • Dale:

      This post is the story. And, yeah, if they’re gonna send me their first email, I do think they should respond to my reply. They’re not obliged to, but they’d be doing their job better if they were to consider the possibility that there’s more to the story than they wanted to admit.

  3. Wansink’s work is not only sloppy by scientific standards but by its nature is a series of small-n trials in which a context is tweaked to see if that changes the behavior of the subjects. The results are thus highly context-sensitive, and from them he pulls lessons or morals which rest flimsily on these low-powered, noisy trials. Example: he tested stale popcorn at the movies and found that people will eat more stale popcorn when given large containers than small, and from that he draws the lesson that large containers lead you to eat more, even if what you’re eating absolutely sucks. This fits what we know from many sources and thus plays to our priors: the food industry tests not to find what people like the most but rather what people will eat more of, and that turns out to be something less than most flavorful, because they believe people feel satisfied faster when food is more flavorful.

    All of his work fits into this exact category: work that confirms what we already know but done in a reasonably clever manner. This means he’s working from a widely shared model. He may even be falsifying his data for all I know, because the model is that widely shared: thin people don’t put as much on their plates, don’t buy Big Gulps at 7/11, don’t start at the dessert end of a buffet. We could easily pretend to have a room where we cleared the plates more often and handed out fresh ones to see if that increased consumption – and I can imagine this being yes or no depending on the context: more chicken wings when there’s a pile of them near you, sure, but clear that main-course dinner plate from the table and most people simply wait for dessert (and say, “but I wasn’t done!”).

    Wansink’s books are really compendiums of sensible advice: use a smaller plate, put snacks away so they’re not in your face, don’t bring candy into the house and you won’t eat it, leave out a fruit bowl with fruit in it to eat more fruit. He dresses them up with a veneer of science, but they’re just illustrations or anecdotes for the homilies he’s really delivering. This bothers me less than stuff dressed up in p-values and significance testing but which isn’t significant, and has no significance if you cock your head a bit.

    • Jonathan:

      I discussed some of those points here. One of the problems is that Wansink and others (including government officials!) might well generalize from his apparent success with common-sense advice to non-common-sense issues. Wansink’s success using pseudo-science to support common sense then gives him the credibility to promote corn syrup or whatever else he happens to feel like promoting. In short, if Wansink were only promoting unremarkable common sense, it would be no big deal, but when he moves to muddier waters, he could well be leading people in wrong directions. Also, all the overstated effect sizes are a bunch of hype which could then lead to disappointment, etc.
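      To put a number on the “overstated effect sizes” point, here is a quick simulation (made-up numbers, not any of Wansink’s studies): a small true effect studied with a noisy, low-powered design, where the estimates that happen to reach statistical significance necessarily exaggerate the truth:

      ```python
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(2)

      # Made-up numbers: a true effect of 0.1 sd, 25 subjects per group.
      true_effect, n, sims = 0.1, 25, 10_000
      exaggerations = []
      for _ in range(sims):
          a = rng.normal(true_effect, 1.0, n)
          b = rng.normal(0.0, 1.0, n)
          if stats.ttest_ind(a, b).pvalue < 0.05 and a.mean() > b.mean():
              exaggerations.append((a.mean() - b.mean()) / true_effect)

      print(f"'significant' in {len(exaggerations)} of {sims} runs")
      print(f"average exaggeration when significant: {np.mean(exaggerations):.1f}x")
      ```

      In a design this noisy, an estimate has to be several times larger than the true effect just to clear the significance threshold, so the published, “significant” findings are overestimates more or less by construction.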

  4. Your situation with the sub-standard psychology lot and my situation with the sub-standard palaeontology lot, both remind me of a TV film about the Amazon jungle years ago, where someone fatally shot an agouti with a bow and arrow, but it still just stood there, at the water’s edge. Finally it just needed a sharp tap on the top of the head with the blunt edge of a machete.

    Many such institutions are zombies, still standing only by dint of inertia, years of caked paint and rust, and fear, often by everyone else, of change.

    Maybe a suitable coup de grâce would involve the firm attachment of shame to anyone endorsing or conniving with the problem institution, and also the offering of a comfortable, clearly adequate alternative… with the initial steps clearly marked out. And some really nice slogans.

  5. I wonder if there is some way to change the human research review process to require a statistician to review the design of all studies and experiments. With Wansink it takes about two seconds to decide the study is bad. Argue that bad statistical analysis can cause harm and hence should not be tolerated in academic research. Stop the study before it starts.

  6. Brittany Alexander: There aren’t enough statisticians to go around. Not to mention that many would not be interested in doing this, unless it paid well or unless they got teaching release or service credit or some such.

    • Agreed (which is not to say that Brittany’s suggestion is not a good one that, in an ideal world, would be implemented).

      Some additional complications I’ve encountered:

      1. I once read a paper in an open-access journal which asks reviewers to check a box as to whether or not the paper needs to be checked by a statistician (and publishes the reviewers’ responses online with the article). Reviewers of the paper in question checked the “does not need to be seen by a statistician” option. Yet my reading of the paper was that there was a statistical problem involving the combination of design and analysis of the experiment.

      2. Universities may have a “statistical consultation” program which employs people who got their statistical training in a psychology department and picked up misconceptions that they then apply in their consulting work. (However, my experience when pointing out such misconceptions is fairly good — they may be taken aback at first that what I say is contrary to what they were taught, but do make the effort to understand the problem.)

  7. I just sent Cornell this email:

    “I know you have been following this issue, and I thought you might be interested in new information posted today on PeerJ Preprints: https://peerj.com/preprints/3025/”

    On the one hand, I can see how continuing to focus on Wansink’s work might be seen as bullying, but then again what was the point in them releasing the pizza data if no one was going to look at it? And I still get contacted by journalists about this story. Wouldn’t it look bad if we posted a criticism of someone’s work and requested a data set and then didn’t take the time to look at the data once it got released?

    • Jordan:

      The whole “bullying” thing is interesting. Here’s a definition I found: “bully: use superior strength or influence to intimidate (someone), typically to force him or her to do what one wants.”

      The funny thing is, this definition is value neutral. For example, if the cops use their superior strength to intimidate someone to stop him from mugging someone, that’s “bullying” under the above definition. Or if the scientific community uses its superior influence to intimidate Wansink to stop him from misrepresenting his data, that’s “bullying” under the above definition.

      This has left me unsatisfied. Until I have a better definition of “bullying,” I feel like it’s hard to even discuss the topic.

      P.S. I think you and your colleagues have done a valuable service with this work, and I find Wansink’s behavior from beginning to end to be contrary to the principles of scholarship and scientific learning.

      • I know you aren’t on Twitter, but it looks like there is a conference on research integrity going on where someone from F1000Research is classifying scientific criticisms at PubPeer as bullying, and arguing that this bullying should be disciplined like we discipline other scientific misconduct:
        https://twitter.com/RetractionWatch/status/894578556231008260

        How about we first worry about disciplining people who are fabricating/falsifying results, and then we can get to those mean, vicious bullies who are pointing out that the numbers don’t add up.

        P.S. I’m not at the conference, nor do I see a streamed version of the talk, so this characterization of the slide could be a misrepresentation. However, I vehemently disagree with other points in the slide, such as removing anonymity, and the claim that researchers’ careers are being destroyed by some random comment on the internet. Look at Wansink, he’s still getting invited to conferences and we’ve been doing our very best to bully him. Maybe we just suck at bullying…

        • Jordan:

          I think it’s ok for journal reviews to be anonymous, but I think they should be made public. Every journal should just immediately publish every review they get. This could have the positive effect of putting journal editors on the spot, if they accept a crap paper that received negative reviews, just because of personal connections or a desire to get something newsworthy into the journal.

          Also, I know you’re joking but just let me clarify that I don’t think you or I are “bullying” Wansink at all. Rather, we’re pointing out errors in his published work. When people point out errors in my published work—even if they do so rudely—I thank them, as I appreciate the feedback and my goal is to work toward the truth.

        • Yes, I feel like we followed best practices with the Wansink case–we tried contacting the authors, only posted the first preprint after they stopped responding to us, and only started blogging about the case when the lab would not respond to the preprint.

          However, an outsider could easily view this as a classic example of bullying. For example, I’ve kept track of 60 links here: https://peerj.com/preprints/2748/#links, and I eventually gave up keeping track of them.

          I’m just trying to understand what bullying/harassment these people are constantly referring to. If it’s not our pizzagate investigation then what is it? Is it the 600 mentions of Amy Cuddy?

          I just want to know what it is. And so do my colleagues: https://twitter.com/jamesheathers/status/894636213960921088

          I do not support baseless online attacks, which I made clear in this post: https://medium.com/@OmnesRes/crap-spotted-at-biorxiv-15eecd58be6f

        • Jordan:

          At this point I’m kind of stunned that Wansink still has a job. Harvard kicked out Marc Hauser for a lot less, and Wansink’s embarrassing the hell out of Cornell. It did take Harvard a while to get rid of Hauser—they needed a formal investigation—but still. The cases seem pretty comparable, with the only difference being that Wansink left more evidence lying around that his papers were not reporting actual data summaries.

        • Yeah, during our investigation I kept thinking we had found the “smoking gun”.

          Nick found that Wansink had published the same paper in 2 different journals, and the same book chapter in 2 different books.
          –I guess Cornell didn’t find that bothersome, although a grad student would probably be expelled for misappropriating a couple sentences.

          Nick found the same results reported in two different papers, describing two different samples, which is mathematically impossible (a granularity check of the kind sketched at the end of this list is one way such impossibilities get flagged).
          –I guess Cornell buys Wansink’s explanation that one study built upon the other and just happened to get the exact same results.

          Nick found a bunch of different surveys that all have 770 responses.
          –Haven’t seen anyone able to explain this, but I’m open to hearing an explanation.
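          For anyone wondering how “mathematically impossible” gets established, here is a minimal sketch of a GRIM-style granularity check in the spirit of Brown and Heathers (the numbers below are made up, and this isn’t our actual code):

          ```python
          def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
              """Can a mean of n integer-valued responses, reported to
              `decimals` places, actually equal `mean`? The only attainable
              means are fractions k/n."""
              k = round(mean * n)  # nearest attainable numerator
              return round(k / n, decimals) == round(mean, decimals)

          # Made-up example: a mean of 3.44 is attainable with n = 25 (86/25),
          # but no k/30 rounds to 3.44, so that combination is impossible.
          print(grim_consistent(3.44, 25))  # True
          print(grim_consistent(3.44, 30))  # False
          ```

          When a paper reports a mean that no possible set of responses could produce given the stated sample size, no re-analysis or explanation can rescue it; the numbers simply cannot both be right.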

          Wansink is just like Trump. Every time you thought Trump’s run for president was finished he would just do something even more ridiculous to make you forget about what you had previously been upset about.

          At this point it’s unclear what would even be surprising in the case of Wansink. I think Nick Brown put it nicely here http://steamtraen.blogspot.com/2017/04/the-final-maybe-two-articles-from-food.html:

          “Is this interesting? Well, less than six months ago it would have been major news. But today, so much has changed that I don’t expect many people to want to read a story saying ‘Cornell professor blatantly recycled sole-authored empirical article’, just as you can’t get many people to click on ‘President of the United States says something really weird’. Even so, I think this is important. It shows, as did James Heathers’ post from a couple of weeks ago, that the same problems we’ve been finding in the output of the Cornell Food and Brand Lab go back more than 20 years, past the period when that lab was headquartered at UIUC (1997–2005), through its brief period at Penn (1995–1997), to Dr. Wansink’s time at Dartmouth.”

          The interesting thing is we haven’t even looked through every single paper by Wansink. If someone did a Stapel-style report I can only imagine how thick that report would be.
