Clarke’s Law: Any sufficiently crappy research is indistinguishable from fraud (Pizzagate edition)

[cat picture]

This recent Pizzagate post by Nick Brown reminds me of our discussion of Clarke’s Law last year.

P.S. I watched a couple more episodes of Game of Thrones on the plane the other day. It was pretty good! And so I continue to think that watching GoT is more valuable than writing error-ridden papers such as “Lower Buffet Prices Lead to Less Taste Satisfaction.”

Indeed, I think that if people spent more time watching Game of Thrones and less time chopping up their data and publishing their statistically significant noise in the Journal of Sensory Studies, PPNAS, etc., the world would be a better place.

So, if you’re working in a lab and your boss asks you to take a failed study and get it published four times as if it were a success, my advice to you: Spend more time on Facebook, Twitter, Game of Thrones, Starbucks, spinning class. Most of us will never remember what we read or posted on Twitter or Facebook yesterday. But if you publish four papers with 150 errors, people will remember that forever.

P.P.S. From Clifford Anderson-Bergman, here’s another person who should’ve spent less time in the lab and more time watching TV. Key quote: “In court documents, prosecutors noted that Kinion set out to win prestige and advance his career rather than enriching himself.”

96 thoughts on “Clarke’s Law: Any sufficiently crappy research is indistinguishable from fraud (Pizzagate edition)”

  1. I would think that every political scientist would like Game of Thrones, at least the parts that aren’t all sword and sorcery. But I’m not sure it measures up to Deadwood (a whole show about state building!).

  2. “spinning class”

    At first, I read “spinning” as a verb (as in “spinning yarn”), not an adjective (“spinning wheel”). I want to invent the notion of “spinning class”. I don’t yet know what it is, but I think a lot of students who aren’t cutting class are “spinning” it.

    • Just FYI for non-Americans (or whatever): “spinning class” almost always refers to a kind of stationary-bicycle exercise class, not creating yarn from wool (though that is a legitimate use of “spinning class”; it’s just way less common).

    • Jordan Anaya: The “hehe” at the end of your posting makes me doubt the claim (quoted by Martha (Smith) above) that Anaya, Brown, and van der Zee are not driven by animus toward Wansink.

        • Jordan Anaya: Over the years, I have several times reported duplicate publication. The usual response is no response at all, especially if one of the authors is A Big Name. I’ve also gotten “So we see, but we don’t care” responses. Once the response was a corrigendum; the editor originally mentioned retraction to me, but then, after contacting the authors, decided on a corrigendum. FWIW.

        • Forgot to mention the time a major psychology journal was advertising that if you attended their conference and had your paper published in the conference proceedings, you could also publish it in another journal. When I pointed out that this is seen as unethical and violates COPE guidelines, I was told that if they didn’t allow this, no one would want to publish their work in the conference proceedings.

        • I think these days one can just point out the problems on PubMed Commons and they will be connected to the paper forever, regardless of the journal’s or the authors’ cooperation.

  3. This paper (and almost everything else touched by Wansink) seems to have issues, but I think Nick is taking it too far when he tries to argue that the paper implies “20% of the US soldiers who saw combat in World War 2 were women”. Even Nick himself acknowledges that differences in survival could account for the gender ratio, and he dismisses it very offhandedly, even though it is extremely plausible. (The age distribution… not so much.) Let’s not overplay our hand here and end up just as bad as Wansink.

    • Actually I think my tongue-in-cheek claim stands up quite well.

      Some women were killed by enemy action in WW2. According to this Wikipedia page https://en.wikipedia.org/wiki/American_women_in_World_War_II “more than 460 — some sources say the figure is closer to 543 — lost their lives as a result of the war, including 16 from enemy fire”. (I presume the rest were killed in various forms of accident. I read once that more US troops were killed in accidents during the five months leading up to Desert Storm than in the action itself.) Compare those 16 (or even the 543, if you want) with the more than 400,000 soldiers killed in action (http://www.nationalww2museum.org/learn/education/for-students/ww2-history/ww2-by-the-numbers/us-military.html).

      First, the official number of US soldiers who saw “combat” — defined as having basic training, picking up a rifle, and going out to face the enemy — in WW2, and who were women, is essentially *zero*. Zero women undertook basic training, unless they disguised themselves as men (including at the medical examination). Women were not deployed near the front lines. They were not present on warships. Maybe on a couple of occasions a field hospital got overrun or something, but compared to the hundreds of thousands of male soldiers killed, this is not going to show up in any sort of random survey.

      Second, women only made up around 3% of military personnel (source: http://www.nationalww2museum.org/learn/education/for-students/ww2-history/ww2-by-the-numbers/us-military.html). So even if the survey had asked people only for the length of their military service, and nothing at all about combat, the postwar(*) survival rate of women would have to have been 7 times as high as for the men in order for them to be part of this survey in these proportions. But these veterans are 75, not 105. Yes, women live longer than men. Do we observe 7 times as many women as men in this age group? We don’t; it’s more like 1.2x more at ages 65-74 and 1.5x more at ages 75-84 (source: https://www.ncbi.nlm.nih.gov/pubmed/15692280). And again, the actual number of women who could claim to have been involved in “repeated, heavy combat” is actually close to zero. But OK, maybe I should have said that these figures imply that 15% of the US soldiers who saw heavy combat in WW2 were women. I’m not sure if that makes a whole lot of difference.
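
      For anyone who wants to check that arithmetic, here is a rough back-of-the-envelope sketch (my own toy calculation, using only the figures quoted above, not anything from the papers themselves):

      ```python
      # Back-of-the-envelope check: if women were ~3% of US WWII military personnel,
      # how much better would their survival to survey time need to be for them to
      # make up ~20% of respondents (all else being equal)?
      women_share_1945 = 0.03   # approximate share of personnel who were women
      survey_share = 0.20       # share of survey respondents implied to be women

      # Crude ratio of shares
      print(survey_share / women_share_1945)   # ~6.7, i.e. roughly "7 times as high"

      # Same question on the odds scale (treating differential survival as the only filter)
      odds_1945 = women_share_1945 / (1 - women_share_1945)
      odds_survey = survey_share / (1 - survey_share)
      print(odds_survey / odds_1945)           # ~8.1

      # Either way, an order of magnitude above the observed ~1.2x-1.5x female
      # survival advantage at ages 65-84 cited above.
      ```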

      Let’s be realistic here. That figure of 20% is obviously wrong, and wildly so. As a minimum, the people doing the study ought to have been amazed by such a number, if they had been paying attention. I was only half-joking when I asked why there were no movies about women in combat. Does anyone really think that if there was even one case, someone wouldn’t have wanted to tell the story?

      (*) The same source https://www.ncbi.nlm.nih.gov/pubmed/15692280 shows that only just over 1% of soldiers were killed on duty. So almost all attrition would have to be postwar.

    • uhhugsg: I pointed out another possibility to Nick Brown (over private e-mail), which is that 80% being men does not imply that 20% are women, if there is a fair number of veterans for whom sex is given as “missing” in the data file. For example, there could be 80% men, 2% women, and 18% missing. When computing percentages like this, some people base the percentage on nonmissing cases, others base it on all cases. SAS allows you to choose which option you want.
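
      To make the point concrete, here is a toy illustration with made-up numbers (not the survey data; pandas is used only for demonstration) of how the reported percentage depends on whether missing cases go into the denominator:

      ```python
      # Hypothetical data: 100 respondents, sex not recorded for 18 of them.
      import numpy as np
      import pandas as pd

      sex = pd.Series(["M"] * 80 + ["F"] * 2 + [np.nan] * 18)

      # Percentages based on ALL cases (missing counted in the denominator):
      # M 80%, F 2%, missing 18%, so "80% men" but only 2% women.
      print(sex.value_counts(normalize=True, dropna=False) * 100)

      # Percentages based on NON-MISSING cases only: M ~97.6%, F ~2.4%.
      print(sex.value_counts(normalize=True) * 100)
      ```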

      But given Wansink’s (or his data analysts’) general sloppiness, my guess is that this 20% that are not men is being caused by several different factors, not just one.

      Sometimes the reason for data anomalies is not apparent until the data are examined. I remember once finding a strange variable distribution in a dataset collected by other people. The dataset contained variables from many different schools. Some of the schools had the variable coded in an ascending manner, others had it in a descending manner, and no one involved in collecting the dataset had noticed. I knew something was wrong as soon as I saw the distribution, but the reason why didn’t occur to me until I saw the data.

      • Smut Clyde: Thank you. I haven’t seen the other papers. I was speaking from forty years’ experience looking at other people’s datasets. Missing values for a variable can be handled in a variety of ways.

        • That whole Veteran Survey sub-oeuvre is worth a look in its entirety. The number of survey forms sent out varies from 500 (with 250 in a second mail-out that included older veterans), to 5000, to 5000 + 2500. I am inclined to shrug off the order-of-magnitude variation as a mistake by some underling who assembled the paper, followed by a lack of proofreading.

          What alarmed me more is that details of the veterans to target (names, ages, addresses) were apparently obtained from census reports. This was a greater level of specificity in a census report than I am used to.

        • Smut Clyde: I think the name, address, and age must have come from the U of I Veteran Survey itself. The CEP article states that the respondent address from the U of I Veteran Survey was used to obtain the median income level and median home value for the neighborhood of each respondent using the address zip code and census tract number. These were used as proxies for wealth and income for each respondent. There is a useful list of variables at the end of the CEP article.

        • But see Footnote 2: “2. To solicit respondents, a random sample of veteran addresses was obtained from census data.”

        • Smut Clyde: Thank you. You’re right! But that is very strange because by USA law, census records about individuals become available 72 years after the census (so the 1940 census records become available in 2012). They must have gotten a special arrangement.

        • This would involve access to contemporary data. We are to believe that if you fill in a US census form, you open yourself to the risk that the Census Dept. will pass your personal details on to any numptie who claims to be running a survey.

          Is this usual? Would there be a public record of the application for the details? Does anyone else have access to the same information?

        • Smut Clyde: A little sleuthing reveals that “qualified researchers with approved projects” can get access to some census microdata.

        • But that’s demographic information, raw material for statistical analysis. Wansink is claiming that the Census Department sold him a targeted mailing list.

        • Smut Clyde: The only Wansink veterans survey article that I have been able to access thus far is the CEP (2012) one by Bogan, Just, and Wansink. That article does not state that the US Census Bureau sold him a targeted mailing list. Which article does? I will be able to get it through one of the UIUC libraries when I go over there.

        • That is my paraphrase. Wansink purportedly had a list of names, current addresses and phone numbers for a specified target group (war veterans over a specified age), obtained in some way from Census Department data.

        • Smut Clyde: Something that puzzles me: Who paid for this “University of Illinois Veterans Survey”? I don’t see a funding agency listed on any of the articles that I have read thus far. It must have been expensive to print a 16-page survey, mail it, and contribute a small amount of money for each one returned. And I assume that the person who sorted, coded, and entered the data from the surveys was paid (Marjan van Ittersum, probably the wife of Koert van Ittersum, one of the co-authors). But there is nothing on Wansink’s CV indicating grant funding for this survey. Possibly collection of these data was done under the auspices of one of the research institutes on the UIUC (University of Illinois at Urbana-Champaign) campus but thus far I have not seen any indication of that.

        • Smut Clyde: I did a little sleuthing into the 2000 USA Census. There are two forms, a short form and a long form. About 16% of the population received a long form. The long form contains not only name, address, telephone number, age, etc., but also a three-part question (question #20) about military service. The first part asks current or past service in the USA Armed Forces, National Guard, or Reserves. The second part asks when the person served, with 9 options, one of which is World War II (September 1940 to July 1947). The third part asks the length of active-duty military service (LT 2 years; GE 2 years).

          In the 2000 Census, the median age of World War II veterans was 76.7 years. Women made up 4.2% of the World War II veterans. (Of course, women live longer than men do.)

          I think the last combat of World War II was in 1945.

          The 2000 Census was to be filled out with information as of April 1, 2000.

        • Smut Clyde: Forgot to mention that the USA entered World War II (declared war) on December 8, 1941, after the bombing of Pearl Harbor the day before. (Apologies if I am telling you things that you already know, but your use of the word “numptie” suggests to me that you are not an American.)

      • Smut Clyde: I wonder if I could track down this University of Illinois survey of veterans. I am at this very moment working on a computer at the University of Illinois.

        • In the 2012 report, 750 veterans were targeted, resulting in 467 responses:
          http://bogan.dyson.cornell.edu/doc/research/COEP.pdf
          500 for WWII only, and then a second wave of 250 including younger veterans.

          In the 2009 report, 5000 WWII veterans were targeted, of which 2376 forms were deliverable, resulting in 493 responses:
          https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2474303

          In the 2008 Heroism report and 2016 Frontiers paper there were 7500 veterans, all WWII.

        • I wouldn’t hold your breath.

          Does anyone else find it unusual that the survey took place in 2000, but the first paper wasn’t published until 2008, and there are now 9 papers/book chapters from the lab based on this data?

        • My curiosity is compounded by the fact that Kniffin, Wansink and Shimizu (2014) refer us to Wansink (2002) for details:
          “the 2000 University of Illinois Veterans Survey (Bogan, Just, & Wansink, 2013; Wansink, 2002; Wansink, Payne, & van Ittersum, 2008; Wansink, van Ittersum, & Werle, 2009)”

          In fact Wansink (2002) says nothing about the Veterans Survey, and merely laments the information that was lost and the opportunities that were missed when an exciting behaviour-modification program came to an untimely end:
          “Efforts to fully implement a wide-scale program to promote organ meat consumption were interrupted by the end of World War II …”
          http://foodpsychology.cornell.edu/sites/default/files/unmanaged_files/Changing%20Eating%20Habits%20Article.pdf

          So in 2002, Wansink was wistfully speculating about the need for retrospective dietary research, which he later discovered that he had conducted in 2000.

          The 2002 paper is also noteworthy for its tendentious attempt to retcon Kurt Lewin into some kind of proto-Behaviorist, but that’s by the way.

        • I’m also getting stuck in some interesting citation loops for a different set of papers. I think I might have to turn it into a blog post and run it by my collaborators.

        • More maddening discrepancies:
          In the 2008 and 2016 accounts, 7500 survey packs were sent out; 4322 were deliverable and 3188 were not deliverable due to the addressee being dead.
          Of 4322 delivered surveys, 1123 were returned, 23.6%, although Wansink et al. argue that this is really equivalent to a 43% return rate, by assuming that about half the recipients of delivered surveys were also dead, or incapacitated. Whatever.
          17 of those surveys were excluded. Of the remainder, “120 of them had experienced a light combat and 235 of them had experienced heavy combat” (2016), or 526 “had experienced heavy and frequent combat” (2008).

          So here are Kniffin, Wansink and Shimizu (2014), who had 931 returned surveys:
          “As reported by Wansink et al. (2008), 66.7% of deliverable questionnaires were completed by the original sample.”

          Do these people even read their own feckin’ papers?!

          Kniffin et al. go on to explain that they excluded 394 responses for being the wrong age or gender — e.g. gender not marked; widows had filled out their late husbands’ forms; age not marked. Why they would want to ask “age” is anyone’s guess, as the targeting of the surveys was predicated on knowing the recipients’ birthdays in advance. Leaving 537 surveys.

        • If you want to help out you can email some of the journal editors about these problems.

          I have already emailed Frontiers in Psychology regarding “How Traumatic Violence Permanently Changes Shopping Behavior” at this address:
          [email protected]

          I have not received a response.

          I’ve sent out quite a few emails about other papers (I’m planning on blogging about how they respond), but I don’t think I’ve emailed any editors about any of the other papers that are based on this veterans survey.

        • I think contacting the journals that published the papers is important — they need to be held accountable for publishing sloppy research (as do the organizations that gave grants to Wansink after there was evidence that his work had a lot of mistakes).

        • Martha:

          >>as do the organizations that gave grants to Wansink after there was evidence
          >>that his work had a lot of mistakes.

          It’s not clear where that starts. I collected a few links that I have found to people critiquing Wansink’s work before December 2016. But should anyone in a funding agency have noticed these? Is it part of due diligence to Google every possible bit of scuttlebutt about someone who is applying for funding?

          July 2010: “Food for Thought” http://www.artnews.com/2010/07/01/food-for-thought/

          April 2014: “Debunking a shoddy research study” http://www.lhup.edu/~dsimanek/pseudo/cartoon_eyes.htm

          April 2014: “Does Banning Chocolate Milk in School Really Backfire?!?” http://school-bites.com/chocolate-milk-bans-in-schools-study/

          July 2014: “Moms, “Food Fears” and the Power of the Internet” http://www.thelunchtray.com/moms-food-fears-internet-wansink/

          November 2015: “Fast food, fast publication” http://www.win-vector.com/blog/2015/11/fast-food-fast-publication/

          February 2016: “The strange story of my accepted but then unpublished commentary on a Disney-sponsored study” (more of a comment on publication ethics than actual research): http://www.foodpolitics.com/2016/02/the-strange-story-of-my-accepted-but-then-unpublished-commentary-on-a-disney-sponsored-study/

          May 2016: “Everydata” (a book): https://books.google.fr/books?id=XNJCDQAAQBAJ&pg=PT166&lpg=PT166&dq=2000+university+of+illinois+veterans+survey&source=bl&ots=wUNogYlfVN&sig=mpRXiXrlyNMfKPoHJSiZBdkpREo&hl=en&sa=X&ved=0ahUKEwj0-eXgqtjSAhUDPRQKHTyGCnQQ6AEIMTAE#v=onepage&q=2000%20university%20of%20illinois%20veterans%20survey&f=false

          There is also “Science first, communication second. A prequel to the Food and Brand Lab episode”, posted in February 2017 but referring to an exchange between the author and Wansink’s lab in 2012: http://persuasivemark.blogspot.fr/2017/02/science-first-communication-second.html

        • Nick: “But should anyone in a funding agency have noticed these? Is it part of due diligence to Google every possible bit of scuttlebutt about someone who is applying for funding?”

          Good point — I was erroneously assuming that the c. 2010 mistakes were more widely known than, on reflection, I now realize would be reasonable to assume.

          So I guess what is needed is a push to change the culture in the direction of making it the done thing to list critiques of one’s work in any vita, list of publications, list of references, etc. Admittedly not an easy task. But presumably journals, professional societies, and granting agencies could help change the culture by promoting such practices and by requiring such referencing of critiques in grant applications and journal submissions. (Yes, I am an idealist. It will never happen 100%, but the effort could only be for the better.)

        • Martha Smith: Re funding agencies — There is an article on Retraction Watch (Alison McCook, February 24, 2017) describing how 17 researchers sanctioned for misconduct by the US Office of Research Integrity (ORI) later received more than $100 million in NIH grant funding.

        • Contradictions between papers are hard to pin down as the responsibility of an individual journal. The editors can argue that “Yes, the numbers provided in this paper contradict the numbers that the same author cited in that other paper in that other journal, but who is to say that our paper is the incorrect one?” The damage to the academic literature is distributed rather than localised.

        • Jordan Anaya: Do we know that the survey itself took place in 2000? The official date of the 2000 USA Census was April 1, 2000. It must have taken the Census Bureau some time to put together all the information for other people to be able to access it. Also, it must have taken some time for the veteran surveys to be returned, sorted, coded, and entered into a database. Wansink moved from the U of I in 2005; he had a baby in 2005 and another one in 2007; and he was on leave from 2007 to 2009 working for the USDA. Not to mention the fact that he seems to have his fingers in a lot of pies. Everyone who has been in research for any length of time has backburnered projects. And then there is the time needed for manuscript submission, review, revision, and publication, including publication lag. This doesn’t seem unreasonable to me at all.

          I note that Koert van Ittersum (co-author) works at U Groningen (sp?), where Nick Brown is now a graduate student.

        • I’m not quite up to speed on your investigation of this veteran survey and it’s been a little while since I looked at the specific issues here. For my current investigation I’ve been carefully reading “Mindless Eating” and noticed there is some information about the survey there.

          He writes:
          “Billy was a World War II Navy cook, and we corresponded when our Lab was conducting a large-scale survey of how the war had changed the food habits of those involved in it.”(4)

          Reference 4 goes to:
          “In 2001, the Lab did a large-scale quantitative survey on how World War II influenced food habits of Americans who were involved in the war. Billy was one of the veterans who completed the survey, and he included this handwritten story. More on our World War II study can be found in Chapter 8.”

          He also writes:
          “So why did some veterans of the South Pacific learn to love Chinese food and others hated it—and still hate it 50 years later? We surveyed 603 World War II veterans from the United States and focused on the 261 who had served with the Army, Navy, or Marine Corps in the South Pacific. During their tour, they would have eaten a number of Chinese-like cuisines. We asked them how often they ate Chinese food and how much they liked it 50 years after the war. We also asked them other questions about their experiences and attitudes.
          Forty-six percent of our Pacific veterans enjoyed Chinese food and still ate it with some frequency. But we could find no other characteristics they had in common. Before the war, some had lived in big cities, some on farms. Some had grown up with plenty of food, others had worried about food most of their childhood. Some had graduated from college, others had never seen a ninth-grade classroom. What was the missing link that connected them?
          As we later discovered, the answer didn’t lie with the people who liked Chinese food. It emerged only when we analyzed the data about those soldiers who grew to hate Chinese food.
          The 31 percent of the Pacific veterans who hated Chinese food were also diverse in terms of where they came from and who they became. Almost all, however, shared one important characteristic. They had experienced frequent and heavy close-quarter combat in the South Pacific. As a result, the local foods they ate there brought up anxious and discomforting feelings—even 50 years later.
          In contrast, when we went back to the profiles of those who liked Chinese food, we didn’t find any Marines who’d been at Iwo Jima or any infantry soldiers at Guadalcanal. What we found were mechanics, clerks, engineers, and truck drivers—enlisted men who did not experience the war from the front line. Although their wartime experience was a sacrifice, they didn’t come home with terrible associations that tainted the taste of food, seemingly forever.”(11)

          Reference 11 goes to:
          “See Brian Wansink, Koert van Ittersum, and Carolina Werle, “How Combat Influences Unfamiliar Food Preferences: Do Marines Eat Japanese Food?”, under review.”

        • Jordan Anaya, Smut Clyde: I’m mostly responding to Smut Clyde’s questions about the 2000 US Census. I don’t work at the University of Illinois now, but I used to, so I have a lot of contacts there. On Monday I am going to see what I can track down about the “University of Illinois Veteran(s) Survey.”

        • Reference 4 goes to:
          “In 2001, the Lab did a large-scale quantitative survey on how World War II influenced food habits of Americans who were involved in the war.”

          He also writes:
          “So why did some veterans of the South Pacific learn to love Chinese food and others hated it—and still hate it 50 years later? We surveyed 603 World War II veterans from the United States and focused on the 261 who had served with the Army, Navy, or Marine Corps in the South Pacific…”
          Reference 11 goes to:

          Ref. 11 was eventually published as Wansink, van Ittersum & Werle (2009), “How negative experiences shape long-term food preferences”. It refers the reader back to Wansink (2006), “Mindless Eating”, for supporting details.

          “In the year 2000, each veteran was sent a survey, a cover letter, and a business reply return envelope (see Wansink, Payne, & van Ittersum, 2008). … Of 2376 surveys that were deliverable, 493 veterans personally responded (20.7%).”
          (No breakdown into Pacific/European theatres.)

          He really does seem to be making it up as he goes along. Is it that hard to look up your own published papers to check what you wrote last time?

          To sum up, either 7500 forms were sent out, of which 4322 were delivered and 1123 returned (23.6%); or 5000 were sent out, 2376 delivered, with 493 personal responses.

        • One possibility is that a second wave of 2500 surveys was posted out without the age restriction — as is suggested by the 2012 description of the survey that talks about 750 surveys, in two waves, only the first 500 targeted at WWII veterans, with 467 responses (about 2/3 being WWII). If that is correct, then analyses that specify 7500 surveys but treat all responses as WWII (such as the 2008 and 2016 reports) — that is, studies that forgot to filter out the 1/3 of veterans from later wars — are garbage.

        • Smut:

          This all seems consistent with my current impression which is that the Wansink lab is a messy treehouse with a big bag on the floor overflowing with slips of paper: When anyone does a study, they write down the data and throw the numbers into the bag. Then, when anyone decides to write a paper, they reach into the bag and grab some data. The only rule is that Wansink’s name goes on every paper.

        • Smut Clyde: 5000 surveys or 7500 surveys or whatever…. This project, if really conducted, must have cost a chunk of change. Printing or photocopying a 16-page survey plus a cover letter and a return envelope, plus postage and return postage for the survey, plus the cost of the envelopes, plus the salary to pay someone to code the surveys and enter them into a database, and so on. I just don’t understand how this could have been done without some kind of funding. And yet nothing is mentioned that I have seen.

        • The discussion of the Food and Brand Lab’s output hasn’t looked at ecological validity much so far, but here goes anyway: How much “local” food did American soldiers deployed to the Pacific theatre (or anywhere else, for that matter) in WW2 actually eat? I would have thought that most of the time soldiers lived off GI rations (i.e., standard “American” food).

          It seems to me that this research makes a basic assumption (“Deployed to the Pacific” = “Ate enough Chinese food to uniquely associate it with trauma”), the evidence for which is not clear. I can certainly understand how one might make a case that people who had been taken prisoner by the Japanese might be less keen to order boiled rice after their liberation, but I don’t think I’ve seen data about the experience of PoWs anywhere in this body of work.

          I’m having a hard time imagining why, once you have chased the last Japanese soldiers off some island and headed to a local food stand (if you can find one still standing after the heavy combat and years of Japanese occupation) to unwind, you would then find yourself traumatised by what you ate there (but not by the contents of your ration pack).

          I’m also slightly concerned by the way in which “a number of Chinese-like cuisines” implicitly becomes “Chinese food as served in Chinese restaurants in the post-war United States”. This seems to imply a very simplified view of “Chinese food”. (We learned from a 2015 Mother Jones interview with Dr. Wansink that although his wife is from Taiwan, he does not like Chinese food.)

          For what it’s worth, my experience of taking Chinese people (from China or Hong Kong) to Chinese restaurants in Western countries on a couple of occasions is that they look at the menu and then look at each other in a “What the heck is this?” kind of way when they see dishes like General Tso’s Chicken (apparently invented in New York in 1972, according to Wikipedia). Next they speak to the server in Cantonese or Mandarin, and after a few minutes, all kinds of “undocumented” dishes start to arrive. So any connection between what local food the US troops who liberated Guadalcanal or Guam may have eaten and what they would get at their local Shanghai Garden many years after the war ended might be rather tenuous.

        • Then, when anyone decides to write a paper, they reach into the bag and grab some data.

          Judging from Nick’s observations — values with last digits that are distributed more like made-up numbers than actual calculations, and the 17 of 18 results that replicated exactly across different samples of subjects — the data are optional.

        • Re Jordan’s rickrolls etc. link: When I see a citation for a specific fact that is to a book (no page or section number), I conclude from that alone that the author has no idea what citing a reference (or indeed, science) is all about. The devil is in the details — and the essence of reasoning/evidence is in the details! Wansink just doesn’t get it; it’s no use pointing out mistakes — he just doesn’t get the concept. Sad.

        • Martha: Sad indeed. Might as well just cite the internet or something. It is clear that facts are just a nuisance to this lab, and until recently they were able to ignore them. I wonder if they are aware of what they are doing or if they’ve done this so long that they actually believe what they are selling.

          Clyde: Yeah, I figured there would be some circular citations with how often he cites his book, which mostly just cites himself (incorrectly).

          Martha and Clyde: Citing a book also has another issue. Let’s say a scientist in your field wants to see what you’re talking about. Do they now have to go buy the book? I doubt the university library will have a copy on hand.

        • Martha:

          >>When I see a citation for a specific fact that is
          >>to a book (no page or section number), I conclude from
          >>that alone that the author has no idea what citing
          >>a reference (or indeed, science) is all about.

          I have dealt with copy editors who asked for page numbers to be *removed* from citations that do not directly quote text from the reference. That is, they interpret the rule that “If you quote text in a citation, you must provide a page number” to also imply “If you do not quote text in a citation, you must not provide a page number”.

          Also, when citing one’s forthcoming book, one might not yet know what page numbers to use.

          But of course, for a scholarly work such as a journal article to cite a mass-market book other than in the most general part of the introduction (and certainly to do so as the only citation of relevant data) is not an especially good practice.

          Jordan:

          >>Martha and Clyde: Citing a book also has another issue.
          >>Let’s say a scientist in your field wants to see what you’re
          >>talking about. Do they now have to go buy the book? I doubt
          >>the university library will have a copy on hand.

          My N=2 experience has been that university libraries do have copies of mass-market books by researchers. But as someone who is a long way from his university library, I sympathise with the general problem of getting hold of books. That said, maybe we have become somewhat spoiled by online journal subscriptions (and/or Sci-Hub): these days, an article is easy to get hold of, whereas a book reference could mean extensive browsing through the Google Books preview and hoping the relevant pages are there. Thirty years ago, articles and books were equally difficult to get hold of. Relative progress in one area will always highlight lack of progress in another. :-)

        • Nick: “I have dealt with copy editors who asked for page numbers to be *removed* from citations that do not directly quote text from the reference. That is, they interpret the rule that ‘If you quote text in a citation, you must provide a page number’ to also imply ‘If you do not quote text in a citation, you must not provide a page number’.”

          Aargh! One more type of copy editor sabotage.

          One example of “refer to the book only” that I will probably never forget: A biology paper that said something like “We used linear regression to obtain this result,” and gave a reference to Draper and Smith’s textbook on regression. (This also displays the lack of understanding that makes the garden of forking paths such a huge problem.)

        • I’m a quantitative psychologist but I read a lot of social psychology research articles. Bad references are very very very common in social psychology. Bad, in the sense of the work not saying what the author of the article claims it said. I think authors often rely on their memories, or on their office mates’ memories. (“Hey Joe, know of a good reference for XXX?”)

          Most university libraries in the USA will have a copy of a book written by a scientist but intended for a more general audience, or they can obtain it for you via Interlibrary Loan. Such books are useful for undergraduates doing term papers and so on. Sometimes Google Books or Amazon.com will allow you to see enough of a book that you can get the page numbers that you need for a reference without having to buy the book.

        • Smut Clyde: I picked up the Wansink and Wansink (2013) JOURNAL OF RELIGION AND HEALTH article today. A note says that the study of World War II veterans was personally funded. So now we know. 7500 surveys. That’s quite a lot of money. Suppose each survey cost $5 for printing, postage, materials, etc. That would be $37,500. Wow.

          Mitsuru Shimizu and Koert van Ittersum did the data analyses. Shimizu is the one who tangled with Tim Smits back in 2012.

        • Someone mentioned the unrealistic ages of the World War II veterans. The article that has n = 931 has an age range of 73 to 91 with a mean of 77.8. Subjects less than 18 years old were omitted.

        • Smut Clyde: My efforts today to locate the University of Illinois Veteran(s) Survey got me “not in our database,” “too old for us to have retained records,” “contact the PI,” and the like. In another comment, I mentioned that I have discovered that Wansink funded the survey personally, so there is probably no chance of obtaining a copy of the dataset from anyone but him.

        • Someone mentioned the unrealistic ages of the World War II veterans. The article that has n = 931 has an age range of 73 to 91 with a mean of 77.8.

          In the N=931 paper, Kniffin, Wansink and Shimizu go on to give a std.dev of 2.38 for the age (Table 2). That sounds reasonable.
          The age problem arose with another paper that cited similar means, but std.devs about four times as large, which Nick found could only be achieved by an extremely bimodal distribution with most subjects aged 73 and a small minority of 105-year-olds.

          My efforts today to locate the University of Illinois Veteran(s) Survey got me “not in our database,”

          So Wansink is naming it after the University — implying that it was conducted under their aegis, and claiming their imprimatur — when it would be more accurately called the Brian Wansink Veterans Survey.

        • Smut Clyde: Presumably Wansink had permission from someone at the University of Illinois at Urbana-Champaign to name it that. But who knows?

          Has anyone asked Wansink why there is a discrepancy in the numbers across articles?

        • Carol: A lot of people have been posting on PubPeer about these articles, and I haven’t seen an author’s response (I’m pretty sure the authors get emailed when a comment is posted).

          If you think you can get Cornell to reply to emails feel free to email them.

        • One possible interpretation of the numbers provided by Kniffin et al. is that they received 1525 responses, then “we discarded the responses of 29 respondents who did not indicate their age or gender, 239 who were age 17 or younger in 1945 since they would not have completed high school before the end of World War II, and 326 respondents who were women (e.g., widows of the questionnaire’s intended recipients)” — leaving the 931 cases analysed in the paper.

          This interpretation would fit with other versions of the survey in which some of the forms were sent out to veterans in general (not just WWII), and with the 20% of female war veterans in the paper analysed by Nick (if the authors in that case forgot to exclude the surveys filled in posthumously by widows).

          It has the minor problem of finding 400 more returned forms than were recorded in other versions of the project.
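
          The bookkeeping behind that reading is easy to check; here is a minimal sketch (the 1525 figure is the implied total under this interpretation, not a number reported in any of the papers):

          ```python
          # Working backwards from the 931 analysed cases in Kniffin et al. (2014).
          excluded_no_age_or_gender = 29
          excluded_too_young = 239   # age 17 or younger in 1945
          excluded_women = 326       # e.g. widows of the intended recipients
          analysed = 931

          implied_returns = analysed + excluded_no_age_or_gender + excluded_too_young + excluded_women
          print(implied_returns)         # 1525

          # Compare with the 1123 returned surveys reported in the 2008/2016 accounts:
          print(implied_returns - 1123)  # ~400 extra returned forms to account for
          ```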

        • A Google search for “University of Illinois Veterans Survey” (with the quotes) returns exactly 7 hits, one of which is to this page (!) with the rest being articles from the Food and Brand Lab. Of course, there might be other published articles or chapters using this dataset, but one would normally expect the official name of the survey to be used in those works.

          Dr. Wansink mentions something about funding in his 2006 book, Mindless Eating:

          “Some labs, like ours, have a policy of not working directly for food companies. This eliminates conflicts of interest, and enables us to immediately publish our results in scientific journals and to share them with health professionals, science writers, and consumers.” [Aside: Dr. Wansink’s CV mentions several grants from food companies in the 2010-2013 time frame, so apparently this policy may have changed since these words were written.] “But because all labs need money to buy food, pay graduate students, and keep the lights on, this also means we rely on grants and gifts. We’ve had pieces of projects funded by consumer organizations and by grants from the Illinois attorney general, National Institutes of Health, National Science Foundation, U.S. Department of Agriculture, Council for Agricultural Research, and the National Soybean Research Center. In most years this has worked well and has provided freedom and a sense that good things were happening. In other years, I’ve had to cover the deficit out of my own pocket. We do the research we think is most urgent and interesting, and then we try to find a way to pay for it.”

          $37,500 (or even a fraction of that amount) seems like quite a deficit for an associate professor — which Dr. Wansink’s CV says he was from 1997-2001 — to cover out of his own pocket. Perhaps it appeared especially urgent and interesting to collect those data, although once the responses were received there was apparently no great rush to obtain a return on the investment, if the first article was indeed not published until 2008.

        • A lot of people have been posting on PubPeer about these articles, and I haven’t seen an author’s response (I’m pretty sure the authors get emailed when a comment is posted).

          Wansink did respond to one query, explaining that a false declaration of Authors’ Contributions was the fault of some anonymous “person doing the formatting and submitting … We have contacted the journal asking to rerun the analyses along with publishing an erratum. At that time we will also make this change.”
          https://pubpeer.com/publications/92B836EDBA3F705300E46467F6E4F5#fb116407

          Perhaps the authors are all too busy to check details like “who contributed what”, so they have a secretary for that. Anyway, Wansink was following PubPeer, though he may have given up by now.

        • if the first article was indeed not published until 2008.

          At least one was already under editorial review in time to be cited in “Mindless Eating” (2006), even if it did not finally appear until 2009, in time to cite “Mindless Eating”.

        • Nick: The article in THE LEADERSHIP QUARTERLY (2008, 19, 547-555) doesn’t use the name “University of Illinois Veterans Survey” although it seems to be based on the same data (7500 surveys mailed, 3188 undeliverable, etc.).
          The only hint of an association with the U of I is the acknowledgement of the help of two U of I graduate students.

        • Jordan Anaya: Oh, I think that it is quite unlikely that anyone at Cornell would answer my inquiries. On the other hand, nothing ventured, nothing gained.

        • The article in THE LEADERSHIP QUARTERLY (2008, 19, 547-555) doesn’t use the name “University of Illinois Veterans Survey”

          I had noticed that too — the implied University affiliation was a later acquisition.

        • >>At least one was at least under editorial review in time to be
          >>cited in “Mindless Eating” (2006), even if it did not finally appear
          >>in 2009, in time to cite “Mindless Eating”.

          Ah yes (I’m reading “Mindless Eating” but hadn’t got that far):

          “See Brian Wansink, Koert van Ittersum, and Carolina Werle, “How Combat Influences Unfamiliar Food Preferences: Do Marines Eat Japanese Food?”, under review.”

          So the title changed a bit. And, as documented elsewhere in this discussion, so did the sample size.

          There is also a certain amount of what one might call “artistic license” in the way the results are presented in the book.

          The Wansink, van Ittersum, and Werle (2009) article says (p. 751): “Combat experience and adventurousness explained 6.1% (p < .01) of the variance of Pacific veterans’ attitudes towards Chinese food”.

          The book presents this result as follows (emphasis added):
          "The 31 percent of the Pacific veterans who hated Chinese food [this was 29.2% in the article] were also diverse in terms of where they came from and who they became. *Almost all*, however, shared one important characteristic. They had experienced frequent and heavy close-quarter combat in the South Pacific. As a result, the local foods they ate there brought up anxious and discomforting feelings—even 50 years later.
          In contrast, when we went back to the profiles of those who liked Chinese food, we didn’t find *any* Marines who’d been at Iwo Jima or any infantry soldiers at Guadalcanal. What we found were mechanics, clerks, engineers, and truck drivers—enlisted men who did not experience the war from the front line. Although their wartime experience was a sacrifice, they didn’t come home with terrible associations that tainted the taste of food, seemingly forever"

          Quite how such a huge contrast could only explain 6.1% of the variance is not clear. Mean ratings of Chinese food were 5.37 in the low combat group and 4.22 in the high combat group, with "disliking" being a rating of 1-3. Clearly, quite a few people in the high combat group must have liked Chinese food. We can calculate a rough value from the average SD of the two groups, knowing that the sample size was 493, and the high/low combat split was at the midpoint (6.1 out of 9, which incidentally means that a lot of the people in the "low combat" group saw a fair amount of action for clerks and mechanics), so there were about 246 people in each group. To get an F of 8.439 from those reported means implies a pooled SD of 4.39, which is a little awkward since the largest possible SD for a 1-9 scale is 4.00 (plus perhaps .01 for the N-1 in the denominator of the SD formula).

          Also, pp. 120-122 of Mindless Eating tell the story of Billy, a cook in the US Navy. Footnote 4 on Billy's story leads to this: "In 2001 [sic], the Lab did a large-scale quantitative survey on how World War II influenced food habits of Americans who were involved in the war". That's interesting for a couple of reasons: the date (2001 versus 2000), and the apparently specific statement that the survey was about how the war affected food habits. One is used to research articles omitting to mention that many variables were measured and then not reported, but if this study had collected other data that were later published under the "University of Illinois Veterans Survey" banner, one would perhaps expect this mention to say "… affected food habits and a host of other things".

        • Aargh. Ignore most of the above, I shouldn’t do these sorts of analyses at 1am. I was missing something. The SDs are reported in the 2011 book chapter on the “veterans and Asian food” study that incorporates pretty much all of the 2009 article, and I can see that I made a faulty assumption about the sample sizes. There are still problems in that chapter (e.g., reporting a p of .041 with a pair of stars that the key says means “p < .01"), and claiming to have performed a one-tailed test for an ANOVA is a little irregular (even if it was a 2×2 and so they could have done a t test), but basically my previous comment is mostly wrong. Apologies.

  4. I’ve read a lot of WWII history—and I don’t recall ever reading about U.S. women in combat. Perhaps there were one or two times in which field hospitals were overrun and women picked up guns to help defend the site.

    That isn’t to say that I haven’t read about or met women who were in combat in WWII. The Red Army had a significant number of women in combat—even so, Wikipedia says that women were only about 3% of the Red Army. I had a distant cousin who got her share of Germans—but she was an Italian partisan—not U.S. military.

    Bob

  5. I just read this article on Wansink in The Chronicle of Higher Education:

    http://www.chronicle.com/article/Spoiled-Science/239529

    The author wants to discuss the scandal with Wansink, so Wansink suggests they meet at a McDonald’s. Then we get this delicious nugget: “He was drinking a medium-sized diet Coke, which he refilled twice during our interview”. Two refills on a medium? Are there no limits to this guy’s lack of ethics?

    On a slightly more serious note, it seems like a lot of his research concludes that smaller portion sizes lead to less consumption, but this strategy appears to have failed him here (2 refills!). If he had gotten one supersize diet Coke, would he have drunk less than 3 mediums? Or would he still have refilled twice? This seems like the type of question he studies, so I wonder if this is an actual case of a researcher following recommendations based on his own p-hacked results, to his own detriment!

    • Unfortunately I’ve become so involved in this investigation that I’ve started reading Dr. Wansink’s books. In “Slim by Design” Wansink acknowledges that smaller packages (medium-sized Cokes) cause 70% of people to eat less, but 30% to eat more. As you said, if someone wants to drink 2.5 mediums, they will end up drinking 3 mediums instead.

      Andrew: Previously you called the popcorn study “barbaric” since the popcorn was two weeks old. In his book Wansink says someone described the popcorn as “rancid Styrofoam”. I still don’t know how that got past the IRB.

    • I presume that the justification would be that *diet* soft drinks don’t count. Apparently the subtle cues about consumption are overridden by cognitive input about the calorific value. Or something.

  6. Btw the books are far better than the show, and I wouldn’t call myself a fantasy fan. The only problem is how infuriating it is when you’ve read all the books and are waiting for GRR Martin to write the next one.

  7. I think I’m losing my mind, I can’t take it anymore.

    In Mindless Eating he says we eat more from a large bowl. But if you have a bunch of small bowls you will eat more from those than you would from a couple large bowls. But if you have a large ziploc bag you will eat more from that than if you had 10 small bags. Someone please make it stop.

  8. Quote from last link: “In addition to his prison term, which will begin on 26 January, Kinion will have to pay $3,317,893 in compensation to the US government.” The investigating body is the Department of Justice. On the one hand, I applaud the effort to uphold the name of science. On the other hand, I can’t help but note that this punishment is much more severe than what happened to the people who allegedly caused the world’s economy to fall to pieces (and then we bailed them out).

  9. Naive question from an outsider. We have auditors for areas like finance and IT, why don’t universities have Research and Experiment auditors? From what I read, there appears to be a significant quality problem in experimental research, most notably in the social sciences. So why not address the quality control problem with appropriately trained auditors? It works in other fields, I don’t see why it wouldn’t help here as well.

    • Nor do I see why it wouldn’t help here as well, but making it happen is an as-yet insurmountable challenge.#

      I and others have suggested it a number of times on this blog.

      # Largely it’s not perceived to be in anyone’s interest or capability – those capable of taking this forward likely see only the downsides in it – e.g. funding agencies, university deans and chairs, elected governments, un-elected governments, the Cochrane Collaboration? [at least they never used to audit their groups’ meta-analyses publicly], etc.

      Hmm, reminds me of my high school essay project on the establishment of a Central Bank in Canada. (Originally it was set up by the party in power, who noticed the opposition was getting some traction from the public, and the original governor was chosen and told to do nothing.)

  10. “But if you publish four papers with 150 errors, people will remember that forever.”

    I’ve encountered a number of papers with obvious errors or missing data, but got the impression that no one really cares.

    Sometimes you can even find a problematic paper that has critical “letters to the editor” attached. Then there is just some formal “answer letter” from the authors that doesn’t really address any of the criticisms.
