Division of labor and a Pizzagate solution

[cat picture]

I firmly believe that the general principles of social science can improve our understanding of the world.

Today I want to talk about two principles—division of labor from economics, and roles from sociology—and their relevance to the Pizzagate scandal involving Brian Wansink, the Cornell University business school professor and self-described “world-renowned eating behavior expert for over 25 years” whose published papers have been revealed to have hundreds of errors.

It is natural to think of “division of labor” and “roles” as going together: different people have different skill sets and different opportunities so it makes sense that they play different roles; and, conversely, the job you do is in part a consequence of your role in society.

From another perspective, though, the two principles are in conflict, in that certain logical divisions of labor might not occur because people are too stuck in playing their roles. We’ll consider such a case here.

I was talking the other day with someone about the Pizzagate story, in particular the idea that the protagonist, Brian Wansink, is in a tough position:

1. From all reports, Wansink sounds like a nice guy who cares about improving public health and genuinely wants to do the right thing. He wants to do good research because research is a way to learn about the world and to ultimately help people to make better decisions. He also enjoys publicity, but there’s nothing wrong with that: by getting your ideas out there, you can help more people. Through hard work, Wansink has achieved a position of prominence at his university and in the world.

2. However, for the past several years people have been telling Wansink that his published papers are full of errors; indeed, they are disasters: complete failures that claim to be empirical demonstrations but do not even accurately convey the data used in their construction, let alone provide good evidence for their substantive claims.

3. Now put the two above items together. How can Wansink respond? So far he’s tried to address 2 while preserving all of 1: he’s acknowledged that his papers have errors and said that he plans to overhaul his workflow, but at the same time he has not expressed any change in his beliefs about any of the conclusions of his research. This is a difficult position to stand by, especially going forward, as questions about the quality of this work keep coming. Whether or not Wansink personally believes his claims, I can’t see why anyone else should take them seriously.

What, then, can Wansink do? I thought about it and realized that, from the standpoint of division of labor, all is clear.

Wansink has some talents and is in some ways well-situated:
– He can come up with ideas for experiments that other people find interesting.
– He’s an energetic guy with a full Rolodex: he can get lots of projects going and he can inspire people to work on them.
– He’s working on a topic that affects a lot of people.
– He’s a master of publicity: he really cares about his claims and is willing to put in the effort to tell the world about them.

On the other hand, he has some weaknesses:
– He runs experiments without seeming to be aware of what data he’s collected.
– He doesn’t understand key statistical ideas.
– He publishes lots and lots of papers with clear errors.
– He seems to have difficulty mapping specific criticisms to any acceptance of flaws in his scientific claims.

Putting these together, I came up with a solution!
– Wansink should be the idea guy: he should talk with people and come up with ideas for experiments.
– Someone else, with a clearer understanding of statistics and variation, should design the data collection with an eye to minimizing bias and variance of measurements (see the sketch after this list for why this matters).
– Someone else should supervise the data collection.
– Someone else should analyze the data.
– Someone else should write the research papers, which should be openly exploratory and speculative.
– Wansink should be involved in the interpretation of the research results and in publicity afterward.
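
To see why minimizing the variance of measurements matters so much, here’s a minimal simulation sketch. Every number in it is an assumption made purely for illustration (none comes from any actual study): a modest true effect, two equal-sized groups, and increasing amounts of measurement noise on top of ordinary between-person variation.

    import numpy as np

    rng = np.random.default_rng(0)

    def se_of_effect(measurement_sd, n_per_group=50, true_effect=0.3,
                     between_person_sd=1.0, n_sims=2000):
        # Simulate a two-group experiment many times and return the
        # spread (standard error) of the estimated treatment effect.
        # All parameter values here are illustrative assumptions.
        ests = []
        for _ in range(n_sims):
            control = rng.normal(0.0, between_person_sd, n_per_group)
            treated = rng.normal(true_effect, between_person_sd, n_per_group)
            # Measurement noise is added to every observation.
            y_t = treated + rng.normal(0.0, measurement_sd, n_per_group)
            y_c = control + rng.normal(0.0, measurement_sd, n_per_group)
            ests.append(y_t.mean() - y_c.mean())
        return float(np.std(ests))

    for sd in [0.0, 1.0, 2.0]:
        print(sd, round(se_of_effect(sd), 2))
    # roughly: 0.0 -> 0.20, 1.0 -> 0.28, 2.0 -> 0.45

With noise-free measurement the standard error is about 0.20; with measurement noise twice the between-person variation it climbs to about 0.45, swamping the assumed 0.3 effect. That’s the sort of thing a careful designer of data collection is there to prevent.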

I made the above list in recognition that Wansink does have a lot to offer. The mistake is in thinking he needs to do all the steps.

But this is where “division of labor” comes into conflict with “roles.” Wansink’s been placed in the role of scientist, or “eating behavior expert,” and scientists are supposed to design their data collection, analyze their data, and write up their findings.

The problem here is not just that Wansink doesn’t know how to collect high-quality data, analyze them appropriately, or accurately write up the results—it’s that he can’t even be trusted to supervise these tasks.

But this shouldn’t be a problem. There are lots of things I don’t know how to do—I just don’t do them! I do lots of survey research but I’ve never done any survey interviewing. Maybe I should learn how to do survey interviews but I haven’t done so yet.

But the “rules” seem to be that the professor should do, or at least supervise, data collection, analysis, and writing of peer-reviewed papers. Wansink can’t do this. He would be better employed, I think, as part of a team where he can make his unique contributions. Taking this step wouldn’t be easy: Wansink would have to give up a lot, in the sense of accepting limits on his expertise. So there are obstacles. But this seems like the logical endpoint.

P.S. Just to emphasize: This is not up to me. I’m not trying to tell Wansink or others what to do; I’m just offering my take on the situation.

69 thoughts on “Division of labor and a Pizzagate solution”

  1. Hi Andrew –

    Couldn’t agree with you more on this one. In some ways, I think this is where the corrosiveness of the single-author/first-author mentality that pervades the social sciences and other fields (e.g. public health) really comes through.

    Often, I feel like I can add more value to a project as a middle author by chiming in on one specific part of the methods or interpretation of results than if I had been the first author and been bogged down in the entire process from idea to analysis to writing. But there is such a high premium placed on being the idea-generator that the roles involved in measurement and the more mechanical pieces of the work become totally devalued.

    I think this is particularly true if you’re identified with one of these fields that fetishize first and (worst of all!) solo author work specifically, i.e. as a sociologist, epidemiologist, etc., then you are supposed to be a fountain of original project ideas above all else, even if it is the measurement, interpretation, etc. that tells us if those ideas have any merit.

    Not sure when/if that incentive structure will ever shift (and of course, having good ideas is essential and important!) to value the third author in proportion to their true contributions, rather than as a tiny fraction of the first author, if that makes sense?

    • Solo work is important for certain kinds of research and writing–especially in the humanities, where individual insight can go much farther than that of a group.

      I would much rather read Northrop Frye, for instance, than anything created by a “Frye team.”

      Of course, people in the humanities regularly employ assistants and sometimes co-write papers and books–but even so, the singular voice and perspective plays an essential role.

      But in the social sciences, the cult of the solo “expert” really does damage, for the reasons that Andrew points out. Few people are good at all aspects of the work; it makes more sense to divide the labor according to the knowledge and skills of the contributors.

      This would also involve turning publicity away from individual “experts” and toward the research itself. Overblown ideas, and their accompanying hype, would deflate to appropriate size. There’s no fun in an overblown idea if it doesn’t bring someone fame. It just gets embarrassing, as it should.

      • “This would also involve turning publicity away from individual “experts” and toward the research itself.”

        In an ideal world, publicity would indeed be about the research itself, not about individuals. (Sadly, human nature often seems to work in the opposite direction of the ideal world.)

      • I’m not sure the analogy holds: you would never rely on one literary critic for interpretation of a work or an author or a genre or an era—many an exalted name has been dethroned on grounds of scholarship, factual knowledge, and textual exegesis. Ellmann’s work on Joyce has not stood the test of team Joyce, if we take team Joyce to be subsequent scholars in the field building on each other’s work. The solitary genius usually pays a quotidian debt to his or her colleagues in the department of individual insight.

        Of course, such debts can come with selection bias: how many literary theorists think about psychology only through Freud or Lacan or their followers; how much literary theoretical relativism or theories of language simply ignored analytic philosophy or linguistics?

        • I see your point but think it may have a hint of a “propter hoc ergo cum hoc” fallacy (I made this term up, I think). The fallacious assumption is “Because of X, therefore in collaboration with X.” That is, of course literary theorists–and writers of literature, for that matter–build on the work of those who came before them. That does not mean that they and their predecessors form a team. Nor does it mean that a team approach would necessarily benefit their work.

          Nor is it necessarily true that one scholar’s or writer’s work becomes obsolete when others build upon or refute it. The overarching theory may break down, but the individual insights and phrases often stay. For example, some early-twentieth-century Russian literary critics, countering “socialist realist” dogma, asserted that Nikolai Gogol’s work had no social commentary whatsoever–that it was all style, structure, play, and performance. While their theory showed some weaknesses over time, their observations *about* his style, structure, play, and performance stayed keen and important. I read Boris Eikhenbaum’s essay on “The Overcoat” with admiration.

    • This misses the point – if my own observations are anything to go by. Here they are:

      1. Most people in the social sciences, medicine, etc. want to publish statistical work, but hate doing statistics and are not very good at it.

      2. So, as soon as they can, they stop doing statistics themselves. This also means they keep supervision to a minimum.

      3. Also, they are not very good at supervising others doing statistics, given that they are not very good at statistics.

      4. Who gets the grunt work of doing statistics? The grad student. He doesn’t like being the data slave, either, and would rather do the ideas part. But what’s he gonna do?

      5. Once he becomes a professor himself, first thing he’ll do is shift the data analysis onto his grad students.

      6. Etc., etc.

      The bottom line is that generating and discussing ideas is much easier for most people in academia than doing data analysis; at the same time, most other people value ideas highly, given that they can understand them. As a consequence, there is both too little supply of people who are good at data analysis and too little demand.

      • Unfortunately, this mentality results in analysts/statisticians being viewed as mere number crunchers, i.e. glorified calculators processing data in a context-free manner. In medicine and health, biostatisticians are often viewed as people who do work that is ancillary to the science, and not as scientists themselves. Here biostatisticians are seen as something analogous to IT support, not as the kind of scientists who generate their own ideas or who might potentially lead collaborative projects. As with IT guys, there is little expectation of reciprocity. For example, if a biostatistician helps an epidemiologist with his/her work, there is little expectation that the epidemiologist will help out on a statistician-led project in return.

        The problem with this mentality is two-fold. First, it creates promotion problems for biostatisticians, who are often hired in tenure-track positions. Without leading collaborative health-related research, biostatisticians often have a hard time meeting institutional tenure requirements. Second, and this is especially true with observational studies, the analysis often involves the creation of models that are bubbling over with contextual assumptions. Scientists outside of statistics are often not fully aware of the modeling (not just estimation) issues involved in analysis, and of the importance of getting the substantive, scientific elements of the model right (or at least sensible).

        I understand the general concept of division of labor here, but a lot of biostatisticians I know are advocating, for the kinds of reasons stated above, for a more integrated and less siloed approach to modeling, analysis, and science in general.

      • “The bottom line is that generating and discussing ideas is much easier for most people in academia than doing data analysis; at the same time, most other people value ideas highly, given that they can understand them.”

        I agree. So to connect with what Andrew says: Wansink’s strengths are in the easy stuff, where there is an oversupply of people willing and at least somewhat able to engage themselves.

        “As a consequence, there is both too little supply of people who are good at data analysis and too little demand.”

        There is an intellectual demand for people who are good at data analysis, but this has not carried over to a demand (or incentives) in the academic/scientific marketplace.

  2. – He runs experiments without seeming to be aware of what data he’s collected.
    – He doesn’t understand key statistical ideas.
    – He publishes lots and lots of papers with clear errors.
    – He seems to have difficulty mapping specific criticisms to any acceptance of flaws in his scientific claims.

    Shouldn’t peer review be catching this stuff? If I were someone who relied on the peer-review heuristic, I would be forced to throw out everything in those journals.
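
    For what it’s worth, many of the flagged problems are pure arithmetic that a reviewer (or a script) could check mechanically. Here’s a minimal sketch of one such check, the GRIM test (Brown & Heathers, 2016), the sort of consistency test critics have applied in cases like this: if n people give integer-valued responses, the underlying sum is an integer, so the true mean must be an integer divided by n, and many reported means are simply impossible. (The numbers below are illustrative, not from any particular paper.)

        import math

        def grim_consistent(reported_mean, n, decimals=2):
            # GRIM test: with n integer-valued responses the true mean is
            # k/n for some integer k. Check whether any such value rounds
            # to the reported mean.
            target = round(reported_mean, decimals)
            candidates = (math.floor(reported_mean * n), math.ceil(reported_mean * n))
            return any(round(k / n, decimals) == target for k in candidates)

        # A mean of 3.47 from n = 25 integer responses is impossible:
        # the nearest achievable means are 86/25 = 3.44 and 87/25 = 3.48.
        print(grim_consistent(3.47, 25))  # False -> flag for checking
        print(grim_consistent(3.48, 25))  # True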

    • Jack:

      Wow, TED’s really going all in on this one. I’m reminded of the poker maxim: fold or raise, never call. In this case it could be a good public-relations strategy, but it’s bad science.

    • Something is wrong with that TED interview. First of all, it is titled “Inside the Debate,” etc., but doesn’t present a debate at all.

      First, the interviewer invites Cuddy to reframe (i.e., utterly change) the hypothesis that served as the basis for the TED talk.

      Second, Cuddy invokes the concept of priors, but not convincingly: “Our a priori hypotheses about hormones were grounded in what we knew in 2009 based upon studies of humans and non-human primates about the relationships among testosterone, cortisol and power: individuals who possess higher status tend to have higher testosterone and lower cortisol,” etc.

      This is logically off; the observation that “individuals who possess higher status tend to have higher testosterone and lower cortisol” does not lead to the inference that adopting expansive postures will raise testosterone levels: a cross-sectional correlation between status and hormones says nothing about what happens when you intervene on posture. The disconnect is striking.

      Then she implies that the errors in the 2012 paper were products of the era–that less was known at the time about research methodology. This is only partly true; some of the flaws were identifiable as flaws at the time–and would have been even years earlier.

      And then the interviewer commiserates with her about those nasty and unfair critics, and she responds:

      “Science requires criticism of ideas — it is essential. Without it, science would not survive, and when you choose to become a scientist, you are accepting that your work will be criticized. Papers get rejected, findings are scrutinized, methods are challenged and ideas are disputed. Theoretical refinement, knowledge and new ideas spring from this kind of criticism. But there’s a difference between attacking scientific ideas and attacking scientists,” etc.

      The problem here–which neither the interviewer nor Cuddy acknowledges–is that the strongest and most important criticisms–right here on this blog, in the Chronicle of Higher Education, and in New York Magazine–have *not* been ad hominem. They have simply been sharp–and in support of scientific uncertainty and investigation.

      I give Cuddy credit for admitting, at this point, that the hormone hypothesis has little or no support. But overall this piece evades the problems with the 2012 study and the TED talk. If you tell your audience that you are going to offer a “life hack” and then, at the end, tell them to “share the science,” the very least you can do is come back and say that life hacks and science are two different things.

      • Diana:

        Yah, it’s a horrible interview. My favorite is this bit from Cuddy:

        I’ve heard from three different labs that have conducted research on ‘power posing’ but who said they feel they cannot submit the work to journals.

        They could publish these blockbuster results on Arxiv, no? Or maybe PPNAS? I’m sure PPNAS’s social psychology editor would give these submissions a fair hearing.

        • To be fair, this is a straw-man argument. She is referring to fear of politically oriented backlash, not to researchers’ inability to upload manuscripts to websites or to a dearth of outlets for such research.

        • Bean:

          If she’s referring to fear of politically oriented backlash, that’s just ridiculous. A high-quality paper on power pose or whatever would get lots of respect. I just doubt that the research she’s discussing is of high quality. Of course I have no idea, but my default belief when people refuse to share their data or results is that the work is of low quality.

        • Andrew, Diana, I think she could have done much worse. She could have gone the Fiske route. At least she admitted a few things. Pity she didn’t give any credit to the lead author, Carney, who seems to have been completely forgotten by Cuddy, and who was the only one in this entire story who had the integrity to set the record straight.

          Putting aside the fiasco aspect of this story, I find it very strange that a relatively senior author (Cuddy) takes *all* the credit for a paper that has a relatively junior colleague as first author.

  3. Andrew:

    I am wondering whether you can opine on a somewhat related phenomenon. Suppose a smart graduate student is exposed to the current literature on some topic of interest, and identifies what he or she considers a lacuna therein (and who is better at doing that than a smart young person looking at a problem with fresh eyes?). She/he goes to his/her advisor and says, “I think there might be a key independent variable that is not accounted for in the literature.” The advisor says, “You might be right, but the data will be very hard to come by, esp. without a big budget, etc.” Moreover, no journal will publish the student’s observation unless he or she is able to test it empirically. So, he or she moves on to something else (I get the sense that this is driving much of the seemingly borderline trivial work that is being done in economics), and the student’s insights are lost. So, why not a journal devoted to “important questions that someone should research if they can get the data”?

  4. > scientists are supposed to design their data collection, analyze their data, and write up their findings.

    I strongly disagree with this. It is false in many fields, medicine among them, where some people are experts on the clinical aspects and collaborate with data-analysis experts, etc. It’s also possible to think of this as a collaboration between theorists and empiricists.

    I wonder if you think the “supposed to” attitude quoted is destructive to the progress of scientific understanding.

  5. Another thought: Even if labor is divided, someone in the research group should (ideally) understand all aspects of the project and take responsibility for anything that goes wrong. Otherwise, it’s all too easy for one person to pass an error off onto another (“I’m just the idea guy. I don’t deal with regression discontinuity analysis” or “Our data collector is no longer involved with the project”). Someone should be there to supervise the project as a whole. It just doesn’t have to be the publicity person (and probably shouldn’t be).

  6. This sorting of roles occurs naturally in many businesses – certainly in consulting, where organizational flexibility is a strength. To be successful, consulting organizations channel individuals into roles that play to their strengths and minimize the impact of their weaknesses. Often the description of a particular job is defined around the person in it, to fit that person’s array of talents and contributions, rather than the other way around. Teams are built based on individual strengths, and team members feel confident in their roles because they just have to be themselves.
    If, in academia, each person needs to rise as a star along a similar path and be the lead author, that is certainly a problem.
    This people problem seems complementary to the work-quality issue – lack of agreement on the importance of strong peer review, or even on a definition of what constitutes adequate peer review.
    Large consulting organizations in my field, pension actuarial consulting, had weak peer review for many years. Then the multi-million-dollar lawsuits and judgments convinced the companies that the extra cost of strong peer review was worth it.

    • Chris:

      This can be a big problem in academia. For example, statistics departments are full of theorem-provers who gamely try to do the best they can with applied statistics and computing, which is what most of their students can really use; but there’s very little room in statistics academia for brilliant people such as Hadley Wickham, who can transform the field through programming.

  7. Off-topic, but related in form to earlier topics like the White Death: Here’s an alarming but largely unexplained trend that statistical experts might want to take a crack at unraveling: traffic deaths in the United States were 14% higher in 2016 than in 2014. That’s roughly 5,000 more people killed on the roads.

    Some of it is more traffic miles being driven, but there seems to be a wide range of possibilities to account for the rest of this unexpected change. I offer a couple of possibilities, and my commenters offer dozens of others, but what the real answers are, I don’t know.

    http://www.unz.com/isteve/is-the-traffic-death-spike-like-the-homicide-increase-also-a-ferguson-effect/

    • A “Ferguson Effect”? For real dude?! And, of course, somehow we loop around to blame Obama…Jesus man, have you no shame? I have noticed, as has almost everyone I’ve talked to about this, that there is a very noticeable up-tick in people staring into their lap while driving. If I had to guess, they were being distracted by, I don’t know, maybe THEIR PHONES – but who knows? Maybe if they are alt-right white folks, the better hypothesis now is that they are having hallucinations about BLM protestors, or are being tormented by images of Barack HUSSEIN Obama, or are engrossed in visualizing their new Alt-Fact-Trumplandia history book projects in which there were hundreds (or was it thousands?) of Muslims cheering in NYC when the towers fell…But what the real answers are, I don’t know.

  8. I’m not the first one to point out that you display a strange mixture of affability and cantankerousness, Andrew. It’s sometimes hard to tell when you’re being sincere, when you’re being ironic, and when you’re being outright sarcastic. (As you often point out, that’s hard to tell in writing at the best of times.)

    Surely, you can’t expect a tenured professor to just concede that he’s no good at “designing data collection with an eye to minimizing bias and variance of measurements, supervising data collection, analyzing data, and writing research papers” but that he’d be happy to help out in other ways, like “talking with people and coming up with ideas for experiments” and “interpreting research results” and getting “publicity afterward”.

    The question is: can you imagine a career developing from the PhD level within the confines of this skill set? And to reach the level Wansink is at today?

    My view is increasingly (and sadly) that many of the errors we are talking about under the rubric of “the replication and criticism crisis” could not be committed by people who were sincerely trying to figure out how the world works. Rather, they are committed by people who are earnestly trying to make an academic career (and help their students make their careers) and who (perhaps sincerely, I’ll grant) assume that if they get past the reviewers they’ve passed some godlike filter of approval.

    I don’t think science can be done in this way. The first responsibility of a research project is to satisfy the curiosity of the researcher. I don’t think people quite get how incurious the majority of scientists are these days. (Again, let me add a “sadly” in parentheses.)

    • Thomas:

      I don’t know that I should expect Wansink to concede that he’s no good at designing data collection with an eye to minimizing bias and variance of measurements, supervising data collection, analyzing data, and writing research papers, but that he’d be happy to help out in other ways, like talking with people to come up with ideas for experiments, interpreting research results, and doing publicity afterward. But I think that’s what he should do.

      And he’s tenured. Not being any good at designing data collection etc. is not a firing offense. The guy’s only 56 years old; why not spend the next 10 or 15 years of his career making the best use of his talents?

      P.S. My post above is 100% sincere. It may sound like a joke because I’m offering unconventional advice, but I’m being completely sincere.

    • Thomas:

      I am aware of an academic who made a similar kind of move: excellent at A but really bad at B (which apparently was widely known), they maneuvered somewhat ruthlessly to keep going until they had a serious heart attack.

      After the heart attack, they resigned from the positions that required B skills and kept going seemingly successfully.

      Though I certainly agree with “not be committed by people who were sincerely trying to figure out how the world works” – I believe it’s committed largely by folks being careerists.

    • “The first responsibility of a research project is to satisfy the curiosity of the researcher.”

      I’m not entirely convinced of this. When the question studied is one that has real world implications, I would say that the first responsibilities of the researcher are to be intellectually honest and to consider possible consequences of what they do and publish.

      “I don’t think people quite get how incurious the majority of scientists are these days. (Again, let me add a “sadly” in parentheses.)”

      I’m not convinced that lack of curiosity is the problem. I’d say it’s more a penchant for some kind of “closure” or definitiveness, rather than keeping open to uncertainty.

      • “The first responsibility of a research project is to satisfy the curiosity of the researcher.”

        To satisfy curiosity is only one reason to do experiments. Sidman (Tactics of Scientific Research, 1960) gives several (yes, to test hypotheses is also one…but only one of several).

    • Martha, Thomas, etc.:

      I have no reason to suspect that Wansink is lacking in curiosity. He just has a poor research method. You could try to learn about the world by reading tea leaves or tarot cards or running ESP experiments and have genuine curiosity about what you’ll see next. But there’s still a lack of correspondence between your data and the underlying reality that you think you’re studying.

      For Wansink and researchers like him, I think that one aspect of Thomas’s comment is accurate: Wansink etc. are playing by the rules, and they assume that if they do well, they’re progressing. Kind of like a student who focuses on getting good grades, on the theory that if you get good grades, you’ll learn the material as a byproduct. The trouble is that Wansink inadvertently (I think) happened onto a way to hack the grading system, as it were, and go straight to the good grades and the honor roll and the academic prizes, without ever actually learning the material.

      At this point, though, it wouldn’t take Wansink much effort to realize that the “emperor” of his research method has no clothes, so, again, why not make the most of the next 10 or 15 years of his career?

      What really saddens me is those ovulation-and-clothing researchers who’ve struggled so hard to avoid the realization that their research methods are no good for learning about their object of study. These people are so young! They have decades of work life ahead of them. It’s really too bad, and I get really angry at some of the older researchers in their profession who encourage that sort of clueless attitude.

      • I agree that the problems arise when a researcher behaves too much “like a student who focuses on getting good grades”. I guess we shouldn’t be surprised if a lot of academics actually were such students. Maybe we need to change the grading systems so that only truly curious people actually feel they’re succeeding in university.

        In any case, I was trying not to say that Wansink specifically isn’t curious. (I don’t know.) I’m trying to offer a general explanation for all this dubious research we’re seeing.

        • Thomas:

          I’m only guessing on this one, but . . . based on what he’s written about his career, Wansink doesn’t strike me as a star student. Rather, he strikes me as an OK student who felt like he discovered the trick to acing exams. And he’s so excited about this that he wants to share the trick with others. In his post that got things started, Wansink emphasized the following steps to success: hard work, collaboration, and never giving up. The one thing he forgot to mention was sloppiness. Sloppy design, data collection, and analysis are key parts of Wansink’s formula: with more careful experiments he’d never be able to spew out statistically significant results at such a rapid clip.

        • Yes, this is pretty much the way I see it. But there’s also the underlying/surrounding culture of “That’s The Way We’ve Always Done It”. That fits into your student analogy as grading on the basis of exams that come from the textbook question bank — little or nothing that prepares for good research practice.

        • A few years ago I began to suspect that “the curious, deep-thinking type [of student/researcher] is in danger of being crowded out by the ambitious, hard-working type.” I think care in the design and execution of research follows naturally from genuine curiosity—i.e., from it mattering to the researcher whether the results mean anything.

          Mistakes and even incompetence are still possible. But sloppiness (which is something other than inability and error) undermines the intrinsic pleasure of inquiry. If you know you didn’t collect and analyze your data carefully, your results won’t matter to you. But they may still seem publishable. And you may still submit them. And if you get away with it then, yes, you appear to have discovered a “trick”.

          The people who spew this stuff out are getting jobs that more careful, curious researchers aren’t getting. The only solution, I think, is to get rid of “publish or perish” and start just hiring smart, curious people on the basis of direct assessments of their “minds” (i.e., read their work closely, talk to them, etc.)

  9. Turns out this is already the case. I spoke with a Dyson School colleague of Wansink’s here at Cornell, and he told me that much of the data analysis is actually done by David Just, who, despite being the first author of two of the four papers that were mentioned in Wansink’s blog, is virtually never mentioned in the “pizzagate” context. He is the economist in the lab, and the “data guy”, yet he seems to think that because Wansink generates all the publicity, he will come out of this fiasco unscathed.

      • I think Green’s main involvement with LaCour was as the not-doing-very-much-actual-writing-of-the-articles senior author. It must be embarrassing when your grad student lets you down in such cases (cf. Warren Buffett’s comment about being able to see who’s been skinny-dipping when the tide goes out), but there are plenty of other senior authors who will have thought “There but for the grace of” when they read about it. And Green seems to have done the right thing immediately when LaCour’s faking came to light.

      • Anon and Rahul: My understanding is that Donald Green lost $200,000 in Carnegie grant funding for his research as a result of his involvement in the LaCour scandal. David Just does not have a degree in economics but in “agricultural and resource economics.”

    • Anon:

      You write, “Turns out this is already the case.” But I didn’t just suggest that Wansink delegate these tasks (designing the data collection, supervising the data collection, analyzing the data, writing it up); I also suggested that these tasks should be done by someone who is competent! Whoever has been doing this for Wansink’s group—whether it be David Just or anyone else—is massively incompetent, as can be seen from the “12 slices of pizza” thing, the carrot data that don’t add up, the 150+ errors, etc. Division of labor only works if somebody, somewhere, knows what to do!

    • Jordan:

      The irony is that all this happened because of Wansink’s blog post from last year, which I take to have been a legitimately public-spirited impulse.

      Of course, any good intentions on Wansink’s part do not resolve the problems of a stunningly sloppy workflow. Really, I’ve never seen anything like this concentration of errors since Richard Tol dined alone; indeed, I suspect that Diederik Stapel had more correct data points by accident than Wansink did on purpose. So it’s good that Nick etc. are pointing this out. It’s just kinda amazing that, had Wansink kept his mouth shut, he might still be treated as a serious scientist by institutions ranging from Cornell University to NPR to the U.S. government.

        • Jordan:

          This came up in an earlier thread. “Cornell News” has no association with Cornell University; it appears to be some sort of bot, maybe some attempt to grab clicks from suckers. (I have no idea how the spam economy works, despite being related to this guy.)

        • Jordan:

          I followed the link, where Jesse Singal recounts the latest Pizzagate revelations, and then he (Singal) writes, “How such similar results could emerge from two distinct studies, and two distinct samples, remains unexplained. . . . this is entirely bizarre. It’s a really, really hard thing to explain. . . .”

          The explanation seems simple to me: it looks like Wansink’s workflow is a mess and has been so for a long time. He has no control over his data or his analyses or what gets published. With different people gathering data and taking credit for data at different times, and other people performing the analyses, and still others submitting the papers to journals, it seems completely plausible that the same data could be labeled in different ways, that the same work could be published in different places with no citations, etc. Does not seem like a really hard thing to explain at all!

          I mean, Bruno Frey is Dr. Moriarty compared to this guy.

        • I agree this is likely an explanation for the current problems, but this data duplication occurred all the way back in 2003, and Wansink is sole author on the paper. I would like to give him the benefit of the doubt, but I don’t know what to think about this case.

        • Jordan:

          Even in 2003 it’s possible he had other people writing and submitting his papers without getting credit, or he was disorganized and had no idea what he was writing, or he liked the idea of getting multiple papers on his C.V. and he thought this was the easiest way to do it. Again, I don’t really see it as being such a mystery. Sure, we’ll never know exactly what happened, but at this point I think we have a pretty good picture of the guy’s M.O.

        • A very revealing article was written by The Cornell Daily Sun:
          http://cornellsun.com/2017/03/08/cornell-food-lab-conducting-internal-review-after-report-alleging-150-data-inconsistencies/

          It appears Wansink issued a statement on Tuesday. I can’t find the statement online, so I have to assume it was internal to Cornell.

          Wansink acknowledges that he did indeed reuse the same paragraphs multiple times. So it appears he was aware of what he was doing, and it appears that he himself wrote the papers.

        • (This is meant to be in reply to Jordan’s latest comment, but I guess the blog software has decided these replies are nested too far down now.)

          That article from the Cornell Sun links to an updated statement from Dr. Wansink in which he acknowledges having duplicated text in several cases. It also contains what appears to be an extraordinary claim about the near-duplicated result tables:

          “…a master’s thesis was intentionally expanded upon through a second study which offered more data that affirmed its findings with the same language, more participants and the same results.”

          Dr. Wansink appears to be claiming that two experiments led to tables of results in which 39 out of 45 results were identical to either one (17 cases out of 18) or two (22 cases out of 27) decimal places.
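
          To put a number on how extraordinary that claim is, here’s a back-of-envelope calculation. Purely as an assumption for illustration (the statement gives no basis for any particular figure), suppose each of the 45 results from a genuinely independent second sample had as much as a 10% chance of matching the first study to the reported precision. The chance of 39 or more matches would then be on the order of 10^-33:

              from math import comb

              # Binomial tail P(X >= 39) with n = 45 results and an
              # (assumed, deliberately generous) 10% chance that an
              # independent replication matches a given result to the
              # reported decimal places.
              p, n, k = 0.10, 45, 39
              tail = sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))
              print(f"{tail:.1e}")  # ~4e-33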
