How to think about research, and research criticism, and research criticism criticism, and research criticism criticism criticism?

Some people pointed me to this article, “Issues with data and analyses: Errors, underlying themes, and potential solutions,” by Andrew Brown, Kathryn Kaiser, and David Allison. They discuss “why focusing on errors [in science] is important,” “underlying themes of errors and their contributing factors,” “the prevalence and consequences of errors,” and “how to improve conditions and quality,” and I like much of what they write. I also appreciate the efforts that Allison and his colleagues have made to shine a spotlight on scientific errors in nutrition research, and I share his frustration when researchers refuse to admit errors in their published work; see for example here and here.

But there are a couple things in their paper that bother me.

First, they criticize Jordan Anaya, a prominent critic of Brian “pizzagate” Wansink, in what seems to be an unfair way. Brown, Kaiser, and Allison write:

The recent case of the criticisms inveighed against a prominent researcher’s work (82) offers some stark examples of individuals going beyond commenting on the work itself to criticizing the person in extreme terms (e.g., ref. 83).

Reference 83 is this:

Anaya J (2017) The Donald Trump of food research. Medium.com. Available at https://medium.com/@OmnesRes/the-donald-trump-of-food-research-49e2bc7daa41. Accessed September 21, 2017.

Sure, referring to Wansink as the “Donald Trump of food research” might be taken to be harsh. But if you read the post, I don’t think it’s accurate to say that Anaya is “criticizing the person in extreme terms.” First, I don’t think that analogizing someone to Trump is, in itself, extreme. Second, Anaya is talking facts. He indeed has good reasons for comparing Wansink to Trump. (“Actually, the parallels with Trump are striking. Just as Trump has the best words and huge ideas, Wansink has ‘cool data’ that is ‘tremendously proprietary’. Trump’s inauguration was the most viewed in history period, and Wansink doesn’t p-hack, he performs ‘deep data dives’. . . . Trump doesn’t let facts or data get in his way, and neither does Wansink. When Plan A fails, he just moves on to Plan B, Plan C…”).

You might call this hyperbole, and you might call it rude, but I don’t see it as “criticizing the person in extreme terms”; I think of it as criticizing Wansink’s public actions and public statements in negative terms.

Again, I don’t see Anaya’s statements as “going beyond commenting on the work”; rather, I see them as a vivid way of commenting on the work, and associated publicity statements issued by Wansink, very directly.

One reason that the brief statement in the article bothered me is that it’s easy to isolate someone like Anaya and say something like: We’re the reasonable, even-keeled critics; don’t associate us with the bomb-throwers. But I don’t think that’s right at all. Anaya and his colleagues put in huge amounts of effort to reveal a long and consistent pattern of misrepresentation of data and research methods by a prominent researcher, someone who’d received millions of dollars in government grants, received funding from major corporations, held a government post, appeared frequently on television, and was considered an Ivy League expert. And then Anaya writes a post with a not-so-farfetched analogy to a politician, and all of a sudden this is considered a “stark example” of extreme criticism. I don’t see it. I think we need people such as Anaya who care enough to track down the truth.

Here’s some further commentary on the Brown, Kaiser, and Allison article, by Darren Dahly. What Dahly writes seems basically reasonable, except for the part that calls them “disingenuous.” I hate when people call other people disingenuous. Calling someone disingenuous is like calling them a liar. I think it would be better for Dahly to have just said he thinks their interpretation of Anaya’s blog post is wrong. Anyway, I agree with Dahly on most of what he writes. In particular I agree with him that the phrase “trial by blog” is ridiculous. A blog is a very open way of providing information and allowing comment. When Anaya or I or anyone else posted on Wansink, anyone—including Wansink!—was free to respond in comments. And, for that matter, when Wansink blogged, anyone was free to comment there too (until he took that blog down). In contrast, closed journals and the elite news media (the preferred mode of communication of many practitioners of junk science) rarely allow open discussion. “Trial by blog” is, very simply, a slogan that makes blogging sound bad, even though blogging is far more open-ended than the other forms of communication that are available to us.

In their article, Brown, Kaiser, and Allison write, “Postpublication discussion platforms such as PubPeer, PubMed Commons, and journal comment sections have led to useful conversations that deepen readers’ understanding of papers by bringing to the fore important disagreements in the field.” Sure—but this is highly misleading. Why not mention blogs in this list? Blogs have led to lots of useful conversations that deepen readers’ understanding of papers by bringing to the fore important disagreements in the field. And the best thing about blogs is that they are not part of institutional structures.

They also write, “Professional decorum and due process are minimum requirements for a functional peer review system.” But peer review does not always follow these rules; see for example this story.

In short, the good things that can be done in official journals such as PNAS can also be done in blogs; also, the bad things that can be done in blogs can also be done in journals. I think it’s good to have multiple channels of communication and I think it’s misleading to associate due process with official journals and to associate abuses with informal channels of communication such as blogs. In the case of Wansink, I’d say the official journals largely did a terrible job, as did Cornell University, whereas bloggers were pretty much uniformly open, fair, and accurate.

Finally, I think their statement, “Individuals engaging in ad hominem attacks in scientific discourse should be subject to censure,” would be made stronger if it were to directly refer to the ad hominem attacks made by Susan Fiske and others in the scientific establishment. I don’t think Jordan Anaya should be subject to censure just because he analogized Brian Wansink to Donald Trump in the context of a detailed and careful discussion of Wansink’s published work.

The larger problem, I think, is that the tone discussion is being used strategically by purveyors of bad science to maintain their power. (See Chris Chambers here and James Heathers here and here.) I’d say the whole thing is a big joke, except that I am being personally attacked in scientific journals, the popular press, and, apparently, presentations being given by public figures. I don’t think these people care about me personally: they’re just demonstrating their power, using me as an example to scare off others, and attacking me personally as a distraction from the ideas they are promoting and that they don’t want criticized. In short, these people whose research practices are being questioned are engaging in ad hominem attacks in scientific discourse, and I do think that’s a problem.

That all said, one thing I appreciate about Brown, Kaiser, and Allison is that they do engage the real problems in science, unlike the pure status quo defenders who go around calling people terrorists and saying ridiculous things such as that the replication rate is “statistically indistinguishable from 100%.” They wrote some things I agree with, and they wrote some things I disagree with, and we can have an open discussion, and that’s great. On the science, they’re open to the idea that published work can be wrong. They’re not staking their reputation on ovulation and voting, ESP, himmicanes, and the like.

Andrew Brown, Kathryn Kaiser, and David Allison say . . .

I told David Allison that I’d be posting something on his article, and he and his colleagues prepared a three-page response which is here.

Jordan Anaya says . . .

In addition, Jordan Anaya sent me an email elaborating on some of the things that bothered him about the Brown, Kaiser, and Allison article:

I’m not mad at Allison; he can say whatever he wants about me in whatever media he chooses, as long as it’s his opinion or accurate. I’m not completely oblivious; I knew the title of my blog post would upset people, but that’s kind of the point. I felt the people it would preferentially offend are the old boys’ club at Harvard, so it seemed worth it. At the same time, I felt that over time the title of my post would age well, so anyone who was critical of the post initially would eventually look silly. The first whistleblower to claim a major scandal is always seen as a little crazy, so I don’t necessarily blame people who were initially critical of my post, but after seeing two decades’ worth of misconduct from Wansink it would be hard for me to take people seriously now if they think the title is inappropriate. Wansink is clearly a con artist, just like Donald Trump.

I only learned of Allison’s talk because a journalist contacted me. Of course I’m honored whenever my work is talked about at a conference, but if it is misrepresented to such an extent that a journalist has to contact me to get my side of the story that’s a problem. I sort of thought that Allison would regret his statements in the talk given how many additional problems we found with Wansink’s work, but to my surprise he then said the same thing in a publication. I mean, one time is a mistake, but twice is something else.

So I have three issues with Allison’s comments in his talk and paper. First, I don’t agree with his general argument about ad hominem attacks. Second, I don’t think he is being honest in his portrayal of our investigation. Third, I find the whole thing hypocritical.

On Wikipedia, there’s a pyramid where ad hominem is defined as “attacks the characteristics or authority of the writer without addressing the substance of the argument.” Ad hominem is not the same as name-calling.

If I call someone a “bad researcher,” maybe that’s name-calling. But if I say “X is a bad researcher because of A, B, and C,” I don’t know that I would consider that name-calling, since it was backed up by evidence. Even something like “X is a bad tipper, and he committed the following QRPs” is not ad hominem. Sure, being a bad tipper has nothing to do with the argument, but evidence is still presented. And as I think you’ve discussed, at some point it’s impossible to separate a person from their work. If a basketball player makes a bad play, I would just say he made a bad play. But if he consistently makes bad plays, at some point he is a bad player.

Allison doesn’t specifically say my blog post is an example of an ad hominem attack, but it is in a paragraph with other examples of ad hominems. The title of my post could be seen as name-calling, but throughout the post I provide evidence for the title, so I’m not even sure if it’s name-calling. And besides, I’m not sure why being compared to the President of the United States, whom he likely voted for, would be seen as name-calling.

But let’s say Allison is right and my post is extremely inappropriate due to it being unsubstantiated criticism. I think it’s interesting to look at the opposite case, where someone gets unsubstantiated acclaim, a type of reverse ad hominem if you will. Wansink was celebrated as the “Sherlock Holmes of food”. I would classify this as reverse name-calling/ad hominem. If you are concerned about someone’s reputation being unfairly harmed by name-calling, surely you must be similarly concerned about someone gaining an unwarranted amount of fame and power by reverse name-calling.

This might sound silly, but here’s a very applicable example. Dan Gilbert and Steve Pinker both shared Sabeti’s Boston Globe essay on Twitter saying it was one of the best things they’ve ever read, an unsubstantiated reverse ad hominem. Why does this matter? Well the essay was filled with errors and attacked you. So by blindly throwing their support behind the essay (a reverse ad hominem), they are essentially blindly criticizing you.

So if we are going to be deeply concerned about (perceived) unwarranted criticism, then we need to be equally concerned about unwarranted praise since that can result in someone getting millions of dollars in funding and best-selling books based on pseudoscience.

Lastly, what is inappropriate is subjective, so it’s impossible to police. I agree with Chris Chambers here. Sure, if someone calls me an idiot I probably won’t throw a party, but if they then point out problems with my work I’d be happy. I’d rather that than someone say my work is great when it is actually filled with errors. Allison says the only things that matter are the data and methods and “everything else is a distraction”. I agree, so if someone happens to mention something else feel free to ignore it, whether it be positive or negative.

The next problem with his talk/paper is that I’m not sure our investigation is being accurately presented. In Allison’s talk (timestamp 36:30), he says he feels what’s happening to Wansink is a “trial by media” and a “character assassination.” Yes, I’ll admit we used our media contacts, but that’s only because we were unsatisfied with how Wansink was handling the situation. We felt we needed to turn up the pressure, and my blog post was part of that. I know turning to the media is not on Allison’s flowchart of how to deal with scientific misconduct, but if you look at our results I would say it is extremely effective, and he may want to update his opinions.

He goes on to use the example of a student cheating on a test and having the professor call them out in front of the class. This analogy doesn’t work for various reasons. First, student exams are confidential, while Wansink’s work and errors and blog are in the public domain. Second, when a student does well on a test, that is also confidential; the teacher doesn’t announce to the class that you got an A. Conversely, Wansink has received uncritical positive media attention throughout his career, so I don’t see any issue with a little negative media attention. We’re back to the ad hominem/reverse ad hominem example. If you have no problem with reverse ad hominems and positive media attention, I don’t see how you can be against ad hominems and negative media attention.

Third, this whole thing is filled with irony and hypocrisy. It’s funny to note that the whole thing was indeed started by a blog post, but it was Wansink’s blog post. And in that blog post he threw a postdoc under the bus and praised a grad student. The grad student was identified by name, and it was easy to figure out who the postdoc was (she didn’t want to comment when we contacted her). So if you’re going to be mad about a blog post how about you start there? Wansink not only ruined the Turkish student’s career, he provided information about the postdoc she probably wishes wasn’t made public. If you dislike my blog posts fine, but then you better really really hate Wansink’s post. At the end of his talk he reads a quote from my blog and says that’s not the type of culture he wants to foster. Given his lack of criticism of anything in Wansink’s blog, I guess he prefers a culture where senior academics complain about how their employees are lazy and won’t p-hack enough.

Allison is famous for bemoaning how hard it is to get work corrected/retracted, which is exactly what we faced with his good friend (and coauthor) Wansink. We just happened to then use nontraditional routes of criticism, which were extremely effective. You’d think if someone is writing an article about getting errors corrected, they might want to mention one of the most successful investigations. He might not agree with our methods, but I don’t see how he can ignore the results, and it seems wrong to not mention that this is a possible route which can work.

And as I’ve mentioned before, his article in PNAS is basically a blog post, so it’s funny for him to complain about blogs. And I can’t help but wonder whether he would have singled out my blog post if he wasn’t a friend and coauthor of Wansink. Presumably if he didn’t know Wansink he would have described the case in detail given its scale.

One more time

Again, I’m encouraged by the openness of all the people involved in this discussion. I get so frustrated with people who attack and then hide. Much better to get disagreements out in the open. And of course anyone can add their comments below.

130 thoughts on “How to think about research, and research criticism, and research criticism criticism, and research criticism criticism criticism?”

  1. I’ve said this before in comments here, but I think that many people in the replication movement (or whatever you want to call it) including you and Anaya are making a tactical error. There is utterly no reason to call people names or to focus so much negative attention on particular figures.

    First, there’s a kind of inconsistency to it. On the one hand, issues involving the garden of forking paths, small sample sizes, noisy measurements, etc. are extraordinarily widespread (this is perhaps the key claim of the movement). In nearly every area, these were regarded as standard practices until a few years ago (and even now, many researchers accept them as standard practices). This is a claim about a systemic issue that objectively needs to be addressed. That can be approached in an entirely constructive way without calling anyone names.

    On the other hand, many critics target and vilify prominent researchers for using these practices and then publicizing the conclusions (Wansink, Cuddy, etc.). If the first claim of the movement is true and important (and I for one certainly think it is), then why target these people? Thousands of researchers have used similar practices, so why put a target on particular backs? (And, yes, you can point to particular features; my sense is that for you personally it’s the media hype that makes them legitimate targets. But I still think it’s impossible to sustain the claim that their conduct is qualitatively different from that of many (most?) people in those fields.) Attacking people may make for fun blog posts, but it creates enemies.

    Moreover, the tendency to attack specific people over and over again creates a specter that looms over others. Many, many people followed equivalent research practices. Seeing the onslaught directed at these people, many other researchers wonder if they’re next. Given the number of people you could attack on roughly the same grounds, this rapidly becomes a substantial proportion of the field. And particularly for people who lack the status of Wansink or Cuddy, careers and livelihoods could literally be on the line. It’s one thing (and a wholly commendable thing) to advocate better research practices. It’s another thing entirely to (implicitly) threaten a significant proportion of the people in entire fields.

    When you threaten people, they resist you. They get defensive. And so this debate gets a lot more complicated than it needs to be. Instead of a constructive debate about how to conduct research moving forward, the debate also involves a kind of contest about retroactive liability for people who conducted research that, at the time, appeared to be solid. From a knowledge cumulation standpoint, it’s perfectly reasonable to say that we should considerably discount work that suffers from these problems. But the morality-play flavor of the attacks on individual researchers is deeply unhelpful.

      • Joe,

        Also, to clarify: I write about Wansink etc. not because I expect their behavior to change. I’m aiming to reach others who have not yet committed to a dead paradigm.

        Regarding your point about writing about individuals, see this post where James Heathers argues that “Criticizing the system is not a plan” and why criticism ultimately needs to focus on individual cases.

      • Sorry, maybe that wasn’t clear. I didn’t mean threatening people in the sense of issuing threats.

        I meant that your behavior is threatening to people because many researchers might worry that they will be the next target of such criticism.

          • > I meant that your behavior is threatening to people because many researchers might worry that they will be the next target of such criticism.

          I can’t speak for Andrew, but I don’t wake up every morning looking to find random papers from unknown junior authors and point out that this result is implausible or that one is obviously p-hacked. For one thing, nobody would care, and for another, there would indeed be some unfairness in picking on p-hacking postdoc #25843 rather than any other. But if a researcher has published dozens or even hundreds of faulty papers, or repeatedly taken handsome fees for making claims to a general audience that are totally unsupported by their research after those deficiencies have been pointed out multiple times in the literature, or recycled work as if it were original on an industrial scale, and that researcher is in the public eye as a spokesperson for Science(tm), then perhaps they *should* worry that they will be the next target of criticism.

          The authors of the False-Positive Psychology paper recently made a comment to the effect that they always kind of knew that p-hacking, etc, was wrong, but that they decided to open up about it when they realised that it wasn’t wrong in the way it’s wrong to jaywalk; rather, it’s wrong in the way that it’s wrong to rob a bank. If persistent bank robbers are starting to sleep a little less well at night, that’s fine by me.

    • Joe,
      I don’t see anything anywhere that resembles threats. My understanding is that when Andrew raises problems with specific papers and people, rather than a whole literature, it is in part because those people have a lot of influence and/or visibility. Think of how many people and firms are buying into power pose, spending lots of money on it (TED talks, seminars, books), or think of the USDA spending tens of millions changing food policies in schools based on Wansink’s work. This stuff can have a HUGE impact. If it’s wrong, if it’s wasteful, someone should point it out.

      • So, one way of looking at this is that Wansink by going so public has made himself a target.

        Another way of looking at it, from the perspective of a more junior, more vulnerable researcher is: “If they can go after someone as powerful as Wansink or Cuddy, what’s to stop them from going after me?” Everyone’s research has flaws. Everyone’s. If you’re untenured, an attack on your work (which let’s assume was done with the best of intentions) could mean you lose your job.

        The biggest problem of all, I think, is the conflation of poor statistical practice with fraud. These attacks often cross the line from legitimate statistical critique to impugning someone’s motives. I have no problem with someone constructively pointing out holes in my work. In many ways, I’d be glad to have that attention directed at my work. I don’t, however, want that attention directed at *me* or the kind of attacks that suggest I did something wrong (in an ethical rather than statistical sense).

        • > The biggest problem of all, I think, is the conflation of poor statistical practice with fraud. These attacks often cross the line from legitimate statistical critique to impugning someone’s motives. I have no problem with someone constructively pointing out holes in my work. In many ways, I’d be glad to have that attention directed at my work. I don’t, however, want that attention directed at *me* or the kind of attacks that suggest I did something wrong (in an ethical rather than statistical sense).

          The problem is… it’s unethical to pursue publications through substantially poor statistical practices. It’s unethical to do science without trying *hard* to not fool yourself. From Feynman’s famous “cargo cult” speech:

          It is interesting, therefore, to bring it out now and speak of it explicitly. It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

          Details that could throw doubt on your interpretation must be given, if you know them.

          So, poor statistical practice is itself unethical if it’s carried out consistently over long periods of time… you have the opportunity to do better, and you don’t. That’s unethical, like a lawyer who suspects that some gangster is fixing his juries with “ringers” who will always vote to acquit, but doesn’t say anything to anyone about it.

        • Sure, it’s unethical to *knowingly* use poor methods for whatever reason.

          But what I’m talking about is a series of *retroactive* attacks. Many of the studies that have become targets of the reproducibility movement are quite old.

          Consider a researcher who, say, in 2006 conducted an experiment with a fairly small sample size. The main effect wasn’t there, but they dug into the data a bit and found a theoretically plausible interactive effect. The researcher wrote it up honestly, and it was published. Other people found the work interesting and cited it. Since then, best practices have evolved. The researcher wouldn’t do things that way anymore.
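
          The scenario above is easy to simulate. The following is my own illustrative sketch (not from any commenter): the outcome is pure noise, the “treatment” has no effect, and the researcher tries a handful of arbitrary post hoc subgroup splits, stopping at the first p < 0.05. The question is how often at least one split comes up “significant.”

```python
# Sketch of the garden-of-forking-paths scenario: null data, small sample,
# several post hoc subgroup tests. How often do we find "significance"?
import math
import random
import statistics

def two_sided_p(a, b):
    """Two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.pvariance(a) / len(a) +
                   statistics.pvariance(b) / len(b))
    if se == 0:
        return 1.0
    z = abs(statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def false_positive_rate(n=40, n_sims=2000, n_subgroups=5, seed=1):
    """Share of pure-noise datasets where some subgroup test gives p < 0.05."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        y = [rng.gauss(0, 1) for _ in range(n)]        # outcome: pure noise
        treat = [rng.randint(0, 1) for _ in range(n)]  # null "treatment"
        for _ in range(n_subgroups):                   # forking paths: arbitrary splits
            g = [rng.randint(0, 1) for _ in range(n)]
            a = [yi for yi, t, gi in zip(y, treat, g) if t == 1 and gi == 1]
            b = [yi for yi, t, gi in zip(y, treat, g) if t == 0 and gi == 1]
            if len(a) > 2 and len(b) > 2 and two_sided_p(a, b) < 0.05:
                hits += 1
                break
    return hits / n_sims

print(f"share of pure-noise datasets with at least one p < 0.05: "
      f"{false_positive_rate():.2f}")
```

          With five tries per dataset the rate lands far above the nominal 5%, which is the point: the researcher in the story can be writing things up honestly and still be reporting mostly noise.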

          But what obligation does this create with respect to the past? Issue a correction or retraction? On what grounds exactly? What would it say? “In retrospect, Researcher X tested a few specifications and believes that, if the results had come out differently, s/he would have looked at more specifications or adopted different measures”? The fact that there were methodological flaws doesn’t mean the conclusions are *wrong*, it just means that the support for them is weaker than we might have thought.

          Like, there’s an unambiguous answer here. Do replications. We should replicate past work that led to important conclusions, even if that work appears to have been statistically flawless (and especially if we can see problems in light of things we now know).

          But when you set up these attacks on particular people (and let’s not kid ourselves, it’s often the people being attacked not just the work), then replication becomes a confrontational thing instead of being something that we can all agree is good for scientific progress. Instead of seeing a failed replication as evidence that sometimes promising ideas don’t pan out, failed replications are often presented as evidence that the underlying work was somewhere between shoddy and dishonest.

        • Joe

          Yes, it may not be simply critiques of the work. Underlying grievances can set the frame and tenor for otherwise legitimate critiques. The critic’s general attitude invariably comes through, regardless of how we may wish to rationalize it. The merit of the original critique is then, in many cases, subsumed by personal grievances.

          I saw this in the HIV/AIDS debate, as well as in reading about Serge Lang’s effort to vote down Samuel Huntington’s membership in the National Academy of Sciences. What a story that was.

        • I think the personal attacks become a thing when the researchers respond by doing the “Never Back Down” thing, and also when the researchers *should have known* they were doing things very poorly for years and years.

          Wansink published papers claiming to have data that doesn’t even agree with itself within the same paper. He published research on veterans whose summary statistics could never have been correct (they would have required him to have a bunch of 70-year-olds and a bunch of 108-year-olds, or something like that; Jordan showed this in one of his posts).

          So, if you published “blablabla p < 0.05” in the past, and when someone points out a detailed reason why you were probably wrong you respond “as far as I can see this criticism of my previous research is correct and in the future I won’t do that anymore,” that’s not likely to start a personal vendetta against you. But if you demean the criticizer, claim they are out to get you and that everything published in your field is statistically indistinguishable from 100% correct, and insist that people in your field should go ahead and continue doing what they’ve been doing your whole life (after all, it got you where you are today, which is the BIG BOSS of your field, and you ought to know better than these pesky methodological terrorists), etc…

          then yeah, you’re committing fraud, because you *ought to know at this point that you’re wrong* all the evidence is there for you, and as a scientist you’re *obligated* to engage the information.

          It becomes even worse of course when you clearly made claims you should have known were false, or grossly misrepresented your data in order to get money. The evidence I’ve seen suggests that Wansink did that.

        • Take for example a prosecutor who takes cases to court, and consistently encourages the police not to provide him with exculpatory evidence after charges are filed because “I don’t want to know about it, because then I’d have to drop charges and if I don’t know about it then I can’t be accused of hiding exculpatory evidence, and I don’t want to blemish my tremendous record of convictions so just don’t tell me”.

          How is that substantially different in kind from a researcher who basically says “don’t tell me about why my methods aren’t ok, I don’t want to know because it will keep me from publishing more stuff and if I don’t know then I can’t be accused of knowing better and I want to publish a lot and get lots of grants”

          obviously the degree of misconduct is potentially different. The prosecutor will ruin one person’s life… the researcher may depending on the field do basically nothing, or ruin millions of lives (say by causing a certain drug to be marketed that actually causes harm) so it depends a lot on the individual circumstances. But the basic ethical error is the same. Even if you don’t know something is wrong, if you *should have known, and/or actively avoided finding out, or made no real effort to do a good job* then you’ve committed pretty egregious misconduct.

        • Joe:

          I don’t see why you’re talking about people “going after” a researcher. What’s happening is this:
          1. Researcher A publishes a paper.
          2. Researcher (or research consumer) B reads this paper and sees problems.

          I used to be a young researcher myself, and when I was young I wanted people to read my papers and point out errors in them. I still want that! When someone reads my paper and points out mistakes, or ambiguities, they’re not “going after me”; they’re doing me a favor.

          Regarding “the conflation of poor statistical practice with fraud,” see our discussion of Clarke’s Law.

          A big problem I see is when researchers’ work is criticized on scientific grounds, and then they respond either by ignoring the criticism, brushing it aside, or treating it as personal. In the case of Wansink, many people, including Anaya, posted long and detailed scientific criticisms which Wansink and his collaborators either ignored or did not take seriously, sometimes replying in misleading ways and revealing a major disconnect between their data collection and what they published in their paper. These researchers were getting millions of dollars in public funds. So at that point I think it’s completely reasonable for Anaya and others to get annoyed.

        • Andrew,

          Correct me if I’m mistaken, Joe. Joe is simply suggesting that personal stuff gets into the mix. Wansink is one extreme case. Each case of such conduct is a different story and involves different actors.

        • Sameera:

          Yes, it depends on the individual case. It’s not always clear where to draw the line. In Wansink’s case, there were criticisms that were purely about the published papers (the hundreds of errors in the published work, the misrepresentation of data and research methods, etc.), and there were also criticisms of Wansink’s research methods, as in his blog post, where he wrote that he went from one hypothesis to another until he could find something he could publish. If someone says that Wansink uses a flawed research method, or that Wansink ducks legitimate criticism, or that Wansink repeatedly misrepresents his data, or that he has published dozens of papers with errors . . . are these personal criticisms? At some level, maybe so; recall Clarke’s Law. But in any case I think it is legitimate and appropriate for us to write about these things. Indeed, as scientists we’re not doing our job if we let this sort of work get published, publicized, funded, affect policy, etc., and don’t speak up.

        • Andrew,

          I doubt that you can surmise that I would disagree with anything you have posted. I am simply suggesting that some can have perfectly legitimate criticisms yet undermine their merit by demonstrating that they are not above personal attacks. What I’m pointing to are the broader cultural trends that feed the nastiness that almost invariably enters these situations, especially on social media.

          I was responding to this paragraph specifically.

          ‘But what obligation does this create with respect to the past? Issue a correction or retraction? On what grounds exactly? What would it say? “In retrospect, Researcher X tested a few specifications and believes that, if the results had come out differently, s/he would have looked at more specifications or adopted different measures”? The fact that there were methodological flaws doesn’t mean the conclusions are *wrong*, it just means that the support for them is weaker than we might have thought.’

          These questions are probably not uppermost in the minds of critics. They are thoughtful, though.

        • Sameera:

          I do think I get your “undermine their merit” point, but:

          1. > The fact that there were methodological flaws doesn’t mean the conclusions are *wrong*, it just means that the support for them is weaker than we might have thought.
          Andrew and others have repeatedly tried to emphasize that very point – the conclusion may well be correct; it’s the degree of empirical support that we are arguing is being overstated.

          2. > But what obligation does this create with respect to the past?
          Simply reconsider it in light of today’s view of the empirical support it actually has (which almost always includes some other studies) as well as theoretical support. Here, I am guessing many would not have enough of both to make replication worthwhile from an economy-of-research perspective.

          Part of the problem with 2, is the strange but persistent superstition that inference can be done in a single isolated study/paper and appraised as right or wrong. More simply, any claim that we have adequate evidence of any important finding in a single paper (except in rare cases like atomic explosions being bad) is wrong – period.
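          A quick simulation can make this point concrete. The sketch below (hypothetical numbers throughout, not tied to any particular study) shows the “winner’s curse”: when a noisy study of a small true effect happens to clear the p < 0.05 bar, the reported estimate is, on average, far larger than the truth, which is one reason a single significant paper overstates its own evidence.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a small true effect measured with lots of noise.
true_effect = 0.1   # assumed true effect size
se = 0.5            # assumed standard error of each study's estimate
n_studies = 100_000

# Each "study" reports a noisy estimate of the true effect.
estimates = rng.normal(true_effect, se, n_studies)

# Keep only the studies reaching conventional significance (|z| > 1.96).
significant = estimates[np.abs(estimates / se) > 1.96]

print(f"share of studies reaching p < 0.05: {len(significant) / n_studies:.1%}")
print(f"mean |estimate| among significant studies: {np.abs(significant).mean():.2f}")
# The significant studies report effects far larger than the true 0.1:
# conditioning on significance inflates the apparent support.
```

          Pooling all the studies, significant or not, recovers the true effect, which is one way of seeing why inference across many studies beats appraising a single isolated paper as right or wrong.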

        • “Part of the problem with 2, is the strange but persistent superstition that inference can be done in a single isolated study/paper and appraised as right or wrong. More simply, any claim that we have adequate evidence of any important finding in a single paper (except in rare cases like atomic explosions being bad) is wrong – period.”

          +1

        • “Another way of looking at it, from the perspective of a more junior, more vulnerable researcher is: “If they can go after someone as powerful as Wansink or Cuddy, what’s to stop them from going after me?” Everyone’s research has flaws. Everyone’s. If you’re untenured, an attack on your work (which let’s assume was done with the best of intentions) could mean you lose your job.”

          This sounds like what I’ve heard called “catastrophizing” — focusing on an extreme possible outcome. This is generally considered not to be a rational way of thinking.

          Here is a more positive, constructive way of looking at the situation: A junior person can say, “I’m so lucky that these questionable research practices have been pointed out, so I can train myself not to make them, and to develop the integrity to acknowledge my mistakes when I do make them.” In fact, Dana Carney (a co-author of Amy Cuddy’s original power pose paper) has acknowledged the mistakes in that paper. She has gained respect for it; no one has “gone after” her.

          “I don’t, however, want that attention directed at *me* or the kind of attacks that suggest I did something wrong (in an ethical rather than statistical sense).”

          The more I learned about statistics, the more I realized that ethics are too closely intertwined with statistics to separate them. (Maybe this has something to do with the fact that I didn’t start learning statistics until I was in my fifties.)

        • > the more I realized that ethics are too closely intertwined with statistics to separate them
          I think that is a good point.

          From earlier material from Peirce “He took ethics to be the topic of deliberate controlled acting to achieve desired goals (the ultimate goal being reasonableness). He took logic to be the topic of deliberate controlled thinking or representation (which entails unending re-representation).”1

          Statistics does (must?) involve acting in addition to representing – actually repeatedly doing studies adequately and sharing them honestly. Without virtue, it’s highly likely to be misleading.2

          1 http://statmodeling.stat.columbia.edu/2017/09/27/value-set-act-represent-possibly-act-upon-aesthetics-ethics-logic/
          2 http://www.stat.columbia.edu/~gelman/research/published/gelman_hennig_full_discussion.pdf

        • Keith

          Re: ‘Part of the problem with 2, is the strange but persistent superstition that inference can be done in a single isolated study/paper and appraised as right or wrong. More simply, any claim that we have adequate evidence of any important finding in a single paper (except in rare cases like atomic explosions being bad) is wrong – period.’
          —-
          Since I have rarely taken notes, I get this ‘superstition’ to which you refer quite well. I don’t map information as many in my circles do. I have some ingrained meta-analytic capacities, according to a couple of academics. I think it’s because I have read a lot and try to reread until I understand the content. Then I synthesize whenever feasible. I have had that luxury of hobby. I haven’t had the burden of proving that I am right or wrong. I don’t have to conclusively arrive at any inference. In one sense, it’s an application of regression to the mean, and perhaps the useful notion of base rates, that I entertain. Who knows?

          It’s actually difficult to explain to others how I think; some others do a little better job of that. I seem to require much more information to draw from than even many experts in international relations, foreign policy development, decision making, more broadly. I think novel explanations, whether undergirded empirically or not, have charismatic appeal.

        • Before I disappear for our long Victoria Day weekend, I would simply say inference is the deliberated acceptance of a representation (model) for acting in the world (a habit of action when the opportunity arises) which you don’t currently see how to make less wrong.

          For instance, many of us accept that smoking cigarettes is bad for one’s health and have a habit of avoiding opportunities to smoke (or try to or wish we could). Or, some of us accept that doing Bayesian modeling in Stan will be good for learning about the world and have a habit of trying to do that.

        • Keith,

          Have a great weekend. It’s a rainy one here in DC.

          I lean toward Paul Feyerabend’s views sometimes. I read Farewell to Reason, although I’m not so enamored with his writing style. As a speaker, Feyerabend was lucid. But I think that he didn’t really speak much to ‘ethics’. Nor did Ramsey, to the extent that I would have hoped. It seems that Peirce may have, to a greater extent than either Feyerabend or Ramsey. I am still exploring Peirce.

          https://en.wikipedia.org/wiki/Paul_Feyerabend

        • A junior person can say, “I’m so lucky that these questionable research practices have been pointed out, so I can train myself not to make them,” is a very good point.

          As a result of these public discussions about statistical misdeeds, I’ve been seeing increasing numbers of young researchers seeking to avoid making those kinds of mistakes and intending to support their research with good statistics. Further, I’ve noticed an increasing trend of reviewers of grant applications and papers demanding greater statistical support, both in terms of the analyses requested and sometimes outright suggestions that a statistician become involved. This public discussion has led to some real improvements in practice.

        • Clark: Your last paragraph is encouraging that positive change is indeed happening.

          Joe: I encourage you to jump on the bandwagon that Clark describes. You don’t have to go it alone!

        • > The more I learned about statistics, the more I realized that ethics are too closely intertwined with statistics to separate them.

          Yes, this is true.

          I think this is part of our problem, honestly — that, and the fact that math stat is technically difficult, but people in so many fields depend upon it. These two features of statistics mean that the kind of criticism that is often needed can feel like a superior telling you, “You are dumb and bad.” And when it comes on a well-read (if niche) blog I imagine it feels like Ed McMahon showing up on your doorstep with a placard declaring, “YOU ARE DUMB AND BAD.”

          I’m thinking simultaneously of two different approaches to the problem of how to let people down easy: Francis Su’s version based on grace, which focuses on building a warm relationship with them first (“if my students know in their bones that I have given them a dignity that is independent of their performance, then I can have honest conversations with them about their performance”); and Erving Goffman’s riff on con artists, in his article “On cooling the mark out: Some aspects of adaptation to failure,” which is less a how-to and more a series of observations (or at least, plausible claims).

        • Erin:

          Let me repeat that many researchers’ response to gentle criticism is to either ignore it or ratchet up the antagonism. People had been gently, warmly, cordially telling Brian Wansink and his colleagues about problems with his research for more than half a decade, and Wansink and his colleagues ignored or brushed aside these important points. And he’s not the only one. Anderson and Ones, Hauser, etc., were flat-out rude to the people who went to the trouble of pointing out problems in their work.

          To put it another way: If you or others want to go to the trouble of building a personal relationship with some complete stranger who has published and publicized mediocre research, that’s fine; it’s a great thing to do. But I can’t see this “warm relationship” thing being a general practice. Research is published and put online and gets thousands of readers. If some researchers really can’t handle it when outsiders point out mistakes, maybe it would be better for them to promote their work in a more private way.

          To put it yet another way: I don’t think the wait-on-sharing-criticisms-of-published-work-until-you-have-a-warm-relationship policy is scalable.

        • We’ve gone rounds about this before and I understand your point now about this needing to be done in public. So I agree with you in the main (what, you don’t want to add to the unpaid labor of your blog the unpaid labor of making nice with thousands of researchers with problems? :) ). I’m just saying that even as I see that you are right, I think I am also right that there is a human problem here that can’t be wished or moralized away. Scientists should be other than they are, but they are people first and as a wise man said, people are a problem.

          Maybe the answer is that there is not an answer. Maybe this set of truths – science is hard and people doing it make many errors, science is a public good and so there must be a public audit trail, scientists are people and people resist admitting fault on a large stage – is what we are stuck with and we will just have to muddle through.

          I do try to live the philosophy of grace in my own collaborations. Responses to manuscript drafts sometimes take me a very long time. Whether my approach helps, I guess I can’t say.

        • Or! Or maybe if the general culture in academia were less … prideful? then it would become easier to criticize people without ruffling so many feathers.

          I don’t know if prideful is the right word. I mean that bearing that it feels like faculty are supposed to assume as part of the social role they inhabit, the one that conveys to the world that They Know Stuff and that there is a good reason they have a fancy title etc etc.

          I mean, not that this sort of cultural shift is any easier. But it would be an alternate route to that state of grace that Su talks about, one that would not put the onus on the critics to create it.

        • Erin said:
          “I don’t know if prideful is the right word. I mean that bearing that it feels like faculty are supposed to assume as part of the social role they inhabit, the one that conveys to the world that They Know Stuff and that there is a good reason they have a fancy title etc etc.

          I mean, not that this sort of cultural shift is any easier. But it would be an alternate route to that state of grace that Su talks about, one that would not put the onus on the critics to create it.”

          I suggest that we (faculty, scientists, etc.) need to cultivate an attitude of humility — that we don’t know it all. And statistics, properly understood, does that, because statistics (properly understood) is based on the premise that there is (almost) always uncertainty.

          (See p. 8 and what follows in http://www.ma.utexas.edu/users/mks/CommonMistakes2016/WorkshopSlidesDay1_2016.pdf for my attempt at emphasizing the role of uncertainty and trying to instill a sense of humility in understanding statistics.)

        • Forgot to say, thanks for these links – they have been good reading today.

          From the Allison paper:

          In 1975, Paul Feyerabend expressed concerns over the increase in publications without a concomitant increase in knowledge by remarking, “Most scientists today are devoid of ideas, full of fear, intent on producing some paltry result so that they can contribute to the flood of inane papers that now constitutes ‘scientific progress’ in many areas” (50).

          I traced that citation back. The cited Feyerabend paper is, er, interesting (link to a paywall-free reproduction). My favorite part is Feyerabend’s approving comments about telekinesis. Paging Dr. Bem!

        • Erin:

          I’m suspicious of any reference that criticizes results for being “paltry.” At least when it comes to social science, I think that the tabloids (Science, Nature, PNAS) already put too much emphasis on findings being important.

          Lots of the bad science we’ve been discussing over the past few years would indeed be a big deal and very interesting if true. Bem’s results, for example: super-exciting if his data really supported his claims; super-boring given that they don’t.

    • “First, there’s a kind of inconsistency to it. On the one hand, issues involving the garden of forking paths, small sample sizes, noisy measurements, etc. are extraordinarily widespread (this is perhaps the key claim of the movement). In nearly every area, these were regarded as standard practices until a few years ago (and even now, many researchers accept them as standard practices). This is a claim about a systemic issue that objectively needs to be addressed. That can be approached in an entirely constructive way without calling anyone names.”

      I have heard this reasoning before, but i severely doubt it makes sense. For instance, how do we know these bad practices were really “standard practices”? Perhaps they weren’t standard practices 30 years ago, but all of a sudden researchers started using them. Perhaps those that saw a problem with them refused to “play along” and left academia. Perhaps those that tried to point all this out got shunned and driven out of academia.

      I remember one of my fellow-students saying “i don’t want to be part of science if that means i have to stress about getting as many papers published as possible”. I myself don’t want to be part of academia anymore due to several of the things going on there that i find to be anti-scientific. My point is that perhaps for every Wansink or Cuddy, there might have been a different researcher who did things differently.

      The latter type may not have stuck around in science, for instance, because they may not have been “rewarded” with speaking fees and/or tenure for an amazing publication record, or because they didn’t want to be part of something they thought was unscientific or unethical. Now, if journals and universities only reward the Wansink/Cuddy-type researchers, then it may seem like the way they do research was “standard practice”, that they “had no choice”, and that they “simply played by the rules”, but perhaps that is a very misleading conclusion, due to drawing inferences only from those cases that “made it” in the system.

    • > careers and livelihoods could literally be on the line. It’s one thing (and a wholly commendable thing) to advocate better research practices. It’s another thing entirely to (implicitly) threaten a significant proportion of the people in entire fields.

      I was “driven out” (I literally could not mentally handle hearing “is it significant?” anymore) of medical research because it was overrun by people who had no idea what they were doing, and I’m sure I’m not alone. What I’m not sure about is why we should be so concerned about these people keeping their jobs. And think of the damage they are doing; it’s almost every time I look something up…

      The other day I wondered about advice to wear sunscreen, avoid getting a tan, don’t let sunlight touch your newborn’s skin, etc. When looking it up I quickly discover the rates of many skin cancers have been increasing “dramatically” since these recommendations were introduced:

      Cumulative epidemiologic data from Europe [7–10], Canada [11] and the United States [12–14] indicate a continuous and dramatic increase in [melanoma] incidence during the last decades.
      […]
      During the last 30 years, the incidence of SCC [squamous cell carcinoma] has been rising 3–10% per year [29]. For the same period, it is estimated that BCC [basal cell carcinoma] incidence rate has risen between 20–80% in the US [29].

      https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5424654/

      Yet we are inundated with public health campaigns telling people to avoid sunlight and buy sunscreen and vitamin D pills, as if it’s an established fact that this is a good idea.

      • In principle I’m with you, but in practice I worry that we’ve gotten to a place where the majority of an entire generation of scientists are in the position of having devoted their whole life to doing fake science without really even knowing it.

        Is it possible for science as a social endeavor to recover from that and turn hard towards actual science again? I am not so sure.

        to be clear, I don’t think it’s every single scientist, but I do think the majority of people called scientists these days are doing something nonscientific, and it’s not just NHST to blame, people with PhDs in their postdocs can’t even explain during group meetings what kinds of controls would be needed in their experiments in order to conclude something useful…

        • Here’s a quote from an opinion in a personal injury case handed down just last week:

          “However, the standard is not whether a study is medically significant but whether it is statistically significant.”

          Hopefully some day Wikipedia will have a page for “The Oracle of NHST” where readers will be regaled with stories of its temple priests and priestesses along with speculation about whether gases they produced were the actual cause of the hallucinations reported by visitors.

        • I’ve commented about this before on here.

          I think we are at the point where NIH and associates is largely a jobs program where anything useful being produced is considered a bonus. I expect that fixing the issue may take something akin to the reformation where the current “scientific” organizations play the role of the catholic church in that they will continue to survive but become followers rather than leaders.

          Even already, for any health issue, a forum of people sharing anecdotes mixed in with the trolls, shills, etc. is nearing parity with checking the medical literature. The literature never seems to have any answer that people can do something with (e.g. “how do I stop that side stitch I get while running?”, “what is a normal range of intraday weight variability?”, “why am I getting zits after quitting smoking, and how long does it usually last?”, etc.), yet it is full of expertise of the form “this (invisible/untouchable to you) molecule interacts with that (invisible/untouchable to you) molecule, which leads to [problem] (p < 0.05)”.

          I just hope this all happens without the roving bands of murderous peasants.

        • (This question is primarily directed to both Daniel Lakeland and Anoneuoid.)

          Perhaps others (maybe myself?) might have brought this up, but if you feel that an entire generation of scientists are in the position of devoting their careers doing “fake science”, then are there ANY scientific fields where recent publications can be trusted?

    • There are at least three reasons why it is entirely appropriate to focus on prominent researchers. First, the work done by these researchers has an outsized influence on other researchers, who base their own research activities on the supposedly true claims made by the likes of Wansink and Cuddy (and others like Frederickson and Duckworth). How many hundreds of years of research activity have been wasted because they were built on the shaky foundations that characterize power posing or grit or mindless eating or embodied cognition or positivity ratios or ego depletion? We don’t hear much from the researchers who appear to have wasted their time on these endeavors, but they are definitely out there. Second, these researchers suck up an incredible amount of grant money and journal space that is denied to other researchers who do not engage in the same problematic practices. Third, these researchers harm the reputation of psychology when they propose solutions to problems that ultimately do not work – and these researchers have obtained very powerful megaphones via their problematic research practices. Why should members of the public, governments, and corporations listen to anything proposed by social scientists if they keep getting burned by famous researchers like these?

    • > careers and livelihoods could literally be on the line

      Yes, the careers and livelihoods of individuals with poor research practices, who have crowded out more rigorous, careful, and deserving researchers with their flashy garbage results.

      > many other researchers wonder if they’re next

      They are.

      > the morality-play flavor of the attacks on individual researchers is deeply unhelpful.

      It’s not “morality-play”. Science has always had an ethical component. Those who can’t follow it put themselves at risk.

      • > deeply unhelpful

        also, tacking “deeply” onto things to sound serious is the worst writing cliche of the 21st century, stop i’m begging you

      • @ Paul, I agree with you (and virtually all of the others above).

        @Joe, I think it is good to think about how this criticism affects researchers at different stages. But if that means they fear publishing or even leave the field, good! It is not the best solution, because (I assume) the majority of researchers engaged in questionable practices actually know a lot about the subject they are studying and often have a very good general understanding of the material, but their education/training lacked a focus on proper methodology, supported by a culture that does not care about these “minor details”. Losing these people would be a great loss, but keeping them and their work in science produces far more damage. The alternative is: we clean up the mess of the past and do better in the future, together.
        From my experience of talking about science with other scientists, I can say that many (the majority, maybe?) see science as just another job. But science is not just a job. Science is not about you or me having a career or being renowned, revered, or admired. It is not about earning money, and it is definitely not about getting publications.
        See e.g. Feynman: https://www.youtube.com/watch?v=f61KMw5zVhg (Sorry, I do not know how to make proper hyperlinks.)
        It’s about science! It is about finding the “truth”. (I do not want to use even more space, and I know that “truth” is loaded and that we can only approximate and there might not be an “objective truth”. I hope the general meaning comes across…)

        I am a junior researcher and I am sure some of my work has flaws. If one of the data terrorists ever swoops down to take a look and finds a flaw, I will be happy to adjust and be better in the future. But if someone is not willing to do science properly, why bother doing it at all? There are better-paying jobs outside academia anyway.

        Also, it never was “ok” to do the things that Wansink and others did. Methodologists wrote about it decades ago, and more importantly, common sense and simple logic did not drastically change in the last decade. The majority just did not know (or care), and the people who did know and care did not get heard. It might have been the statistical norm, but it never was the ideological norm of science.

        • “But science is not just a job, science is not about you or me having a career or be renowned, revered or admired. it is not about earning money and it is definitely not about getting publications.”

          Yes! As someone said in the comments on this blog: “Science is an investment, not an entitlement” (http://statmodeling.stat.columbia.edu/2017/12/15/piranha-problem-social-psychology-behavioral-economics-button-pushing-model-science-eats/#comment-626629)

          I also worry that the narrative that seems to have been brought forward to explain all that is wrong with today’s science (e.g. the “incentives” are to blame; researchers are “only human” and “need to pay the bills”) is not really that helpful. More importantly, i think it possibly leaves out parts of the explanation that i reason might be crucial in subsequently cleaning up the mess (e.g. who is responsible for this bad incentive structure, and why did it come to exist?).

          I looked up the word “incentives”:

          1) “a thing that motivates or encourages someone to do something.”
          2) “a payment or concession to stimulate greater output or investment.”

          Now it seems to me that the “incentives are bad and made scientists do all kinds of bad things”-narrative possibly refers to a sub-set of possible motivations, and encouragements.

          Leaving aside whether the “incentives are bad and made scientists do all kinds of bad things”-narrative is even a plausible explanation (where is the evidence to back up this reasoning again?), the possible narrow focus on “incentives” relating to payment/status/job security, to me only helps to reinforce the possibly wrong views that “science is just another job”, and that “scientists are mainly interested in money/status/whatever”.

          I tried to help improve (psychological) science because i felt some sort of responsibility because i knew about the problems, and i valued rigorous science. During that process, i even decided to no longer want to be part of academia, but still continued with my efforts. These actions are also a result of incentives, are they not?

          I feel today’s scientists might feel way too much entitlement, and way too little responsibility.

        • The way the incentives play out is that they create a survivorship bias. You can’t do science today without at least basically paying your own salary, either through grants (many biomed researchers have to pay a large component of their salary directly through their grants) or through making “having you in the university” make sense for the university so that it pays your salary. That again basically means either grants, or adding substantially to the university prestige and the ability to charge money for tuition and get high quality students etc.

          So, anyone with tenure in academia today has spent say 5 to 10 years minimum creating a “brand” that somehow enables them to pay their salaries, and/or funds for lab work etc. If the primary kind of “brand” that universities respond to is a “fake sciencey hypey overconfident, bullshit brand” then the primary content of academia today is people who survived that cutoff…

          It’s not so much that individuals see the incentives and change their own behaviors to fit it (though this happens to some extent) it’s more just that lots of people go into the filter, and the ones that come out are enriched for that particular behavior.
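          This filter mechanism is easy to see in a toy simulation (all numbers hypothetical): give candidates a “hype” trait, let the hiring/tenure filter favor it, and the surviving population comes out hype-enriched even though no individual ever changed their behavior.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population of candidates with a "hype" trait in [0, 1].
n_candidates = 100_000
hype = rng.uniform(0.0, 1.0, n_candidates)

# The filter: assumed survival probability rises with hype.
# (Illustrative functional form, not an empirical estimate.)
p_survive = 0.02 + 0.18 * hype
survived = rng.random(n_candidates) < p_survive

print(f"mean hype, all candidates: {hype.mean():.2f}")
print(f"mean hype, survivors:      {hype[survived].mean():.2f}")
# Survivors show noticeably higher mean hype than the candidate pool:
# selection alone enriches the behavior, with no individual adaptation.
```

          Smaldino and McElreath’s “natural selection of bad science” paper, cited elsewhere in this thread, formalizes the same selection logic.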

        • Also note that this mechanism doesn’t require the individual scientists to be primarily money-driven. It’s sufficient for the MBAs running the dean/provost/president offices to be primarily money driven, which they definitively are.

        • I actually learned in MBA school how bad those mismanaged incentives were for the real stakeholders.

          In fact my first paper on (the pending) replication crisis was me working through what I had learned in MBA school – http://statmodeling.stat.columbia.edu/2012/02/12/meta-analysis-game-theory-and-incentives-to-do-replicable-research/

          I would blame it on those hiring the presidents and other senior university management, or those even higher up who make it almost impossible for good senior management to run universities profitably in a scientific/scholarly sense.

        • Thank you for your comment. If i understood it correctly, scientists might have to “earn their salary”, which implies to me it’s all about money. If this is correct (i find it much more plausible than universities “pressuring” their scientists into publishing more and more), why does it seem to me that this is hardly ever mentioned in all the discussions of the past 7 years or so about “bad incentives”?

          To give an example, i believe the following 3 papers have been influential concerning psychological science’s problems and “incentives”, but i can not remember reading anything about your proposed mechanism/explanation in any of them: 1) Nosek, Motyl, & Spies (2012); 2) Smaldino & McElreath (2016); 3) Bakker, van Dijk, & Wicherts (2012).

          1) http://journals.sagepub.com/doi/full/10.1177/1745691612459058

          2) http://rsos.royalsocietypublishing.org/content/3/9/160384

          3) http://journals.sagepub.com/doi/abs/10.1177/1745691612459060

          They all seem to be talking about how publishing lots of papers gets you a job/is the goal of scientists: that’s your “incentive” as a scientist. Perhaps lots of papers in turn gets scientists lots of grants, but I reason this should 1) be investigated to see whether it is really the case, and 2) be made much clearer when it is the case.

          I reason that if universities really only care about money (e.g., via the grants a scientist receives) and not so much about the number of published papers, then possible solutions “to change the incentives” should be aimed at the money issue, not the publish-or-perish issue.

        • Why is it not emphasized? Because those papers are written by scientists deep in the thick of the system, not say economists or game theorists looking from the outside.

          The way it goes is like this:

          The university has a given current income which is more or less allocated at the moment. In order to hire an additional researcher it must increase its income (in the long run at least).

          The university interviews candidates and chooses one. Which candidate will it choose? One who seems likely to “pay his or her way”, either by (1) getting grants from which both direct and overhead/indirect costs will pay for the researcher, or (2) attracting more and better and higher-paying students by virtue of having someone who teaches or researches something desirable to students, both undergrad and graduate, and/or attracting student or departmental training-type grants.

          How does one get grants? By impressing people on grant review boards. How are grant review boards impressed? By people who push the right kinds of buttons:

          1) “innovative” research (things seen as stuff no one has ever done before, regardless of whether that’s true)
          2) “translational” research (from basic science to clinical practice)
          3) “interdisciplinary” research (building coalitions of multiple disciplines)
          4) “policy relevant research” (providing the kinds of stuff governments want to hear)
          5) “politically hot topic / sexy research” (terrorism, megacities, stem cells, smart devices, internet of things, virtual reality, high performance computing, stochastic modeling, genome sequencing, genome editing, …)
          6) “building large scale infrastructure projects” (high performance computing clusters, full-scale engineering testing labs, human stem cell research centers, earthquake prediction centers…)
          7) …. you get the picture

          Besides a choice of topic, how can you get grants? By convincing the granting agency that you’ve gotten grants in the past and you’ve published a lot of stuff about your topic. Since most people on the grant committee won’t really be in your direct field, they will have to rely on signals to determine if you know what you’re talking about. These signals will include:

          1) Number of papers you’ve published in the field
          2) The names of journals in which you’ve published
          3) The names of co-authors and collaborators you have worked with
          4) Whether you’ve been asked to give important talks at important meetings about your topic…
          5) Whether you qualify for special consideration as member of a disadvantaged class compared to your field: women, racial minorities, whatever.

          RESULT:

          Universities, in attempting to build a portfolio of researchers who “pay their way” and thereby maximize the university’s income, look to the signals that researchers send to granting agencies, compare them to the signals known to successfully attract funds and students and so forth, and base initial hiring decisions on those signals, as well as tenure decisions.

          End result: only those people who successfully send the right kinds of signals make it through to the end-game of university life.

          It’s not that hard to figure out, it just takes looking at the system as a system, and seeing the forest… instead of the trees.

        • Daniel Lakeland says, “What people don’t seem to realize or be willing to believe is that: UNIVERSITIES ARE TAX ADVANTAGED HEDGE FUNDS”.

          Yes, I have come to believe this disconcerting truth. Ron Unz’s website is controversial, but his 2012 article, “Paying Tuition to a Giant Hedge Fund”, about Harvard University (albeit primarily its undergraduates), might be worth glancing at for additional expansion of Daniel’s observation.

  2. Thanks for writing this. It was especially useful to see Jordan’s extended take on it. I also wanted to add that you were correct in saying that in an earlier draft I said “disingenuous”, but I later changed this to “incomplete”, for the reasons you note – we aren’t mind readers.

    • Jordan is like a good trainer — he really pushes you, which can be painful at times, but it’s like “feeling the burn” in order to develop your muscles (or, in this case, understanding).

    • I hoped I’d see a comment from you here. I really liked your post!
      This part made me chuckle ruefully: “Frankly, I don’t think academics are very good at self-policing professional standards. I’m sure we have processes in place to deal with things like sexual harassment, fraud, and abject incompetence – but I don’t get the sense they work very well. I can’t see how it would work any better for this.”

  3. I’d like to suggest that we make a greater effort to distinguish Corrections from Retractions. Face it: retractions are stigmatized, and people associate the concept with intellectual fraud. We need to cleanly break up the two concepts, and to emphasize that Corrections are normal and part of science. Someone should not be embarrassed to have a paper where a correction is issued.

    OK, but where to draw the line? I would argue that unless there is fraud, we should go for corrections only. Yes, even with Wansink. I think the weight of many corrections would make it clear that a researcher’s work is not reliable. Because if you ask for a retraction, authors will be defensive and will refuse to acknowledge the mistakes.

    Yeah, but with the corrections there is nothing left: a non-paper paper! Yes, but that’s the point: the journal should be embarrassed to have published a non-result paper. The outcome will be, “Well, we published a paper that found nothing.” I want journals to share the burden; otherwise authors have an incentive to continue to mine the data and publish spurious results.

      • Except where fraudulent practices were followed: retractions for sure. STAP cells, everything Wansink ever published, etc. We should separate that kind of behavior from “I tried hard to get the right answer, but it turns out I was wrong or made an error.”

    • JAMA has an approach in place as part of its standard procedures. From Retraction Watch’s interview (published 2016-06-20) with Annette Flanagin, Executive Managing Editor of JAMA Network:

      Retraction Watch: How do you decide whether a paper will be retracted and replaced, or just retracted?

      Annette Flanagin: For articles with confirmed research misconduct (eg, fabrication, falsification), we will publish a Notice of Retraction and retract the article. However, several studies have shown that about 20% of retractions are due to some type of major error, not misconduct. And we need a mechanism to address honest pervasive error (ie, unintentional human or programmatic errors that result in the need to correct numerous data and text in the abstract, text, tables and figures, such as a coding error) without the current stigma that is associated with retraction. Thus, for articles with honest pervasive error, in which the corrections needed to address the errors result in statistically significant changes to the findings, interpretations, or conclusions and for which the methodology/science is still valid, we will consider publication of a Notice of Retraction and Replacement.

      ———-

      September 2017: JAMA Pediatrics issues a Notice of Retraction and Replacement for “Can Branding Improve School Lunches?” by Wansink et al.

      ———-

      October 2017: JAMA Pediatrics issues a Notice of Retraction for the replacement version of “Can Branding Improve School Lunches?”

      • “in which the corrections needed to address the errors result in statistically significant changes to the findings, interpretations, or conclusions..”

        … you know they are just gonna p-hack their theory until their interpretation of Beta becomes significant.

        #WheresPointerOutGuyWhenYouNeedHim #FreePointerOutGuy

    • I agree that “Someone should not be embarrassed to have a paper where a correction is issued.” But I also believe that we need to accept that honest mistakes sometimes get into publication, so that retractions are not always the result of fraud. So I think a “retraction because of errors” needs to have a place.

  4. Reading the Allison et al. article, I couldn’t help relishing the unintended irony that flowed from their definition and use of the word “error”. The probable error of a mean, indeed. Some say the English language has too many words, but I say it has too few. One more in particular would be welcome. It would convey all at once the frustration, dismay, and alarm that follows when some in the crowd begin defending the Emperor’s new clothes (and his tailors) after it becomes obvious that he isn’t wearing any at all.

  5. Hi Andrew,

    Regarding “Eureka bias”, there is an interesting meta-analysis from Nowbar et al. (2014) which analyzed the relationship between discrepancies in publications and effect size in stem cell research. The paper is here:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4002982/

    Discrepancies are defined as:

    1. Discrepancies in the design—for example, conflicting statements as to whether the study was randomized.

    2. Discrepancies in methods and baseline characteristics—for example, sample or subgroup sizes that could not be an integer number of patients.

    3. Discrepancies in results—for example, conflicts between tables and figures or impossible values.

    I have analyzed Nowbar et al. (2014) as a Bayesian exercise in Verde (2017):
    https://www.intechopen.com/books/bayesian-inference/two-examples-of-bayesian-evidence-synthesis-with-the-hierarchical-meta-regression-approach

    Nowbar’s meta-analysis is transparent (e.g., it gives the names of the studies’ authors, year of publication, etc.). What emerges is that in a particular area of medicine (stem cell treatments), where the evidence is conflicting, the higher the quality of the research, the lower the effect size. A relevant message for patients trying to get better health treatment.

  6. If one is criticized, the best strategy is to stick to the merits; otherwise one gives an opening to sideline the meritorious claims in short order.

    That said, I try to resist the self-righteousness that goes along with criticizing or critiquing others, and I won’t contemplate critique unless I have something useful to add.

    I do think the backdoor sniping that ensues is just part of all cultures. In academia it has been pronounced, I admit. But I have also come across some totally wonderful personalities. Here people skills do matter: the ability to appreciate others’ good points, contributions, etc. In many circles, I have witnessed a broader tendency to complain, a view I shared with Robert Hughes, author of Culture of Complaint, and a habit of cataloguing grievances too. Such are the habits of mind that overshadow any specific legitimate critique effort, in turn undermining its merits.

  7. I see the narrative of the “harsh but truthful scientist error-correctors coming from the slums” attacking the “Ivy League/otherwise powerful scientist error-makers unwilling to concede their errors” repeated enough times that it is setting in.

    What I actually see is a more or less large majority of researchers trying to do good work and a small minority of researchers who go for the *lucrative conclusion*, be it an overhyped pseudo-finding or standing on the face of a sufficiently famous individual to make a name for themselves (no, Andrew, I don’t mean you at all).

    Anyhow, trash talk is what it is — trash talk — and mostly serves the above purpose or some other weird satisfaction; I fail to see its role in bettering science.

  8. Andrew, as far as I know, the problem with Jordan’s article was hardly the title. Allison specifically pointed out this statement as being problematic in his talk; in the Medium article, Jordan said,

    “It wasn’t that long ago I was a PhD student sitting in a room full of inept professors with pompous, superfluous titles who ended my academic career, and now here I am returning the favor.”

    This makes the entire thing seem very, very personal, as if he’s drawing on his past experience, holds a large grudge against academia as a whole, and is out for blood. While I’m conflicted on the idea of censuring any individual for engaging in ad hominem attacks (which seem a bit difficult to define), I can see why Allison and Brown wrote what they did, having interpreted that first statement of the article.

    • What does this read like? Jordan is out to end the career of the subject of the article. I know he wasn’t and it may have just been a statement he put out there to describe his frustrations with academia, but it does *read* like that, if you haven’t read any of Jordan’s other works.

      • Yes, I initially had a largely negative view of Jordan, but have come to see him in the “good trainer who works you hard” analogy that I described above. Tactful he ain’t, but he makes his points well.

    • It doesn’t bother me at all if Jordan ends Wansink’s career. Would it bother you if someone discovered that a judge had been taking bribes, or had been consistently prejudiced against people with certain ethnic backgrounds, or the like, and the person who discovered that then set out to end that judge’s career? Everything uncovered about Wansink shows that he’s in the same kind of camp: he willingly subverted the public trust, scientific knowledge, and millions of dollars of grant money through intentional bad actions which have been known to be unscientific for decades. The evidence seems pretty strong.

      • Daniel, this is a false equivalence, but I likely understand what you are trying to say. Regardless, no one should be out to end anyone’s career. I don’t think Jordan was out to do this at all. We should be attempting to correct people’s mistakes in the hopes of improving the process overall, not trying to end their careers.

        • No, I don’t think you understand what I’m trying to say. I’m trying to say that if the things that have come out about Wansink are really true, then we *should* be trying to end his career, and probably prosecuting him for fraud and trying to recover damages on behalf of the people of the United States.

          Based on what I’ve read of his public record of publications, he seems to consistently report that he’s performed experiments leading to results which could never have happened. If that’s true, he’s a con man who conned granting agencies acting on behalf of the government of the United States into distributing funds which were supposed to fund science.

          It’s not a false equivalence at all.

        • Let us also remember that one person has an academic career at times at the expense of another. Person X gets the job; person Y does not and withers on the vine without funds, stuck, or even out of the academy. What if it’s ending a career that should have never begun? Or ending a career that could have — should have — gone to someone else?

        • It’s worse than that because false-science rent-seeking behavior seems to garner more funds than a typical careful researcher, and so each of these rent-seekers displaces more than 1 alternative good researcher, and also tends to move fields in unproductive directions… further wasting resources. It wouldn’t surprise me if Wansink’s behavior was responsible for wasting resources equivalent to 5 to 10 good researchers all told.

        • From my point of view it is even worse, because this crappy research may form the basis for very critical policy decisions. Do you really want your government making billion-dollar policy decisions that can kill people or devastate lands based on fallacious results?

          Researchers sometimes forget that their paper(s) may buttress the policy recommendations that go to the prime minister’s or president’s desk.

        • I don’t agree with Daniel that Wansink “willingly subverted the public trust, scientific knowledge, and millions of dollars of grant money through intentional bad actions which have been known to be unscientific for decades”; I think of him more as a schlemiel who got in over his head and didn’t really know what he was doing.
          But I do agree that he has been such a bull in a china shop that it is ethical to try to “end his career” in order to prevent him from doing further damage.

      • I don’t necessarily take issue with trying to have someone removed from their post. However, in this particular circumstance I believe Jordan and others who are trying to bring scientific flaws to light would be better served by not explicitly stating that motivation in such a tactless manner.

        If you are criticizing someone or a group of people (in this case, the people doing research with the same tools as Was, which is a lot of people) and you want them and others to listen to you, you had better not concede the moral high ground. As soon as they can point out that you are out for blood (explicitly stated), it allows them to say, “Hey, this cat is just out for blood. It’s unprofessional. No need to listen to this guy.”

        In short, in this circumstance the explicitly stated personal motivations for the scientific criticism significantly detract, in my opinion, from the ability of that criticism to be heard by the parties one wishes to reach. And that is a shame when the science is on one’s side.

        • Allan:

          I disagree.

          First, I am a proponent of openness, and if Anaya wants to be open about his motivations, I think that’s great. One good thing about Wansink is that, although he and his colleagues have over and over again misrepresented their data and research methods, he (Wansink) has been pretty open about his motivations: he wants to do cool stuff, he wants to be on TV, etc. I’d like to see more of that. The original blog by Wansink that got all this discussion going was admirably open about how he organized his projects and research team.

          Second, I feel like this is a game that we (the critics) can never win. The trouble is not so much in the words as in the content. What we’re saying is that Wansink published bad papers in which the claims were not supported by the data. There’s no great way to sugar-coat this message. Indeed, in many situations it seems that researchers go out of their way to take personal offense at non-personal criticism. Or else, as in Wansink’s case, they simply dodge and avoid straight scientific criticism for years on end.

          More generally, see this discussion of the Javert paradox. In short: all of this takes work, and people need some motivation. If it’s not money or fame, it might just be annoyance at seeing bad work that gets millions of dollars of government funding and is regularly featured in the news media. I don’t see how Anaya’s annoyance is any less legitimate than Wansink’s desire for fame.

          That said, sure, I think there’s room for strategy in how to present one’s arguments.

        • Allan:

          It’s also _easy_ to cause offence, which might only be realised afterwards.

          The first sentence here apparently caused offence http://statmodeling.stat.columbia.edu/2017/08/31/make-reported-statistical-analysis-summaries-hear-no-distinction-see-no-ensembles-speak-no-non-random-error/#comment-559330

          The “I find unconvincing if not even wrong-headed” did not seem that offensive when I wrote it.

          Additionally, with Anaya’s disclosure of his past – if he did not disclose that and someone else brought it up – wouldn’t that likely be worse?

        • Keith, Andrew,

          I agree almost entirely with both of your statements. However, there is some nuance I think both of you are missing.
          If the intent of the criticism is to effect change (e.g., have Was removed from his post), then one is not trying to convince those who care about substance rather than presentation (i.e., some/most readers of this blog). One is trying to convince the people who put that person there in the first place (who are likely not of the same vintage).

          In my experience, convincing people who have been doing things wrong for a long period of time (e.g., Was, his colleagues, etc.) or those who have facilitated such things (e.g., funding agencies, journals, his institution, etc.) – which are entirely the people who need convincing if one actually desires to remove Was from his post – becomes a much harder sell when they can claim the moral high ground and use that as an automatic reason to dismiss potentially valid claims without evaluating their merit.

          In this particular case, concession of that moral high ground occurred the moment Jordan stated he was explicitly out to end Was’s career. It’s that explicit statement of motivation that forms the entirely unnecessary roadblock. It is a less burdensome task to get people over the following: to be a prominent scientist worthy of funding, one must not make a bundle of errors -> Was can be shown to make said bundle of errors over and over again (some errors seem more like fraudulent practice than errors, but it’s not definitive from the data available) -> Was should not get funding / be considered such a great researcher (by the initial premise).

          If one agrees with the initial premise (which is virtually unobjectionable) and then objectively evaluates the evidence, the conclusion, though messy/hard to swallow, is certainly true and has a much better chance of being recognized as such.

          To be crystal clear, it is not the fact that Jordan and others believe Was should be removed from his post, nor their statement of this belief; it is their explicitly stated motivation, which gives the very people they desire to convince an automatic reason not to evaluate the evidence objectively, so that they never follow it to the logical conclusion. It’s this counterproductive, tactless way of writing that I take issue with.

          To address Keith directly, on a slightly different but related note: many will take offence at what we say, and that is perfectly okay. To have productive discourse is, by its very nature, to offend. It’s okay to say that Was should not be considered a prominent researcher; but this statement, in my opinion, is more likely to be heard by others if it is arrived at by the logical chain of inference described above than after “I want to end this man’s career.” [not an exact quote, made up by me at this late hour]

          And absolutely, disclose everything! But with tact. As an example, “I was run out of academia by these pricks (…poor scientists)” [not Jordan’s words, a made-up quote] is a hell of a lot worse than “I started a career in academia and after a brief period I had exited the practice altogether. The standard practice of some, or in my experience, most researchers was below a level of care that I would consider fit for those with such responsibility. Below describes an example of one researcher whose practices I believe to be of similarly low quality. Here is why….”

          “And absolutely, disclose everything! But with tact. As an example, “I was run out of academia by these pricks (…poor scientists)” [not Jordan’s words, a made-up quote] is a hell of a lot worse than “I started a career in academia and after a brief period I had exited the practice altogether. The standard practice of some, or in my experience, most researchers was below a level of care that I would consider fit for those with such responsibility. Below describes an example of one researcher whose practices I believe to be of similarly low quality. Here is why….””

          I am both a fan of not beating around the bush and of trying to be civil, but there are limits to the latter. Calling someone a “prick” is not that bad when you can back it up with reasons, presented before or after naming him that.

          I also think that there might be a “danger” in trying to be as “polite” and “gentle” as possible. For instance, why is “publication bias” not called something like “hiding results”? Or why are “questionable research practices” like “HARKing” or omitting analyses/conditions/etc. not simply called “lying” or “cheating”? And if you could just as well describe them using the harsher terms, would that matter/have mattered for solving these problematic issues in science?

          To me, the whole “tone debate” (and the possibly related usage of “gentle” terms for very problematic issues) might be a way to 1) shut people up and/or limit the impact of their words, and 2) severely misrepresent the severity of the problematic issue.

          (There must have been some research into the possible effects of using “gentle” terms/words instead of “harsh” words to describe something “bad”, and what the effects of that in turn could be.)

        • Anon:

          One reason I prefer to talk about “forking paths” rather than “cheating” is because I think lots of researchers do these questionable practices without realizing the problem. Mathematically it may be indistinguishable from cheating (Clarke’s Law) but it feels different from the perspective of the researcher. So terms such as “forking paths,” “harking,” “qrp,” etc., seem to me to be more accurate descriptions in many cases. And these terms are general enough that they do include cheating as a special case.

          Kind of like in a running race, if a runner keeps jumping the gun. Is he cheating, or does he just have bad habits? I don’t know, but in any case he’s jumping the gun so his times are wrong.

        • “One reason I prefer to talk about “forking paths” rather than “cheating” is because I think lots of researchers do these questionable practices without realizing the problem”

          I think I understand that. I also think a lot of scientists should have known these practices are wrong, and I think a lot did know. I mean, if you do research and only publish your “significant” results, at what point in time do (or should) you realize that this might not be a very scientific way of doing science?

          And how come I had discussions with my fellow student about it being dishonest to leave out certain (non-significant) conditions of the experiment in her to-be-submitted paper (as I think her advisor said she should do), without ever hearing anything about that practice being right or wrong during my education?

          Regardless of the possible individual responsibility and accountability of researchers concerning questionable research practices, my point could still be valid: perhaps the fact that (some of) these researchers don’t realize the problems has to do with academia’s/science’s possible habit of using fancy/“gentle” words to describe something much more problematic/severe.

          If I remember correctly, someone in a psychological-science methods discussion group said something like “questionable research practices are only called that because calling it fraud would imply the majority of psychological science is fraudulent”.

        • Anon:

          It’s hard to know. I think there’s a feedback loop, or vicious cycle, that goes like this:

          1. Researcher uses bad methods (noisy data, sloppy data processing, forking paths, etc.), obtains “p less than 0.05,” and comes up with a scientific story.

          2. Paper is published in a top journal. Success!

          3. Researcher continues on this path, trains students to do this sort of work, gives invited addresses at conferences, etc.

          4. Return to step 1!

          You ask, “at what point in time do (should) you realize that this might not be a very scientific way of doing science?” The answer may well be “never.” The researcher might never hear such criticism at all, or, if this criticism is heard, there will always be authority figures such as Brian Wansink or Susan Fiske or Daniel Gilbert to assure the researcher that everything is just fine, ignore the haters, etc.

          This is one of the saddest parts of the story, to me, that there are young people, just beginning their careers, who have all sorts of opportunities and could potentially do real science—but they’re being egged on and misled by stakeholders in the old, bad system. Sure, at some level these young researchers must know that they’re doing something wrong, but it’s easy to just listen to the authority figures who are giving them the comfortable advice to keep doing what they’re doing.

        • Allan:

          I agree that tact is a good thing. But there are two arguments against too much tact:

          1. Tactful criticism can be ignored. Wansink ignored tactful criticism for many years, as do lots of other researchers. Often it seems that making noise is the only way to make a change.

          2. Strategy is fine, but honesty has a value in itself. Sure, Anaya being open about his motivations may leave him open to some irrelevant criticism, but on the plus side, by being open, Anaya is involving all of us in his thought processes. I prefer this to a world in which people are strategizing behind the scenes and then not fully representing their views and motivations in public. To put it another way, “politics” may sometimes be necessary, but I think it’s a negative-sum process, and I’d really like to see more openness.

          But really my key point is #1. What’s happened with Anaya is what’s happened with me and others, many many times: You hear about some work that’s been heavily and uncritically promoted, it turns out that it has fatal flaws, you present direct, detailed, factual criticism, which is either completely ignored or met with polite evasion, and then you can choose to (a) let it go, or (b) make a fuss.

          Letting it go is fine, but I think we all owe some thanks to the people who make a fuss. The cost in time, reputation, etc., of making a fuss is high enough, compared to the meager benefits, that there’s an incentive to let it go. Defenders of bad work know there’s an incentive for critics to let it go, hence they (the defenders of bad work) know it can be smart strategy to stonewall. That’s game theory for ya. If there were no “Jordan Anayas” and “Nick Browns” out there, willing to push through anyway, we’d get nowhere.

        • Allan C

          Thanks for your thoughtful post.

          Your arguments make sense to me but I have been (well?) advised that if one really, really wants to get others to do something, one should hire a lawyer or some other professional trained in and experienced at doing exactly that. They _should_ know how to best proceed – so you get what you want.

          I think I am mostly trying to enable others to see how to do better research in their future work. That will include the party being criticized, as they are one such other, but there are many more others, and those others are going to be much more open to actually grasping and carefully considering the arguments being put forward.

          Additionally, more often than not, when I have tried to be more polite in making criticisms I have regretted it, mostly for the reasons that Andrew gives – it’s more likely to be ignored.

    • I continue to suggest that ’emotional blackmail’ has been prevalent in nearly all societies.
      I began to pay attention to it back in the ’90s because there is already a lot of propensity to psychoanalyze others. If you are an empathetic person, you are likely to listen more than judge.

      I suppose that one can conduct meritorious inquiries that are nonetheless driven by past experiences, which then skew how one casts them. I base this on my watching psychologists in Boston interact with and talk about their colleagues and friends. All in all, we are way too judgmental.

      • Don’t get me wrong, I think there shouldn’t be a one-size-fits-all strategy for trying to change behavior. My only worry is that a harsher intervention might work for Wansink but at the expense of possibly scaring the rest of the community further away. Wansink’s behavior is important, but I am more concerned with the average researcher. To be honest, I have no idea how these strategies will affect overall behavior, but my feeling on the subject is that people generally mean well, and constructive criticism, along with avenues where researchers can save face for their errors, could go a long way.

        • Pat:

          It’s my impression that there’s been a lot of constructive criticism of Wansink and others. Unfortunately, Wansink is not unique in responding to constructive criticism in a nonconstructive way. I hope the lesson that others can draw from the Wansink saga is that external criticism can be valuable and that it is not a good idea to try to brush it off.

          Regarding your last sentence: Wansink and others like him have had a million opportunities to save face. When the 150 errors were found in his four published papers, Wansink could’ve admitted right then that he’d screwed up and he could’ve gone back and carefully looked over his old papers. He did not do that. Instead he pretended there was nothing wrong, and continued to hype his work as he’d done before. Even someone like the disgraced primatologist Marc Hauser could’ve saved face at many different points by apologizing to his research assistants and admitting he’d been over-zealous in interpreting his data. Etc. If “saving face” is defined as keeping all your unsupported published claims and not losing any points of reputation, then, no, saving face in such a setting is not possible, nor should it be. But if saving face is defined as keeping your dignity and preserving a reputation as a member of the scientific community, then, yes, it is possible to admit error, and critics have given researchers many, many opportunities to do this.

        • Yes, I’m not a fan of the first definition of saving face either. And I am not that concerned about getting Wansink to change his behavior, as some people may be too far gone. You can’t physically stop him from promoting his work; I am guessing that all you can do is influence the environment in which he states his conclusions. Now what would be the best strategy to handle this problem given these conditions? I don’t know, but I would try to focus on the system that created the Wansinks of the world and not necessarily on punishing him. I’m not saying you should be quiet, but going too negative doesn’t seem like it will work. And by “work” I mean the system changing, not Wansink.

        • “I would try to focus on the system that created the Wansinks of the world and not necessarily focus on punishing him. ”

          I don’t see much (if anything) here about punishing Wansink, but a lot about trying to improve the system that created him.

        • The responsibility for holding Wansink to account for his work lies with his institution, Cornell University. That process has played out via an IRB inquiry that concluded with a 160-page report of findings and requirements. One of the latter requires Wansink to engage a team of independent statistical consultants to vet his work prior to publishing any research. Wansink is a full professor at Cornell. I have no idea how Cornell could, um, remove him as director of the Food and Brand Lab, or from his tenured position. I do know, from speaking with Brian recently, that he doesn’t plan on publishing any research articles anytime soon.

          @Pat – future harm due to Wansink and his (lack of proper) research methodology is probably minimal at this point. Now, the emphasis needs to be on what @Martha said, i.e. on trying to improve the system that created him.

          Andrew’s comment about the necessity of researchers acknowledging error in response to critics is vital. Doing so *SHOULD* make it possible to keep one’s dignity and reputation as a member of the scientific community. Institutions and peers can make the mea culpa process less humiliating. Loss of funding is a concern, but loss of institutional reputation encompasses funding and more, and the likelihood of such a scenario increases when a situation is allowed to persist, unchecked, year after year, even if it hasn’t yet been “caught” by external parties.

  9. To point out the obvious, which is why it has likely gone unmentioned until now, isn’t comparing someone to Trump the same thing as calling them a liar?

    • P.S. Regarding the “liar” thing: It’s clear that Trump and Wansink have both said or written many untrue things that, at least in the short term, have seemed to benefit them personally. Are they “liars”? I don’t know. To lie is to knowingly tell an untruth, and many times it’s been argued that a person who’s telling an untruth is not really lying because he thinks the statement is true, or because he’s so clueless that he doesn’t know what’s true and what isn’t. Sometimes I think it’s not so productive to talk about whether someone’s lying, and it’s better to just focus on the statements being untrue and the processes that allow such untrue statements to be promulgated. In Wansink’s case, these processes include the peer-review system, followed by the publicity system in which news media take false claims in published papers and repeat them unquestioningly as if they were true.

  10. A great example of a very clear ad hominem attack is a guest post by Virginia Barbour about me at https://retractionwatch.com/2017/03/23/agreed-listen-complaint-paper-harassment-began/

    This guest post refers to my efforts to retract a fraudulent study on the breeding biology of the Basra Reed Warbler (Al-Sheikhly et al., 2013, 2015) in a Taylor & Francis journal. The guest post was edited by Alison McCook. Alison McCook never contacted me for comments on this guest post about me by Virginia Barbour, and she never responded to an e-mail of 7 June 2017 (‘proposal for a posting at Retraction Watch about our efforts to retract the fraudulent paper on the Basra Reed Warbler in a Taylor & Francis journal’). Virginia Barbour was at that time chair of COPE. She was at that time affiliated with the University of Queensland (UQ), the Queensland University of Technology (QUT), and Griffith University (GU), all located in Australia.

    Backgrounds and details about our efforts to retract this fraudulent study on the breeding biology of the Basra Reed Warbler can be found at https://osf.io/5pnk7/ and at https://www.researchgate.net/project/Retracting-fraudulent-articles-on-the-breeding-biology-of-the-Basra-Reed-Warbler-Acrocephalus-griseldis and at https://pubpeer.com/publications/CBDA623DED06FB48B659B631BA69E7 and at https://www.birdforum.net/showthread.php?t=303435

    Complaints were filed with COPE when it turned out that the publisher (TF) was unwilling to work with us to retract the fraudulent study. These complaints were filed at the beginning of July 2015.

    Details about the processing of these complaints by COPE are part of the paper “Is partial behaviour a plausible explanation for the unavailability of the ICMJE disclosure form of an author in a BMJ journal?” at https://riviste.unimi.it/index.php/roars/article/view/9073 : “COPE informed me on 26 July 2015 to start with processing the complaints. TF would be requested for comments and I would be copied in this correspondence. COPE told me on 4 August 2015 to act as a facilitator of a dialogue with TF. COPE informed me on 13 July 2016 that the processing of the complaints was terminated and that questions would not be answered. The correspondence was never received and there is until now no dialogue with TF.”

    See for backgrounds also https://forbetterscience.com/2015/10/31/join-the-committee-ignore-publication-ethics/

    We had in the meantime prepared a report, “Final investigation on serious allegations of fabricated and/or falsified data in Al-Sheikhly et al. (2013, 2015)” (at https://osf.io/vbdw8/ ). This report has been in the possession of all stakeholders (so also COPE, publisher TF, and the EiC) since 1 July 2016. It is stated at https://osf.io/5pnk7/ : “until now no one has been able to rebut / refute any of the conclusions of this report. Herculean efforts to get reviews / comments from experts (within this field of research) who rebut / refute any of the findings of this report were unsuccessful. There are thus no experts (within this field of research) who rebut / refute any of the findings / conclusions of this report”.

    This is still the case.

    The guest post by Virginia Barbour at https://retractionwatch.com/2017/03/23/agreed-listen-complaint-paper-harassment-began/ is thus highly remarkable.

    (1): this guest post does not contain any details about our repeated requests to get access to the full set of raw research data of the fraudulent study;
    (2): this guest post does not contain any details about our repeated requests for reviews / comments from experts (within this field of research) who rebut / refute any of the findings of the report “Final investigation on serious allegations of fabricated and/or falsified data in Al-Sheikhly et al. (2013, 2015)”;
    (3): this guest post does not contain any details about the existence of the report “Final investigation on serious allegations of fabricated and/or falsified data in Al-Sheikhly et al. (2013, 2015)”.

    It seems likely that this guest post by Virginia Barbour is a result of my comments on the draft of the new version of the Australian Code for the Responsible Conduct of Research (ACRCR); see https://osf.io/ma85h/

    The University of Queensland (UQ) is at the moment still processing a formal complaint against Virginia Barbour with serious accusations of “wilful concealment or facilitation of research misconduct by others”. This complaint was filed at the end of February 2016. Given the terms “procedural fairness” and “natural justice” in the ACRCR, this long processing period suggests that it is tough / impossible for UQ to rebut / refute the serious allegations of “wilful concealment or facilitation of research misconduct by others” by Virginia Barbour.

    Lyndon Branfield, Senior Legal Counsel of the General Counsel’s Office of publisher BioMed Central Limited, told me on 25 August 2017 in regard to the processing of this complaint: “Until The University of Queensland have made a public finding, no further action will be taken in this matter.” This has until now not happened.

    Virginia Barbour wrote in this guest post about me: “As a membership organisation, COPE does not have regulatory authority over journals or publishers”.

    COPE is able to terminate the membership of its members, and is thus able to publish a press release / posting on its website stating that it has terminated the membership of its member Taylor & Francis for TF’s refusal to retract the fraudulent study on the breeding biology of the Basra Reed Warbler.

  11. This reminds me of a recent incident in the Economics community. A paper published (and celebrated) by the flagship journal of the Economics profession, the American Economic Review, was basically a blatant copy of an already published paper. When the (mostly) anonymous economist community on econjobrumors.com pointed it out, the whole community was branded misogynist by the powerful econ mafia (read: people at top-ranked places like Harvard/MIT/Stanford/Berkeley and the like). Even some economists who were bold enough to criticize the authors were harassed (in the pages of none other than the NYT) or forced to change their commentary (looking at you, Brett). In the end, the copiers just added a footnote (infamously known as “footnote 7”) and kept their coveted top-5 publication (for which many hard-working economists would give an arm and a leg).

    Overall, it is pretty depressing, and it shows that in academia, just as everywhere else, people with connections and power can get away with anything.

    You can read about the entire saga here: https://www.econjobrumors.com/topic/new-family-ruptures-aer-nber-is-rip-off-of-obscure-paper

    The original paper: http://www.ncbi.nlm.nih.gov/pubmed/21321257
    The copy: https://www.aeaweb.org/articles?id=10.1257/aer.20141406

    • Mamma Mia wrote: “Overall, it is pretty depressing, and it shows that in academia, just as everywhere else, people with connections and power can get away with anything.”

      Yup.

      Virginia Barbour has for a prolonged period been a member of the editorial board of the BioMed Central journal “Research Integrity and Peer Review”: https://researchintegrityjournal.biomedcentral.com/about/editorial-board

      I had contacted this journal about the formal complaint filed with the University of Queensland (UQ) containing serious accusations of “wilful concealment or facilitation of research misconduct by others” by Virginia Barbour. UQ was at that time still processing this formal complaint and had at that time not yet rebutted / refuted this serious allegation. Lyndon Branfield, see above, responded on behalf of the editors of this journal.

      UQ is at the moment still processing this formal complaint, so UQ has still been unable to refute / rebut the allegation of “wilful concealment or facilitation of research misconduct by others” by Virginia Barbour. It is thus, in my opinion, remarkable that this journal still lists Virginia Barbour as a member of its editorial board.

      The journal “Research Integrity and Peer Review” does not publish many papers. To date it has published four articles in 2018. One of them is some sort of editorial; another is a paper co-authored by Virginia Barbour. See https://researchintegrityjournal.biomedcentral.com/

      The choice of the editors of the journal “Research Integrity and Peer Review” to keep Virginia Barbour on its editorial board says, in my opinion, a lot about this journal.

  12. Andrew wrote: “I’d say the whole thing is a big joke, except that I am being personally attacked in scientific journals, the popular press, and, apparently, presentations being given by public figures. I don’t think these people care about me personally: they’re just demonstrating their power, using me as an example to scare off others, and attacking me personally as a distraction from the ideas they are promoting and that they don’t want criticized. In short, these people whose research practices are being questioned are engaging in ad hominem attacks in scientific discourse, and I do think that’s a problem.”

    Andrew is correct.

    Virginia Barbour et al. have used their power to publish a severe ad hominem attack on me at https://retractionwatch.com/2017/03/23/agreed-listen-complaint-paper-harassment-began/ . See for background my comment at http://statmodeling.stat.columbia.edu/2018/05/17/think-research-research-criticism-research-criticism-criticism-research-criticism-criticism-criticism/#comment-737186

    Virginia Barbour et al. have also used their power to publish a Statement in a scientific journal about the refusal of COPE to work with us to get the fraudulent study on the breeding biology of the Basra Reed Warbler retracted.

    This Statement was published in the same journal that is refusing to retract the fraudulent study. See https://www.tandfonline.com/doi/full/10.1080/09397140.2016.1172405?src=recsys

    This Statement was published on 4 April 2016. The University of Queensland (UQ) was at that time processing a formal complaint against Virginia Barbour for serious allegations of research misconduct (‘wilful concealment or facilitation of research misconduct by others’). This complaint was filed at the end of February 2016.

    It is stated in this Statement: “In this respect, we note the serious accusations levelled against the Committee on Publication Ethics (COPE), and against the Chair of COPE Dr Virginia Barbour in particular. We strongly censure these accusations. COPE and Dr Barbour have acted professionally and appropriately at all times in attempting to help achieve a resolution to the concerns raised. COPE is an advisory body and its remit expressly excludes the investigation of individual cases.”

    We were not informed about this Statement, neither during its preparation nor when it was published.

    Virginia Barbour et al. have also used their power to add a specific ad hominem attack on me to an Expression of Concern, which was published on 5 July 2016 at
    https://www.tandfonline.com/doi/full/10.1080/09397140.2016.1208389?src=recsys

    “In the light of sustained and unwarranted allegations, we have also published this Statement. We note these allegations were not made by any of the authors of the “Comment” and the “Rejoinder”.”

    The last sentence was added later on. The University of Queensland is still processing this formal complaint against Virginia Barbour.

    John Burton is one of the many co-authors of Porter et al. John Burton wrote on 6 September 2015 to COPE (and to others):

    “a summary of the whole affair, but concentrating on the Academia cover up in Nature?”

    See https://www.worldlandtrust.org/who-we-are/staff/john-burton/ for backgrounds about John Burton.

  13. I haven’t met anyone who likes criticism, though attention to it can be invaluable. We may need a couple of drinks, some rumination, kicking the dog or whatever to examine criticism with anything approaching objectivity.
    Anaya’s email has nothing in it to argue with, though it borders on an argument by verbosity. I suspect that reflects the degree to which criticism got under his skin, even though he anticipated it. At least that’s frequently my temptation when criticized.
    And I think Joe has a point in that ultimately we want to eradicate systemic errors. I do not believe this precludes criticism of individual studies or a body of work. I am very appreciative of retractionwatch.com and believe such efforts are necessary to bring systemic errors to light.
