“La critique est la vie de la science”: I kinda get annoyed when people set themselves up as the voice of reason but don’t ever get around to explaining what’s the unreasonable thing they dislike.

Someone pointed me to a blog post, Negative Psychology, from 2014 by Jim Coan about the replication crisis in psychology.

My reaction: I find it hard to make sense of what he is saying because he doesn’t offer any examples of the “negative psychology” phenomenon that he is discussing. I kinda get annoyed when people set themselves up as the voice of reason but don’t ever get around to explaining what’s the unreasonable thing they dislike.

I read more by Coan and he seems to me to be making a common mistake, which is to conflate scientific error with character flaw. He thinks that critics of bad research are personally criticizing scientists. And, conversely, since he knows that scientists are mostly good people, he resists criticism of their work. Well, hey, probably 500 years ago most astrologers were good people too, but this doesn’t mean their work was any good to anyone! It’s not just about character, it’s also about data and models and methods. One reason I prefer to use the neutral term “forking paths” rather than the value-laden term “p-hacking” is that I want to emphasize that scientists can do bad work, even if they’re trying their best to do good work. I have no reason to think that John Bargh, Roy Baumeister, Ellen Langer, etc. want to be pushing around noise and making unreplicable claims. I’m sure they’d love to do good empirical science and they think they are. But, y’know, GIGO.

Good character is not enough. All the personal integrity in the world won’t help you if your measurements are super-noisy and if you’re using statistical methods that don’t work well in the presence of noise.

And, of course, once NPR, Gladwell, and Ted talks get involved, all the incentives go in the wrong direction. Researchers such as Coan have every motivation to exaggerate and very little motivation to admit error or even uncertainty.

My correspondent responds:

This is also a problem in medicine as I am sure you already know. This effect should be named: so much noise it makes you deaf to constructive criticism :) Unfortunately, this affects many people’s lives and I think it should be brought to light. Besides, constructive criticism is one of the pillars of science.

As Karl Pearson wrote in 1900:

In an age like our own, which is essentially an age of scientific injury, the prevalence of doubt and criticism ought not to be regarded with despair or as a sign of decadence. It is one of the safeguards of progress; la critique est la vie de la science, I must again repeat. One of the most fatal (and not so impossible) futures for science would be the institution of a scientific hierarchy which would brand as heretical all doubt as to its conclusions, all criticism of its results.

P.S. This post happens to be appearing shortly after a discussion on replicability and scientific criticism. Just a coincidence. I wrote the post several months ago (see here for the full list).


  1. Matt Skaggs says:

    Presumably the readers who came over from the New York Times have moved on.

    One thing I have noticed is that academia has an odd, often dysfunctional “auditing” mechanism. The production of a peer-reviewed scientific paper is a process. Processes run off the rails. The folks that are trained to keep the process running smoothly are auditors. Expertise in auditing does not, and should not, imply expertise in problem solving.

    Typical processes outside of academia involve a desire to minimize variation. Obviously, that is not a goal in writing scientific papers. This unfortunately leads to innovation in things like statistical approaches when no innovation should be present. Since peer review is not particularly effective in a general sense, the result is a lack of critical review of the work and some really nutty stuff getting through. In some cases, it seems that razzle-dazzle statistical treatments add a wow factor if the math is beyond the comprehension of the peer reviewers.

    With these thoughts in mind, a professor of statistics should absolutely be using his knowledge of good scientific practices to audit published works, and whether “help” is offered is beside the point:

    Criticism: Your paper should have been preregistered.
    Help: Your next paper should be preregistered.

    No matter how well you audit, sooner or later someone will call you a jerkface. It is a thankless task.

    • Ben Prytherch says:

      “In some cases, it seems that razzle-dazzle statistical treatments add a wow factor if the math is beyond the comprehension of the peer reviewers”


    • TL says:

      In reality, a lot of science operates on eminence-based auditing.

If you discover a flaw in a piece of research and bring it into the open, your resume is matched against that of the PI of the flawed research. If their resume is more impressive than yours, you get called a methodological terrorist and a bully, and nothing happens. Even if the paper eventually gets corrected/retracted, they keep their veneer of authority under some excuse, e.g. that the paper was written by a former post-doc who went back to their country and was never heard from again (which somehow makes everything OK).

  2. Jim Coan says:

Thanks for your comments. I made a decision to avoid specific examples (other than Neuroskeptic, who’s anonymous), because I didn’t want to single anyone out. I found it more comfortable to speak in generalities, with which you are of course free to disagree. I totally understand why this would be frustrating, and see things like your frustration as the tradeoff I was willing to make in order to avoid causing anyone harm or getting into a protracted electronic argument. I also completely agree with you that we shouldn’t conflate scientific errors with character flaws. Good people can do shitty science and creeps can do good science. I guess my main points were (or were intended to be): Civility, courtesy and professional decorum are worthy goals (so thanks for “forking paths”); that moral outrage is easy, plentiful on social media, and should not be conflated with scientific rigor; and that moral outrage and uncivil behavior can discourage scientists from creative pursuits. As a corollary, I suggested (and continue to suggest) that we spend more time working out disagreements in person or over the phone. If you’re thinking these aren’t earth-shattering points, I don’t disagree. Happy to discuss further via phone! Or let me know if you ever get down to Charlottesville!

    • Andrew says:


      Thanks for the reply. After the recent flap based on that NYT article, I definitely see the drawbacks of naming specific examples! Big arguments are negative-sum, and it can be worth trading off some clarity and specificity in order to avoid the quicksand.

      • Jack says:

Andrew, you should keep naming names; that’s the only kind of exposure that changes behavior. When you make a general comment, people never think it’s them. Nobody would see a problem with Cuddy’s research, or with Wansink’s research, if it weren’t for the names. Their friends would be like “oh, I know this kind of behavior is bad, but Cuddy/Wansink do not do it! they are great scientists, high-profile, great institutions! how could they!?”

    • Anonymous says:

      “As a corollary, I suggested (and continue to suggest) that we spend more time working out disagreements in person or over the phone”

      That would be a shame. Science to me seems all about disagreement, and discussion. Science to me seems all about making these things public so others can learn from them, and can participate in the discussion.

It is not clear to me what exactly you propose… Should we all be phoning scientific results to each other? Should you have phoned people to read them your blog post? Should you have phoned Prof. Gelman to read him your comment here?

      I sincerely do not understand what it is you are trying to make clear. This is exacerbated by this section in your blog post:

      “So here is one broadly generalizable idea: let’s actually, literally, talk to each other. And talk not only for the purpose of accurate replication, but also when the impulse arises to publicly criticize. Friend and fellow EGAD alum Patrick McKnight has also suggested that we should collaborate more and more often—that indeed we need to find better ways to reward collaborative problem solving instead of individual paper production.”

      I followed the hyperlink of/below the words “we should collaborate more” and it brought me to a page about open science collaboration, which to me is about openness, inclusivity of participation, etc. For instance, that page includes the sentence: “Broadcasting problems openly increases the odds a person with the right expertise will see it and be able to solve it easily.”

      To me, these things are just about the exact opposite of “spending more time working out disagreements in person or over the phone” and “happy to discuss via phone!”.

      I am genuinely puzzled…

      • Jim Coan says:

        Boiled down, your point is that it’s best if critiques and discussions are public, so that others can benefit from them. I agree. In fact, I kind of wish I didn’t agree, because I really do think that things are more likely to be worked out effectively and collaboratively in person or over the phone. By “effectively and collaboratively,” I mean to say brought to a conclusion that satisfies everyone with a minimum of animosity. To that end, I even started a podcast where I record conversations I have with other scientists about everything from their work to their lives and, yes, to controversies in the field. It sounds ridiculous perhaps to record in-person interactions, but, you know, I don’t know–I’m trying it out. Whatever else is true, I think you raise a vitally important point and I take it seriously. Thanks for your comments!

    • Jack says:

Discuss over the phone. Right, so if I don’t have time to call everyone who writes nonsense, I better keep quiet. Seriously, these people just want to play the publishing game and not be bothered.

  3. Marcus says:


I think that there is sometimes a tendency to see those of us who engage in what you call “negative psychology” as the unreasonable parent yelling at their kid, who is happily playing in his room, because his room is a little messier than desired. That’s not how many of us see it; we are yelling that the entire house is on fire and that we all need to get out now, call the firefighters, and possibly demolish the house so that it can be rebuilt in a safer manner.

Perhaps there is a tendency to pick on some individuals a little too much, but consider the following (all examples from my specific field): in a stunning proportion of cases, data, findings, and hypotheses from a dissertation mysteriously change as that dissertation turns into a journal article (O’Boyle et al., 2017); almost 40% of articles claim to test specific theoretical models but then actually test quite different models without disclosing this change (Cortina et al., 2016); a third of all articles using structural equation models report fit statistics that are internally inconsistent/impossible (a paper of ours in review right now); 50% of researchers admit to engaging in HARKing and 29% admit to selectively excluding data in order to support hypotheses (Banks et al., 2016); and 72% of researchers admit to p-hacking and 78% fail to report all dependent variables (John et al., 2012). In my view this is not a “messy room” scenario but a “house is on fire” scenario, and I am actually surprised that we trust anything put out by researchers in my field. I am not sure if other areas of psychology are any better. Would you eat at my restaurant if I told you that there is a 50% chance that you’d get food poisoning?

    • Jim Coan says:


I hear you–I do. I will cop to a dispositional preference toward collegiality that risks papering over real problems. Taken too far, such a preference would be bad for all involved, which is something I sometimes need to be reminded of. My own approach–one that in this instance has frustrated Andrew, and maybe you, too–is to avoid naming particular individuals if I can. I’ve written papers that are very critical of whole domains of research that *may* have been less impactful for being too nice, I’m not sure. In any case, I still do think the kinds of moral outrage I’ve seen expressed in social media have taken it too far in the other direction. Moreover, I really do realize that my field suffers from significant methodological problems, and I value the tools on offer from the open science movement, as well as the advice from people like Andrew. (My lab is planning on releasing new datasets to the public, preregistering, etc., and we are always publishing methodological pieces on the neuroimaging tools we use.) Finally, if you told me I had a 50% chance of food poisoning at your restaurant, I’d probably call the police!

    • Nick says:

I was interested to see that computer science also has problems with replicability, including familiar problems such as authors simply not responding to email requests for their code, but also code simply failing to compile.

      It is tempting to conclude that a wide range of scientific fields may contain a very substantial proportion of work that is somewhere on the continuum from shoddy to fraudulent. I hope nobody with responsibility for government funding of science comes to that conclusion.

      • Jim Coan says:

        I share that hope–or maybe I hope that whoever is responsible for government funding has the presence of mind to realize that as bad as things may well be, science remains the most successful human project in history.

  4. Ayse Tezcan says:

So nothing of this scientific criticism discourse is new. Historically, the scientific community has always processed and critiqued the evidence to create consensus. Starting from graduate school, we are continually questioned on the reasons for our choices and the merits of our study methods and approaches, and our work is expected to withstand rigorous scrutiny by experts. The science historian Naomi Oreskes points out that every new scientific finding goes through a rigorous process called ‘organized skepticism’ (1). This process makes science reliable and lays a good foundation to build upon. However, what is new now is the presence of the prying eyes of the public. Public participation in these discussions has important implications such as education and awareness. On the other hand, this public exposure has some unintended consequences, e.g. strong defensiveness by the critiqued. To me, though, the most significant unintended consequence of this new world is the diminishing credibility and trustworthiness of science and scientists. I don’t mean we should stop doing what is necessary for the betterment of science, but we should be cognizant of the effects of this discourse on the public and their response. It may be difficult for those outside the scientific community to understand the benefits of this critiquing; they are quick to jump to the conclusion that ‘the scientists don’t know what they are doing’ and stop trusting what science produces. I don’t claim to know the right way of doing this, but it is important to keep in mind because, after all, we are not doing research only for the scientific community but for public consumption.

On the character of the scientists: As I watch the number of retractions, confirmed misconducts, replication problems and plagiarisms grow on my daily Retraction Watch feed, I agonize thinking that the research world is fraught with wicked people. Of course, because I read these posts daily, my perception is highly skewed. However, I believe the distribution of people with potential for misconduct is most likely the same in both the science world and the general population, but we hold scientists to higher standards. Scientists, of course, are human beings with their own biases and fallibility. They may, consciously or unconsciously, be tempted to manipulate their research design/data/analysis to fit the desired outcome, or believe in results that confirm their biases. The career advancement and research funding systems based on the number of publications and impact scores, and the preferential publication of positive findings, may also instigate unethical choices. However, I think the majority of flimsy research likely stems from a lack of knowledge of proper study design, or from ignorance (not to say that is any more acceptable), and has nothing to do with a character flaw.

    Btw, thanks to the NYT article, I have discovered this blog – great way for continuing education for those of us who are no longer in academic environment. :-)


  5. Pierre Dragicevic says:

    I believe Pearson wrote “an age of scientific inquiry” and not “injury”. That would have been 117 years too early.

  6. sentinel chicken says:

I don’t want to, or intend to, speak for Jim, but let me offer this observation: I think what Jim understands intuitively is that it’s hard to have productive conversations and offer constructive criticism about tough issues, like the validity of one’s life work, when one of the participants is feeling defensive. And it’s much easier to arouse defensiveness through electronic communication channels, especially public channels, where the participants don’t have to feel the emotions of the other, and each has room in their mind to project any number of hostile tones and intentions. When you speak with someone in real time, hear their voice, perceive and detect tone and inflection, it’s not as easy to perceive them negatively, feel attacked or lash out. (Think of all the times you’ve imagined how you are going to tell someone off the first chance you got, but when you are finally face-to-face you just couldn’t do it.) I think what Jim is getting at is that we should try to talk to each other like humans, with a modicum of perspective taking and empathy. Science is ultimately a human pursuit. As much as we want it to be rational, quantitative, precise and systematic, it will never be that way as long as humans are involved. Which means, it will never be that way! Instead of getting mad when it can’t be perfect, or feeling like someone is being deceitful if they don’t air their grievances in public, let’s be realistic and practical about how we go about making science better. Jim is being realistic and practical in suggesting that we privilege human-to-human interactions over blog-comment-section flame wars and trolling. His intentions are good and right. He is, after all, a psychologist. He just might know a thing or two about humans.

    • Martha (Smith) says:

      “… it’s much easier to arouse defensiveness through electronic communication channels, especially public channels, where the participants don’t have to feel the emotions of the other, and each has room in their mind to project any number of hostile tones and intentions. When you speak with someone in real time, hear their voice, perceive and detect tone and inflection, it’s not as easy to perceive them negatively, feel attacked or lash out.”

      I recognize that this applies to some (perhaps most?) people, but please bear in mind that this (like most generalities) doesn’t apply to all people. I for one prefer written to oral communication of anything that might provoke emotions. Oral settings require immediate response — and that is more difficult/demanding for me than to be able to sleep on something, or write an initial response then revise it.

    • Jim Coan says:

      Sentinel Chicken: You may speak for me any time you like! Thank you for being a better me than I was.
