The never-back-down syndrome and the fundamental attribution error

David Allison told me about a frustrating episode: he published a discussion pointing out problems with a paper, and the authors replied with . . . not even a grudging acknowledgment; they didn’t give an inch. Really ungracious behavior. No “Thank you for finding our errors”; instead they wrote:

We apologize for the delay in the reply to Dr Allison’s letter of November 2014, this was probably due to the fact that it was inadvertently discarded.

Which would be kind of a sick burn except that they’re in the wrong here.

Anyway, I wrote this to Allison:

Yeah, it would be too much for them to consider the possibility they might have made a mistake!

It’s funny: in medical research, it’s accepted that a researcher can be brilliant, creative, well-intentioned, and come up with a theory that happens to be wrong. You can even have an entire career as a well-respected medical researcher and pursue a number of dead ends, and we accept that; it’s just life, there’s uncertainty, the low-hanging fruit have all been picked, and we know that the attempted new cancer cure is just a hope.

And researchers in other fields know this too, presumably. We like to build big exciting theories, but big exciting theories can be wrong.

But . . . in any individual case, researchers never want to admit error. A paper can be criticized and criticized and criticized, and the pattern is to not even consider the possibility of a serious mistake. Even the authors of that ovulation-and-clothing paper, or the beauty-and-sex-ratio paper, or the himmicanes paper, never gave up.

It makes no sense. Do these researchers think that only “other people” make errors?

And Allison replied:

The phenomenon you note seems like a variant on what psychologists call the Fundamental Attribution Error.

Interesting point. I know about the fundamental attribution error and I think a lot about refusal to admit mistakes, but I’d never made the connection. More should be done on this. I’m envisioning a study with 24 undergrads and 100 Mechanical Turk participants that we can publish in Psych Sci or PPNAS if they don’t have any ESP or himmicane studies lined up.

No, really, I do think the connection is interesting and I would like to see it studied further. I love the idea of trying to understand the stubborn anti-data attitudes of so many scientists. Rather than merely bemoaning these attitudes (as I do) or cynically accepting them (as Steven Levitt has), we could try to systematically learn about them. I mean, sure, people have incentives to lie, exaggerate, cheat, hide negative evidence, etc.—but it’s situational. I doubt that researchers typically think they’re doing all these things.

31 thoughts on “The never-back-down syndrome and the fundamental attribution error”

  1. >”the methods reported in the paper have not been reported in sufficient detail and clarity to determine exactly what was done”

    This is standard; I don’t see why Cuspidi et al. should be singled out for that… I am certain the preceding and subsequent papers in the same journal suffer the exact same flaw.

  2. Why does the Fundamental Attribution Error need to be called “Fundamental”? Why not just “Attribution Error”? Perhaps because the shorter title doesn’t sound big and exciting enough.

    • If some deep and important truth about differentiation and integration can be called the Fundamental Theorem of Calculus, why can’t we call some vague and situational half-truth in psychology the Fundamental Attribution Error?

    • The term Fundamental refers not to the Error, but to the Attribution. That is, determining whether a given behavior was primarily caused by features of the person or features of the situation is the most fundamental aspect of attribution.

    • Hi!

      Bit of a neophyte here, but I’ve got a sincere if perhaps ignorant question to ask: what exactly is wrong with Amy Cuddy’s response to the dust-up over her 2010 study? Is it that she continues to believe in and argue for the existence of behavioral effects of power posing (while acknowledging that the physiological effects are less robust)? Or is it simply that she’s not more upfront about the fact that there may have been serious errors in the original 2010 study?

      I guess I’m just wondering what her reception would have been if she had said something like: “I admit it: there were serious flaws in the original study. Yet despite those, we happened on a real effect, and the behavioral effect has been replicated a number of times, so I still find evidence to believe that it is real and worth studying and so I’ll continue on with my research program.”

      • Kaz:

        Cuddy is free to believe what she wants and to speculate accordingly, but the basic problem is that the Carney, Cuddy, and Yap 2010 paper provides essentially zero information about power pose. There’s also a problem in that now she claims that her key finding is that “adopting expansive postures causes people to feel more powerful”—but that was not a featured claim in the 2010 paper. So to start with, she’d have to accept that the 2010 paper is empty of useful empirical content. At this point, any empirical scientific claim of hers in favor of power pose would have to be based on some evidence. And in her letter she doesn’t give any such evidence; she just refers to some set of unnamed replications and a meta-analysis that she does not share with us. So, sure, she can feel free to say that she believes it’s real. But one of her key arguments was not just that she had a strong belief, but also that she had run experiments supplying strong scientific evidence for her claim. At this point, (a) I don’t see the evidence, and (b) any such evidence is not coming from others’ experiments. If she could accept all this, that would be great. But as it is, she’s not even up to acknowledging the problems with that 2010 paper.

        • Hi Kaz-kaod and Andrew:

          Carney’s document states at the top “Regarding Carney, Cuddy, and Yap (2010)”. The dependent variables analyzed are (1) change in testosterone, (2) change in cortisol, (3) risk taking, and (4) power feelings, in that order. Cuddy’s reply to Carney, though, seems to be addressing power poses more generally, and to be restricting the “power pose effect” to be defined by (4) only. Very strange.

      • The original paper was titled “Power posing: brief nonverbal displays affect neuroendocrine levels and risk tolerance”, so claims of neuroendocrine and risk tolerance effects can’t really be called secondary. What was once a manipulation check has become the major finding (though the lead author acknowledges this specific effect was p-hacked from “items that worked” and subject to experimenter demand; see Carney’s letter: http://nymag.com/scienceofus/2016/09/power-poses-co-author-i-dont-think-power-poses-are-real.html). Garrison et al. suggest that posing may lower feelings of power, so does endorsing power posing even make sense?
        This is a problem not of a specific claim, but rather of (some/most/whatever) scientists’ unwillingness to back down, as suggested in the post.

    • I wonder whether their respective attitudes to admitting error are both potential outcomes corresponding to the treatment “Ted Talk, big book deal, fame”

    • My impression (and I might be wrong) is that Carney took the time to try to understand the criticisms, but that Cuddy hasn’t.

      Another way to look at this might be that Carney open-mindedly considered the possibility that she might have been wrong, whereas it wasn’t obvious to Cuddy that she did anything wrong, so she didn’t seriously consider the possibility that she had done something wrong.

      So it might just boil down to being able to accept uncertainty (Carney) or being certainty-oriented (Cuddy). (I sometimes, only half-facetiously, describe the latter type of person as needing to be hit over the head with a frying pan before they can look at something in a different way from what they’re used to.)

      • I read it a few years ago, and found it a mixed bag. As I recall, it started out with some reasonable ideas but then went too far — making certain-sounding statements that didn’t seem to consider the possibility that they might be wrong (and, as I recall, it got into the FAE itself — “This person did this because they were this way” — rather than considering the circumstances).

      • I read it too a few years ago before I was aware of the problems social psychology is facing. Many of the ideas were good, and worth bearing in mind (even if experiments were lacking). There was a good intro on false memories, I think (or it’s my own false memory). The general idea seems valid, though perhaps each study they cite could be taken apart.

  3. I once heard this called the “Foreword Paradox” in the context of philosophy.

    Consider the average philosophy book. It’s probably really dense, makes many claims and logical inferences, and is 1000+ pages long. It’s extremely unlikely the author made no mistakes, and they say so in the foreword, encouraging readers to point out any flaws. However, when somebody actually does so, the author will always argue about it and claim it’s not a mistake. People are much more willing to admit that a general trend exists than to confirm an actual specific instance of it.

  4. From above: “No, really, I do think the connection is interesting and I would like to see it studied further. I love the idea of trying to understand the stubborn anti-data attitudes of so many scientists. Rather than merely bemoaning these attitudes (as I do) or cynically accepting them (as Steven Levitt has), we could try to systematically learn about them. I mean, sure, people have incentives to lie, exaggerate, cheat, hide negative evidence, etc.—but it’s situational. I doubt that researchers typically think they’re doing all these things.”

    I think you are right that it should be studied. Not sure that the Attribution Error is really a measurable taxon able to explain these behavioural patterns. My recollection from learning psychometric testing (quite a while ago) is that there are persistent and apparently meaningful personality and other traits that occur in various occupations/roles. For example, airline pilots had a validity profile on the MMPI that was unique and identifiable — it was linked to traits of overconfidence and a sense of omnipotence.

    Could it be that the type of people that respond in this defensive uninsightful manner all have some commonality in terms of their personality and self-talk? Are people who get into this area/role in possession of some traits that the majority of their colleagues don’t have? I know some think that the trait is that they are smarter than the rest of us :( Is critique a threat to a sense of intellectual (or other) superiority? Are they used to imparting knowledge to students and see this as a one-way street?

    A lot of my clinical (aka coalface types) colleagues and I regularly talk about research that often isn’t that immediately translatable, relevant or applicable to clinical work. And so us hobbits publish in practice/discussion journals, discussing things like how do we translate epidemiological data into meaningful conversations with patients — all very second grade. But our discussions are quite open and useful — we are used to supervision, critique and fixing our stuff-ups — that is what we do. It would be fantastic having statistical or other subject matter experts providing useful (and free!) feedback. Great question Andrew.

  5. I misread this piece at first; I thought the implied connection was between the Fundamental Attribution Error and the studies themselves. Then I saw that the connection was between the Fundamental Attribution Error and the single-mindedness of researchers who deny the flaws of their work. (There seems to be a hint, also, that critics can fall prey to the same tendency.)

    But I think there may be a connection after all between the flawed studies and the Fundamental Attribution Error.

    These studies seem to share the trait of proposing a simplistic solution to a problem that requires complex systemic analysis. That seems to be the essence of the Fundamental Attribution Error.

    It makes sense, in that light, that those who propose simplistic solutions would also be averse to examining the methodology. Of course, there are exceptions, like Dana Carney.

    I do not mean that *simple* solutions are always wrong. There’s a difference between simple and simplistic. The former makes sense of complexity; the latter shuts it out.

  6. @Andrew:

    Your phrase “the stubborn anti-data attitudes of so many scientists” does not accurately describe the problem to me. I don’t think it’s an “anti-data” attitude, but a view of how to use data.

    I think Diana’s comments above partly get at this: “These studies seem to share the trait of proposing a simplistic solution to a problem that requires complex systemic analysis. … It makes sense, in that light, that those who propose simplistic solutions would also be averse to examining the methodology. … I do not mean that *simple* solutions are always wrong. There’s a difference between simple and simplistic. The former makes sense of complexity; the latter shuts it out.”

    Another aspect is an aversion to uncertainty, which I think Cuddy (and many others) show — hence my description of her as an “uncertainty denier” in a post a couple of days ago.

    • I agree; my impression of psychology researchers (esp. social psychology?) is that they want data, they rely on data, they call on others to provide data (i.e., evidence), but this is intellectually void, as in the end they don’t seem willing or able to weigh the evidence (I’m sure this is where I should copy-paste the Meehl quote on using brains).

  7. Based on my experience as a consulting pension actuary, I am astounded that the never-back-down response is not a career ender. Our response to discovery of a serious calculation error would be as follows.
    1. If we find the error after calculation results have been communicated, but before anyone else finds the error, that is relatively good news. We double- and triple-check to make sure there really is an error and find the best possible positioning for the communication to the client. We move as quickly as possible to communicate before someone else discovers the error. In a way, it’s not really a calculation mistake if you find it first – the mistake was arguably the premature communication of results, at least optically.
    2. If the client or a third-party advisor (competitor) finds the error and tells us, that is bad news. Again we bite the bullet and follow a similar process as in 1.
    In either case, if the calculation error is serious, we would report that we have changed our procedures (such as to tighten up the peer review process) to make sure that such an error does not recur. You only get one or two free passes to improve your procedures before your credibility is shot.
    Digging in your heels when an error is uncovered would be, for an actuary, unthinkable, not only from an ethical standpoint but from a personal standpoint. The personal pain of a serious calculation error for a professional does not end until the moment the error is communicated and the client reacts, often with understanding, because people do make mistakes. Sometimes there are financial implications, which can be severe, but that is why you have insurance. For anyone who needs more reasons, the fact that “the truth will out” should be motivation enough.
    I would expect a scientist to be so committed to faithful exploration, with a we-are-all-in-this-together approach and with other scientists dependent on one another’s results, that there would be no place for the Fundamental Attribution Error. If results are accurate, in concept they are a shared legacy. If results are inaccurate due to an error that a person allows to persist, that person owns them. A scientist who loves the conclusions of a study too much to let them go if proven wrong does not seem like a real scientist and maybe has become something else.

    • >”A scientist who loves the conclusions of a study too much to let them go if proven wrong does not seem like a real scientist and maybe has become something else.”

      I heard this on the radio the other day; I think it reflects well the types of motivations of the people who are being rewarded:

      “PALCA: Just like scientists in the last century showed there was a link between smoking and lung cancer, Dus thinks she can find a link between an early exposure to a diet high in sugar and obesity.

      DUS: So that we can stop talking about really shaming people about the willpower and focusing on the biochemistry and the public health.

      PALCA: If she can do that, she says…

      DUS: I will be a very happy person (laughter).

      PALCA: She now has five years of funding from the National Institutes of Health to try. Joe Palca, NPR News.”
      http://www.npr.org/templates/transcript/transcript.php?storyId=496560373

      It isn’t about figuring out what is going on anymore; it is about proving what *must be true* because it agrees with your politics, to keep your grants, or to get that drug approved, etc. These types of comments are the standard, and it is not understood that they reflect a huge problem. Shravan had some good posts on this a bit ago (about people telling him at conferences “Well, we can spin it this way”).
