David Allison told me about a frustrating episode in which he published a discussion pointing out problems with a published paper, and the authors replied with . . . not even a grudging response. They didn’t give an inch; really ungracious behavior. No “Thank you for finding our errors”; instead they wrote:
We apologize for the delay in the reply to Dr Allison’s letter of November 2014, this was probably due to the fact that it was inadvertently discarded.
Which would be kind of a sick burn except that they’re in the wrong here.
Anyway, I wrote this to Allison:
Yeah, it would be too much for them to consider the possibility they might have made a mistake!
It’s funny, in medical research, it’s accepted that a researcher can be brilliant, creative, well-intentioned, and come up with a theory that happens to be wrong. You can even have an entire career as a well-respected medical researcher and pursue a number of dead ends, and we accept that; it’s just life, there’s uncertainty, the low-hanging fruit have all been picked, and we know that the attempted new cancer cure is just a hope.
And researchers in other fields know this too, presumably. We like to build big exciting theories, but big exciting theories can be wrong.
But . . . in any individual case, researchers never want to admit error. A paper can be criticized and criticized and criticized, and the pattern is to not even consider the possibility of a serious mistake. Even the authors of that ovulation-and-clothing paper, or the beauty-and-sex-ratio paper, or the himmicanes paper, never gave up.
It makes no sense. Do these researchers think that only “other people” make errors?
And Allison replied:
The phenomenon you note seems like a variant on what psychologists call the Fundamental Attribution Error.
Interesting point. I know about the fundamental attribution error and I think a lot about refusal to admit mistakes, but I’d never made the connection. More should be done on this. I’m envisioning a study with 24 undergrads and 100 Mechanical Turk participants that we can publish in Psych Sci or PPNAS if they don’t have any ESP or himmicane studies lined up.
No, really, I do think the connection is interesting and I would like to see it studied further. I love the idea of trying to understand the stubborn anti-data attitudes of so many scientists. Rather than merely bemoaning these attitudes (as I do) or cynically accepting them (as Steven Levitt has), we could try to systematically learn about them. I mean, sure, people have incentives to lie, exaggerate, cheat, hide negative evidence, etc.—but it’s situational. I doubt that researchers typically think they’re doing all these things.