The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, has become a losing proposition.
Holcombe picks up on some points that we’ve discussed a lot here over the past year. Here he is:
In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . .
The bias against corrections is especially harmful in areas where the results are cheap, but the underlying measurements are noisy. . . .
In response, Leek writes:

Wait, I thought there was a big rise in retraction rates that has everyone freaking out. Isn’t there a website just dedicated to outing and shaming people who retract stuff? I think a registry of study designs for confirmatory research is a great idea. But I wonder what the effect would be on reducing the opportunity for scientific mistakes that turn into big ideas. This person needs to read the ROC curves of science. Any basic research system that doesn’t allow for a lot of failure is never going to discover anything interesting.
I think Leek might be missing the point here. Nobody is suggesting “not allowing” the publication of scientific mistakes. The same old trial-and-error pattern can continue, in which researchers are allowed, even encouraged, to publish speculative, high-risk high-return sorts of research. The point is that there should be fewer barriers to the publication of criticism of this sort of high-profile work. As it is now, you can make a dramatic claim based on a couple of survey questions filled out by 150 Mechanical Turk participants, publish it in Psychological Science, and get press around the world. Meanwhile, criticism of this work ends up appearing in blogs and the like. The most notorious example was that Bem paper, where the journal that published his ridiculous paper refused to publish the failed replications.
So, yes, let’s give people the chance to publish and fail, even to embarrass themselves! In the grand scheme of things, a scientist who publishes 9 failures and one major success may well be making more of a positive contribution than an otherwise equally placed scientist who publishes 10 boring minor incremental contributions, even if those contributions happen to be correct. But let’s move that process forward and make it easier for people to point out the mistakes in those 9 failures sooner. Science can be self-correcting, but that “self-correcting” process is the result of the actions of many individuals in the system, and right now I do agree with Holcombe that the barriers to the publication of replications and criticism are too high.
Leek does have a point, though, that critics such as Holcombe (and me!) don’t provide any statistical data to back up our claims. I have a general feeling (supported by data collected by researchers such as Uri Simonsohn) that lots of shaky research gets published, and I have a lot of anecdotal evidence, but that’s about it.
But, hey, that’s fine! As noted above, I don’t oppose the publication (in journals or in blogs) of speculation, as long as the source of the relevant data (in this case, all my anecdotes) is made clear. And others can criticize. (Indeed, this blog has an active comment section, and when people email me with longer comments, I often post them as full entries to give readers both sides of the story.)
Open debate will not solve all problems. But let me get back to the key point: as Holcombe says, the self-correction that is central to the scientific process can be slowed down or it can be helped along. Working to improve the self-correction process is not equivalent to “reducing the opportunity for scientific mistakes that turn into big ideas.” Rather, it’s about getting to those big ideas more effectively.
P.S. One issue that came up in the discussion is that a published criticism could well be a lesser piece of work than the original research it responds to.
Indeed, if I think a paper is flawed and I go to the trouble of criticizing it, I probably won’t want to put as much effort into the criticism as the original researcher put into the published paper. After all, I think the work is flawed! It’s probably not worth a huge amount of effort. Still, I think the criticism should be published, and I think it’s a big mistake when journals reject criticisms because they’re not major research contributions.
One problem, I think, is that publication in journals is not just about the science; it’s also a scarce currency that’s used in hiring, promotion, grant review, etc. So journal editors feel about opening up publication the way the U.S. Treasury feels about printing a few more dollars.
With that issue in mind, I’d have no problem if journals were to flag corrections and replications, so that tenure committees and the like would recognize that these are not original research. For example, suppose you publish a replication or non-replication of a Cell article, and it appears in Cell too (under my ideal new regime, in which people are encouraged to publish criticisms and replications in the same journal that published the original article). You wouldn’t get credit for “a Cell article”; instead, the official journal title for your piece could be something like Cell Criticisms and Replications. This way, people don’t have to worry that the credit for their original research is being diluted by endless trivial criticisms and replications. But science as a whole would benefit, because these criticisms and replications would be right out there in the literature for anyone to read.