This came up in a discussion last week:
We were talking about problems with the review process in scientific journals, and a commenter suggested that prepublication review should be more rigorous:
There are a lot of statistical missteps you just can’t catch until you actually have the replication data in front of you to work with and look at. Andrew, do you think we will ever see a system implemented where you have to submit the replication code with the initial submission of the paper, rather than only upon publication (or not at all)? If reviewers had the replication files, they could catch many more of these types of arbitrary specification and fishing problems that produce erroneous results, saving the journal from the need for a correction. . . . I review papers all the time and sometimes I suspect there might be something weird going on in the data, but without the data itself I often just have to take the authors’ word for it that when they say they do X, they actually did X, etc. . . . then bad science gets through and people can only catch the mistakes post-publication, triggering all this bs from journals about not publishing corrections.
I responded that, no, I don’t think that beefing up prepublication review is a good idea:
As a reviewer, I am not going to want to spend the time finding flaws in a submitted paper. I’ve always been told that it is the author, not the journal, who is responsible for the correctness of the claims. As a reviewer, I will, however, write that the paper does not give enough information and that I can’t figure out what it’s doing.
Ultimately I think the only solution here is post-publication review. The advantage of post-publication review is that its resources are channeled to the more important cases: papers on important topics (such as Reinhart and Rogoff’s work on debt and growth) or papers that get lots of publicity (such as the power-pose research). In contrast, with regular journal submission, every paper gets reviewed, and it would be a huge waste of effort for all these papers to be carefully scrutinized. We have better things to do.
This is an efficiency argument. Reviewing resources are limited (recall that millions of scientific papers are published each year), so it makes sense to devote them to the work that people actually care about.
And, remember, the problem with peer review is the peers.