Journals and refereeing: toward a new equilibrium

As we’ve discussed before (see also here), one of the difficulties of moving from our current system of review of scientific journal articles to a new model of post-publication review is that any major change seems likely to break the current “gift economy” system in which thousands of scientists put in millions of hours providing free reviews. And these reviews can be pretty good. Doing pre-publication reviews at the request of a journal editor: that seems like an obligation, and it’s helped along by pressure from all those associate editors. Remove that system of social obligation, just tell people they can do post-publication reviews when they want, and you’ll see a lot less reviewing. And the reviewing that does get done will be disproportionately by people with an axe to grind. So that could be a problem.

So what’s the new equilibrium, if we move away from I-give-free-labor-to-Elsevier-by-reviewing-random-papers-for-their-journals-at-the-behest-of-equally-uncompensated-editors to open post-publication review? Is it just that zillions of things get published and a few of them get reviewed in an unsystematic manner?

I don’t know.

65 Comments

  1. Jeremy Fox says:

    The available data indicate that the answer to the question asked at the end of the post is “yes”. A very small fraction of the papers draw a large fraction of the comments on sites like PubPeer and (the now defunct) PubMed Commons, with most papers drawing no comments at all: https://dynamicecology.wordpress.com/2015/09/17/post-publication-review-is-still-only-for-the-scientific-1-pubmed-commons-edition/

    • Same thing is true with citations. It seems the truth is no one cares about vast swaths of what’s published.

      • Keith O'Rourke says:

        If we could only have confidence in the criteria for what’s cited.

      • jrkrideau says:

“It seems the truth is no one cares about vast swaths of what’s published.”

        But which ‘vast swaths’ is the problem.

One could probably weed out a lot of bad or imitative research, but who, at the time, thought that George Boole’s work was so important, or that Thomas Bayes, hallowed be his name, was writing about anything of importance?

        • No doubt a lot of really important work doesn’t get cited until decades later when someone actually realizes how important it was.

          I think my point is that whether it’s important or not, the main thing that people “care about” (in the sense of “pay attention to”) is a small number of topics that are “hot” with respect to grant funding or academic political struggles or other “current” reasons. Probably *most* of the good science goes uncited because lots of scientists don’t really have much attention for real science. The ones I know certainly spend a lot of their time struggling with grant applications, and trying to get their controversial stuff past “reviewer number 2” who is actively trying to suppress it.

  2. Daniel Weissman says:

    I don’t think we have to give this up. Overlay journals really seem like the best of both worlds. First the work is public. Then it’s submitted and goes through the traditional peer-review process, with the added benefit that if the pre-print is generating a lot of discussion the reviewers can learn from it. Then the reviewed & revised version gets “published” in the overlay journal.

  3. Anonymous says:

    “So what’s the new equilibrium, if we move away from I-give-free-labor-to-Elsevier-by-reviewing-random-papers-for-their-journals-at-the-behest-of-equally-uncompensated-editors to open post-publication review? Is it just that zillions of things get published and a few of them get reviewed in an unsystematic manner?”

I have been pondering this, also in relation to some new developments: there now seems to be a third-party comment feature on Psyarxiv pre-prints:

    https://www.facebook.com/groups/psychmap/permalink/640744112969181/

    I am not sure what to think of that. My 1st gut reaction is that it might not be a good thing for science.

    I can totally see individuals, or even groups of people, commenting on pre-prints with nonsense, which the author then has to “defend” by signing up to a third party comment-possibility-provider (?). Who knows what hoops you have to jump through in order to (in the future) sign up for it. (I am again amazed that all these “solutions” and “improvements” always seem to involve some third party whose involvement/power/role can be abused and manipulated).

    Note that i reason this is different from post-publication discussions on blogs/twitter/whatever, because the comments are not directly posted with the paper/pre-print.

I am worried we might all be headed towards a system where the no. of “likes”, and how “hip”/“open science”/“insert relevant term here” the person/commenter is deemed to be, determine what is true, who to take seriously, or what is scientifically valid. I think i am seeing signs of this already, both in discussions, and in the way people talk about “improvements” in psychological science, especially on “social media”.

    Post-publication review in my reasoning consists of using (parts of) papers for your own work, or not. If, and how, you use papers is the post-publication review in and of itself.

  4. Grant says:

Easy*, the universities are loath to pay the journals for access (see the recent standoff in Europe over subscriptions). So make this a part of faculty work in exchange for access to journals: you are expected to review articles in your field as part of your job.

*This is in fact not easy: institutional arrangements will vary wildly and, without a system to allocate review time, top articles will get a lot of scrutiny while others skate by with only a few eyeballs on them. It will also require wrestling control of the journals from the icy death-grips of the publishers.

  5. Honeyoak says:

Isn’t Wikipedia a model of volunteer reviewing achieving accuracy?

    • Grant says:

Yeah, and there are feuds and flame wars and constant changes to anything even vaguely controversial. I would like to think that academics are better than that, but I’ve seen too many people with Ph.D.s on Twitter to believe it.

    • Rheophile says:

      I think Wikipedia has a pretty similar problem, actually. When there is a large degree of interest and many potential editors, articles converge pretty rapidly onto something reasonable. But bad articles on specialized topics can stick around for a long time. There’s a long tail of very rough articles.

      I think this bodes poorly for post-pub peer review – most scientific articles are “in the tails,” so to speak.

  6. Alex says:

    Publishers use some of the ridiculous amount of money they make to pay reviewers for their work? You mentioned Elsevier; they apparently had profits of about $1.2 billion (https://en.wikipedia.org/wiki/Elsevier#Company_statistics) in 2017, so I think they can afford to pay some reviewers. And editors for that matter.

    • Anonymous says:

      “Publishers use some of the ridiculous amount of money they make to pay reviewers for their work? You mentioned Elsevier; they apparently had profits of about $1.2 billion (https://en.wikipedia.org/wiki/Elsevier#Company_statistics) in 2017, so I think they can afford to pay some reviewers. And editors for that matter.”

But reviewers already get paid: the salary they get for doing their job as researchers.

Aside from that, i reason reviewers play an active part in making publication companies rich (and in the process give unnecessary power and influence to journals, thereby screwing up science). They may even do this at the expense of the tax-payers who in a lot of cases made it possible for the researcher to do the research in the 1st place, and then have to pay to read the papers.

      Traditional journal pre-publication peer-review is for suckers, in more than one way i reason.

      • Nick says:

That one point isn’t really correct. There are a hell of a lot of researchers who don’t get paid a fixed wage and who are asked to review.

      • Alex says:

        Reviewers do not get paid to review. People who review more do not get paid more, at least certainly not directly. Professors who pass their reviews on to their students and post-docs do not directly pay them to review; they certainly don’t pay them more than other professors who don’t pass along their reviews pay their students and post-docs, since pretty much all research positions I’m aware of pay according to scales set by NIH and other such groups.

      • Anonymous says:

        Nick & Alex: thank you for the corrections/additional information.

        I wonder what the percentages are concerning reviewers with a pay-check and those without one. My guess is that the large majority of reviewers are researchers with a pay-check, but that could very well be totally wrong.

Regardless: professors handing peer-review work to students is something i was not familiar with. It is quite shocking to me, and provides a possible further reason why traditional peer-review doesn’t make much sense to me…

  7. Judgment and Decision Making does NOT make money at the expense of volunteer reviewers and editors (and authors). Rather, everything is volunteer, including production.

    We reject most submissions. I assure you that the publication of these rejected submissions at the same site as the accepted ones would not promote scientific progress. It would at best waste readers’ time, requiring them to figure out for themselves which articles are total b…, and then they wouldn’t do it. They would give up.

    The same can be said of many original submissions that are accepted after revision. The revision matters.

    And we probably publish too much.

    Post-publication review seems like a bad idea. The only time it works to let authors publish what they want without review, in my experience, is when the authors are pre-selected by membership in some exclusive club. This system has obvious disadvantages.

    • Anonymous says:

      1) “We reject most submissions. I assure you that the publication of these rejected submissions at the same site as the accepted ones would not promote scientific progress. It would at best waste readers’ time, requiring them to figure out for themselves which articles are total b…, and then they wouldn’t do it. They would give up.”

      I find this reasoning interesting, because to me it implies that the editor/the journal/the reviewers are somehow “extraordinary” to be able to “assure” these things. Aside from them actually being “extraordinary” (or not), i would personally not want to decide for others what they should view as total b.

It is also interesting to me in another way. This reasoning also implies that the majority of scientists working in academia (and submitting to your journal) are apparently unable to produce papers that are deemed publishable…

      Instead of praising (the role of) super journals/editors/reviewers, perhaps some more attention should go to training these scientists to be able to produce publishable papers to begin with…

      2) “The same can be said of many original submissions that are accepted after revision. The revision matters.”

      I assume the revisions are because of editor/reviewer comments. If correct, why aren’t these super-editors and/or reviewers made co-authors then? They apparently may have done more concerning scientific contributions than the original authors themselves…

      I mean, perhaps all these researchers with “good” papers on their CV’s were actually only made “good” by anonymous editors and/or reviewers…

    • Harry Crane says:

      I agree with much of what the previous Anonymous commenter has said. A few other comments.

      “The same can be said of many original submissions that are accepted after revision. The revision matters.”

      Of course peer review matters. The problem isn’t with peer review, but with how it is used. Pre-publication peer review is crucial to improve quality of work. Authors should want to go through this process as rigorously as possible. Instead, pre-publication peer review has been co-opted by editors who make accept/reject decisions. Sometimes the quality improves as a byproduct, but at the cost of the uncontrollable sociological and political juggernaut that is the current system.

      As Larry Wasserman has noted, the current peer review system was invented over 300 years ago. It no longer serves its original purposes in the modern era. We need a system that works for modern times. We should keep pre-publication peer review, but let it be directed by the authors for the purpose of authors improving their own work. No accept/reject decision by expert editors, AEs, anonymous referees. Let the future judge the quality of the work, and let other researchers comment openly in a post-publication peer review system. There are so many initiatives to improve the flawed current system. http://www.researchers.one is an initiative that I’ve been working on with Ryan Martin. We’re hoping it will roll out very soon — within a month or so.

      I’ll be discussing all of this with Larry Wasserman, Hal Stern, Corina Logan (from Bullied into Bad Science) and others at this year’s JSM session “The State of Peer-Review and Publication in Statistics and the Sciences”. http://ww2.amstat.org/meetings/jsm/2018/onlineprogram/ActivityDetails.cfm?SessionID=215089

      Stop by if you’re interested.

      • Anonymous says:

        “As Larry Wasserman has noted, the current peer review system was invented over 300 years ago. It no longer serves its original purposes in the modern era. We need a system that works for modern times. We should keep pre-publication peer review, but let it be directed by the authors for the purpose of authors improving their own work. No accept/reject decision by expert editors, AEs, anonymous referees. Let the future judge the quality of the work, and let other researchers comment openly in a post-publication peer review system. There are so many initiatives to improve the flawed current system. http://www.researchers.one is an initiative that I’ve been working on with Ryan Martin. We’re hoping it will roll out very soon — within a month or so.”

        Have you thought about it possibly being useful to have peer-reviewers be given the opportunity to earn co-authorship?

        Like i stated in several places in the comment-section: i wonder why peer-reviewers aren’t given co-authorship if their contributions have been so helpful. To me, this is warping science concerning credits and/or making sure the “best” people are being rewarded.

        Perhaps you could have open review (but do it anonymously so there will be no rewarding “friends” situations) where the comments are seen by everyone (and can therefore not be copied by others and/or “stolen” by the original authors without giving credit) and write up some “rules” concerning possible co-authorship, but let the original authors themselves (publicly) decide which comments are co-authorship worthy.

        Possibly think what that could do to improving peer-reviewing (and subsequently science):

        1) the reviewers would invest time and effort for a reason and for possible rewards (no more “free labour to only make the publishing companies rich”),
        2) the quality of review/comments could improve (everyone would want to try their absolute best in order to possibly earn co-authorship),
        3) the credit system in science would improve because the “best” reviewers would get credit in the form of co-authorship (hereby possibly making sure the “best” people will be rewarded)

        • Anonymous says:

          (some additional thoughts i had after re-reading Crane’s comment again)

          4) probably fewer reviewer comments that amount to “you should have done it this (read: my) way” or “you should cite this (read: my) work” (because these could easily be left for what they are/ do not “earn” authorship)

          5) probably fewer reviewer comments that are aimed to “take down” or “block” certain work, because there is no real “power”/”influence” for reviewers in this system (unlike the current one like you made clear already)

          • Harry Crane says:

            I agree that good reviewers should receive some form of “credit”, but co-authorship is way too much. There are different ways to make contributions. Authorship is one. Reviewing is another.

We need a system where “credit” isn’t given for publishing/authorship but for the quality of the publication. The current model, obsessed with journal prestige, impact factor, etc., doesn’t have this. Similarly, reviewers can only receive credit for good work if they are able to do their job non-anonymously.

As a general principle, coming up with “rules” for what counts as co-authorship and what doesn’t will lead us down the same path we are on now. Having a small number of rules determined by a governing body strips authority away from authors and puts it in the hands of academic bureaucrats (editors, etc.). Clearly defined “rules” are also easily gameable, and will be taken advantage of in the same way the current system is taken advantage of by career-oriented academics.

            Ultimately, the various communities will decide what gets credit and what doesn’t. Not for me to say. But at least we can have a reasonable, minimal framework that allows scientists and researchers to be free of the devastating sociological forces of academia and publishing.

            • Andrew says:

              Harry:

One thing I’ve written about many times is the idea of the division of labor: A researcher or research team should be able to get credit for designing a cool experiment, without the implicit rule that to publish it in a good journal they need a statistically significant, successful result. Similarly, a researcher or research team should be able to get credit for collecting useful data, without that same implicit rule. Some people are good at designing experiments, others are good at collecting data, others are good at theory, others are good at analyzing and presenting data. The implicit requirement that all these things go together has, I think, wreaked havoc in various fields of experimental science.

              • Keith O'Rourke says:

                +1

              • Harry Crane says:

                Andrew:

                That’s a good point. I agree that co-authorships should be allowed to evolve and it would be great to have a system that allows collaborations to form organically in the way you described. But I don’t think this can be achieved with minimal ad hoc fixes to what we already have.

              • Martha (Smith) says:

                +2 to division of labor

            • Anonymous says:

              “I agree that good reviewers should receive some form of “credit”, but co-authorship is way too much.”

              Why is this way too much? (in the cases of truly “useful” reviewer contributions)

I am pretty sure many co-authors listed on papers have done way less work to “earn” their co-authorship than a good reviewer does.

              Pick your favorite “rules”/”recommendations” for earning co-authorship and compare them to what a good/useful reviewer has done. I don’t see a difference between (many) co-authorship “rules” and the function of a reviewer…

              This was the 1st thing that google produced for my search about “rules” concerning authorship:

              http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

              “The ICMJE recommends that authorship be based on the following 4 criteria:

              Substantial contributions to the conception or design of the work; or the acquisition, analysis, or interpretation of data for the work; AND

              Drafting the work or revising it critically for important intellectual content; AND

              Final approval of the version to be published; AND

              Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.”

              Seems to me to be a lot of overlap between what a co-author does and what a reviewer does…

            • Anonymous says:

              1) “We need a system where “credit” isn’t given for publishing/authorship but for the quality of the publication.”

The quality is what needs to be rewarded, but quality seems to me to come from/depend on scientists. High quality science does not magically fall out of the air. Therefore, in my reasoning you have to reward the scientists who have shown they are able to produce the highest quality work.

These things go hand in hand in my reasoning, and influence each other. Credit should definitely go to individual scientists who have shown they are able to produce high quality science. This is another reason why i am puzzled that reviewers who have contributed to improving papers remain anonymous and/or don’t receive useful credit like co-authorship.

              2) “Similarly, reviewers can only receive credit for good work if they are able to do their job non-anonymously.”

              I don’t agree.

In my example above i wrote that peer-review should be done anonymously, and only when the original authors decide the anonymous comments/review earn authorship will the reviewer be revealed to them (i reason this can be done in some way or form on a website where the review is done in the open).

              3) “As a general principle, coming up with “rules” for what counts as co-authorship and what doesn’t will lead us down the same path we are in now. Having a small number of rule determined by a governing body strips authority away from authors into the hands of academic bureaucrats (editors, etc.) Clearly defined “rules” are also easily gamable, and will be taken advantage of in the same way the current system is taken advantage by career-oriented academics.”

              I don’t agree.

In my idea briefly described above you would/could have reasonable “rules”, which are what has more or less decided authorship in the last decades.

              Furthermore, if you don’t like these reasonable “rules” of the past decades you would/could even have the authors themselves decide on their own “rules” upfront. Just as long as it is written down/made clear for everyone so they can decide whether they agree with this and want to review things.

Moreover, you would/could even have the original authors themselves have the final say in whether or not they want to reward a review with authorship, and make that clear to every reviewer. Reviewers can then decide whether they “trust” the original authors to do the right thing. I don’t think the original authors will “steal” any anonymous comments without rewarding them with co-authorship, because it’s all out in the open in my idea. You would/could, for instance, make the original authors post the to-be-peer-reviewed paper AND the final manuscript on the open peer-review page (where all the comments are depicted), so it will be clear to everyone (e.g. all the reviewers) whether any authorship should have been given.

              Anyway, just some more thoughts i had about this.

              Regardless, i can’t see any real other “rewards” aside from co-authorship that make sense to me or can be seen as important.

              If i am not mistaken, people can list their reviewer activities on their CV but that doesn’t make any sense to me. I am not even sure if the truth of these reviews can be easily verified by the reader of the CV, but more importantly the content of the review can almost certainly not be checked so it seems rather pointless to me concerning rewarding the “best” scientists. Any possible way to try and give “credit” for reviewers without the actual reviews being accessible to others therefore also seem to me to make little sense.

              I reason the actual review has to be open and accessible so others can read it and could therefore use it to reward the “best” scientists. Note that this is also the case in my idea written above, and could even provide an additional reason for reviewers to join such a project. This is because, even when they would not have been rewarded with co-authorship, they could link to their reviews on their CV (at first they would be anonymous reviews in my idea, but they could all be made non-anonymous when the review period is closed).

              • Harry Crane says:

                I see room for a lot of agreement here, but I would advocate a much more flexible system. Awarding reviewers with co-authorship is fine, if the researchers who are already authors on the paper decide that’s appropriate. It’s not for me to say, and I would caution against a general policy of “Do X Y Z and your name goes on the paper”. If I understand your point, you are not suggesting this last thing, which is good.

                I don’t know why you insist on anonymous peer review. Ideally there should be no “accept/reject” decisions by editorial boards. Once this mechanism is removed, many of the reasons for anonymous peer review go away.

I do, however, believe that authors should have the ability to get pre-publication feedback privately — not all peer review should be open. Authors should have a chance to revise their paper and fix mistakes before it is shown to the public. But authors should also have a chance to put their paper under public peer review, open for any to comment. And, of course, all papers should be subject to post-publication peer review.

                But non-anonymous communication is key to all of this. If a researcher is offering an opinion, they should be willing to stake their reputation on that opinion. In this way, they benefit when correct and face consequences for being wrong. It’s a natural quality control mechanism.

              • Anonymous says:

                1) “I don’t know why you insist on anonymous peer review. Ideally there should be no “accept/reject” decisions by editorial boards. Once this mechanism is removed, many of the reasons for anonymous peer review go away.”

                I value anonymous peer-review because the arguments, facts, logic, etc. should be what’s being evaluated not the person who states things. I believe anonymous reviews promote the evaluators focusing on the content, and not other things. This is one of the reasons i usually comment on this blog using the “Anonymous” name (plus the fact that i don’t want any “credit” because i don’t want to work in science so my name doesn’t matter anyway). I also believe many of today’s problems in science are related to this issue where reputations, and other things that have nothing to do with science, play a (scientifically negative) role.

Also note that my idea/format can be applied both to the traditional journal/editor/peer-review model (my idea could then be the pre-pre-publication peer-review) and/or could be (a step towards) a new model like your idea about no “accept”/“reject” decisions. Please note, however, that i reason that even in a no “accept”/“reject” model anonymous reviews are still crucial, because where co-authorship is on offer, i reason anonymous reviews promote focusing on the actual content (and not, for instance, rewarding certain non-anonymous reviewers because they are influential tenured professors).

                2) “But non-anonymous communication is key to all of this. If a researcher is offering an opinion, they should be willing to stake their reputation on that opinion. In this way, they benefit when correct and face consequences for being wrong. It’s a natural quality control mechanism.”

I agree about benefiting when correct and facing consequences when wrong, however this should be determined fairly, which is exactly what has gone wrong with science in the past few decades in my reasoning. That’s also exactly why i favor the 2-step model of first anonymous comments (which i reason promote focusing on the content, not the person), and in the final stage making every review public.

                Please note that this is still in line with your “natural quality control mechanism” as far as i can reason AND in line with my idea that, aside from earning possible authorship, the reviewers can link their now publicly accessible reviews on their CV. They have 2 reasons to join such a project: possible co-authorship, and building a publicly accessible reviewer history which they can use on their CV, etc.

  8. Nat says:

    Seems like the review process would vary a lot depending on your objectives. The review process is not just about creating a curated database of trustworthy articles, it is also about giving useful feedback to researchers of all levels. The review process can also have effects on shaping the direction of research in a field.

It also seems obvious that a binary classification of accepted vs. rejected is insufficient when you consider that the “quality” of articles submitted is likely some multidimensional (multimodal? skewed?) distribution. Reviews of an article are important attributes, but there are likely other things we would want to quantify. The “quality” of reviewers also seems like some complicated multidimensional distribution, and depends on the scope of the submission.

    I think it would be worthwhile to brainstorm some unusual approaches to the problem outside of the traditional review process. For example, what if we had reviewer and writer ratings similar to buyer and seller reviews on eBay?
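    To make the eBay analogy concrete, here is a minimal sketch of what such a two-sided rating ledger might look like: papers accumulate scores from reviewers, and reviewers accumulate “helpfulness” ratings from authors. All the names here (`Review`, `Ledger`, the 1–5 scales) are hypothetical, purely for illustration — no existing system is implied:

```python
# Toy sketch of a two-sided rating ledger: papers get rated by reviewers,
# and reviewers get rated (on helpfulness) by authors, eBay-style.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Review:
    reviewer: str
    paper: str
    score: int          # reviewer's 1-5 rating of the paper
    helpfulness: int    # authors' 1-5 rating of the review itself

@dataclass
class Ledger:
    reviews: list = field(default_factory=list)

    def add(self, review: Review) -> None:
        self.reviews.append(review)

    def paper_rating(self, paper: str) -> float:
        # Average score a paper received, like a product rating.
        return mean(r.score for r in self.reviews if r.paper == paper)

    def reviewer_reputation(self, reviewer: str) -> float:
        # Average helpfulness of a reviewer's past reviews, like a seller rating.
        return mean(r.helpfulness for r in self.reviews if r.reviewer == reviewer)

    def n_reviews(self, paper: str) -> int:
        # Few or no reviews is itself informative (risk/uncertainty).
        return sum(1 for r in self.reviews if r.paper == paper)
```

    Even this toy version shows the design question: rating the review (helpfulness) is a separate signal from rating the paper (score), and the count of reviews carries information on its own.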

  9. Nat says:

    Andrew:

    I agree it is likely impossible, but it doesn’t mean it shouldn’t be a goal to strive towards. I am curious what goals you think the review process should achieve though.

Should a component of post-publication reviews include some sort of scoring rubric? When I buy something on Amazon (e.g., a textbook), I look at the overall rating and then at the distribution of ratings across all reviewers. When I buy something on eBay, I look at the seller’s rating and how many ratings they have. If an item and/or seller has few or no reviews, then that is also useful information, because it conveys risk / uncertainty or that I am buying something that few people are interested in.

  10. Dale Lehman says:

It seems like you are painting two (too?) stark options – the current system or a free-for-all with only voluntary post-publication review. I think editors should play an important role – more important than today, arguably. One key function is to screen submissions and only accept (and my preference would be a two step process – first accept for public review and then a final screen to choose only work that successfully passes the first stage) manuscripts that seem to study something worthwhile and potentially do it in a competent manner. Much of the useless research could be screened out if editors are focused on choosing only work that seems important. The initial open review period could function well to eliminate much poorly conducted research (“could” should be weighed against the likelihood that such open review would work better than today’s refereeing process). The final determination of the editors would be based on all of the comments submitted during an open period of time. If few – or no – reviews are submitted, then the editors will have to consider whether this is an indication that the work was not really very interesting, or whether it just failed to elicit anybody’s interest. If the latter, the editors might have to review the work themselves.

    • Anonymous says:

      “One key function is to screen submissions and only accept (and my preference would be a two step process – first accept for public review and then a final screen to choose only work that successfully passes the first stage) manuscripts that seem to study something worthwhile and potentially do it in a competent manner. Much of the useless research could be screened out if editors are focused on choosing only work that seems important.”

      Leaving aside your proposal of open review for now, i don’t think this will work for the same reasons it hasn’t worked thus far. I fear what you call “useless research” has been thought of as being highly relevant, possibly influential, theoretically interesting, etc. by editors of the journals which published this “useless research”.

      Although i like the idea of open review (if it can indeed be truly open), i still reason it could run into the same major issues that have probably played a role in the traditional journal/editor/peer-review model, where

      1) some random editor decides what is “useful” by him/herself (or in a small group),
      2) friends giving each other “good” reviews (which probably has been done in the past, and is still being done now) and/or
      3) editors picking what they view as the “best review/comments” (or should i say picking their friends’ comments).

      Perhaps (truly) open review should then at least go together with anonymous reviews (anonymous papers for the reviewers, and anonymous comments/reviews for the editor). But this still doesn’t solve the issue of giving power/influence to a single editor (or a small group) who determines what is an “important contribution” in the 1st place.

      Altogether i am still puzzled by proposals which amount to what i view as only slightly different versions of the traditional journal/editor/peer-reviewer model, and that do not seem to me to tackle the real issues.

      Why should scientists waste their time peer-reviewing others’ work and/or work that some random editor deems “useful”?

      Like i stated above, reviewing in my reasoning consists of reading and possibly using (parts of) papers for your own work, or not. If, and how, you use papers is the (post-publication) review in and of itself.

    • Martha (Smith) says:

      “Much of the useless research could be screened out if editors are focused on choosing only work that seems important.”

      I doubt that many editors would have broad enough knowledge and insight to be able to do this well — in particular, since editors all have limitations in their backgrounds, it is likely that a lot of worthwhile research would be screened out along with stuff that deserves to be screened out.

  11. Nat says:

    Another interesting model for post-publication reviews would be something like movie reviews on Rotten Tomatoes, because they give one score from “top critics” and one from the “audience”. Classifying publications as “rotten”, “fresh”, or “certified fresh” would also be more fun…
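    As a toy illustration of that classification (the thresholds below are loosely modeled on Rotten Tomatoes’ published rules, but the exact cutoffs and the review-count requirement here are assumptions):

    ```python
    def freshness_label(pct_positive, n_reviews,
                        certified_threshold=75, fresh_threshold=60,
                        min_reviews_certified=80):
        """Map a percent-positive score to a rotten/fresh/certified-fresh label.

        Roughly: "fresh" needs >= 60% positive reviews; "certified fresh"
        additionally needs >= 75% and a minimum number of reviews.
        """
        if pct_positive >= certified_threshold and n_reviews >= min_reviews_certified:
            return "certified fresh"
        if pct_positive >= fresh_threshold:
            return "fresh"
        return "rotten"

    print(freshness_label(82, 120))  # certified fresh
    print(freshness_label(82, 10))   # fresh (too few reviews to certify)
    print(freshness_label(40, 200))  # rotten
    ```

    The review-count condition mirrors the point made elsewhere in this thread: a high score from a handful of reviewers should not carry the same weight as the same score from many.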

  12. I started writing a comment, but it grew too long and so I turned it into a blog post, with some thoughts on how post-publication review will fail, for the same reason that one doesn’t want to be lying injured on a busy city sidewalk. The vague notion that “people” will critique papers post-publication doesn’t match the reality of how people behave.

    • Anonymous says:

      From your blog post:

      “As mentioned, I spent a few hours reviewing a paper yesterday. I don’t know the editor (in fact, I didn’t bother reading who the editor is), though I’m familiar with the journal. I am, essentially, a stranger. However, as I do roughly once a month, I spent several hours thinking carefully about an article, carefully checking (and critiquing) its arguments, and writing hopefully helpful statements that would improve it, all for no personal benefit other than a vague feeling that it helps the field. “

      But that’s just it, in my reasoning you *do* have a personal benefit if you are (possibly) going to use the paper for your own work. As i stated above:

      1) Why should i waste my time peer-reviewing others’ papers (in the traditional journal/editor/peer-reviewer model)?
      2) Post-publication review in my reasoning consists of using (parts of) papers for your own work, or not. If, and how, you use papers is the post-publication review in and of itself.

      If you read papers, and decide to use or don’t use (parts of) them for your own work, aren’t you in a way performing post-publication peer-review? If so, isn’t everyone already performing post-publication peer-review?

      • Your point is a good one, but I think it hinges on what’s meant by “[are you] going to use the paper for your own work?”

        Suppose a paper describes some property of some protein. Certainly if I’m planning on using that protein in an experiment, or if I care about measuring that property in my experiments on a different protein, I’m going to “use” the paper. This, however, is very rare — when it happens, I of course read the paper carefully and perform my own detailed assessment.

        But what if I file away the information from that paper to aid my general understanding of proteins, or to be aware of similarities between proteins I care about and this protein, to help motivate the context of experiments? What if, describing my experiment, I note that the protein I care about is structurally a bit similar to the protein in that paper, and that it would be interesting to think further about this? Am I “using” that paper? I would say yes! This scenario is how the vast majority of papers I look at are “used,” and while occasionally I write up thoughts on them, I usually don’t, and nor does anyone else, for the reasons I wrote about.

    • Nat says:

      I think people might critique papers post-publication if it requires a minimal amount of effort. Online marketplaces (e.g., Amazon, eBay) and review sites (e.g., Goodreads, Rotten Tomatoes) collect tons of ratings and reviews for free. Question and answer sites (e.g., Stack Overflow) also offer personalized technical feedback for free. In some sense, comments on blog posts are also a type of review of the published content. The ones doing the work may be a minority compared to those who benefit, but the basic model seems to work.

      If you want in-depth reviews from experts then you likely need some sort of incentive. Some incentives might be membership access conditional on community engagement or the prestige from being recognized as a top reviewer.

  13. Lord says:

    Seems like the answer is to turn the review itself into a published article with its own citations.

  14. Boris Barbour says:

    It may be worth reminding people about https://peeriodicals.com

  15. If you still believe reviews are good at separating the wheat from the chaff, see John Langford’s ACM blog post on the NIPS Experiment. Corinna Cortes and Neil Lawrence were program chairs and created two independent review panels for a number of papers, mirroring the main conference organization. The results, to quote John Langford:

    Let’s give P(reject in 2nd review | accept 1st review) a name: arbitrariness. For NIPS 2014, arbitrariness was ~60%. Conclusion?

    And just to be clear, NIPS is reviewed pretty much like a journal and NIPS publications drive hiring, tenure, and promotion in machine learning in academia and industry.
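    The arbitrariness metric Langford names can be computed from a simple 2×2 table of the two panels’ decisions. The counts below are made up for illustration (the real NIPS 2014 numbers are in his post):

    ```python
    def arbitrariness(accept_accept, accept_reject):
        """P(rejected by panel 2 | accepted by panel 1).

        accept_accept: papers accepted by both panels
        accept_reject: papers accepted by panel 1 but rejected by panel 2
        """
        accepted_by_panel_1 = accept_accept + accept_reject
        return accept_reject / accepted_by_panel_1

    # Hypothetical counts for 100 papers accepted by panel 1:
    print(arbitrariness(accept_accept=43, accept_reject=57))  # 0.57, i.e. ~60%
    ```

    Note the conditioning: the denominator is only the papers panel 1 accepted, which is why the figure can be so much higher than the overall disagreement rate between the two panels.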

  16. Torquemada in Training says:

    I have reviewed and edited papers, and my takeaway from that experience is that Sturgeon’s Law (ninety percent of everything is crap) is literally true. Never mind the science: pointless, ill-conceived, unoriginal, poorly sourced, under- or idiosyncratically- analyzed, uninformative graphics, but on top of that too much was written at a level of competence that a third grader would laugh at. In this I am in sympathy with Jonathan Baron. Anonymous follows with “…the majority of scientists working in academia (and submitting to your journal) are apparently unable to produce papers that are deemed publishable. Instead of praising (the role of) super journals/editors/reviewers, perhaps some more attention should go to training these scientists to be able to produce publishable papers to begin with.” I regard the first sentence as a statement of plain fact. The second sentence is one of the oldest jokes in academia. Anonymous continues: “I assume the revisions are because of editor/reviewer comments. If correct, why aren’t these super-editors and/or reviewers made co-authors then? They apparently may have done more concerning scientific contributions than the original authors themselves… I mean, perhaps all these researchers with “good” papers on their CV’s were actually only made “good” by anonymous editors and/or reviewers…” Sir, take out “I assume” and “apparently may” and replace “perhaps all” with “many” and you have an accurate picture of commercial publishing.

    • Anoneuoid says:

      Is the lack of proper use of paragraphs in your post some kind of meta-comment?

    • Anonymous says:

      1) “I have reviewed and edited papers, and my takeaway from that experience is that Sturgeon’s Law (ninety percent of everything is crap) is literally true.”

      But if that is true, i repeat what i said above: more attention and effort should go into training scientists to be able to produce publishable papers (or better, into selecting for the ones who are able to do this).

      Also, assuming peer-review is done by other scientists, can you explain your reasoning to me why scientists are apparently unable to produce publishable papers in their roles as authors but apparently do possess the appropriate skills in their roles as reviewers…

      2) Anonymous continues: “I assume the revisions are because of editor/reviewer comments. If correct, why aren’t these super-editors and/or reviewers made co-authors then? They apparently may have done more concerning scientific contributions than the original authors themselves… I mean, perhaps all these researchers with “good” papers on their CV’s were actually only made “good” by anonymous editors and/or reviewers…” “Sir, take out “I assume” and “apparently may” and replace “perhaps all” with “many” and you have an accurate picture of commercial publishing.”

      Now i get why science is in such a bad state! You may not have rewarded the most competent people (i.e. the anonymous reviewers who apparently are responsible for many of the “good” papers that have been published), but instead rewarded the sub-par scientists who submit their “bad” work to the journals only for the editors/reviewers to “fix” the papers…

      Again, perhaps it is a good idea to reward these super-reviewers with co-authorship so science can see, and hopefully reward, those scientists that actually are able to produce “good” work!

  17. JP de Ruiter says:

    As far as I know, editors for commercial publishers *do* get paid.

    • Anonymous says:

      “As far as I know, editors for commercial publishers *do* get paid.”

      Wait, what!?!?

      Is this true, and if so, do you have an estimate of how much money this involves?

  18. Terry says:

    I wouldn’t underestimate the usefulness of having “grey hairs” in the profession giving their opinions of papers.

    Having an author present a paper at a good university with grey hairs in the audience can’t be matched for insightfulness per minute.

    • Anonymous says:

      “I wouldn’t underestimate the usefulness of having “grey hairs” in the profession giving their opinions of papers. “

      If you mean tenured professors/senior scientists, then i do not necessarily agree.

      Just take a look at all the papers submitted (which to some are apparently 90% cr@p), and spot the tenured professor/senior scientist’s co-authorship (or should i say gift-authorship?) on them. I assume they gave their opinion on these papers…

  19. Torquemada in Training says:

    Re paragraphs: mea culpa, and yes, there is some irony there, friends.

    Anonymous asks “…why scientists are apparently unable to produce publishable papers in their roles as authors but apparently do possess the appropriate skills in their roles as reviewers…” I submit that these are largely non-overlapping sets. Competent reviewers/editors are as rare as competent statisticians, which this blog leads me to believe are very rare indeed.

  20. Nat says:

    Research papers are often cited on this blog and discussed as a type of “post-publication review”. Why not start that discussion on a post-publication review site and encourage blog readers to join you? A cross-posting might help build the critical mass needed for post-publication review to work.

    I joined ScienceOpen. It seems OK. It has ratings, reviews, and comments.
