PubMed Commons: A system for commenting on articles in PubMed

Rob “Lasso” Tibshirani writes:

We all read a lot of papers and often have useful things to say about them, but there is no systematic way to do this: lots of journals have commenting systems, but they’re clunky, and, most importantly, they’re scattered across thousands of sites. Journals don’t encourage critical comments from readers, and letters to the editor are difficult to publish and given too little space. If we’re ever going to develop a culture of commenting on the literature, we need to have a simple and centralized way of doing it.

Last year, I [Tibshirani] approached my Stanford colleague Pat Brown, a founder of PLOS, with the idea of creating a site where scientists could comment on ANY published research article: something like comments on movies at the Internet Movie Database (IMDb) or comments on books and other products at Amazon. Pat said that he had been discussing similar ideas with his PLOS co-founder Michael Eisen, and that they felt strongly that a standalone site would be unlikely to work because it would not get enough traffic. They felt that the best way to develop a successful culture of commenting on science papers would be to make this an option at PubMed. Pat introduced me to David Lipman, the Director of the NCBI (the home of PubMed), who said that the idea had been raised many times in the past, and that he was open to implementing such a system if I could demonstrate broad support in the community.

So I organized a group of 34 team leaders, representing diverse scientific fields. They recruited teams of prominent researchers in their fields, 250 in all, who were committed to the idea. David took the idea to the NIH leadership, who approved the development of a pilot commenting system called PubMed Commons. The team of scientists I assembled agreed to beta test the system during development and to provide feedback on its design and operation.

Who should be able to post comments?

A central issue for PubMed Commons was the question of who should be able to post comments. One would like the system to be as inclusive as possible, but many scientists would not be interested in posting comments in a system with a high proportion of irrelevant or uninformed comments. NIH also needed a rule for who could post that would be pretty clear cut and not based on, e.g., some judgment of the experience or knowledge of the participants. The decision was made that comments could only be posted by authors of papers in PubMed. This would make the situation symmetric in that all people who comment can have their own work commented on. It would also include a large number of potential participants and would meet NIH’s need for something unambiguous. Unfortunately it would leave out many people who could add valuable input, including many graduate students, patient advocates, and science journalists. I’m a little worried about this restriction, as I want to make the system open to as many users as possible. But hopefully that is a pretty wide net, and it may be widened further in the future. And a group commenting feature, described below, could help improve inclusiveness.

Anonymous comments allowed?

One big issue we faced was the question of whether anonymous comments should be allowed. After much discussion, the group remained deeply split on this issue. Those wanting anonymous posts were concerned that many scientists, especially junior researchers, would be reluctant to make critical comments. But those opposed to anonymous comments believed that the quality of interchange would be higher if commenters were required to identify themselves. In the end, these differences weren’t really resolved, and the decision was to start without anonymous comments and re-evaluate after the system had been fully public for a while. While debating this issue, various proposals were put on the table for ways to allow participants to review and essentially sponsor the anonymous post of another participant.

Group comments

Gary Ward, an active member of the lead user group, was very keen on using PubMed Commons to post comments from a journal club for a class he participates in at the University of Vermont. He proposed that there should be some way for PubMed Commons to accommodate comments posted by a group. David Lipman noted that group comments would also be a way to allow participation by a wider range of commenters: a group could be initiated by a regular PubMed Commons participant (i.e., an author of a paper indexed in PubMed), who would give it a title, a short description, and a list of participants, and then post comments on its behalf. While a group comment would be submitted by a particular group member, in many cases it would reflect the consensus of the group, and such collective comments could be quite valuable.

PubMed Commons is here!

The NCBI team developed a working version of PubMed Commons earlier this summer and I posted the first comment in the closed pilot on June 17. Since then the user group has noted bugs and made a number of requests for modifications. Jonathan Dugan of PLOS Labs pulled together members of the publishing world for strategic advice, and has provided many valuable suggestions about the design of the system. The current system is pretty simple: after registering you’ll see the PubMed Commons landing page, which has all the most recent comments and links to information on how to use the system. When you’re signed in, you’ll see below each PubMed record a box for posting comments or replies to existing comments, as well as a place to indicate that an existing comment or reply was useful. There are instructions for how to specify simple formatting of a comment, and if you cite another PubMed record in your comment, there are links back from that cited paper to your comment.

We believe the system is now ready for a wider range of participants. If you’ve been funded by an NIH Extramural grant (or in the NIH Intramural program), NIH has the information it needs to get you into PubMed Commons automatically. Once you’re a registered participant, you can invite other published scientists to join. NCBI is investigating ways to open Commons up directly and automatically to more groups of published scientists but if new participants invite their colleagues, the network effect could broaden membership and expand participation dramatically.

The system will still be in closed pilot mode, where only registered participants can see the posted comments, but NIH leadership will be evaluating the closed pilot with the hope of making all comments visible to all users of PubMed. All comments are covered by a Creative Commons Attribution license (http://creativecommons.org/licenses/by/2.0/), and if the decision is made to take the system fully public, NCBI will provide an API so that other groups (e.g., publishers or other information resources) can make these comments useful to the community.

I’m not really part of the PubMed world so I can’t comment on the specifics, but from here it looks like an excellent idea.

P.S. Further discussion here from Ivan Oransky who, like some of our commenters here, is unhappy that the new system will restrict the number of people who can comment. I understand this concern, but to me it still seems like a step forward to have commenting that is “official.” I worry that, now, editors and readers of journals such as Psychological Science can simply dismiss commenters as coming from bloggers etc., whereas if the comments are directly attached to the article, maybe they would be taken more seriously. I wouldn’t mind if my own articles had comments from others attached—although I guess I’d be upset if we ended up with the kind of thing we see in the comments sections of newspaper blogs.

In this blog, we’ve had excellent experiences with completely open commenting, almost no trolls at all, and often even the angry commenters have something useful to say. I don’t know how we’ve managed to be so lucky. But I suppose having “statistical modeling” in the blog title is a filter that keeps out the truly thoughtless people. If PubMed had a completely open comment system, I’d guess that most of the time things would go just fine, but maybe there’d be problems with hot-button issues such as vaccines (is that the medical equivalent of a blog post on the Israel-Palestine issue?) and maybe problems with sock puppets on any articles that are related to big-money drugs.

23 thoughts on “PubMed Commons: A system for commenting on articles in PubMed”

  1. This seems like a great idea. I can imagine this being useful in the statistical literature. In that instance, I would imagine allowing comments on any article in a set of journals (statistics journals, but not necessarily ASA ones), with the ability to comment limited to ASA members and the authors of any paper being commented on. (Perhaps if you comment on a paper you’d have to add the author contact info, so the authors would be informed of the comment?) In this way, junk comments would be less likely and the ability to comment would be a benefit of membership. I’d allow anyone to read the comments, though.

    Even if a lot of the comments were of the form “did you really mean theta_i in formula 3.2, or should that be phi_i?” this might be of use to readers.

    The issue of anonymity is interesting; in my case, I post under a pseudonym because one way of interpreting my employment contract is that I should be running these things through corporate PR prior to posting if they relate to my areas of employment. Statisticians in industry are likely to have similar issues.

  2. Out of curiosity, which existing comment / collaboration or similar system has been successful in spite of disallowing anonymous comments?

    To me this idea seems too little, too late. By disallowing anonymous comments and restricting eligibility to the select coterie of PubMed authors they are needlessly and fatally crippling the project right from the start.

    • There are a lot of PubMed authors, so it seems like a reasonable first cut.

      There’s a huge problem in open anonymous commenting that you see on large sites. The only antidote that I know of is something like what Slashdot or Stack Overflow use: community voting.

      Zach’s idea (below) would be like badges on such sites (like “maintainer” or “author”).

  3. I wish them the best of luck, but I’ll be surprised if this succeeds. I don’t think the reason journal-based commenting systems (either in print or online format) are little-used has anything to do with them being decentralized, or clunky, or not allowing long enough comments, or not allowing anonymous comments, or whatever. It’s that people have no incentive or professional obligation to comment. Well, that, and the fact that people who are inclined to make brief comments, ask brief questions about papers, etc., already do so via Twitter.

    • I’ll post comments this way. I already post lots of paper comments on my blog. As does Andrew.

      I think a lot of other people will, too. Why do people post answers on Stack Overflow? Why do they review movies on IMDb? Why did you respond to this blog post?

      The incentives seem to me lined up pretty well with reviewing. There’s pretty much zero incentive to act as a reviewer other than (a) to help shape the field, (b) guilt that you’re having other people review papers so you should do your share, and (c) a bit of networking or helping out a friend who asked you to review. Yet people still do it.

      And with online commenting, your comments persist, so you can get some credit for all the work; with reviewing, the most you get is the author thanking anonymous reviewers, or (maybe) editors thanking you for reviewing.

      • Yes, many folks who write blogs comment on the published literature. But blog authors are a very small minority of all scientists. And as for blog commenters, folks who comment are a small minority of those who read blogs. Further, the commenters on any blog typically comment at least in part because it’s a collegial, communal activity. It’s part of a conversation or series of conversations with the blog author and other active commenters. It’s hard for me to imagine the same sense of an ongoing community of conversing colleagues being associated with any particular paper on PubMed, or with PubMed in general.

        As for the analogy with reviewing, I don’t understand why you see the reasons for reviewing as aligned with the reasons to comment on PubMed Commons. The ability of post-publication comments to shape the field is quite low, since the paper is already published, as evidenced by, e.g., the fact that rebutted papers continue to be cited long after the rebuttals are published (links to data here: http://dynamicecology.wordpress.com/2012/10/11/in-praise-of-pre-publication-peer-review-because-post-publication-review-is-an-utter-failure/). As you note, there’s a professional norm that, if you’re submitting papers, you ought to do your share of reviewing. But there’s no such norm for post-publication commenting. And since nobody’s expected to provide post-publication comments on your paper, why should one feel any guilt about not providing post-publication comments on anyone else’s papers?

        I’m sure some folks, like you, will comment on papers in PubMed Commons. But I think people like you are rare, as evidenced by the fact that existing journal-based comment systems are little used. It seems to me that those systems, not blogs or IMDb or Stack Overflow or the pre-publication peer review system, are the closest existing analogues to PubMed Commons.

        But of course, this debate will soon be moot, as it concerns a completely empirical question to which we’ll soon have the answer. If in fact there are lots of people who want to write comments on published papers but haven’t done so because existing journal-based systems are fragmented, clunky, etc., PubMed Commons should be a big success (where by “lots of people” I mean “a sufficient number of people that a substantial fraction of papers attract worthwhile comments”). If in fact there are few people who want to do so, PubMed Commons won’t succeed.

      • What do you all think of this idea:

        http://docs.selectedpapers.net/intro.htm

        It’s basically a protocol for tagging reviews of papers where they happen (i.e., on blogs, social media sites, etc.), which can then be compiled by search engines for easy viewing. The proof of concept just supports papers in arXiv and PubMed and only supports reviews on Google+ and selectedpapers.net, but the idea is much more general….

        In principle, with this sort of system, if Andrew Gelman or Bob Carpenter decide to comment on a paper in one of their blog posts, all they would have to do is add some tags to the post and then, assuming the various repositories cooperate, anyone who went to look for that paper would find links to these comments at the repository. Couple that with an upvoting system at the repositories and you have a decentralized system that can adapt to changing fads (i.e., where people decide to comment on papers).
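The tagging idea above can be sketched in a few lines. This is only an illustration under assumed conventions: the `#spnetwork` marker and the repository-prefixed identifiers (e.g. `pubmed:23999063`) are loosely modeled on the selectedpapers.net proposal, not a statement of its exact syntax:

```python
import re

# Assumed tag syntax: a post opts in with "#spnetwork" and names the paper
# with a repository-prefixed ID such as "pubmed:23999063" or "arxiv:1310.1234".
PAPER_ID = re.compile(r"\b(pubmed|arxiv|doi):([A-Za-z0-9./-]+)", re.IGNORECASE)

def extract_paper_tags(post_text):
    """Return (repository, identifier) pairs for posts tagged #spnetwork."""
    if "#spnetwork" not in post_text:
        return []  # untagged posts are ignored by the aggregator
    return [(repo.lower(), ident) for repo, ident in PAPER_ID.findall(post_text)]

post = ("Interesting result, but the controls look weak to me. "
        "#spnetwork pubmed:23999063")
print(extract_paper_tags(post))  # [('pubmed', '23999063')]
```

An aggregator or repository crawling posts like this could then attach a backlink from the PubMed or arXiv record to the comment, which is the decentralized alternative the commenter is describing.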

  4. Why not let everyone comment but also include a simple prominently featured option to filter comments so that only those by people who have published their own research appear? This can even be the default display setting and there can be an option to see comments by the masses. Maybe there can even be a feature where a select few comments by the masses that get enough ‘up votes’ can join the professional comments.

  5. It is a really good idea and Rob “tibs” has had many good ones before, but this one likely will be much harder to make work (though maybe even much more important) – we _all_ need help getting less wrong and should _never_ discourage well-motivated questioning and comments. Even if it might distract students ;-) or accelerate the learning of our colleagues beyond their comfort level ;-)

    Completely open access to the mike is problematic and very likely would make it fail. On the other hand, there is a reason for me having to put that awkward question mark in my name, and I have been banned by the Cochrane Collaboration’s Statistical Methods Group (SMG) for posting overly provocative comments (apparently lifetime, with no recourse). Helpful is really in the eye of the beholder.

    I hope the details work through; I will contribute as I can to what gets worked out.

  6. See also discussion at RetractionWatch.
    From nearly 30 years’ experience with online discussions, I’d summarize:
    1) It is better to start restricted and expand carefully, with good moderation. Otherwise, the very experts you want go away. I’ve seen that happen in USENET groups, blogs, and comments on newspaper or magazine articles.

    2) Software for efficient moderation, reputation management, and reading efficiency is still primitive. We’re not even back up to USENET KILLFILES. Simple voting, as in the real world, can be gamed. (Imagine if Google worked by voting rather than PageRank.)

    • I’m only up to 29 years, but I’ll concur.

      Very good point about gaming. If careers depend on comments, you’ll inevitably wind up with sock puppetry, just like on Amazon and Yelp reviews. I’m guessing the reason you don’t see that on Stack Overflow is that nobody’s career is in the balance over the results.

  7. Although this is another case where everyone has opinions, there are actually many opportunities for social sciences students to study online behaviors under various circumstances. Some data can be gotten just by examining existing systems; other data will require creating experiments.

    I counsel caution regarding voting systems. They can be useful, but I have seen absolute nonsense about science get massed thumbs-ups, while people who actually try to introduce real science get hammered, especially on contentious topics.
    Suppose a bunch of people simply vote against existence of conservation of energy. How much does their vote count?
    This is a place for serious work on online reputational systems. As in Google PageRank, some votes count much more than others, but if rank just comes from commenting often, that may not be good.
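The PageRank-like point can be made concrete with a toy sketch. Everything here is made up for illustration (it is not any real system’s algorithm): a comment’s score is the reputation-weighted sum of its votes, and a voter’s reputation in turn comes from how their own comments score, iterated to a fixed point:

```python
def reputation_scores(comments, votes, iterations=20):
    """Toy reputation-weighted voting.

    comments: {comment_id: author}; votes: list of (voter, comment_id).
    A comment's score is the sum of its voters' reputations; a user's
    reputation is 1 plus the average score of their own comments, so a
    vote from a well-regarded commenter counts more than a raw click.
    """
    users = set(comments.values()) | {voter for voter, _ in votes}
    rep = {u: 1.0 for u in users}          # start everyone equal
    score = {c: 0.0 for c in comments}
    for _ in range(iterations):
        # a comment's score = sum of its voters' current reputations
        score = {c: sum(rep[v] for v, cid in votes if cid == c)
                 for c in comments}
        # a user's reputation = 1 + average score of their comments
        for u in users:
            owned = [score[c] for c, author in comments.items() if author == u]
            rep[u] = 1.0 + (sum(owned) / len(owned) if owned else 0.0)
    return score, rep

# Two comments; c1 is upvoted by two users, c2 by one.
score, rep = reputation_scores(
    {"c1": "alice", "c2": "bob"},
    [("bob", "c1"), ("carol", "c1"), ("carol", "c2")])
print(score["c1"] > score["c2"])  # True
```

A real system would normalize reputations (as PageRank normalizes by out-degree) to keep the positive feedback loop from diverging, and would still need defenses against sock-puppet rings voting for each other, which is exactly the gaming concern raised above.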

  8. Sounds like a good idea in principle, but closed, elite systems of any type don’t usually thrive. The key concern over who can post comments is correct, but I would be less worried about irrelevance than about the site becoming a kind of LinkedIn job board instead of a discussion group. Hiring a moderator to filter out that sort of spam would be a solution.

    • PubMed is hardly closed or elite. It’s almost anything published in anything vaguely bio-related.

      Existing journal reviews and grant reviews seem even more closed, because the editors nominate reviewers rather than there being any kind of “open” process. Although there are a few exceptions, reviewing is mostly anonymous in my experience. I believe the “peer” in “peer review” is supposed to signal some degree of expertise in the reviewer. Of course, in practice, I can foist off a review on a grad student.

  9. Whether or not it would become a LinkedIn is beyond me, but irrelevance per se is not the issue; the problem is when things get noisy enough to drown out the signal, at which point those who actually provide the signal stop their voluntary participation.

    For any technical subject, there is some threshold below which it is hard to have reasonable discussions, even for a skilled expert who is trying. I have, for instance, seen long arguments in which someone using high school algebra insisted that relativity was wrong, against a physics professor who teaches graduate relativity theory and who patiently explained that it was hard to get anywhere without tensors. The former kept telling the latter he knew nothing.

    Much is innocent, but I have also seen (*) people who try simply to waste experts’ time by politely but persistently asking numerous simple questions … and some kind, knowledgeable person spends much time, going lower and lower in analogies, to explain.

    And it turns out that, if you backtrack into older discussions, they’ve asked the same or equivalent questions over and over and been well answered long before. Experienced moderators eventually see the pattern.

    (This is one of the reasons you really don’t want to discard many comments permanently: that history is sometimes valuable reputation. Of course, rampant sock-puppetry and identity problems remain, such as the Fox News sock puppetry.)

    For an example of good technical discussion done well, see RealClimate. Technical commentary is posted and discussed, but moderated by a group of good scientists. The most frequent moderator is Gavin Schmidt, an award-winning climate scientist at NASA GISS whose patience is amazing. This blog actually tries to convey and discuss science, and from time to time other experts drop by and comment, or do a blog post on their own new paper and then answer questions.

    However, it does have the Borehole (which in the geosciences is a deep hole), of which the last page is this..
    (One flaw of this software is that it does not backlink to the thread the comment was moved from. If you skim that, it may not always be obvious why a comment was placed there. In some cases, this is from the (*) effect seen above, and experienced people have seen the same Internet IDs say the same stuff over and over.)

  10. Pingback: Friday links: why your work will never make the textbooks, ELA only semi-saved, crab monster poetry, and more | Dynamic Ecology

  11. Pingback: PubMed: da alleato a concorrente delle riviste? » Associazione Alessandro Liberati

  12. Hello,

    Thank you very much for this article. I am a newly graduated medical doctor from Jordan. In my medical school, not enough attention was given to medical research. Therefore, I can confidently say that virtually all of my colleagues and I graduated without the ability to critically appraise the literature. When I read a paper’s abstract and I like it, the only way for me to know whether the paper is good or not is to see how many citations the paper got. Nothing better! Well, with a commenting environment supported, I can simply go to PubMed and read whatever comments the paper received. I can then tell whether a paper is good or not. This is why I believe that only professional people should be allowed to comment. I do not want the situation to be like comments under a YouTube video.
    What is more exciting, in the longer term, after reading how experienced authors appraise the published literature, I believe I will be able to develop appraisal skills myself. I am really very excited by the idea and cannot wait to see its effects. In my humble opinion, it will revolutionize research.

    Best regards.

  13. Pingback: When published papers become like YouTube videos: How comments can revolutionize research – PubMed Commons. | Blog of Moa'bite


Comments are closed.