“Like a harbor clotted with sunken vessels”

After writing this post on an error in one of my published papers, I got to thinking about the general problem of mistakes in the scientific literature.

Retraction is not a serious solution to the problem.

And there are lots of people out there who simply refuse to admit, let alone correct, their published errors: I’m thinking about the authors of the papers on beauty and sex ratio, ovulation and voting, ovulation and clothing, fat arms and political attitudes, embodied cognition, himmicanes, air rage, power pose, ESP, ages ending in 9, the pizzagate dude, that “gremlins guy” . . . the list is endless.

I continue to think that post-publication review is the way to go. But in the meantime, when doing science or science reporting, we need to navigate a literature full of published and publicized articles that are flat-out wrong, including many studies, such as the beauty-and-sex-ratio analysis, that never had a chance of providing any useful information, for the usual kangaroo reason.

20 thoughts on ““Like a harbor clotted with sunken vessels””

  1. This is a perfect place for machine learning (ML) and big data. Every paper could have its progeny checked, meaning the papers that reference the original one. That would create a timeline (really, a branching bush) of descendants, annotated with assessments of the original paper’s validity drawn from blogs like this one and other post-publication reviews (a rough sketch of such a structure follows this thread).

    In this way, when I find one of your papers five years after publication, I could get an immediate idea of how well it has held up, and how reliable others in the field think it is.

    • For the most part no one knows what is valid/correct or not, since NHST is what’s being used for that assessment. You need to find a descendant paper that makes a prediction (precise enough to matter), then a later descendant paper that checks it.
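    A minimal sketch of the structure comment 1 describes: a citation graph in which each paper carries post-publication assessments, and a paper’s “how well has it held up” summary is aggregated from its descendants. The scoring scale, field names, and weights here are all hypothetical; in practice the assessments would have to be mined from blogs, replication reports, and similar post-publication sources.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Paper:
        """A node in the citation 'branching bush'; all fields hypothetical."""
        paper_id: str
        assessments: list = field(default_factory=list)  # review scores in [-1, 1]
        cited_by: list = field(default_factory=list)     # descendant papers

    def held_up_score(paper, depth=0, max_depth=3, decay=0.5):
        """Combine a paper's own post-publication assessments with a
        geometrically down-weighted signal from its descendants."""
        own = (sum(paper.assessments) / len(paper.assessments)
               if paper.assessments else 0.0)
        if depth == max_depth or not paper.cited_by:
            return own
        child = (sum(held_up_score(p, depth + 1, max_depth, decay)
                     for p in paper.cited_by) / len(paper.cited_by))
        return own + decay * child

    # Toy example: one failed and one mildly positive replication.
    original = Paper("doi:10.0000/original")
    original.cited_by = [Paper("doi:10.0000/rep1", assessments=[-1.0]),
                         Paper("doi:10.0000/rep2", assessments=[0.5])]
    print(f"held-up score: {held_up_score(original):+.2f}")  # -0.12, i.e. shaky
    ```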

  2. While the current situation is appalling, it’s important to remember that the reason we’re appalled is that there are people brave enough to take hard looks at the methods of others.

    Having lost both grandmothers to breast cancer, and with my mom currently on tamoxifen, I’m keenly interested in research for a cure. A few years ago it became obvious to anybody who dared to look that a great number of researchers weren’t verifying their cell lines and so were often proposing new breast cancer treatments based on studies of non-breast, non-female, and sometimes even non-human cells. I assumed papers would be retracted and funding would be diverted elsewhere. No such luck. A quick search for the newest on this issue at PubMed turns up a gem: a melanoma cell line from a male was misidentified as a breast cancer line years ago; it was reliably demonstrated to be misidentified almost a decade ago; and it’s still “described as a breast cell line in 56% (123/221) of recent publications between 2013 and 2016” (http://onlinelibrary.wiley.com/doi/10.1002/ijc.31067/full). By the way, the forensic story set out in this paper is fascinating, as is the loss of the Y chromosome following decades of cell culturing.

    According to the authors of this paper, out in print last month but online last October, their PubMed search returned a total of 1,205 breast cancer papers (1982–2016) referencing this male melanoma cell line. I just reran the query (see the sketch at the end of this comment); the total has since risen by four dozen.

    As we marinate in our disgust with mindless science we can at least be grateful for those who bring it to our attention.
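    For anyone who wants to rerun this kind of count themselves, here is a minimal sketch against NCBI’s public E-utilities esearch endpoint. The query string below is only a placeholder: the comment doesn’t give the exact search term behind the 1,205 figure, so treat the term and date range as assumptions to be replaced.

    ```python
    import json
    import urllib.parse
    import urllib.request

    # NCBI E-utilities search endpoint for PubMed.
    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_hit_count(term, mindate, maxdate):
        """Return the number of PubMed records matching `term`,
        restricted by publication date (YYYY or YYYY/MM/DD)."""
        params = urllib.parse.urlencode({
            "db": "pubmed",
            "term": term,
            "datetype": "pdat",   # filter on publication date
            "mindate": mindate,
            "maxdate": maxdate,
            "retmode": "json",
        })
        with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
            result = json.load(resp)
        return int(result["esearchresult"]["count"])

    # Placeholder query; the actual cell-line search string is not given above.
    print(pubmed_hit_count('"breast cancer" AND "cell line"', "1982", "2016"))
    ```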

  3. One idea that I’ve heard a couple of times is a StackOverflow for papers. Some base pool of papers would start on the site. If authors wanted to upload or link their paper, they would first need to review at least two papers (at least one of which hadn’t been reviewed before, and none of them their own), perhaps against constrained, objective criteria (are the methods valid?) as well as open comments. People could also up/downvote reviews. Authors with strongly reviewed papers and reviewers with well-rated comments could then have more influence in the community. (A toy sketch of such a system follows this thread.)

    There could be problems with rival labs (I imagine those exist, though not really in my corner of academia) earning strong support from parts of the community and burning each other, but with enough reviews those would hopefully come out in the wash.

    Ideally, this would add useful information for researchers beyond citations and publication. As a biostatistician, when I’m looking into a problem with an unfamiliar context or methodology, it would be nice to easily find influential papers and new developments without worrying about the validity of each one. It would also help provide criteria for deciding which papers to include in a meta-analysis.

    • I don’t think the problem with this is malfeasance in the comments (it is a rare academic who does not have something to complain about, and people should critically review the critiques just as they do the original paper); it is that there is no incentive for people to write reviews. What do I gain as a public reviewer commenting on papers?

      PubMed recently closed down its comment section, PubMed Commons (https://retractionwatch.com/2018/02/02/pubmed-shuts-comments-feature-pubmed-commons/). Some journals, like Sociological Science, have comment sections that basically no one uses. (For an anecdote in the other direction: critiques of biological images on PubPeer seem to have reasonable buy-in.)

      Given the current way things work, I think the easiest step forward is not post-hoc review but making the initial peer reviews public (and not anonymous). That way everyone can see the often silly sausage-making of peer review. Ultimately, though, people need to use their own judgment in assessing the quality of any given work. In a perfect world that would go along with a central repository hosting the openly available paper, so you have the critiques and the draft all in one place.
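    To make the mechanics of comment 3’s proposal concrete, here is a toy sketch of the data model it implies: structured reviews with votes, an upload rule requiring prior reviews, and a vote-based reputation score. Every name here is hypothetical, and real anti-gaming safeguards (the rival-lab worry above) would need far more than this.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Review:
        reviewer: str
        methods_valid: bool   # constrained, objective criterion
        comments: str         # open-ended remarks
        upvotes: int = 0
        downvotes: int = 0

    @dataclass
    class Paper:
        title: str
        authors: list
        reviews: list = field(default_factory=list)

    def can_upload(author, site_papers):
        """Gate: >= 2 reviews written by `author`, at least one of them the
        first review on its paper, and none on the author's own papers."""
        not_own = [p for p in site_papers if author not in p.authors]
        written = [(p, r) for p in not_own for r in p.reviews
                   if r.reviewer == author]
        first_on_paper = [1 for p, r in written if p.reviews[0] is r]
        return len(written) >= 2 and len(first_on_paper) >= 1

    def reputation(author, site_papers):
        """Net votes on the author's reviews, used to weight their influence."""
        return sum(r.upvotes - r.downvotes
                   for p in site_papers for r in p.reviews
                   if r.reviewer == author)

    # Toy usage: carol reviews two previously unreviewed papers she didn't write.
    p1 = Paper("Paper A", ["alice"])
    p2 = Paper("Paper B", ["bob"])
    p1.reviews.append(Review("carol", methods_valid=True, comments="solid design"))
    p2.reviews.append(Review("carol", methods_valid=False, comments="underpowered"))
    print(can_upload("carol", [p1, p2]), reputation("carol", [p1, p2]))  # True 0
    ```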

  4. I wonder where academic publishing went wrong. If you read journals from the 19th century (or in some cases into the 40s and 50s) many of the articles were basically letter chains, where someone made a point, someone else responded, and then there were responses to the responses. At some point, the give-and-take inherent to scholarly inquiry seems to have been lost.

    • The same place mass higher education went wrong. Things never scaled well once we decided that academia wasn’t just for the elites. When it was just for the elites, there were plenty of other problems. But scaling things up created much of the mess we have now. I’d say that things were always wrong, though perhaps in different ways. We never identified what the problems were (how to measure the quality of students, of research, of teaching), so it isn’t surprising that we have issues now.

      I remember early in my career meeting a famous retired mathematician who was observing my job-search headaches. He said that in his time nobody searched for positions; they were nominated by other academics. I envied those days, but of course that system had plenty of problems of its own (though different from those of the current academic job market).

  5. What is “post-publication review”?

    Isn’t every citation a form of post-publication review (at least to the extent that there’s a comment on the paper rather than just a glancing reference)? Certainly review articles are post-publication review. So are blogs.

    You mean something formal?

    • I wouldn’t call citations a form of post-publication review; citations can be for a variety of reasons, and there is no guarantee that the person citing has even read the paper in any detail. A review involves some discussion of the merits (or lack thereof) of the paper.

  6. In the clinical research areas I worked in, no one could sort out the floating from the sunken vessels.

    Essentially, the authors learned to follow the guidelines for _good_ reporting of clinical studies in their submissions, but there was no way to assess what they really did versus how they misperceived or misrepresented it in their papers.

    So most of them appeared to be well done and sensibly analysed, but in cases where I was given access to the raw data and study materials, for instance, many turned out to have serious deficiencies.

    Given these were selective samples, no one knows how prevalent this actually is.

    If you only have access to the published paper, there seems to be little way to know in a given case.

      • My search function seems to be case-sensitive: earlier I tried both “kangaroo” and “kan”, with no hits. After your reply, I tried both of those again, without success. Then I tried “oo” and got “Kangaroo”, which I could then also find by searching for “Kan” or “Kangaroo”.
