Publication bias occurs within as well as between projects

Kent Holsinger points to this post by Kevin Drum entitled, “Publication Bias Is Boring. You Should Care About It Anyway,” and writes:

I am an evolutionary biologist, not a psychologist, but this article describes a disturbing scenario concerning oxytocin research that seems plausible. It is also relevant to the reproducibility/publishing issues you have been discussing recently on your blog.

Drum writes:

You all know about publication bias, don’t you? Sure you do. It’s the tendency to publish research that has bold, affirmative results and ignore research that concludes there’s nothing going on. This can happen two ways. First, it can be the researchers themselves who do it. In some cases that’s fine: the data just doesn’t amount to anything, so there’s nothing to write up. In other cases, it’s less fine: the data contradicts previous results, so you decide not to write it up. . . .

This is just fine, but I want to emphasize that publication bias is not just about the “file drawer effect”: it’s not just about positive findings being published while null or negative findings remain unpublished. It’s also that, within any single project, there are so many different results that researchers can decide what to focus on.

So, yes, sometimes a research team will try an idea, it won’t work, and they won’t bother writing it up. Just one more dry hole. But if only the successes are written up and published, we will get a misleading view of reality: we’re seeing a nonrandom sample of results. And it’s more than that. Any study contains within itself so many possibilities that something can almost always be published that appears consistent with some vague theory. Embodied cognition, anyone?

This “garden of forking paths” is important because it shows how publication bias can occur, even if every study is published and there’s nothing in the file drawer.
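To make this concrete, here is a minimal simulation sketch in R (my illustration, not anything from Drum’s post or the oxytocin literature; the sample size, the 20 available outcomes, and the 0.05 threshold are arbitrary assumptions). Every simulated study is pure noise, yet most of them contain at least one “significant” result that could be selected for the write-up:

```r
# Illustrative simulation: each "study" measures many outcomes on pure-noise
# data; we check how often at least one outcome comes out "significant".
set.seed(123)

n_sims     <- 10000  # number of simulated studies
n_subjects <- 50     # per-study sample size (arbitrary)
n_outcomes <- 20     # outcomes/analyses available within each study (arbitrary)

# For each simulated study, run a one-sample t-test on each outcome and
# record whether any of them reaches p < 0.05.
any_significant <- replicate(n_sims, {
  p_values <- replicate(n_outcomes, t.test(rnorm(n_subjects))$p.value)
  any(p_values < 0.05)
})

mean(any_significant)  # roughly 0.64: most null studies contain a "finding"
```

With 20 independent null tests per study, the chance of at least one p < 0.05 is 1 − 0.95^20 ≈ 0.64, which is what the simulation returns. Note that every simulated study gets “published” here; the bias comes entirely from choosing which within-study result to highlight.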

Comments

  1. All of Andrew’s criticisms are valid, but one major problem with the criticism of psych-type papers is that honesty will be punished with rejection when one submits a paper. In practice, nobody, no matter how willing they are to do the right thing, can currently follow Andrew’s advice down to the last detail! Some things can be fixed without rejection, but the fact is that most studies yield ambiguous results, and journals want (and I am quoting an editor) “closure”. Andrew, maybe offer yourself as chief editor of one of the big psych journals and enforce top-down policy change.

    • Should you not be aware of it already, there are now some journals that offer the “Registered Report” format (please see https://osf.io/8mpji/wiki/home/).

      If I understand correctly, in that format it is the research design, etc., that is evaluated for publication acceptance, rather than, for instance, the significance of the results. Ambiguous results are not a problem in that format, because the paper is accepted or rejected before the results are known.

    • “journals want (and I am quoting an editor) ‘closure’.”

      Yech. If you’re going to claim to be doing science, you’ve got to accept uncertainty as part of real life. “Closure” is something off in a fairy tale world. So sad that they do this.

  2. What about academics taking the lead and rewarding honest research attempts, rather than insisting on publications in the journals that demand “closure”? As long as we continue to play that game, little will change. Surely we can evaluate the quality of research by actually looking at what people do, and how they do it, rather than relying on this flawed publication process.

    • My impression is that most researchers in psych, linguistics, etc., don’t even understand what the problems are, and don’t have the time or inclination to question their practices.

      Funding agencies can and do kill careers, and (at least in the EU) they are often heavily focused on publication and citation counts. I had an extension of a project rejected because they said I didn’t have enough publications (3 in a four-year period), so now I make sure I have a lot of publications when I get funding. My impression is that they mainly care about the number. Similarly, university tenure cases are not necessarily decided by experts who can judge a person’s work. The lower the expertise of the evaluator, the more they depend on “objective” metrics, and this is where publication count is make-or-break.

      • Shravan,
        Undoubtedly you are right. I’ve encountered similar things. I only want to point out that we (the collective “we”) have some power to change these circumstances ourselves. I’m growing tired of passively accepting the practice of counting publications, ignoring quality, or substituting publication counts for measures of quality because of the “lower expertise of the evaluator.” If lack of expertise is a valid excuse for resorting to publication counts, then we have no business pretending we can instruct students at all. Let’s embrace the role of judging quality ourselves; we will make mistakes, but that is no different from the research we do anyway.

    • Shravan has an accurate picture of the academic areas I was in. (I was forced out of the last clinical research institute I was in, likely for not fitting in with a couple of senior members; I was slightly below the 3-or-4-publications-in-the-past-4-years mark, and had I not been, I likely could have insisted on staying.)

      There are efforts underway to correct these things, e.g., Frank Miedema (Dean and Vice Chairman of the Executive Board, University Medical Center Utrecht), “Improving research quality by proper incentives and rewards”: https://youtu.be/Xea-tZzKrQ8
      (Also see “Session VI: Charting the Course – Exploring Top Proposals from Poster Sessions”: https://youtu.be/d7HM1R7q0oE)

      But it will take decades or more :-(

  3. Maybe the solution is to keep writing papers as we do, but to include the entire R script & output/SPSS log/whatever as an online appendix. Those who are really interested can go look at that. The paper is just an advertisement.
