Kent Holsinger points to this post by Kevin Drum entitled, “Publication Bias Is Boring. You Should Care About It Anyway,” and writes:
I am an evolutionary biologist, not a psychologist, but this article describes a disturbing scenario concerning oxytocin research that seems plausible. It is also relevant to the reproducibility/publishing issues you have been discussing recently on your blog.
You all know about publication bias, don’t you? Sure you do. It’s the tendency to publish research that has bold, affirmative results and ignore research that concludes there’s nothing going on. This can happen two ways. First, it can be the researchers themselves who do it. In some cases that’s fine: the data just doesn’t amount to anything, so there’s nothing to write up. In other cases, it’s less fine: the data contradicts previous results, so you decide not to write it up. . . .
This is just fine, but I want to emphasize that publication bias is not only about the “file drawer effect”; it’s not just about positive findings being published while zero or negative findings remain unpublished. It’s also that, within any single project, there are so many different results that researchers can decide what to focus on.
So, yes, sometimes a research team will try an idea, it won’t work, and they won’t bother writing it up. Just one more dry hole—but if only the successes are written up and published, we get a misleading view of reality: we’re seeing a nonrandom sample of results. But it’s more than that. Any study contains within itself so many possibilities that often something can be found that appears consistent with some vague theory. Embodied cognition, anyone?
This “garden of forking paths” is important because it shows how publication bias can occur, even if every study is published and there’s nothing in the file drawer.
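This distinction can be made concrete with a small simulation (a hypothetical sketch, not from the post; all names and numbers are made up for illustration). Every simulated “study” below gets published, yet if researchers measure many null outcomes and report whichever one happens to look significant, far more than 5% of studies end up with a positive finding:

```python
# Hypothetical sketch: forking paths vs. a single pre-registered outcome.
# Every study measures N_OUTCOMES pure-noise variables (true effect = 0).
import random
import math

random.seed(1)

N_STUDIES = 2000   # simulated studies, all of them "published"
N_OUTCOMES = 20    # outcomes measured per study
N_SUBJECTS = 50    # subjects per study
Z_CRIT = 1.96      # two-sided 5% significance threshold

def z_stat(sample):
    """z statistic for testing mean = 0 with known sd = 1."""
    mean = sum(sample) / len(sample)
    return mean * math.sqrt(len(sample))

first_significant = 0   # honest analysis: one outcome chosen in advance
any_significant = 0     # forking paths: report the best-looking outcome
for _ in range(N_STUDIES):
    zs = [z_stat([random.gauss(0, 1) for _ in range(N_SUBJECTS)])
          for _ in range(N_OUTCOMES)]
    if abs(zs[0]) > Z_CRIT:
        first_significant += 1
    if any(abs(z) > Z_CRIT for z in zs):
        any_significant += 1

print(f"pre-registered outcome rate: {first_significant / N_STUDIES:.2f}")
print(f"any-of-{N_OUTCOMES}-outcomes rate:    {any_significant / N_STUDIES:.2f}")
```

With 20 independent outcomes, the chance that at least one clears the 5% bar is roughly 1 − 0.95²⁰ ≈ 0.64, even though the honest single-outcome rate stays near 0.05. No file drawer is needed; the bias comes entirely from choosing what to focus on.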