The Publicity Factory: How even serious research gets exaggerated by the process of scientific publication and media exposure

The starting point is that we’ve seen a lot of talk about frivolous science: headline-bait such as the study that said that married women are more likely to vote for Mitt Romney when ovulating, or the study that said that girl-named hurricanes are more deadly than boy-named hurricanes. At this point some of these studies are almost pre-debunked, and reporters are starting to realize that publication in Science or Nature or PPNAS is not only no guarantee of correctness but also no guarantee that a study is even reasonable.

But what I want to say here is that even serious research is subject to exaggeration and distortion, partly through the public relations machine and partly because of basic statistics. The push to find and publicize so-called statistically significant results leads to overestimation of effect sizes (type M errors), and crude default statistical models lead to broad claims of general effects based on data obtained from poor measurements and nonrepresentative samples.

One example we’ve discussed a lot is the claim about the effectiveness of early childhood intervention, based on a small-sample study from Jamaica. This study is not “junk science.” It’s a serious research project with real-world implications. But the results still got exaggerated. My point here is not to pick on those researchers. No, it’s the opposite: even top researchers exaggerate in this way, so we should be concerned in general.

What to do here? I think we need to proceed on three tracks:
1. Think more carefully about data collection when designing these studies. Traditionally, design is all about sample size, not enough about measurement.
2. In the analysis, use Bayesian inference and multilevel modeling to partially pool the estimated effect sizes, giving more stable and reasonable output (see the sketch just after this list).
3. When looking at the published literature, use some sort of Edlin factor to interpret the claims being made based on biased analyses.
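To give a sense of what partial pooling does, here’s a minimal sketch in Python. The five effect estimates and standard errors are invented for illustration, and the moment-based estimate of the between-study variation is a shortcut; a real analysis would fit a full Bayesian multilevel model (in Stan, say) rather than this quick approximation.

```python
# Minimal sketch of partial pooling across a set of estimated effects.
# All numbers are invented for illustration.
import numpy as np

y = np.array([0.60, 0.05, -0.15, 0.35, 0.20])   # raw (unpooled) effect estimates
se = np.array([0.15, 0.12, 0.20, 0.18, 0.15])   # their standard errors

# Crude estimates of the population mean and between-study variance of effects
mu_hat = np.average(y, weights=1 / se**2)
tau2_hat = max(np.var(y, ddof=1) - np.mean(se**2), 0.0)

if tau2_hat == 0.0:
    # No evidence of between-study variation: pool all the way to the group mean
    theta_pooled = np.full_like(y, mu_hat)
else:
    # Precision-weighted compromise between each raw estimate and the group mean
    w = (1 / se**2) / (1 / se**2 + 1 / tau2_hat)
    theta_pooled = w * y + (1 - w) * mu_hat

print("raw estimates:    ", np.round(y, 2))
print("partially pooled: ", np.round(theta_pooled, 2))
```

The point is just the direction of the adjustment: the extreme raw estimates get pulled toward the group mean, which is the opposite of what the publicity machine does when it trumpets the largest, noisiest numbers.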

The above remarks are general; indeed, they were inspired by yesterday’s discussion about the design and analysis of psychology experiments, as I think there’s some misunderstanding in which people don’t see where assumptions are coming into various statistical analyses (see for example this comment).

A big big problem here, I think, is that many people seem to have the impression that, if you have a randomized experiment (or its quasi-randomized equivalent), then comparisons in your data can be given a general interpretation in the outside world, with the only concern being “statistical significance.” But that view is not correct. You can have a completely clean randomized experiment, but if your measurements are not good enough, you can’t make general claims at all. Indeed, standard methods yield overestimates of effect sizes: with noisy measurements, the only estimates that clear the significance threshold are the ones that happen to come out large, so the estimates that get selected for publication are biased upward.
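Here’s a minimal simulation of that selection effect, with invented numbers: a true effect of 0.1 measured so noisily that the standard error is 0.25, i.e., a badly underpowered study.

```python
# Small simulation of the type M (magnitude) error described above.
# The true effect and standard error are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_effect = 0.1   # hypothetical true treatment effect
se = 0.25           # standard error implied by noisy measurements and a small sample
n_sims = 100_000

# Each simulated experiment gives an unbiased but noisy estimate of the effect
estimates = rng.normal(true_effect, se, size=n_sims)

# Keep only the experiments that reach "statistical significance" at p < 0.05
significant = np.abs(estimates) > 1.96 * se

print("share of experiments reaching significance:", significant.mean())
print("mean estimate, all experiments:            ", estimates.mean())
print("mean |estimate| among the significant ones:", np.abs(estimates[significant]).mean())
```

Only a small fraction of the simulated experiments reach significance, and the ones that do report estimates several times larger than the true effect of 0.1, even though every single experiment is a perfectly clean randomized comparison.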

And, again, this is not just a problem with junk science. Naive overinterpretation of results from randomized comparisons is a problem with lots of serious work in the human sciences too.

10 thoughts on “The Publicity Factory: How even serious research gets exaggerated by the process of scientific publication and media exposure”

  1. There have been numerous examples in running-related studies in recent years, perhaps one of the more egregious being back in 2015, when over-interpretation of the study findings suggested that strenuous exercise was possibly worse for your health, or at least no better, than being a “couch potato”.

    The original paper is here:

    http://www.sciencedirect.com/science/article/pii/S0735109714071745

    The key conclusion statement being:

    “The findings suggest a U-shaped association between all-cause mortality and dose of jogging as calibrated by pace, quantity, and frequency of jogging. Light and moderate joggers have lower mortality than sedentary nonjoggers, whereas strenuous joggers have a mortality rate not statistically different from that of the sedentary group.”

    It took two months after the publication and the high-profile media headlines for the lead author, in an interview with the BBC, to retract the overblown conclusion regarding strenuous exercise, acknowledging that the study had not, in fact, demonstrated this finding, that the sample size was too small, and that there were other confounding issues:

    http://www.bbc.com/news/magazine-32160231

    The lead author, in the BBC interview, suggests that “experienced readers of research papers” should have recognized this, but as we all know, once the media get hold of a result like this, the lay public has no such experience; hence the great potential for negative impact.

    It was also interesting that one of the paper’s co-authors has a history of bias in their publications against strenuous training for running (e.g., marathons), and this paper was yet another example. Thus, in certain circles, the findings were met with immediate concern.

  2. I sometimes see researchers supporting their argument by referring to studies that are actually about something else.

    For instance, in a NYT op-ed, a UNC professor asserts that *popularity* makes you live longer. In support of this, he cites two studies that are actually about social integration and longevity, not popularity and longevity. Integration and popularity are not the same.

    In addition, there’s a blatant discrepancy. He says: “Dr. Holt-Lunstad found that people who had larger networks of friends had a 50 percent increased chance of survival by the end of the study they were in.” But Dr. Holt-Lunstad herself says about her meta-study, “Across 148 studies (308,849 participants), the random effects weighted average effect size was OR = 1.50 (95% CI 1.42 to 1.59), indicating a 50% increased likelihood of survival for participants with stronger social relationships.” How did “stronger” turn into “larger”?

    He must have had reasons for choosing to work with the concept of popularity. But having chosen this topic, he should cite research that discusses it.

    The book may be better than the op-ed, but the title (Popular: The Power of Likability in a Status-Obsessed World) increases the longevity of my doubts.

  3. Andrew,

    You suggest 3 actions, but these are for the producers of papers and of the press releases spruiking those papers. The problem is that the penalties for overhyping are close to zero but the benefits are high. Publish or perish. Overhype and get more funding and citations.
    The sellers obtain benefits from these games.

    In the commercial world, sellers would love to succeed in selling overhyped rubbish. Buyers can be conned for a while, but because they are spending their own money they usually find ways to determine reality.

    Isn’t this the challenge for consumers of research? Some kind of sanctions? If the only determinant for sellers is getting published and getting cited with no penalties for what turns out to be junk or hype, then your recommendations sound wonderful but will be completely ineffective.

    • Steve:

      Not completely ineffective, I hope. There are lots of researchers out there trying to do their best, and some statistical advice could make a difference. A lot of these messages never get taught.

  4. “But what I want to say here is that even serious research is subject to exaggeration and distortion, partly through the public relations machine and partly because of basic statistics. The push to find and publicize so-called statistically significant results leads to overestimation of effect sizes (type M errors), and crude default statistical models lead to broad claims of general effects based on data obtained from poor measurements and nonrepresentative samples.”

    It might be helpful to think about who is the intended consumer of these psychological studies.

    For medical research, there are boards tasked with reviewing the research and making recommendations and setting guidelines for practitioners. Such boards can see through a lot of the PR hype. In other serious fields, other researchers are the consumers and the field can come to some sort of consensus about what is established in that field. Again, these are well-informed consumers who can filter out the PR hype.

    But, for these goofy PNAS studies, it seems that the intended consumers are the general public. So PNAS-style studies are more akin to Oprah’s latest diet tips than to serious scientific research. I agree that horrible biases are almost impossible to avoid here, because the most hyped studies get the most attention and solid studies are almost completely ignored, since real results are almost always very modest contributions. Plus, the general public is not able or willing to sort through the junk. So perhaps we should just give up, consign PNAS-style results to Oprah, and not get so upset.

    This might solve the problem if it were only PNAS that was publishing goofy research. But some fields are prone to goofy fads, and those fads have serious consequences. Education and the self-esteem fad come to mind: http://www.unz.com/isteve/the-great-self-esteem-craze-of-the-late-20th-century/. Education seems to be peculiarly susceptible to fads. Why? Perhaps because education policy is often set by politicians who are not interested in the science? Or maybe because goofy activists seem especially attracted to education (this seems to be the case with the self-esteem fad). What do we do about those fields?

    • Terry:

      I know what you mean. On the other hand, I was contacted not long ago by someone connected with PNAS who seemed to be genuinely upset that the journal was publishing so many obviously bad papers.

      By comparison, one good thing about Plos-One—or, for that matter, Arxiv—is that they don’t pretend to be selective. It’s just part of the plan that they’ll publish the bad with the good, so it’s not so embarrassing when the bad stuff comes out. When Technology Review, for example, hypes some crappy preprint from Arxiv, we can say the problem is Technology Review’s for falling for it; we don’t blame Arxiv.

    • Terry said: “For medical research, there are boards tasked with reviewing the research and making recommendations and setting guidelines for practitioners. Such boards can see through a lot of the PR hype. In other serious fields, other researchers are the consumers and the field can come to some sort of consensus about what is established in that field. Again, these are well-informed consumers who can filter out the PR hype.”

      I’m not convinced that the review boards can see through a lot of the PR hype — and even if they do, what seeps down to the practitioner may be overblown (e.g., the cardiologist who said, “If we could clone you and you took this medication but your clone did not, you would live longer than your clone,” has confused an average effect with an individual effect).

      I’m also not convinced that researchers in other serious fields can filter out all the PR hype; they may come to “some sort of consensus about what is established in that field”, but that consensus may be based on misunderstanding of the strength of the “findings” on which it is based.

      • I heartily agree that “serious” researchers do not filter out all the PR hype – publish or perish incentives corrupt even the best fields, and there are skillful charlatans everywhere. There also seem to be some fields that are just pure garbage for political reasons.

        But, when there are some serious people involved, it is an order of magnitude better than when “science” is sold directly to Oprah’s viewers.
