How to improve science reporting? Dan Vergano sez: It’s not about reality, it’s all about a salary

I happened to be looking up some things on cat-owner Dan Kahan’s blog and I came across this interesting comment from 2013 that I’d not noticed before.

The comment came from science journalist Dan Vergano, and it was in response to a post of Kahan that discussed an article of mine that had given advice to science journalists on how to avoid getting fooled by junk science (the sort of thing that nowadays I’d call PPNAS or Psychological Science-style research on himmicanes, power pose, and the like, and which a few years ago I was calling schoolyard evolutionary biology of the Kanazawa variety).

I’d given journalists the advice, when evaluating some scientific claim, to ask experts in neighboring fields, for example, asking a cognitive psychologist to comment on the work of a social psychologist, or asking a computer scientist for views on the work of a statistician, to have a chance to get a broader view. The point was that various claims can be hard for a reporter to directly evaluate, but there should be some option beyond simply jumping off the press release.

Here’s what Vergano wrote:

I have to admit some confusion at Gelman’s suggestion. I attended a graduate science reporting program 20 years ago and teach science journalism now. Finding a collection of credible external sources to comment on science-related news (a quick, admittedly ad hoc, form of peer review often drawn from a literature search on a topic) is standard operating procedure for science news reporters. It is largely what we do, asking outside experts to sanity check new results. When enough of them agree that something is reasonable or newsworthy or both (or not), we report those reflections in our coverage of the news. That is how it is supposed to work.

What I suspect Dr. Gelman (whom I greatly respect) is seeing is the economic destruction of news reporting as a profession brought about by the market failure of the advertising business to support public-minded reporting. When there is no money to pay for the time required to do the kind of sourcing that everyone in the profession has agreed is good and useful for at least a century, then it doesn’t happen. The result is web pages full of single-source pieces masquerading as news. This is the world that the collapse of the national advertising business from a $55 billion industry in 2006 to a $22 billion one in 2012 has delivered. The industry has shed tens of thousands of the reporters who used to do the kind of vetting with sources that he is advocating. Here are details from the Pew Research Center: http://stateofthemedia.org/

We don’t need to instill a sourcing ethos in journalists (it’s already there), we need to provide them a way to make a living while delivering well-sourced news reports. Until advertisers place a premium on this sort of news, we will see continued erosion of sourcing practices. This is driven by economics, not culture.

Vergano makes a good point. There will always be skeptical, thoughtful reporters such as Ed Yong and Sharon Begley, and there will always be Malcolm Gladwell-style reporters who, for better or worse, prefer to tell a clean story rather than cover scientific uncertainty. But there are lots of journalists in between, and if they’re in an environment where it’s all about quick page views, we’d expect them to do a lot worse than in a world where they have more space to operate.

And then you get an unfortunate feedback effect, where scientists are encouraged to become Ted talk stars, and corporate and university public relations offices adapt to the new media landscape.

I don’t think new media is all bad, or even bad on net—I blog, after all!—but I agree with Vergano that I was missing something by focusing on individual reporters and not stepping back and looking at their incentives.

14 thoughts on “How to improve science reporting? Dan Vergano sez: It’s not about reality, it’s all about a salary”

  1. Perhaps you could have not “missed something” by following your own recommendation — before giving advice to journalists, asking journalists (or those in a neighboring field) for a sanity check. Feels very “meta”.

      • I will complete the meta-box by replying here, to say that Dr. Gelman is letting me off too easy: There are real problems in the culture of science journalism that UCSC’s Tom Hayden has written about (I chimed in on this at LWON: http://www.lastwordonnothing.com/2013/08/01/conversation-with-dan-vergano-the-science-ghetto/) and people like Boyce Rensberger and Vince Kiernan (especially) noted two decades ago.

        Still, underneath this cultural problem are those incentives: you can’t get professional work for trade wages. This is a classic market failure for a public good, quality news. That holds for science news in this discussion, but it is true in nearly every other sphere of reporting.

        • Is it unrealistic to consider a future where quality media establishments change their sourcing practices, so that there’s some way a consumer could differentiate between single-source copy-and-paste, editorial review, and outside review?

          The current model seems to rely on pure trust won slowly over time. For better or worse, there is very little trust in media today, and having consumers properly update reputations on their own has not been a successful model.

          I don’t understand why reporting cannot simply provide full transcripts of all interviews today. The cost of storing plain text is minimal and one would think responsible media transcribes or records interviews anyway. Instead everyone is asked to trust that any pulled quotes aren’t misleading or have faith that some external validation was made before sending a press release to be reprinted.

  2. After one gets a decent job, it’s so easy to forget how much low pay is a barrier to quality work. Personally, I spent years being frustrated at journal proofreaders. In my experience, their work is often slipshod. Sometimes, I’ve felt that the journal proofreader has introduced more errors than they corrected. I imagine that many of your readers have had similar complaints and frustrations.

    But then I found out that many journal proofreaders are paid )^&*%. Total *(&*%. Changed my perspective on the situation entirely.

    Thanks for the thoughtful post.

  3. This makes a lot of sense and explains some of the range in quality I see in reporting–sometimes from the same author. I enjoy reading the Science of Us pieces in New York Magazine–especially Jesse Singal’s, some of which are terrific–but often find myself shaking my head. The journalists probably have so many pieces in queue, as well as last-minute assignments, that they have to dash them off. Sometimes they go for what I’d call “token skepticism”–tossing in a quizzical remark for good measure, but not going nearly as far as they could or should.

    An example is today’s piece by Melissa Dahl, “A Small but Deeply Sad Detail in that Study on Facebook and Friendship,” about a study, published in PNAS, that says Facebook users live longer than non-users. The “deeply sad detail,” according to Dahl, is that the lower mortality risk is associated with friendships *accepted*, not friendships *extended*.

    This is only “deeply sad” if (a) you equate Facebook “friending” with substantial relationships and (b) you think the study has weight.

    I am not a Facebook fan–I have an account but use it minimally–so I’ll leave (a) alone. As for (b), the study has some obvious problems. The proportion of the deceased is high, as the researchers compared *all* Facebook users who died between 2012 and 2013 with a stratified sample of living Facebook users.

    “To ensure age and gender covariate balance in our analyses, we compared all deceased individuals on Facebook to a stratified random sample of nondeceased individuals … from the full and voter populations described above. There were 179,345 people in our age- and gender-based probability sample of Facebook users born between 1945 and 1989, of whom 17,990 died between January 2012 and December 2013; 89,597 were also present in the California voter record, of whom 11,995 had died between January 2012 and December 2013.”

    I’d think this would skew things considerably. (The article doesn’t mention this at all.)

    That’s just the beginning of the problems that I see (from a layperson’s perspective) in the study.

    But where I previously would have faulted the journalist, I now recognize that they have deadline after deadline and have to write things up and move on.

    (I sympathize; I don’t have time to look into many of these studies. I do intend to look closely at this one, though.)

    • P.S. Two points of clarification: When I say “The article doesn’t mention this at all,” I mean the New York Magazine piece. Also, the study’s authors do warn against jumping to conclusions. I am writing about the study on my blog and won’t go on about it here; I just wanted to clarify those two points. Also, Dahl expresses a little more skepticism in her article than I gave her credit for (though not as much as I think the study warrants).

      • P.S. I have since been learning about case control studies; at this point I have fewer qualms about the selection method than about the act of comparing deceased and nondeceased Facebook users (which seems extremely problematic). I have written 2.5 posts about this study on my blog and will continue with two more on Sunday or Monday.

  4. Funny to hear Ed Yong described as a skeptical, thoughtful reporter. Palaeoevolution, on which he often writes, is in a worse state than social psychology. You can’t even get to the stage of doing stats wrong when you can’t do experiments directly on the central issues in the first place because your job is discovering what happenED instead of what happenS. You should therefore be particularly careful, but instead, practitioners simply abandon all suitable scientific principles, and make all decisions on a social basis.

    I gave a wry smile at hearing you recommend seeking advice from explicitly a cognitive psychologist or computer scientist! I am both and can identify both groupist/individual corruption/self-deception, and also major flaws at the heart of what little IT is used, but Yong never questions any current sacred tenets and never seeks outside (in any sense) opinions.

    Vergano unwittingly highlights the deadly flaws in the profession of science journalism he teaches:

    “Finding a collection of credible external sources…”

    He has nothing but a social basis on which to grant credence. (He can recognise group approval but can’t recognise when the group is both wrong, and only joinable by those accepting group errors.)

    “…to comment on science-related news (a quick, admittedly ad hoc, form of peer review…”

    The latter understood as the heart of science only by those doomed to be involved in science but without a basic understanding of it.

    “…often drawn from a literature search on a topic) is standard operating procedure for science news reporters.”

    Try getting published in a corrupt field without being corrupt. You can’t get into the literature, so this will stop you getting questioned.

    “It is largely what we do, asking outside experts to sanity check new results.”

    It’s because there are no examples of Yong ever having done that, that he is such a nuisance.

    “When enough of them agree that something is reasonable or newsworthy or both (or not), we report those reflections in our coverage of the news.”

    When enough in a field realise that big improvements are often easier to block than to be adjusted to, the better scientific work is, the harder it gets to sell… on top of the hard job of doing it in the first place. And taking money out of the equation would make no real difference to palaeontological science actually.

    Until journalists understand that science is a process of duelling theories, and that all of it at one time had support from one person alone and so cannot be handled democratically, journalists will be no more than a series of self-righteous road-blocks to the best scientists. And it would help if journalists learned that science is not the Kuhnian “what scientists DO do” but the Popperian “what they SHOULD do”.

    “That is how it is supposed to work.”

    It isn’t. Good – indeed simply decent – scientific journalism has skepticism as one essential component (and not just imitation skepticism) and genuine competence in both the specific field and the nature of science in general, as another.

    Last year when yet another piece of genuine science showed once more that peer review only corrupts science, Yong was preparing what was described as a “remarkable new book” on BBC Radio 4, on how your gut flora was important for your health. My mother told me that before he was born. Neither the BBC nor Yong ever acknowledge the corruption behind what they largely use to justify their mispractice.

    • “Strangetruther,” a.k.a. John Jackson, makes an utterly unfounded attack on Ed Yong, not even bothering to cite a single piece by Yong in evidence of his being a “nuisance.” Jackson ignores all the reporting in which Yong fields different interpretations of new studies, or in which he tears down research that’s weak and doesn’t get replicated.

      His dismissal of Yong’s new book about the microbiome is downright laughable. His mother told him about his gut flora? Well, that’s nice. In reality, there’s been a vast amount of research on the microbiome since then. And Yong applies a very skeptical eye to that research in his book and in his reporting.

      • A lot of what Strangetruther says does sound rather polemical. I’ll restrict commenting to the paragraph quoted below, which is on a subject which I know a little about (from serving on Ph.D. committees and participating in journal club discussions).

        “Palaeoevolution, on which he often writes, is in a worse state than social psychology. You can’t even get to the stage of doing stats wrong when you can’t do experiments directly on the central issues in the first place because your job is discovering what happenED instead of what happenS. You should therefore be particularly careful, but instead, practitioners simply abandon all suitable scientific principles, and make all decisions on a social basis.”

        Yes, in paleobiology you can’t do experiments and your job is discovering what happened rather than what happens, but I’m not convinced that “practitioners simply abandon all suitable scientific principles, and make all decisions on a social basis.” Discovering what happened and not being able to do experiments is the case in some other fields of science (e.g., geology, astronomy). But that doesn’t mean we can’t get some knowledge in these fields. In paleobiology in particular, there are methods such as phylogenetic analysis that can give some information (with some estimates of uncertainty), and sometimes structural analyses can be instructive (although they can also be misleading). It is a difficult science, but not to be written off entirely because of that. I realize there is some poor research, but that doesn’t mean all research in the field is poor.

        And since there is so much “bad science” in fields (like psychology) that can do experiments (but often approach them in a “that’s the way we’ve always done it” manner rather than a careful, skeptical manner), I don’t see that Strangetruther’s comparison is really saying that paleobiology is any worse than psychology.

  5. Carl Zimmer makes an ideal alternative to Ed Yong as an example of unthinkingly channelling material from poor scientific research straight into the public’s mind. Which “piece by Yong in evidence” would be best to choose as an example of Not Doing Something, Carl? Yong has never mentioned that standard phylogenetic analyses using bone shapes simply cannot be trusted to recreate bird evolution or that of closely-related dinosaurs. (It’s different for molecular data, for reasons I explain in my book). Why don’t you instead give us a single example where he has stated that convergence and parallelism, which undeniably distort bone-shape-based trees in post-Cretaceous mammals and birds, is likely to render mesozoic dinobird trees largely meaningless? And where have you said that yourself?

    Which of you has ever expressed doubt over the widespread claim that of the hundreds of fossil bones of African apes over a million years old, the number that are nearer the chimp or gorilla lineages than that of humans, is… zero? Has either of you ever done anything real involving statistics? I hope not.

    To clarify our respective roles in this scandal, save other readers some time and just answer exactly why you think the absence of chimp and gorilla ancestral fossils is both believable and not worthy of investigation, and why you think all workers who disbelieve the usual dinobird family trees must have all their work both blocked and denigrated even when they have genuine academic qualifications and experience in the fields of information science concerned – unlike you and too many of those you channel.

    Just those two – for once please! I think we’ve had enough of general superficially plausible expressions of disapproval; it’s time you showed how good your scientific judgement really is.

    Andrew Marr’s comment on Yong’s book referred to the overall principle of the importance of gut flora. My book, which you have never recommended but never successfully rebutted, referred to undoubted serious widespread failings in science. YOUR science.

  6. Sorry to have alarmed you Martha, but how disinterested should you sound when reporting a multi-faceted scandal?

    If you think peer review done the way it is, is Not a huge opportunity and excuse for interference, bias and corruption, please state your position explicitly along with experimental results which support it. I really do want you to commit yourself on this one – What is your exact position on this specific issue, and why?

    I’ve presented enough evidence myself, but if you don’t want to believe an alarming piece of news unless it’s from someone on a celebrity list, why not read “The Enemy Within”. That will be a useful start, and I can provide follow-up refs if you haven’t caught the later evidence.

    Please don’t simply say “…there are methods such as phylogenetic analysis that can give some information…” to someone who has done as much work in that area as I have, and when you know nothing of the subject. And please don’t criticise anyone until you’ve understood their arguments. You won’t understand mine until you’ve read my book… and your not being able to understand would not entitle you to say it’s wrong.

    It is misleading to suggest that I said we Couldn’t Get useful information from certain fields where experiments on the basic issues were difficult. [“But that doesn’t mean we can’t get some knowledge in these fields.”]

    If decisions within science are not based on scientific considerations, what do you suppose they are based on? Which of the comments you made are the most scientific? That you’ve served on committees and participated in club discussions?

    Where did I say that because there is some bad work, all of it is bad? I didn’t but you suggested I did. However it does so happen that the major underlying understandings we aim for in palaeontology Are unbelievable and yet widely propagated. [“It is a difficult science, but not to be written off entirely because of that. I realize there is some poor research, but that doesn’t mean all research in the field is poor.”]
