“Tragedy of the science-communication commons”

I’ve earlier written that science is science communication—that is, the act of communicating scientific ideas and findings to ourselves and others is itself a central part of science. My point was to push against a conventional separation between the act of science and the act of communication, the idea that science is done by scientists and communication is done by communicators. It’s a rare bit of science that does not include communication as part of it. As a scientist and science communicator myself, I’m particularly sensitive to the devaluing of communication. (For example, Bayesian Data Analysis is full of original research that was done in order to communicate; or, to put it another way, we often think we understand a scientific idea, but once we try to communicate it, we recognize gaps in our understanding that motivate further research.)

I once saw the following on one of those inspirational-sayings-for-every-day desk calendars: “To have ideas is to gather flowers. To think is to weave them into garlands.” Similarly, writing—more generally, communication to oneself or others—forces logic and structure, which are central to science.

Dan Kahan saw what I wrote and responded by flipping it around: He pointed out that there is a science of science communication. As scientists, we should move beyond the naive view of communication as the direct imparting of facts and ideas. We should think more systematically about how communications are produced and how they are understood by their immediate and secondary recipients.

The science of science communication is still in its early stages, and I’m glad that people such as Kahan are working on it. Here’s something he wrote recently explicating his theory of cultural cognition:

The motivation behind this research has been to understand the science communication problem. The “science communication problem” (as I use this phrase) refers to the failure of valid, compelling, widely available science to quiet public controversy over risk and other policy relevant facts to which it directly speaks. The climate change debate is a conspicuous example, but there are many others, including (historically) the conflict over nuclear power safety, the continuing debate over the risks of HPV vaccine, and the never-ending dispute over the efficacy of gun control. . . . The research I will describe reflects the premise that making sense of these peculiar packages of types of people and sets of factual beliefs is the key to understanding—and solving—the science communication problem. The cultural cognition thesis posits that people’s group commitments are integral to the mental processes through which they apprehend risk. . . .

I think of Kahan as part of a loose network of constructive skeptics, along with various people including Thomas Basbøll, John Ioannidis, the guys at Retraction Watch, pissed-off scholars such as Stan Liebowitz, bloggers such as Felix Salmon, and a whole bunch of psychology researchers such as Wicherts, Wagenmakers, Simonsohn, Nosek, etc. This is not meant to be a complete list but rather to give a sense of the different aspects of this movement-without-a-name. Ten or 20 or 30 years ago, I don’t think such a movement existed. There were concerns about individual studies or research programs but not such a sense of a statistics-centered crisis in science as a whole.

45 Comments

  1. Wonks Anonymous says:

    It seems rather limiting to say that “the science communication problem” consists of failures to “quiet public controversy”.

  2. Thomas says:

    I’m glad to be associated with this movement, though I’m not entirely sure we need an outright “science” of science communication. I just think we need to get more attuned to the rhetorical problem of writing knowledge claims down and defending them. We need to develop a robust rhetoric of criticism. To that end, BTW, I just got back from this conference in Lund, which is very definitely part of the movement you’re talking about. Here, in the managerial sciences, it travels under the banner of “practical criticism”, and is probably also rightly described as “a loose network of constructive skeptics”.

  3. Brian says:

    “I think of Kahan as part of a loose network of constructive skeptics”

    Not sure about this. Cultural theory (Douglas and Wildavsky), which cultural cognition is based on, has been around for decades and is more or less established (although there are still disputes about how much variance it actually explains). And the psychological stuff on “risk perception” (Slovic and others) also dates back to the early 70s.

    Kahan’s stuff is cool but it’s not as original or counter-mainstream as you’re suggesting.

    • Andrew says:

      Brian:

      Just to be clear: if anyone is overselling Kahan’s work, it’s me, not him!

      Regarding the historical background: Yes, people have been aware of biases in scientific understanding and reporting and have been modeling them for decades, but recently there seems to have been an increased sense of their importance. Consider, for example, the recent paper by Christopher Ferguson and Moritz Heene, exploring why the traditional “file-drawer” corrections for meta-analysis are not enough.
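
      For readers who haven’t seen that correction, here’s a minimal sketch of Rosenthal’s classic fail-safe N, one standard “file-drawer” adjustment (the function name and the example z-scores are mine, purely for illustration):

          import numpy as np

          def fail_safe_n(z_scores, z_crit=1.645):
              # Number of unpublished null (Z = 0) studies it would take to drag
              # the combined one-tailed p-value of k studies above .05:
              # solve sum(Z) / sqrt(k + X) = z_crit for X.
              z = np.asarray(z_scores, dtype=float)
              return (z.sum() / z_crit) ** 2 - len(z)

          # Five published studies, each just past significance:
          print(fail_safe_n([2.0, 2.1, 1.8, 2.3, 1.9]))  # ~33 hidden nulls needed

      One standard criticism (the sort of point Ferguson and Heene press) is that this adjustment assumes the file drawer averages to a true null, which publication bias gives us no reason to believe.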

      To put it another way, what’s new about this movement is not that it’s new but that it’s a movement.

      • dmk38 says:

        I truly hope I’m not overselling!

        I am very confident, though, that I am not selling short Douglas & Wildavsky’s cultural theory of risk or Slovic et al.’s psychometric theory. I am always mindful (I was in the very lecture that the blog post Andrew links to summarizes) to point out that the theory & methods my collaborators and I use originate in that work. I conceive of “cultural cognition” as an attempt to integrate the two. I think Slovic, who has been a collaborator in the “cultural cognition” work since its inception, sees it the same way. (Douglas, whom I had the privilege to meet & converse with too (she died in 2007), was helpful but critical; indeed, she wrote an essay about one of our papers.)

        I’m perfectly happy not to be counter-mainstream or particularly original in terms of theories & hypotheses. I will feel like I’m pulling my weight if I can contribute to supplying evidence that can help sort out relative strength of competing claims about how things work and what to do.

      • K? O'Rourke says:

        > not that it’s new but that it’s a movement

        I agree – not much more than 5 years ago it was hard to even interest most statisticians in such issues.

        Something has caused a (widespread) understanding that it is a common problem that causes a lot of harm and there aren’t effective solutions (at least analytical ones).

        I am even noticing (intelligent) non-academics firmly pointing out that work of (esteemed) academics needs to be independently verified before it is relied upon.

  4. Jonathan says:

    I would add to your list Deirdre McCloskey, whose trilogy on rhetoric and work on the “cult of statistical significance” aim to get economists talking about talking.

  5. John Christie says:

    Meehl, and some of his contemporaries, were sensitive to these issues back in the 50s. Speaking only for Psychology, we’ve been sensitive to these issues for a long time and, even if not recognizing a crisis, we’ve recognized an impending one.

    Perhaps that’s why your list contains so many psychology researchers.

  6. I’m glad you mentioned Ioannidis as well as the folks who are focused more on the social science side.

    This area is one that GiveWell (where I work) has been preliminarily exploring. We’ll write up more in the near future, but for folks who are interested, we’ve posted summaries of many of our conversations with people who are working on these problems here: http://www.givewell.org/conversations#metaresearchconversations

  7. Mayo says:

    And then there are those of us skeptical of (at least a lot of the) “constructive skeptics” and “reformers”, whether their work is to increase or “quiet public controversy over risk” or to pigeon-hole risk attitudes into their own made-up and reified cultural categories (as with Wildavsky and Douglas). Statistical reformers can be very valuable, and I don’t mean to throw a wrench into the important goal of communicating science with integrity. But the fact is that many of the policers require reform themselves, and since someone mentions the very outspoken McCloskey*, she’s a perfect example. http://www.phil.vt.edu/dmayo/personal_website/Cobb_jasa-ziliak&mccloskey.pdf

    http://errorstatistics.com/2011/10/04/part-3-prionvac-how-the-reformers-should-have-done-their-job/
    There are curmudgeons like David Freedman who exposed fallacies with genuine understanding and not too much of an agenda. That is rare. Stanley Young is another; Meehl was too. The Retraction Watch guys are great, as are those working to detect the misuses of statistics found in investigating Stapel. While there are many more, the most fashionable debunkers actually exacerbate the problem (recall the Nate Silver trashing of Fisher, and all of frequentism—he should issue his own retraction*.) Given that many statistical positions are controversial, there’s a tendency of reformers to champion one side and spread their own favored viewpoint, with little sympathy for the position being trampled and lampooned with age-old howlers—never mind that they have been answered. This is a disservice to the public and to statistical philosophy (I mention only some highly visible names here.) I suspect that earning acclaim as a debunker of bad science or bad statistics can itself encourage a tendency to manufacture ever more cases in order to keep the gig alive.
    *I will revise my assessment once they issue corrections.

  8. Rahul says:

    What is the meaning of “to quiet public controversy”? Are these examples of cases where there is a clear consensus among the scientific community yet the wider public continues to be not convinced?

    I was a bit confused.

    Another iffy question might be: Say a scientist is convinced of the right position on, say, nuclear power or vaccines. Yet he finds that his full, complex, rational arguments often fail to convince the broader non-professional audience. Is it then ethical for him to use flawed yet appealing arguments to win people over to the “right” side?

    Do the ends justify the means? Is this, after stripping the jargon, a public-relations / lobbying problem?

    • I had similar questions. I mean, the examples given are hardly indicative of “well decided” science:

      climate change: I think most people in science agree that the climate is changing, but the causes, degree, type, and decision-theoretic consequences for what to do about it are all highly uncertain

      gun control: every time I see some “science” on gun control it is highly questionable from either side of the debate (for or against). This problem is exacerbated by huge numbers of confounding effects, and a general politicization that leaves very few researchers who have no politically motivated preconceptions.

      nuclear safety: the risk analysis that was done in the 1950s and so forth may have been cutting edge then, but it was also wrong in several ways. See Fukushima, where paleologic evidence for large tsunamis was ignored, for example. What to do about reactors that are past their design lifetimes is also a highly uncertain scientific topic.

      So, I don’t get that aspect of all of this. I think failure to communicate science is an important problem, but not because the public is skeptical of these types of topics; more because communication about everyday, non-politically-charged topics within and among scientists is already pretty muddled in many fields.

      • Brian says:

        Bit of a side issue, but on Fukushima, the plant design standards and safety measures were based on very crude deterministic analyses. They refused to rely on probabilistic risk assessments, which actually suggested that stricter safety standards were required, because it was “full of technical uncertainty.” (from the Japanese Investigation Commission Report).

        The European Environment Agency published a briefing report that claimed Fukushima showed the limits of probabilistic risk assessment, but this relied on a completely distorted version of events (PRA was totally ignored by the Japanese!) Whether this briefing was politically motivated, or just part of their general skepticism towards risk assessment, I have no idea.

        • This doesn’t surprise me in the least. I think in general everyday engineers have had a historical distrust of probabilistic anything. At the research level the field has been promoting probabilistic risk assessment for around 20 to 40 years depending on the subfield, but it isn’t a commonplace attitude in engineering even today.

          Much nicer from the perspective of an everyday engineer if they can justify their choice by saying they looked it up in the well respected table that everyone else uses, so it ought to be good enough.

          • Rahul says:

            As a practicing engineer, I found the biggest problem with probabilistic risk assessment was the general lack of data. If I were to do a rigorous assessment and it needed the probability of a certain bolt shearing from fatigue, it was very hard to get a consistent estimate amongst engineers.

            I guess human brains are really not wired to be good estimators of probabilities of such things. The problem isn’t the technical procedure. The problem is that we just don’t have the inputs it demands.

            • That’s a good and valid complaint. One of the reasons that this is true is that the historic approach is to collect some data, which has all kinds of variance in it, then pick a value of strength that’s well below the point cloud, call this the strength of the bolt, and stick it in a table. So a bolt that fails with an approximately normal distribution at 65±5 ksi (thousand pounds per square inch, for the non-engineers in the audience) would be listed as having a 50 ksi strength. The problem with this approach is that it throws away all the information you’d need to do probabilistic anything.
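
              To make that concrete, here’s a minimal numpy sketch of what the discarded distribution buys you, using the 65±5 ksi numbers above (the applied load is made up for illustration):

                  import numpy as np

                  rng = np.random.default_rng(0)
                  # Per the example: bolt strength roughly Normal(65, 5) ksi,
                  # tabled "allowable" strength 50 ksi.
                  strength = rng.normal(65.0, 5.0, size=1_000_000)
                  load = 55.0  # hypothetical applied stress, ksi

                  # The table only supports a pass/fail check:
                  print(load > 50.0)               # True -> design rejected outright
                  # The full distribution supports an actual failure probability:
                  print(np.mean(strength < load))  # ~0.023, about a 2% failure chance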

              It was my impression that in the nuclear power sector probabilistic engineering was common, but not necessarily properly practiced (see Fukushima comment above).

              One of the things I’d like to do (in an abstract way, I’m not in a position to do this) is to develop a course in probabilistic risk assessment where undergraduates work with actual lab data collected by buying random samples of engineering materials and breaking them in the lab, and then publishing an archive of this raw engineering materials data online for others to use.

      • dmk38 says:

        “…quiet controversy…” is likely a bad way to express the objective (I study science communication; I don’t *do* it!), which is rather to make the best available evidence on matters relevant to individual & collective decisionmaking as amenable to recognition by diverse people as it ordinarily is. The number of empirical issues that polarize ordinary people on cultural grounds is small relative to the vast number that could but don’t. Obviously, too, *what to do* can and will divide people of different values even if they aren’t divided on facts. Fixing the scicom problem, moreover, definitely should be done w/o manipulation–indeed, w/o any attempt to make people believe *anything*.

        on guns: The intensity & politically polarized nature of the controversy over whether, say, concealed carry laws increase or decrease crime is perplexing precisely b/c the evidence is unclear. The National Academy of Sciences has said so! Yet people are divided & by no means randomly; *plus* they all tend to perceive that the “expert scientific consensus” supports them.

        For an amusingly but also disturbingly opportunistic mischaracterization of “expert scientific consensus” by an authoritative source of “what is known by science,” see this account of the New York Times’s handling of the National Academy of Sciences reports’ positions on the indeterminacy of multivariate regression analyses on gun control & the death penalty.

        • Rahul says:

          Reading your analysis, what’s our basic evidence that we do have a problem in the first place?

          Perhaps there isn’t agreement among the scientists, nor among the laymen either.

          Do we really have a SciCom problem that needs fixing?

      • PI says:

        Contrary to public opinion, the vast majority of climate scientists are in agreement that most of the recent warming (past 50 years or so) is caused by anthropogenic greenhouse gases, and the proportion will increase as greenhouse gas concentrations increase. The categories “degree” and “type” are too vague for me to comment upon.

        • To be more specific: we have very little knowledge about the feedback mechanisms related to cloud cover, water vapor, changes in land use, future natural releases of methane and other gasses that could be caused by thawing, etc. Many of these feedback effects could plausibly act in either direction or be of a variety of different magnitudes.

          Because of these feedback mechanisms it’s hard to say whether future temperature changes will accelerate, stabilize, turn around to produce an ice age, etc. When I said “type” I meant various types of consequences, such as storm frequency, storm intensity, amount of water level rise, locations for desertification, locations for inundation, consequences for plant and crop growth, etc.

          Even if you acknowledge that anthropogenic CO2 has been an important contributor to the last 50 years’ temperature trends, it’s still hard to say what the future will hold, what those consequences mean for different people, whether anything we do today will have any given important effect, etc. There is plenty of valid scientific uncertainty here to go around.

  9. Brian says:

    Seems like there are two very different discussions going on here.

    There are people talking about critiques of scientific practice (e.g. in significance testing, meta-analyses, etc.).

    And then there’s the cultural cognition work, which is focussed on public (mis)understandings of scientific issues. I don’t think it’s ever been used to suggest that *scientists’* factual beliefs systematically differ according to their cultural commitments.

    • Really three. There’s also scientists’ misunderstanding of each other’s work, or even of what the important problems are, due to cultural issues.

      In the simplest case, researchers in different fields use different language to talk about similar issues. Consider compressive sensing in EE, L1-penalized MLE in frequentist stats (aka the “lasso”), double-exponential prior in Bayesian MAP estimates, etc.
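
      Since these really are the same estimator under different names, here’s a minimal numpy sketch of the correspondence (the variable names and numbers are mine): the lasso objective and the negative log posterior under independent Laplace priors differ only by a constant factor, so with lam = sigma**2 / b they share the same minimizer, i.e. the lasso estimate is the Bayesian MAP estimate.

          import numpy as np

          def lasso_objective(beta, X, y, lam):
              # Penalized least squares, as in the EE / frequentist formulation.
              return 0.5 * np.sum((y - X @ beta) ** 2) + lam * np.sum(np.abs(beta))

          def neg_log_posterior(beta, X, y, sigma, b):
              # Gaussian likelihood (sd sigma) plus independent Laplace(0, b)
              # priors on the coefficients, dropping additive constants.
              return (0.5 / sigma ** 2) * np.sum((y - X @ beta) ** 2) \
                     + np.sum(np.abs(beta)) / b

          rng = np.random.default_rng(1)
          X, y = rng.normal(size=(20, 3)), rng.normal(size=20)
          beta, sigma, b = rng.normal(size=3), 1.5, 0.4
          # The two objectives agree up to the overall factor sigma**2:
          print(np.isclose(lasso_objective(beta, X, y, sigma ** 2 / b),
                           sigma ** 2 * neg_log_posterior(beta, X, y, sigma, b)))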

      Now whether those beliefs are “factual” or not is up for debate. But it’s easy enough to find scientists who disagree about methodology or substantive conclusions.

      This kind of cultural difference in science has been studied extensively in the 20th century by philosophers and historians of science. This work is often tied directly to language, particularly semantics [as was most Western philosophy in the 20th century]. Often this was related to work on how theories change over time and how people’s understanding changes over time.

      You can see a live example of different perspectives in one of my recent posts on my own blog: Generative vs. Discriminative, Bayesian vs. Frequentist. I remember asking Andrew about this issue around 6 or 8 years ago and we couldn’t understand each other at all.

      Andrew and Matt and I took several weeks to calibrate our understanding of and language for hierarchical logistic regression models. This ranged across everything from what to call the inputs (predictors vs. features vs. covariates vs. …) to how to talk about levels, factors, etc. I particularly recall struggling with the “varying slope” and “varying intercept” terminology, though it seems trivial now. I had a hard time understanding that Andrew wanted to treat an ordinal quantized response (like income level) as a continuous predictor with values (-2, -1, 0, 1, 2) AND have an intercept for each level; I had to learn how the intercepts pick up slack from the continuous predictor and how that interacts with the hierarchical priors.
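
      Here’s a minimal sketch of that coding scheme (all numbers hypothetical): income appears once as a continuous predictor with the centered codes and once as a set of per-level intercepts, and it’s the hierarchical prior shrinking those intercepts toward zero that lets them pick up slack without being confounded with the slope.

          import numpy as np

          codes = np.array([-2., -1., 0., 1., 2.])  # income levels as a continuous predictor

          # Hypothetical fitted values: one shared slope plus a small intercept
          # per level (shrunk toward zero by a hierarchical prior in the full model).
          b = 0.7
          a = np.array([0.05, -0.10, 0.00, 0.12, -0.07])

          def income_term(level):
              # Contribution of income to the linear predictor (e.g. a logit)
              # for an observation at income level 0..4.
              return a[level] + b * codes[level]

          # Without the shrinkage, any linear trend in a[] could be traded off
          # against b, leaving the two parts unidentified.
          print([round(income_term(j), 2) for j in range(5)])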

      There’s often a generational (or paradigmatic) issue at stake here. Speaking from my own experience over the last 25 years, natural language processing has moved from a largely linguistically-motivated discipline using logic as a modeling tool to a largely application-driven discipline based on statistics. You wind up with two cultures who, to paint with a broad brush, think each others’ work is off topic.

      The computer science and linguistics faculty at CMU and Pitt couldn’t even agree on what counted as linguistics; our joint program split in two when the linguists at Pitt insisted that some topics (HPSG) were off limits for linguistics qualifiers because they were engineering rather than linguistics (the linguists on the CMU side were not best pleased).

      The linguistics program at NSF refused to referee one of my grant proposals with a linguist colleague in the early 1990s because they claimed it wasn’t linguistics. They forwarded it to computer science, who declared it was linguistics, not computer science. Eventually linguistics sent it out for review, where it got awesome marks (presumably from others like me who didn’t understand what linguistics was about), then the program director rejected it because he didn’t think it was linguistics.

      • Brian says:

        “This kind of cultural difference in science has been studied extensively in the 20th century by philosophers and historians of science. This work is often tied directly to language, particularly semantics [as was most Western philosophy in the 20th century]. Often this was related to work on how theories change over time and how people’s understanding changes over time.”

        What work are you referring to? Wittgenstein and Kuhn? Would be interested in reading up on this…

        • K? O'Rourke says:

          A major difference between math and science is that in math the representations (models) are self-referential: if you sketch out a triangle you are representing a perfect triangle no matter how poor your sketch actually is, whereas in science the representations are of empirical things and are always wrong.

          Linguistics (not sure if Bob would fully agree) is about representation and representations, or what C.S. Peirce coined as semiotics. So it should not be surprising that philosophy of science wanders into linguistics.

          If you are new to this, you might find Ian Hacking’s book Representing and Intervening, Introductory Topics in the Philosophy of Natural Science, Cambridge University Press, Cambridge, UK, 1983 a good place to start.

          • I’d say that semantics, a subfield of linguistics, is about the relation between language and the world. Whether you want to call what language or the brain gives you a “representation” is very contentious. It gets back to Rorty and the mirror of nature. Many philosophers believe it’s deeply a human (or more precisely cognitive agent) construct, not just a mirror of nature. Others like Putnam argue for natural kinds — essentially that we “discover” meanings that naturally divide the world up rather than create them.

        • You could probably do worse than checking out the links from:

          http://en.wikipedia.org/wiki/Philosophy_of_science

          I’m particularly partial to Richard Rorty’s work and Wittgenstein’s, but reading either of these guys cold is asking for trouble. Rorty’s Philosophy and the Mirror of Nature would be rough if you’re not already immersed in the logical positivism of the 20th century and the historical philosophy of science. I would, on the other hand, recommend the first chapter of his Contingency, Irony, and Solidarity (browse the editorial reviews by following the Amazon link to get a feeling for what’s going on).

          Hacking’s good, as is Andrew’s fave Karl Popper (the Wikipedia article linked summarizes his philosophy of science). Again, all this philosophy is rough going and really needs a historical perspective. Carnap and Hempel were also writing about similar topics around the same time. Kuhn and Putnam and others came later.

  10. Great post!

    Just got “Surfaces and Essences: Analogy and Fire of Thinking” by Hofstadter and Sanders in the mail this morning.

    Maximizing the reach of our ideas via effective communication is crucial for their success, something statisticians often forget.

    • “Surfaces and Essences: Analogy as the Fuel and Fire of Thinking” that is

    • I haven’t read this book, but I agree with its thesis. It’s not as new as they make out.

      (The later) Wittgenstein’s objections to the logical positivist tradition made heavy use of how we reason by analogy (cf. his notion of “family resemblance”). Quine put the final nail in the logical positivist coffin with his paradigm-shattering paper, Two dogmas of empiricism (see particularly the section of the Wikipedia article on holism).

      Herb Simon, a psychologist and AI researcher at Carnegie Mellon while I was there, always talked about the measurable differences between analogical thinking and pattern matching vs. deductive thinking. It’s been known for ages that humans are very speedy with analogies but very slow (and buggy) at logic. For instance, Johnson-Laird’s work on mental models was required reading for my cog sci Ph.D. in the mid 1980s, but cites work going back much further. One of Herb’s favorite sources to cite was the super-cool chess memory experiments of de Groot, which showed that experts could memorize chess patterns, with mistakes by experts often resulting in different but strategically analogous board positions (unlike mistakes by novices); it turns out Simon himself did some of the more interesting work in the area, following up on de Groot’s original work together with de Groot.

      • K? O'Rourke says:

        Neat that you mention Herb Simon; my MBA supervisor was one of his PhD students and was most interested in Simon’s _protocol analysis_, which Simon had developed to address his belief that experts could not verbalize how they actually solved problems but instead create fictions of how they like to think they actually did it.

        So in protocol analysis the experts verbalize their thinking out loud as they solve problems, and then a programmer writes a program to implement that verbalized problem-solving method. Part of my project was to listen to tapes of women shopping for clothes (I’ll avoid commenting on women being immediately qualified as experts at that) and then write a Lisp program to _shop_ for them.

        This pertains more to Andrew’s comment below that “the hard part is the ‘speak what I’ve done’ bit”: that bit is already too difficult for most (according to Simon).

        But maybe if one is very self-attentive one can train oneself to be less wrong at this and more accurately get at what reasoning was actually involved in one’s work. That would be very worthwhile to share with oneself and others.

        • That was my experience in being run through several protocol experiments by Herb Simon’s grad students. One was a GRE analytical task. I love those problems. But doing them while talking about how you do them was really challenging.

          Or as someone put it to me recently, you lose 20 points of IQ standing at the board.

          Much of our reasoning is “tacit” in the psychological sense that we don’t have access to it. I don’t know how I ride a bicycle or serve a tennis ball. My body just does it. Similarly, I don’t know how I manage to figure out which meaning of a word someone’s using or which “he” they’re talking about, but it seems as easy as riding a bicycle.

  11. Fernando says:

    So let me disagree, for the sake of argument. Why should scientists be good scientists, teachers, communicators, grant writers, etc.? This is highly inefficient, almost autarky.

    Why not specialise by comparative advantage? Why not produce research like we produce movies? Why not have directors, script writers, actors, etc.? Why not give each of them credit in their respective roles?

    In lab sciences there is something close to this but, as far as I know, there is no explicit role for science writers and communicators. Shouldn’t the English and Statistics departments have a collaborative program? Just saying.

    • Andrew says:

      Fernando:

      I don’t know how I could do science if I were not communicating it. The hardest part is communicating to myself. I’m my own toughest critic.

      Another way to put it is: To understand what you do, to be able to reproduce what you do, you need to be able to write what you’ve done. I don’t think this can really be contracted out. I mean, sure, I could speak what I’ve done and then have someone else write it up (as in one of those “as told to” books), but the hard part is the “speak what I’ve done” bit, not the typing of the document.

      • Fernando says:

        Andrew:

        We agree but we are talking about different things.

        Of course language, notation, and mental props are key. Like you, I could not think without them.

        But what I am referring to is communication of research findings to a broader audience. You know, The Elements of Style, The Craft of Scientific Writing, and so on.

        Admittedly, the distinction between language and communication is not binary. Yet I still think there are gains to be had from specialization and exchange. My sense is many scientists rely on professional editors to edit their work. More generally I would enjoy collaborating with a languages department, and working alongside students of scientific writing. Teamwork as opposed to outsourcing.

        • Fernando says:

          PS Science is a productive activity and as such can benefit from insights in manufacturing, quality control, etc. including specialization, division of labor, chain production, etc. Also like software engineering.

          Do you run a lab? What is your experience?

        • As far as I know, the “editors” involved in science are entirely of the “editorial opinion” type (i.e., they choose what gets published); almost no one I know of actually uses a copy-editor type editor for scientific writing, someone who might improve the phrasing, standardize the presentation, or point out where things are unclear. That’s all done by the scientists themselves, and the good ones shop their papers around to their peers to get feedback. The areas I’m most familiar with are biology, bioinformatics, geosciences, mechanics, and engineering.

          • Fernando says:

            @Lakeland

            Actually I have come across several journals in social science that provide links to third party editorial services.

            The intended audience is mainly scientist authors whose native language is not English.

            • I suppose non-native speakers are a different topic; I was thinking entirely of either native speakers or people who have been doing science in English for many years. Also I suppose that the social sciences may have a different culture with respect to copy-editor type editing. In my experience in the physical and biological sciences, the average copy-editor would be somewhat useless; you’d need someone with at least an undergraduate major in the scientific field to be able to work with the content of most biology or physics or engineering journal articles. Otherwise you wouldn’t know what to make of the text.

      • Rahul says:

        ” The hardest part is communicating to myself.”

        I don’t think that’s the sense in which most people think of the Science Communication problem.