Beyond the Valley of the Trolls

In a further discussion of the discussion about the discussion of a paper in Administrative Science Quarterly, Thomas Basbøll writes:

I [Basbøll] feel “entitled”, if that’s the right word (actually, I’d say I feel privileged), to express my opinions to anyone who wants to listen, and while I think it does say something about an author whether or not they answer a question (where what it says depends very much on the quality of the question), I don’t think the author has any obligation to me to respond immediately. If I succeed in raising doubts about something in the minds of many readers, then that’s obviously something an author should take seriously. The point is that an author has a responsibility to the readership of the paper, not any one critic.

I agree that the ultimate audience is the scholarly community (and, beyond that, the general public) and that the critic is just serving as a conduit, the person who poses the Q in the Q-and-A.

That said, I get frustrated, frustrated, frustrated when people don’t respond to questions and criticisms from me. This frustration comes in various flavors. Perhaps most unreasonable (on my part) is that after I slammed Gregg Easterbrook for an incompetent political column, his editors at Reuters made most of the corrections I suggested (including the links I’d supplied) but without acknowledging me in any way. I still think I was in the right here (it’s only common courtesy to acknowledge the source of your information), but ultimately the information got out there.

I got more frustrated after Arthur Brooks garbled some poll data on happiness and David Brooks mainstreamed some anti-Semitic fake stats, with both these errors coming on the New York Times op-ed page. What frustrated me was not the erroneous numbers—after all, I make mistakes too!—but rather the decisions by the writers (and their editors) not to run corrections, even after the errors were pointed out repeatedly, by myself and others. (Indeed, in neither case was I the one to discover the mistake; other people pointed these cases out to me, and then I explored them further.) I keep coming back to these examples partly because I think the two Brookses can do better: both of them seem to me to be serious, thoughtful people who would like to use their writings to have an impact on the world. In David Brooks’s case in particular, I hate to see him take the dark path of disseminating what is at best sloppy research and what is at worst disinformation, just because it happens to align in some ways with his political views.

Or take the case Basbøll is talking about, a paper that recently appeared in a journal of business management. The paper got some favorable publicity, the editor of the journal promoted the paper in a blog, and then the paper came in for some serious criticism. The consensus (at least as I read it) seems to be that the paper is reasonable if a bit overstated in its conclusions, not quite as good as the journal editor claimed but making a real, if narrow, contribution to the literature. But during the discussion some of the commenters were called out as “trolls.” My point: the “trolling” seemed necessary to move the scholarly and scientific discussion forward. Had everyone been super-polite, the journal editor could’ve just remained in a state of complacency about the paper, but instead the strong comments motivated him and the authors of the paper to respond.

There’s also a sense of symmetry, a fight-trolling-with-trolling syndrome. For example, I’ve occasionally written about a sociologist who has produced a series of papers on sex roles and the sex ratios of babies. I’d call this guy a science troll. He makes provocative claims based on very weak data and gets lots of attention by making “politically incorrect” statements that would be unremarkable if overheard at the local bar or country club but carry weight because of their purported scientific basis.

Or consider the recent “Psychological Science”-style papers by various authors that we’ve been discussing on this blog during the past year. I think they too often represent trolling of a sort: getting attention with politically loaded claims that are basically unsupported by data. They are unsupported by theory too, in the sense that their theories are vague enough to explain just about any pattern they see, or its opposite.

Just to be clear, I wouldn’t label the administrative science paper discussed above as trolling. It’s a bit of narrow research with broader implications. Which I like a lot; it’s the “tabletop model” of social science.

I’m just pointing out that, in our discussion of flaws in published papers, we are in many ways living in a world whose parameters are set by scientific publishing and the news media, a world in which the most prestigious scientific journals are known as “the tabloids.” A landscape of trolls.

16 thoughts on “Beyond the Valley of the Trolls”

  1. “In David Brooks’s case in particular, I hate to see him take the dark path of disseminating what is at best sloppy research and what is at worst disinformation, just because it happens to align in some ways with his political views.”

    I wonder how intentional the humor in this statement was. It’s like saying, “I hate to see a fish take the dark path of continuing to swim around in the ocean.”

    • Zach:

      I don’t know, of course—I’ve never met Brooks and, even if I had, I still probably wouldn’t have that much insight into his motivations—but it seems reasonable to suppose that he wants to disseminate truths and not be a conduit for falsehoods. I just suspect that, like so many people, he does not want to admit that he screwed up. I’m guessing that if he could do it all over again, he’d not have published those erroneous numbers, not just because he wouldn’t want to get caught at it, but because (I’m guessing) he feels his job is to present truth, no matter how uncomfortable this truth is to his readers. Unfortunately, in this case the uncomfortable part is that Brooks made a mistake and doesn’t want to admit it.

  2. I asked this in another comment thread before: Do you mean the journal Psychological Science when you say “Psychological Science”-style papers?

  3. I disagree that the most prestigious scientific journals are akin to tabloids. Nature and Science studies get more media attention, sure, but those journals are not actively trying to publish crappy research or making a conscious decision to sacrifice quality or robustness for sensationalism.

    The quality of reviewers at Nat./Sci. is as good as, if not better than, that at a second-tier, less prestigious journal. Likewise, I really doubt that the quality or credentials of the people publishing in Nat./Sci. are any worse than those at the rest of the journals out there.

    Just because they get more attention, does that make them tabloids? What’s the hard evidence that prestigious journals are worse than the rest?

    • When journals like JPSP refuse to publish things like failures to replicate Bem (and otherwise refuse to publish criticisms, and treat retractions like pulling teeth), one really has to wonder what exactly they are optimizing for… Prinz et al 2011, incidentally, specifically mentions that impact factors do not correlate with reproducibility.

      • Excellent pointer. For the curious:

        Prinz, F., Schlange, T., and Asadullah, K. (2011). Believe it or not: how much can we rely on published data on potential drug targets? Nature Reviews Drug Discovery 10, 712.

        There is a great deal of tension between the respect that some (!) of the public gives scientific results and scientists (we went to the moon, after all) and the realities of the process, e.g., just how much terrible research is out there.

        • Yep, both Bayer and Amgen have reported this issue. The Amgen article below reported reproducing only 6/53 results and that impact factor was unrelated to reproducibility.

          The problem is simple: the researchers are disproving always-false null hypotheses and taking this disproof as near-proof that their theory is correct. Paradoxically, because the studies are usually underpowered even to disprove this hypothesis (the easiest possible thing to disprove), they “p-hack” without realizing it is a problem (the simulation sketch at the end of this comment makes the mechanism concrete). Committees and reviewers demand p-values without understanding what they are. Alternative explanations for significant differences are given only token attention, unless the differences fail to match the direction predicted by the theory.

          After wasting 2-3 years memorizing what looks like 70-90% incorrect information, PhD students do not have time to learn to think critically about what they are doing. Then they publish their own flawed research reports and join the club of people with a vested interest in not recognizing the problem. This has apparently been going on for decades.

          “Over the past decade, before pursuing a particular line of research, scientists (including C.G.B.) in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed ‘landmark’ studies (see ‘Reproducibility of research findings’). It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.”

          “Some non-reproducible preclinical papers had spawned an entire field, with hundreds of secondary publications that expanded on elements of the original observation, but did not actually seek to confirm or falsify its fundamental basis.”

          http://www.nature.com/nature/journal/v483/n7391/full/483531a.html
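          To make the mechanism above concrete, here is a minimal simulation sketch (my illustration, not anything taken from the Amgen or Prinz papers): a hypothetical two-group study with 20 subjects per group and 5 outcome measures, where every null hypothesis is exactly true. The group size, the number of outcomes, and the use of two-sample t-tests are all assumptions chosen for illustration.

          # Minimal sketch (assumed, hypothetical setup): how checking several
          # outcomes inflates the false-positive rate even when no real
          # effects exist anywhere.
          import numpy as np
          from scipy import stats

          rng = np.random.default_rng(0)

          def one_study(n=20, n_outcomes=5):
              """Return True if any outcome reaches p < 0.05, i.e. the study
              'finds' an effect even though every null is true."""
              for _ in range(n_outcomes):
                  a = rng.normal(size=n)  # "treatment" group, no real effect
                  b = rng.normal(size=n)  # "control" group
                  _, p = stats.ttest_ind(a, b)
                  if p < 0.05:
                      return True
              return False

          n_sims = 10_000
          hits = sum(one_study() for _ in range(n_sims))
          print(f"Studies with a spurious 'significant' finding: {hits / n_sims:.1%}")
          # With 5 independent outcomes, roughly 1 - 0.95**5 (about 23%) of
          # studies report a "significant" result despite every null being true.

          Nobody in this simulation consciously cheats: each individual test is done by the book, yet close to a quarter of the studies come away with a publishable “finding.”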

  4. Pingback: I agree with this comment « Statistical Modeling, Causal Inference, and Social Science

  5. This discussion seems a bit like the PCBs in the Hudson that never go away. Note that the post that precipitated this discussion was about precisely the problem of research aimed at being media-friendly. It ended like this: “Students enjoy seeing their professors cited in the news media, and deans like to see happy students and faculty who ‘translate their research.’ This favors the simple over the meticulous, the insta-publication over work that emerges from engagement with skeptical experts in the field (a.k.a. reviewers). It will not be a good thing if the field starts gravitating toward media-friendly Cheeto-style work.”

    Apparently describing the ASQ paper as “really nice” (along the way to the main point, which was that it’s not good for science to aim for media friendliness) is ludicrously over-the-top praise, inviting righteous trolling by people who never read the paper.
