“There was this prevalent, incestuous, backslapping research culture. The idea that their work should be criticized at all was anathema to them. Let alone that some punk should do it.”

[image of a cat reading a comic book]

How did the outsiders upend social psychology?

CATRON: We used basic reporting techniques. We’d call up somebody and ask them about thus-and-so, and they’d mention so-and-so, so we’d call so-and-so, and ask about thus-and-so. I’d say, “OK, you’re saying this but the first guy said this other thing.” People didn’t like that.

. . .

GROTH: I think we were dealing with an industry that had repressed itself for 40 or 50 years, so you had an industry filled with professionals who were, often quite legitimately, filled with bitterness and anger. And then you had the new generation who were aware of how the previous generation had been treated and didn’t want to be treated like that . . . The previous generation was raised to be polite, to follow orders, not to make waves. (Except for Paul Meehl.) There was this code of silence where you didn’t talk about problems.

. . .

GROTH: There was this prevalent, incestuous, backslapping research culture. The idea that their work should be criticized at all was anathema to them. Let alone that some punk should do it.

Above quotes from Mike Catron and Gary Groth, from We Told You So: Comics As Art. I changed “Gil Kane” to “Paul Meehl” and “comics” to “research,” otherwise ran as is.

Seth Roberts used to talk about his insider-outsider perspective. Similarly, the editors of the Comics Journal were insiders enough to discuss and have informed opinions about good and bad work, but they were outsiders enough to not owe anybody anything and to not have a stake in the system. Same thing for social psychology, evolutionary psychology, etc.: It can be hard to criticize if you’re coming from the inside, so it ends up being the “punks” (i.e., “methodological terrorists,” “second stringers,” “replication police,” “shameless little bullies,” etc) who do the hard work and take the heat.


    • “This paper explores two main hypotheses, one that is quite nonintuitive, and one that is fairly straightforward. The nonintuitive hypothesis predicts, among other things, that women who power pose while sitting on a throne will attempt more math problems when they are wearing a sweatshirt but fewer math problems when they are wearing a tank-top; the prediction is different for women sitting in a child’s chair instead of a throne.”

      #Science

    • Thanatos:

      I followed the link, which included some interesting bits:

      1. Simmons, Nelson, and Simonsohn write, “The two p-curve analyses – Joe & Uri’s old p-curve and CSF’s new p-curve – arrive at different conclusions not because the different sets of authors used different sets of tools, but rather because they used the same tool to analyze different sets of data.” Assuming this statement is correct, it calls into question Cuddy’s earlier statement that Simmons and Simonsohn “are flat-out wrong. Their analyses are riddled with mistakes . . .”, a claim for which Cuddy has never given any evidence.

      2. The p-values of 10^-12 etc. are indeed ridiculous. An analysis is only as good as the data that go into it. (A minimal numerical sketch of these first two points appears after this list.)

      3. None of the papers under discussion appear to be preregistered, which implies that we can’t take those p-values at face value, even aside from data problems.
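
      Below is a minimal numerical sketch of those first two points. It is illustration only, not the actual analyses: it applies a simplified Stouffer-style “right-skew” test in the spirit of the p-curve papers (which work from the reported test statistics rather than from p/.05 as done here) to two made-up sets of significant p-values, and it shows roughly what z-statistic a p-value of 10^-12 corresponds to.

        # Simplified p-curve-style right-skew test (illustration only; all numbers invented).
        # Under the null of no effect, significant p-values are uniform on (0, .05),
        # so pp = p/.05 is uniform on (0, 1); a pile-up of small pp-values (right skew)
        # is read as evidential value. The real p-curve recomputes pp from test statistics.
        from scipy.stats import norm

        def right_skew(p_values, alpha=0.05):
            pps = [p / alpha for p in p_values if p < alpha]  # conditional "pp-values"
            zs = [norm.ppf(pp) for pp in pps]                 # uniform -> standard normal
            z = sum(zs) / len(zs) ** 0.5                      # Stouffer combination
            return z, norm.cdf(z)                             # very negative z => right skew

        # Two made-up sets of significant p-values, analyzed with the same tool:
        set_a = [0.001, 0.003, 0.004, 0.008, 0.010, 0.020]    # mostly tiny p-values
        set_b = [0.022, 0.030, 0.035, 0.041, 0.044, 0.048]    # p-values hugging .05

        print(right_skew(set_a))    # z about -2.9, p about .002: looks like evidential value
        print(right_skew(set_b))    # z about +1.8: no sign of evidential value

        # Point 2: a two-sided p-value of 1e-12 corresponds to |z| of about 7.1,
        # an implausibly strong signal for small, noisy between-subject experiments.
        print(norm.isf(1e-12 / 2))  # about 7.13

      The point of the sketch is just that the same procedure, fed a different set of studies, can flip its verdict, so the substantive question is which studies go in.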

      • My favorite part of Cuddy’s new paper is their justification for the studies they included compared to S&S.

        They write:
        “We began by conducting a systematic review of the literature with the aim of identifying the complete set of published empirical studies of “power posing” up to December 20, 2016. While narrative reviews provide a qualitative description of a body of literature (e.g., Carney et al., 2015), systematic reviews are based on a priori research questions regarding the evaluation of a body of theoretically relevant literature, which then guide careful and comprehensive study inclusion and exclusion (see, for example, Cooper, 2016; Uman, 2011).”

        But in Carney et al. they wrote:
        “Prompted by Ranehill et al.’s commentary, we list in Table 1 all published tests (to our knowledge) of expansive (vs. contractive) posture on psychological outcomes.”

        The S&S analysis used the studies selected by Carney and Cuddy themselves, which they claimed were “all published tests.” But now it appears that list was only a “narrative review” rather than a “systematic review,” so apparently it was a mistake to use the studies that were supposedly “all published tests.”

        To be fair, more papers were surely published between the time Carney et al. appeared in 2015 and December 2016, so it’s certainly possible there are additional studies that could be included in the p-curve. But given the excuse of a “narrative” vs. “systematic” review, and the dates of the outliers mentioned in the new Colada blog post, it sounds like studies were added that presumably should already have been included in the “all published tests” of the Carney et al. 2015 paper, unless of course those studies are actually no good but got included in the new paper because they made the p-curve look better.

        It takes a unique form of hubris to criticize someone for the data they used when that data was your own data, which you yourself affirmed was “all published tests.”

  1. So happy you mention Seth Roberts. I liked his writing, and his total lack of deference to authority, so much I even enjoyed his side-trip into the aquatic ape hypothesis.

      • I used to enjoy having lunch with Seth every couple of months. I wouldn’t call him a friend, exactly — there were too many things about him that I found frustrating or didn’t like — but we always had very interesting and thought-provoking discussions. One of the frustrating things was indeed his willingness to believe anything, as long as it wasn’t accepted by the Establishment. This effect was extremely strong; indeed, I really believe that if you told him that something is NOT accepted by the experts in the field, he was _more_ likely to believe it rather than less. And he didn’t just suffer from confirmation bias, he embraced it: he could be instantly dismissive of anything that contradicted his theory-of-the-moment, and would instantly accept anything that conformed to it.

        And yet: well, as I said, our conversations were very interesting and thought-provoking. His disdainful disbelief of just about anything that is claimed to be understood about entire fields (psychology, psychiatry, human physiology, etc.) was good for me, because it pushed me to consider how much I should believe the experts in these fields, and in my own. It takes a lot of skepticism to make a healthy dose.

  2. Playing devil’s advocate here, as this will likely be unpopular on this forum: what about climate change?

    “GROTH: There was this prevalent, incestuous, backslapping research culture. The idea that their work should be criticized at all was anathema to them. Let alone that some punk should do it.” Climategate emails? Denier label? Punk blogs or punk non-climate scientists (but still scientists in other disciplines) pointing out errors or offering other interpretations?

    “It can be hard to criticize if you’re coming from the inside, so it ends up being the “punks” (i.e., “methodological terrorists,” “second stringers,” “replication police,” “shameless little bullies,” etc) who do the hard work and take the heat.” That fits.

    The preponderance of the evidence may support the policy positions, but is the ad hominem reaction to criticism really all that different from what is described here?
