Ted Versus Powerpose and the Moneygoround, Part One

So. I was reading the newspaper the other day and came across a credulous review of the recent book by Amy “Power Pose” Cuddy. The review, by Heather Havrilesky, expressed some overall wariness regarding the self-help genre, but I was disappointed to see no skepticism regarding Cuddy’s scientific claims. And then I did a web search and found a completely credulous CBS News report: “Believe it or not, her studies show that if you stand like a superhero privately before going into a stressful situation, there will actually be hormonal changes in your body chemistry that cause you to be more confident and in-command . . . make no mistake, Cuddy’s work is grounded in science.”

Actually, Cuddy’s claims were iffy from the start, and they suffer a credibility gap given the failure of a large-scale replication of her key experiment, as discussed here a few months ago under the clever title “Low-power pose” and in careful detail by Joe Simmons and Uri Simonsohn on their blog.

This all inspired me to write, with Kaiser Fung, an article for Slate exploring the mismatch between what one might call external and internal views of science:

– For outsiders, people who read the New York Times or Slate or Malcolm Gladwell or Freakonomics, or who tune into TED talks, science is a string of stunning findings by heroic scientists daring to think outside the box.

– But when insiders see hyped findings about himmicanes or college men with fat arms or ESP or sex ratios of beautiful parents or wobbly stools or embodied cognition or power pose, we laugh or we sigh (depending on our mood), knowing that one more bit of junk science got through the filter.

This is not to say that none of the effects being talked about are real, just that the studies tend to be too noisy to tell us anything useful, and we know by now the problems that come with researchers’ creative freedom in analyzing noisy data.

The insider-outsider distinction is not always so clear: Daryl Bem and Ellen Langer are Ivy League professors, after all, and even the much-mocked Satoshi Kanazawa teaches at the respected London School of Economics. But all three of these researchers are outsiders when it comes to facing the statistical crisis in science.

Anyway, the focus of our Slate article was the yawning gap, as we put it, between the news media, science celebrities, and publicists on one side, and the general scientific community on the other. Various exceptions aside, it’s my impression that most scientists are a bit embarrassed by headline-grabbing claims about gay genes or ovulation and voting or whatever: We know that Science and Nature and PPNAS sometimes like publishing such papers, and that they can get lots of press, but we don’t take them seriously.

Meanwhile, your ordinary civilian Gladwell-readers can get the impression that these flashy findings are what science is all about.

But . . . then I read the comments on our Slate piece. And what struck me is that nobody came to the defense of the power-pose researchers. But it wasn’t even that. Even more striking was that none of the Slate commenters seemed to take that study seriously in the first place. It wasn’t like: Hey, this is interesting, that much-touted power pose study was in error. It was like: Yeah, what a joke, who’d ever think that that could make sense.

This is good news: Despite all the influence of the New York Times, CBS News, NPR (yes, of course, NPR too), Amy Cuddy’s publisher, and Harvard Business School, still, after all that, 100 Slate readers assume it’s all a scam. That’s good to hear. All the king’s horses etc.

Anyway, I’m not sure what to make of this division between the gullibles and the skeptics. On one side you have the NYT, CBS News, NPR, Science, Nature, PPNAS, Malcolm Gladwell, a major book publisher, the publicity department of the Harvard Business School, and the TED organization (whoever they are). On the other side, Eva Ranehill, Anna Dreber, Chris Chabris, Kaiser Fung, Uri Simonsohn, E. J. Wagenmakers, Arina K. Bones, me, . . . and several dozen random people who write in the comments section of Slate.

P.S. Just to be clear: I don’t think this is a debate about personalities, and I’m not trying to personalize this. I’ve never met Amy Cuddy or her coauthors, or, for that matter, Eva Ranehill or any of her coauthors on the paper that reported the non-replication of the power-pose finding. I’ve never met Daryl Bem or Ellen Langer or Satoshi Kanazawa or Malcolm Gladwell either. It’s not about good guys and bad guys. It’s about different experiences and different perspectives. In this case, I was interested to see that these Slate readers had an ambient level of skepticism that gave them a clearer perspective than that of NPR editors etc. (I can’t really speak to the sophistication of the TED talk organizers, because maybe they know this is iffy science but they’re hooked on the clicks; I have no idea.)

37 Comments

  1. Clyde Schechter says:

    Well, before you celebrate the sophistication of the public, is there a selection bias at work here? Who reads Slate, and, more specifically, who comments at Slate? (I don’t know; I’m not familiar with Slate. Just raising the question of whether they are representative of the public at large.)

    • Ed says:

      Clyde beat me to it, but yeah, it seems like you might want to compare the comments on Slate to those at another source, say CBS, to see if they differ. I’m skeptical that there isn’t some small but still alarmingly large portion of the populace that eats this stuff up and is secretly power-posing on a daily basis. I like your optimism better, though.

    • Steve Sailer says:

      Commenters can differ ideologically from article to article depending upon who links to the article. A link from Drudge, say, can bring a lot of right of center commenters.

      Also, different publications have different barriers to entry to comments, such as requiring the commenter to sign up first versus laissez-faire commenting, which can impact how representative comments are of readers’ views.

      Some publications appear to have strategies designed to stimulate skeptical comments in order to juice their click numbers. For example TheAtlantic.com runs a lot of lowbrow articles by obscure young would-be pundits in the common “Straight White Men Suck” genre, which tend to generate a lot of comments from readers pointing out simple flaws in the author’s logic. The New York Post has recently followed The Atlantic into the game of denouncing whites to boost traffic from whites. (Or perhaps I’m coming up with an overly clever theory to explain the phenomenon.)

  2. Keith O'Rourke says:

    > what struck me is that nobody came to the defense of the power-pose researchers
    There is some selection-to-comment modelling yet to be done here ;-)

    Yeah, it was too obvious; I should have saved a comment.

    • Andrew says:

      Keith:

      Yeah, fair enough. Really there should be some people who do come to the defense of the power-pose researchers. After all, the power-pose theory could be true. As a statistician, all I can say is that the evidence presented in favor of the theory is a lot weaker than Cuddy goes around claiming. If she were going around saying, “I have a theory,” I’d have no problem. My beef is with the claim that her work is “grounded in science” or that her theory should be presumed true (as implied, for example, by Cuddy’s statement, “The better we understand it, the better we can use it.”). It would be fine, in my opinion, to have commenters who say something like: Power pose makes a lot of sense to me, maybe it works for some people and not others, etc. And maybe some other commenters describing situations where power pose does not work. My real point, I guess, is that such hypothetical shooting-from-the-hip commenters would be adding about as much as Cuddy, Carney, and Yap.

      • sbk says:

        If she claimed she had a theory, I would want to know her definition of, or criteria for, what constitutes a scientific theory. I see no credible scientific theory in her (or directing her) “work”.

        This, of course, is sadly the case for most contemporary social science: stipulated constructs accorded all manner of causal potency, grounding demonstration-driven rather than theory-driven empiricism. It is all rather pathetic.

        • sbk says:

          As for “my claims are grounded in science”: she has no serious conception of the term “science”. For most social scientists (ahem), “method = science”. Besides failing to note that this is a restatement of a long-abandoned philosophy of science (logical positivism, though it apparently flourishes in psychology), it is a blatant confusion of necessary vs. sufficient conditions: method is necessary but hardly sufficient.

          But I am reasonably certain Cuddy and her ilk have no idea about such issues (we credential our students well beyond their capabilities and then they become the credential givers). This is not only sad for actual science, but it is an abuse of a meaningful approach to nature and a serious disservice to the general reading public to associate such dreck with actual science.

          I, for one, have given up and cannot wait for retirement from this inanity.

  3. WB says:

    The difference between the gullibles and the skeptics is whether you have the incentive to hype or not. What incentive do you, Andrew Gelman, have to promote some absurd empirical finding? Not much. What incentive does a journalist or book editor have? Plenty. They have to fill space with something interesting enough to attract readers. The more the better. If no one reads or responds to your blog posts, who cares? You still have your day job. The same can’t be said of the gullibles.

  4. Rahul says:

    A lot of this is about specific *areas* of science isn’t it? e.g. Smart Outsiders have learnt to view “novel” claims from Psych. or Poli. Sci. with skepticism. Or the next study that comes along and says “Eating X will make you live longer”.

    But I don’t think people view novel findings in all other fields with equal suspicion. Which is good. E.g., when the headlines go gaga about some novel math result, I’ve rarely been disappointed when I dig deeper into the story.

    This is less about insider vs outsider. Or Science vs arxiv. Or NHST vs Bayesian.

    This is more about how certain fields have systemically destroyed their own credibility.

    • Garnett McMillan says:

      Well said. It’s amazing how those “certain fields” exert so much effort in justifying their own relevance.

    • jrkrideu says:

    True, but there was cold fusion, and the faster-than-light whatevers, though the last was not a claim so much as a “what the heck is this?” report.

      Still both got good press coverage.

      • Andrew says:

        Jrkrideu:

        My impression when cold fusion came out was that there was both excitement and skepticism, which was appropriate. Pr (they really found something) was low (even at the time of the initial press release, there was caution in interpreting the results). But E (social change | they really found something) was huge. So the net E (social change) was high, hence the excitement.
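
        The expected-value reasoning here can be written out explicitly (a sketch of the argument, with “change” standing for social change, and assuming a false finding produces roughly zero change):

```latex
% Expected social change, decomposed over whether the finding is real.
% If the effect of a false finding is ~0, the second term drops out:
\begin{align*}
E[\text{change}]
  &= \Pr(\text{real}) \, E[\text{change} \mid \text{real}]
   + \Pr(\text{not real}) \, E[\text{change} \mid \text{not real}] \\
  &\approx \Pr(\text{real}) \, E[\text{change} \mid \text{real}].
\end{align*}
```

        A small probability times a huge conditional payoff can still yield a large expectation, which is why excitement plus skepticism was the appropriate response.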

        Power pose is different: it’s my sense that even the news organizations covering it uncritically are treating it more as a curiosity and an inspiring story than as a world-changer.

      • Rahul says:

        @jrkrideu

        Maybe. But I think those were exceptions. On average, some fields have a very bad track record at this.

  5. Rahul says:

    I think TED is responsible for some of this. It has glorified the art of getting your audience to gasp in awe in 10 minutes. People start mistaking good showmanship for good science.

    If we are going to blame Nature / Science for the hype, TED talks are 10x as bad.

  6. Jacobian says:

    I’m very skeptical of the claim that Slate readers (of which I am one) are able to discriminate good science from bad. I think this is a clear case of hindsight bias in science. People always overestimate how “obvious” an answer was once they are told it’s true, regardless of whether the answer is actually true or not!

    The same people who think after reading Slate that the study was obviously worthless thought after watching the TED talk that the study’s results were obvious too. It’s an insidious bias that I notice in myself all the time; the only slow cure is to try to guess the results ahead of time and see how poorly I do. Ironically, the power pose thing is a rare occasion where I didn’t fall for it: when a friend sent me a link to the TED talk, I offered to bet her $100 that it wouldn’t replicate once I saw the sample size of N=42.

    And speaking of power, how much does it cost to have a subject sit in a room for 15 minutes and give a saliva sample? I can’t believe that at the ultra-rich Harvard Business School, Cuddy was so funding-constrained that she couldn’t afford more than 42 subjects. Based on all recent experience in psychology research, if someone is fishing for a significant effect from a minor intervention with a sample of less than a few hundred, I don’t even need to see their results to call BS.
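
    On the sample-size point, a back-of-the-envelope power calculation makes this concrete. This is a sketch using the standard normal approximation for a two-sided two-sample comparison; the effect sizes are illustrative assumptions, not values from Cuddy et al.:

```python
# Rough power calculation for a two-sample comparison (normal approximation),
# illustrating why N = 42 total (21 per group) is underpowered for modest effects.
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def approx_power(d: float, n_per_group: int) -> float:
    """Approximate power of a two-sided test at alpha = 0.05,
    for standardized effect size d and equal group sizes."""
    z_crit = 1.96  # critical z for alpha = 0.05, two-sided
    noncentrality = d * sqrt(n_per_group / 2.0)
    return 1.0 - normal_cdf(z_crit - noncentrality)

for d in (0.2, 0.5, 0.8):
    print(f"d = {d}: power with 21 per group is about {approx_power(d, 21):.2f}")
```

    With a “medium” effect of d = 0.5, 21 subjects per group gives roughly 0.37 power, versus over 0.95 with 200 per group, which is the sense in which “a few hundred” subjects matters.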

    • Andrew says:

      Jacobian:

      What you say in your first two paragraphs could well be true. But, if so, that suggests that the credulity of which we speak, that led to belief in the Cuddy et al. study, is quite shallow. People are willing to believe all sorts of things about the world, given that the belief doesn’t cost them anything. Then they’re willing to make the equally costless decision to turn around and say, yeah, it was B.S. all along.

    • Jeff Walker says:

    I agree. Probably every one of my smart friends who is not especially tuned into skepticism believes in some sort of non-evidence-based medicine or health claim. It doesn’t even occur to them that it might lack reliable evidence, or that some of the consequences of the claims are anti-scientific.

    • Ricardo Silva says:

      I think it indeed boils down to that, not far from the discussion by Duncan Watts in his popsci book “Everything is Obvious”.

    • Jameson Quinn says:

      I, for one, could have told you that the Slate commenters would have reacted like this. I mean, isn’t it obvious? Slate is all about being contrary, so they probably watch TED talks and believe the opposite. (Of course, that tendency to believe the opposite could have an interaction effect with the third derivative of the popularity of the talk, or what I call the “trendy jerk”.)

  7. Another curious says:

    How would you quantify the accuracy of commenters? Also: AG, it might not matter, but your website seems to be HTTP not HTTPS.

  8. Stan says:

    I’m a skeptic about a lot of research, including published papers in top journals, but I’ll attempt to mount a defense here.

    First, we know from a large body of research that the placebo effect works in many circumstances. Placebo effect sizes are often large. Why can’t power posing work as a sort of placebo effect for interviews?

    Second, we know from research that self-fulfilling prophecies are real effects. Why can’t power posing serve as a self-fulfilling prophecy, building self-efficacy to perform well?

    Third, her research on power posing was published in the Journal of Applied Psychology, one of the top-ranked and most impactful journals in the area of applied psychology. The journal is highly regarded and very rigorous, and the rejection rate is over 90% for submitted papers. The reviewers are no slouches, the editors want real science published, and reviewers can be harsh when critiquing submitted work if it is of poor quality. If her work is published there, then I tend to think it is a real effect – unless it is a statistical anomaly or the data are contrived.

    Fourth, there are hundreds of studies that have shown our emotions follow our physiological displays. For example, if you get people to smile, they will feel happier, even if you don’t tell them you are asking them to smile (e.g., you tell them you are conducting a study on facial muscles and you want them to pull the corners of their mouths upward). As another example, if you inject people with adrenaline and label it fear for them, they feel afraid. If you label it excitement, they feel excited. We often see physiology influencing attitude and emotion, just as we see attitude and emotion influencing physiology.

    Personally, I believe the effect is probably real – consider the converse: before an interview you should not act confident; you should fold in upon yourself and act afraid; you should be critical of yourself and shrink back in your posture. How would that be better? I think asking people to display the positive versions of these behaviors is likely to make them more confident in an interview, a situation already likely to provoke stress and anxiety.

    Do I know the effect is real? I’m unsure. The fact that the finding wasn’t replicated concerns me. With a sample size less than 100 the significant effect could merely be sampling error. On the other hand, if the finding was significant with such a small sample then the effect size for the sample had to be large (although it could still be sampling error). If I had the time I would design another experiment trying to identify potential interactions that strengthen or weaken the effect. Perhaps there’s another variable that is unaccounted for that explains why it was found in one situation but not in another (there are many reasons it might not be replicated). Alas, no time, too many studies already in queue.

    • Andrew says:

      Stan:

      This could all be. However, I think the theoretical arguments could go in either direction. You write, “consider the converse – before an interview you should not act confident; you should fold in upon yourself and act afraid; you should be critical of yourself and shrink back in your posture.” But I don’t think that’s a fair comparison. A better comparison might be, “consider the converse – before an interview you should act confident; you should fold in upon yourself and be coiled and powerful; you should be secure about yourself and be ready to spring into action.” It would be easy to imagine an alternative world in which Cuddy et al. found an opposite effect and wrote all about the Power Pose, except that the Power Pose would be described not as an expansive posture but as coiled strength. We’d be hearing about how our best role model is not cartoon Wonder Woman but rather the Lean In of the modern corporate world. Etc.

      The claim is not just that some manipulation of posture will have some effects. The claim also has specifics, and these are what don’t seem to replicate. Similarly, the power of positive thinking is already out there; Cuddy is claiming more than that.

      That said, I’m certainly open to the possibility that there’s something there. I just don’t think that grabbing statistically significant comparisons from small noisy studies is the best way to learn here.

    • Steve Sailer says:

      Arnold Schwarzenegger was a careful student of the interplay of posture, self-confidence, and success. Donald Trump has kept using the posture drilled into him at military school. I wouldn’t be surprised if overachievers tend to have better posture than underachievers: that seems like a hypothesis that could be studied. (Of course, that wouldn’t answer the question of which way causality flows, but it would be a start.)

      More generally, how exactly are we supposed to test motivational techniques that are premised on subjects believing that they work? If you pay $100 to attend a motivational workshop at which a very confident-sounding Arnold Schwarzenegger teaches the packed audience the posture that helped him intimidate Lou Ferrigno at a cocktail party before the start of a 1970s bodybuilding competition and assures you that it will work for you too in your next job interview, can we really replicate that experience in a psychology laboratory by having a neutral-sounding grad student read instructions from an index card?

      • Keith O'Rourke says:

        In practice, no. But the same is true in clinical trials: the doctor assuring me that this should help if I follow instructions closely is replaced by “no one knows if this works, and you might be getting an inert placebo anyway.”

        We need to make do with what we can learn (given current economic and ethical constraints), even though what we can’t speak of we can’t whistle either; we can, however, obviously turn it into publications, speaking engagements, tenure, and promotion.

        Though overall, I am going to agree with Andrew that folks are more aware of the problems with published claims, and especially of how widespread those problems are, than they were in, say, 2005. Perhaps mining the comments on this blog over time might document some of that emergence.

  9. roy says:

    Totally off topic, but given how old and slow and brain-dead I have become, I find it harder and harder to figure out the link between the pictures and the posts. Okay, I have a guess at this one, but even then, why the Norah Jones version and not the original Kinks version? Not that I don’t like Norah Jones, and her bio is certainly more interesting than most (if you don’t know, Norah Jones’s father is Ravi Shankar). Sort of like that one with Ira Glass and J.D. Salinger. (Un)luckily, and much to my future detriment, I read a lot of Salinger as a kid, but if I hadn’t I would never have gotten that one.

  10. Shravan says:

    The best thing about this post was my discovering the music of Norah Jones. I was wondering why she looks so familiar (that nose) and now I know.

    Andrew, the very first comment I saw on Slate accuses you and your co-author of manspreading. I wouldn’t characterize that as skepticism about Cuddy et al.’s work. I saw other inane comments there too. Reading the comments reminded me again why I don’t read comments (except on this blog).
