Someone pointed me to this program of the forthcoming Association for Psychological Science conference:
Kind of amazing that they asked Amy Cuddy to speak. Weren’t Dana Carney or Andy Yap available? What would really have been bold would have been for them to invite Eva Ranehill or Anna Dreber.
Good stuff. The chair of the session is Susan Goldin-Meadow, who’s famous both for inviting that non-peer-reviewed “methodological terrorism” article that complained about non-peer-reviewed criticism, and also for some over-the-top claims of her own, including this amazing statement:
Barring intentional fraud, every finding is an accurate description of the sample on which it was run.
This is ridiculous. For example, I think it’s safe to assume that Reinhart and Rogoff did not do any intentional fraud in that famous paper of theirs—even their critics just talked about an “Excel error.” But their findings were not an accurate description of their sample. Similarly with Susan Fiske and her t statistics of 5.03 and 11.14 which were actually 1.8 and 3.3. No intentional fraud, just an error. But the findings were not an accurate description of the sample. Or Daryl Bem’s paper where he reported various incomplete summaries of the data. Even if each summary was correct, his “findings” were not: they were selections of the data which, despite Bem’s claim, do not provide evidence of ESP. They don’t even provide evidence that the students in this particular sample had ESP. Or, if you want an even cleaner example, consider Nosek’s “50 shades of gray” study where Nosek and his collaborators themselves don’t believe that their findings were an accurate description of their sample.
Or, hey, here’s another one, a paper that claimed, “That a person can, by assuming two simple 1-min poses, embody power and instantly become more powerful has real-world, actionable implications.” This is the concluding sentence of the abstract. Actually, though, there were no measurements of power in that paper, and the reported finding was not an accurate description of the sample on which it was run. For it to have been an accurate description, there would’ve had to be some measure of power on the participants. But there was no measure of power, just feelings of power. Which I think we can all agree is not the same thing. No fraud, intentional or otherwise, just a plain old everyday journal article where the thing being stated in the abstract is not the thing that was done in the study.
I have no plans to be at this conference, which is too bad, as this session sounds like lots of fun. Maybe they’ll feature some of Marc Hauser’s famous monkey videos, and they can film it live as a TED talk. The whole thing should be a real himmicane!
P.S. In all seriousness, do these people even read what they’ve written? (1) “Barring intentional fraud, every finding is an accurate description of the sample on which it was run”? (2) “Instantly become more powerful”?? (3) “the replication rate in psychology is quite high—indeed, it is statistically indistinguishable from 100%”???
Look, we all make mistakes, and there are also legitimate differences of opinion. The data don’t really support the claims that married women were more likely to support Mitt Romney during that time of the month, or that beautiful parents are more likely to have girls, or that Hurricane Missy will be more damaging than Hurricane Butch. But, sure, these hypotheses, and their opposites, are all possible. It could be that Cornell students have ESP, or that power pose makes you weaker, or all sorts of things. So even if I think people are overinterpreting their evidence and even being blockheaded in their interpretation of questionable data and in their resistance to other sources of evidence, I can see how they could make the claims they’re making. Daryl Bem may ultimately have the last laugh on all of us. But statements like (1), (2), and (3) immediately above: they just make no sense in the context in which they’re written.
Psychology is not just a club of academics, and “psychological science” is not just the name of their treehouse. It’s supposed to be for all of us—I’m speaking as a taxpayer and citizen here—and I think the scholars who represent the field of psychology have a duty to write clearly, to avoid false statements where possible, and to put themselves into a frame of mind where they can learn from their mistakes.
P.P.S. Maybe also worth repeating this bit:
I’m not an adversary of psychological science! I’m not even an adversary of low-quality psychological science: we often learn from our mistakes and, indeed, in many cases it seems that we can’t really learn without first making errors of different sorts. What I am an adversary of, is people not admitting error and studiously looking away from mistakes that have been pointed out to them.
We learn from our mistakes, but only if we recognize that they are mistakes. Debugging is a collaborative process. If you approve some code and I find a bug in it, I’m not an adversary, I’m a collaborator. If you try to paint me as an “adversary” in order to avoid having to correct the bug, that’s your problem.
P.P.P.S. In the first version of this post, I mistakenly labeled this session as being in the American Psychological Association conference. It’s actually the Association for Psychological Science. I apologize to the American Psychological Association for my error. It’s no big deal, though, it’s not like anybody’s being tortured or anything.