A correspondent pointed me to this Freakonomics radio interview with Thomas Gilovich, one of the authors of that famous “hot hand” paper from 1985, “The Hot Hand in Basketball: On the Misperception of Random Sequences.” Here’s the key bit from the Freakonomics interview:
DUBNER: Right. The “hot-hand notion” or maybe the “hot-hand fallacy.”
GILOVICH: Well, everyone who’s ever played the game of basketball knows you get this feeling where the game seems to slow down. It becomes easier, or you almost don’t even have to aim that carefully. The ball’s going to go in. It’s one of the most compelling feelings that you can have. And it turns out if you statistically analyze people’s shots — whether it’s professional games, college basketball players shooting in a gym, although the feeling exists when you make several shots in a row — you will feel hot. That feeling very surprisingly doesn’t predict how you’re going to do in the next shot or the next several shots — the distribution of hits and misses in the game of basketball looks just like the distribution of heads and tails when you’re flipping a coin. Although of course, not every player shoots 50%. Very few of them do.
That’s wrong. The distribution of hits and misses in the game of basketball does not look just like the distribution of heads and tails when you’re flipping a coin.
In their 1985 paper, Gilovich et al. thought they found that the distribution of hits and misses in the game of basketball looked just like the distribution of heads and tails when you’re flipping a coin. But they’d made a couple mistakes. Subtle mistakes, but mistakes nonetheless. Miller and Sanjurjo explained the problems:
1. The simple estimate of correlation, computing for each player the empirical frequency of hits right after a sequence of hits, minus the frequency of hits right after a sequence of misses, is biased. And the bias is large enough to make a difference in data sequences of realistic length. (Hence Gilovich was making a mathematical error when he said, “Because our samples were fairly large, I don’t believe this changes the original conclusions about the hot hand.”)
2. Whether you made or missed the last couple of shots is itself a very noisy measure of your “hotness,” so estimates of the hot hand based on these correlations are themselves strongly attenuated toward zero.
Combining 1 and 2, even large hot-hand effects will show up as zero in Gilovich-like studies.
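The bias in point 1 is easy to see by simulation. Here’s a minimal sketch (my own illustration, not Miller and Sanjurjo’s code): generate many sequences of fair coin flips, and for each sequence compute the empirical frequency of a hit immediately after a run of three hits. Even though every flip is an independent 50/50 shot, the average of these per-sequence frequencies comes out noticeably below 0.5.

```python
import random

def freq_after_streak(seq, k, outcome=1):
    """Within one 0/1 sequence, return the proportion of hits immediately
    following a run of k consecutive `outcome`s; None if no such runs occur."""
    hits = opportunities = 0
    for i in range(k, len(seq)):
        if all(s == outcome for s in seq[i - k:i]):
            opportunities += 1
            hits += seq[i]
    return hits / opportunities if opportunities else None

random.seed(1)
n_shots, k, n_sims = 100, 3, 10_000  # roughly GVT-length shooting sessions
estimates = []
for _ in range(n_sims):
    seq = [int(random.random() < 0.5) for _ in range(n_shots)]
    f = freq_after_streak(seq, k)
    if f is not None:  # skip the rare sequence with no 3-hit streak
        estimates.append(f)

print(sum(estimates) / len(estimates))  # averages well below 0.5
```

The intuition: conditioning on a streak of hits within a finite sequence, and then averaging the conditional frequency sequence by sequence, systematically selects against further hits. So a truly “coin-flipping” shooter will look like an anti-hot-hand shooter under this estimator, and a genuinely streaky shooter can look like a coin.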
None of this is new, and Gilovich is aware of these criticisms. Yet in this Freakonomics interview, he chose to do a full Cuddy, as it were, and not even acknowledge the demonstrated problems with his work.
That’s too bad, especially given that his 1985 paper has an important place in the history of our understanding of cognitive illusions, and its errors are subtle. There’s no shame in making a mistake.
What’s with the refusal to acknowledge error? Is it something in the water at Cornell University? Is there some bar in Ithaca where Daryl Bem, Brian Wansink, and Thomas Gilovich go to complain about the fickle nature of the science press?
This might be unfair to Gilovich, though. All I have to work with is the transcript of the interview, and for all I know he went on like this:
GILOVICH: Actually, though, we were wrong! Miller and Sanjurjo showed that our data were consistent with a hot hand all along. The problem was that our correlation-based estimate, which seemed so reasonable, was actually a biased and noisy estimate of the hot hand. My bad for not noticing this back in 1985. But, hey, both problems—the bias and the variance—were subtle. In my defense, the brilliant Amos Tversky didn’t catch these problems either.
DUBNER: Yup, almost everyone missed it. As late as 2014, Gelman was pooh-poohing the idea that the hot hand could amount to much. Anyway, it’s great to have the opportunity to correct the record now, here on Freakonomics radio!
And then maybe that last exchange got cut, for lack of space.
What’s going on here?
Seriously, though, I see two problems here.
First, Gilovich has seen that serious scholars have criticized his hot-hand claims. The criticisms are real, and they’re spectacular, and it’s not like Gilovich has any refutation, he’s just bobbing and weaving. Then a reporter comes to him for a feature story and he presents it completely straight, as if it’s 1985 again, Run DMC on the beatbox, the Cosby Show on TV, Oliver North sending weapons to the Ayatollah, and New Coke in every 7-11 in the country. What kind of attitude is that?
Second, the Freakonomics team didn’t even think to check.
OK, this thing jumped out at me because I’m a statistician and I’ve spent a lot of time thinking about the hot hand. But you don’t need to be an expert to know about this.
Suppose you’re running a radio show and you’re going to interview someone about a particular topic. (And this one didn’t come as a surprise: if you check the transcript, you’ll see that it’s Dubner, not Gilovich, who first brings up the “hot hand.”)
What do you do? You quickly research the topic? How? First step is Google, right?
Here’s what happens when you google *hot hand basketball*:
(I ran the search in anonymous mode so I don’t think it’s using my own search history.)
– First item is the Wikipedia entry which right away describes the hot hand as an “allegedly fallacious belief” and includes an entire section on recent research in support of the hot hand (i.e., disagreeing with Gilovich’s claims).
– The second item is a news article saying that the hot hand is real, and featuring the work of Miller and Sanjurjo.
– The third item is the Gilovich et al. paper from 1985.
– The fourth item is another news article saying the hot hand is real.
So, it would be hard to google the topic and not come to the conclusion that Gilovich’s claims are, at best, controversial.
Yet this did not come up in the interview. The Freakonomics team was not doing their job. Either they showed up for an interview without doing the simplest Google search, or they did know about the controversy but let their interviewee get away with misrepresenting the state of knowledge.
Too bad. It would’ve been easy for the interviewer to follow up with something like,
DUBNER: That all sounds good, but in preparation for this interview, I looked up the hot hand and I saw that your claims are controversial. Nowadays a lot of people are saying that the hot hand is real, and there seems to be general agreement that the conclusions from your 1985 paper were a mistake, turning on a subtle probability error.
Then Gilovich could reply, maybe something like this:
GILOVICH: Yeah, I’ve heard about this. I’m not a stats expert myself so, sure, I could believe that there were some subtleties in the estimation and the power analysis that we missed. Still, when you look at our data, players and spectators seem sooo convinced that the hot hand is huge, and even if it’s real, it’s hard for me to believe that it’s big as people think. So I’ll hold to my larger point that people overinterpret random events. That’s the real message of our project.
DUBNER: OK, now let’s talk for just a minute about your work in happiness or hedonic studies. . . .
OK, having written this, I can see how Dubner might not have wanted to include such an exchange in the interview, as it’s a bit of a distraction from the main point. But, really, I think you have to do it. The exchange doesn’t make Gilovich look so good, as it forces him to backpedal, but ultimately that’s Gilovich’s fault as he was the one to overstate his conclusions in the interview.
I guess this happens in political and celebrity interviews all the time: the interviewee says something false, and then the interviewer has to choose between letting a false statement go by unchallenged, or else blowing the whistle and losing the trust of the interviewee.
But when you’re interviewing a scientist, it should be different.
Look. When I talk about my own research, I try to be complete and open but I’m sure that I do some hype, I don’t dwell on my failures, and there must be times that I don’t get around to mentioning some serious criticisms. I should do better. If I’m being interviewed, I’d appreciate the interviewer calling me on these things. Not that I want to be hassled—I’m not planning to go on Troll TV anytime soon—but if I say something false or incomplete, that’s on me. I’d like to have a chance to explain, in the way that the (hypothetical) Gilovich did above.