X points me to this news article by George Johnson regarding the hot hand in basketball. Nothing new since the previous hot hand report (Johnson also follows the usual newspaper convention of not citing the earlier article in the Wall Street Journal, instead simply linking back to the Miller and Sanjurjo article as if the story had not been reported before), but he did get an interview with Thomas Gilovich, one of the authors of the original hot hand paper from thirty years ago:
Dr. Gilovich is withholding judgment. “The larger the sample of data for a given player, the less of an issue this is,” he wrote in an email. “Because our samples were fairly large, I don’t believe this changes the original conclusions about the hot hand.”
It’s really too bad to hear Gilovich write this. Isn’t it always the way, though: people don’t like to admit that they made a mistake, even an honest mistake.
Anyway, I think what happens is that if you reanalyze the Gilovich et al. data carefully you will find some evidence for a hot hand. The difficulty is not that their samples are so large but that they are so small. The hot hand effect is subtle to detect, and binary data are weak, so you need lots of data to estimate it.
Here’s a quick calculation, just to give you an idea.
Suppose Pr(success following a miss) is 0.5 and Pr(success following a success) is 0.55. I chose this as an effect size that would be nontrivial but could still be difficult to detect.
Now suppose you have data from 1000 shots. 1000 is a lot, right? And let’s for a moment forget about the now well-known measurement bias issue.
So you have something like 500 successes and 500 misses, and you can take the difference in success rates and estimate Pr(success following a success) – Pr(success following a miss). The standard error of your estimate is sqrt(.5^2/500 + .5^2/500) = 0.03. And you can’t reliably detect a 5 percentage point difference if your measuring instrument has a standard error of 0.03.
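As a rough check on that back-of-the-envelope number, here's a sketch in Python. The shot counts, the 0.50/0.55 probabilities, and the number of simulated seasons are just the illustrative values from above, not anyone's real data:

```python
import math
import random
from statistics import mean, pstdev

random.seed(1)

# Analytic standard error from the text: ~500 shots after a miss,
# ~500 after a success, binomial standard deviation bounded by 0.5.
se = math.sqrt(0.5**2 / 500 + 0.5**2 / 500)
print(round(se, 3))  # 0.032

def simulate_diff(n_shots=1000, p_after_miss=0.50, p_after_hit=0.55):
    """One simulated 1000-shot record: return the estimated
    Pr(success | prev success) - Pr(success | prev miss)."""
    prev_hit = random.random() < 0.5  # arbitrary state before the first shot
    hits = {True: 0, False: 0}
    shots = {True: 0, False: 0}
    for _ in range(n_shots):
        p = p_after_hit if prev_hit else p_after_miss
        hit = random.random() < p
        shots[prev_hit] += 1
        hits[prev_hit] += hit
        prev_hit = hit
    return hits[True] / shots[True] - hits[False] / shots[False]

# Repeat many times to see the sampling distribution of the estimate.
diffs = [simulate_diff() for _ in range(2000)]
print(round(mean(diffs), 2), round(pstdev(diffs), 2))  # 0.05 0.03
```

The estimate is centered near the true 0.05 difference, but its standard deviation is about 0.03, so the noise is nearly as large as the signal, which is the point.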
And then you bring in the issue of the bias that Miller and Sanjurjo discovered, which depends not on your total sample size but on your sample size per player. You can see that even a seemingly small bias such as 0.02 can completely destroy any hope of discovering anything here. And this doesn’t even get into the issue that just looking at the previous shot tells only part of the story. In the case of Gilovich et al.’s data, if you just look at certain comparisons and don’t adjust for the bias, you won’t see any evidence for the hot hand—that’s what happened in that 1985 paper, as Miller and Sanjurjo explained.
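The selection bias Miller and Sanjurjo identified is easy to demonstrate by simulation. Here's a minimal sketch with i.i.d. fair coin flips, so by construction there is no hot hand; the short sequence length of 4 flips per "player" is an illustrative choice to make the bias visible, not anything from the Gilovich et al. data:

```python
import random
from statistics import mean

random.seed(0)

def prop_hits_after_hit(flips):
    """Proportion of heads among flips immediately following a head;
    None if the sequence has no flip that follows a head."""
    followers = [flips[i + 1] for i in range(len(flips) - 1) if flips[i]]
    return mean(followers) if followers else None

# Each "player" gets a short i.i.d. sequence; averaging the
# per-player proportions is what induces the downward bias.
props = []
for _ in range(200_000):
    flips = [random.random() < 0.5 for _ in range(4)]
    p = prop_hits_after_hit(flips)
    if p is not None:
        props.append(p)

print(round(mean(props), 2))  # about 0.40, even though the coin is fair
```

With 4-flip sequences the average comes out around 0.40 rather than 0.50; the bias shrinks as the per-player sequence gets longer, which is why it depends on the sample size per player rather than the total.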
Now let’s bring it back to Gilovich’s statement that this doesn’t change their original conclusions. I think he’s half correct on this—or maybe more than half correct.
My reasoning goes as follows. Gilovich et al. reported three things in their paper. First, that there's no evidence for any hot hand in basketball shooting. I think they were wrong on this one; it does seem that, if you look at basketball data carefully, you do see evidence for a hot hand; it's just that the original analyses were hampered by bias and variance issues. Second, Gilovich et al. reported that basketball fans view the hot hand effect as huge, much larger than any such effect in reality. I find their results convincing on that point. It does seem that once a person focuses on any particular effect, once he or she believes it to be nonzero, there's a tendency to overrate its importance. I guess that's related to the "availability heuristic" studied by Amos Tversky, another author of that hot hand paper. Gilovich et al.'s third finding is that people perceive a hot hand even when you give them completely random sequences. That appears to be true too, even if not newsworthy on its own.
So we can say that two out of the three findings of Gilovich et al. (1985) remain valid at some level. Not bad for a 30-year-old paper and no cause for embarrassment on Gilovich’s part. I think he’d be better off being a bit more gracious and saying to the next reporter something like, “Yeah, we didn’t catch that bias, and Miller and Sanjurjo made me realize that we were wrong to claim there was no hot hand. We’re glad that our paper continues to get attention 30 years later, and I hope people won’t forget the other points we made, which have withstood the test of time.”