“Suppose Jack randomly selects a flip that is immediately preceded by a heads, what is the chance the selected flip is a heads?”

When you select flips for analysis based on the outcomes of previous flips, you introduce a bias.

---

Suppose Jack chose flip 1; then there are 8 possible sequences of the form H???.

What is the probability it came from sequence HTTT? Is it 1/8?

No: Pr(HTTT | Jack chose flip 1) = Pr(Jack chooses flip 1 | HTTT) · Pr(HTTT) / Pr(Jack chooses flip 1) = 1 · (1/14)/(1/3) = 3/14 > 1/8.

It is easy to show Pr(HTTT | Jack chose flip 1) > Pr(HHHT | Jack chose flip 1).

And so it is easy to show Pr(flip 2 is H | Jack chose flip 1) = Pr(HHTT | Jack chose flip 1) + Pr(HHHT | Jack chose flip 1) + Pr(HHTH | Jack chose flip 1) + Pr(HHHH | Jack chose flip 1) < 1/2.
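These conditional probabilities can be verified by direct enumeration. Here is a minimal Python sketch (not from the original comment), assuming the selection rule at issue: Jack picks uniformly at random among the heads in the first three flips (equivalently, among flips followed by another flip), and sequences with no such heads are never presented.

```python
from itertools import product
from fractions import Fraction

# Joint probability Pr(sequence AND Jack chooses flip 1), where Jack
# picks uniformly at random among the heads in flips 1-3 (sequences
# with no heads in the first three flips are never presented).
joint = {}
for seq in product("HT", repeat=4):
    heads = [i for i in range(3) if seq[i] == "H"]
    if heads and seq[0] == "H":
        joint[seq] = Fraction(1, 16) / len(heads)

p_choose_1 = sum(joint.values())                   # Pr(Jack chooses flip 1)
p_HTTT = joint[("H", "T", "T", "T")] / p_choose_1  # Pr(HTTT | chose flip 1)
p_flip2_H = sum(p for s, p in joint.items() if s[1] == "H") / p_choose_1

print(p_HTTT)     # 3/14, as computed above
print(p_flip2_H)  # 5/14, which is less than 1/2
```

The exact value, 5/14 ≈ 0.357, confirms the inequality claimed above.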

---

While it is true that GVT 1985 do not average across sequences in their analysis, if you want to compute the bias this is what you need to do, i.e., average across all the possible sequences in the sample space.

To make more clear the selection bias in GVT 1985, here is a copy-paste of a probability puzzle that we have been sharing, which illustrates the bias:

Jack flips a coin 4 times, and then selects one of the heads flips at random and asks Jill: “please predict the next flip.”

Jill asks: what if you selected the final flip? How can I predict the next flip?

Jack responds: I didn’t select the final flip, I selected randomly among the heads flips that were not the final flip.

Jill asks: okay, but it’s possible there aren’t any heads flips for you to select, right?

Jack responds: I just told you I selected a heads flip!

Jill asks: fine, but what if there weren’t any heads flips for you to select? what would you have done in that case?

Jack responds: I wouldn’t be bothering you with this question.

Jill says: okay then, I predict tails on the next flip.

Jill has made the correct choice.

For most of us, intuition says that it shouldn’t matter what Jill predicts: there is a 50% chance of a heads on the next flip.

This intuition is wrong. Given the information that she has, Jill knows there is a 40.5% chance of heads on the next flip.

In the long run, if Jill predicts tails every time that Jack asks her to predict, Jill will be right 59.5% of the time.

This is the selection bias: When analyzing a finite sequence of coin flips, if you select for analysis a flip that follows a heads, you are more likely to be selecting a tails.
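Jill’s 40.5% can be checked by enumerating all 16 sequences. A short sketch (my own, not from the comment): each sequence with at least one eligible heads is weighted equally, and we average over Jack’s uniform random choice.

```python
from itertools import product
from fractions import Fraction

total = Fraction(0)
n_eligible = 0
for seq in product("HT", repeat=4):
    # Jack can only choose a heads among the first three flips.
    heads = [i for i in range(3) if seq[i] == "H"]
    if not heads:
        continue  # no eligible flip: Jack never asks Jill
    n_eligible += 1
    # Probability that the flip after Jack's uniformly chosen heads is a heads.
    total += Fraction(sum(seq[i + 1] == "H" for i in heads), len(heads))

p_next_heads = total / n_eligible
print(p_next_heads, float(p_next_heads))  # 17/42, roughly 0.405
```

The exact answer is 17/42 ≈ 40.5%, so predicting tails wins 25/42 ≈ 59.5% of the time, matching the figures above.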

---

Thanks very much for pointing this out! I just got around to reading the George Johnson article today and was baffled by why he thought it made sense to average across the 16 trials. It does not make sense, and I do not think this was the basis for the previously published analyses. Has anyone pointed this out to the New York Times and George Johnson?

Thanks,

-Walt

---

Good question.

In their seminal 1985 paper, Gilovich, Vallone and Tversky (GVT) conducted a controlled shooting study (experiment) with the Cornell University men’s and women’s basketball teams as a “…method for eliminating the effects of shot selection and defensive pressure,” effects (confounds) which hampered their interpretation of the Philadelphia 76ers game data discussed in an earlier section of the paper. On page 22 of our paper here: http://ssrn.com/abstract=2627354, we focus on Table 4, which pertains to the Cornell data (Study 4), and not Table 1, which pertains to the 76ers data (Study 2).

In short, Study 2 of the original paper has a severe endogeneity problem, which was pointed out quite early; e.g., on the first page of Avinash Dixit and Barry Nalebuff’s Thinking Strategically they explain clearly the problem of strategic adjustment (see it here: http://bit.ly/1eXxdI3 ). Scientifically speaking, this is why Study 4 is so important: it does not suffer from these issues. If you can show that there is no evidence of hot hand shooting in Study 4, it is reasonable to infer it doesn’t exist. This is also why much hay has been made of the no-effect result in the NBA 3-point study of Koehler & Conley (2003). When correcting for the bias, and looking at a lot more NBA 3-point data, we have also come to the opposite conclusion; see here: http://ssrn.com/abstract=2611987.

This is probably TMI, but if you have more curiosities with regard to whether we should be looking at game data to infer the *magnitude* of the hot hand effect, please see this link to a previous comment on a Gelman post: http://andrewgelman.com/2015/07/09/hey-guess-what-there-really-is-a-hot-hand/#comment-227641

best

-j

So the hot hand effect could be larger than observed in the data, partially offset by a shot-selection effect: players attempt lower-probability shots because they believe the hot hand makes them more likely to make those difficult shots.

---

Pro tip: referring to a study as a “study” doesn’t enhance your credibility.

---

As Mr. Johnson lays out in his table, there are 48 possibilities for the first three flips of the four coins. Of these flips, 24 are heads and 24 are tails. Heads are followed by tails 12 times and by heads 12 times, or 50%, not the 40.5% that Mr. Johnson suggests. The mistake in Johnson’s analysis is averaging the percentages across each of the 14 trials without weighting them for the number of heads in each trial. Johnson weights each of the trials equally, regardless of the number of heads, and thereby calculates an average of only 40.5% heads; he then incorrectly concludes that tails follow heads 60% of the time. This conclusion defies both logic and the mathematics of probability, as it would mean that heads also follow tails 60% of the time, and the sum would be 120% (rather than 100%), which is clearly impossible.

The Miller Sanjurjo “study” will be quickly dismissed and discredited.

---

“It does seem that once a person focuses on any particular effect, once he or she believes it to be nonzero, there’s a tendency to overrate its importance…”

This is right, and the specific concept you’re looking for is the Focusing Illusion. Here’s Kahneman’s overview of it:

---

If a player is stinking up the joint, he gets subbed out.

The audience, the box score, and SportVU only observe non-cold players.

---

How much greater is the individual game-to-game variance than predicted by the binomial model (taking the overall average percentage as p)? This data surely exists; is it collected for easy access anywhere? In fact, if the variance is close to np(1-p), that would indicate some process is cancelling out those other sources of variation.
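One way to run that check, sketched here with simulated rather than real box-score data (attempts per game held fixed for simplicity; all probabilities and counts are hypothetical illustration values):

```python
import random

random.seed(0)
N_SHOTS, N_GAMES = 15, 2000

def game_variance(p_per_game):
    """Sample variance of makes per game, given each game's true make prob."""
    makes = [sum(random.random() < p for _ in range(N_SHOTS)) for p in p_per_game]
    mean = sum(makes) / N_GAMES
    return sum((k - mean) ** 2 for k in makes) / (N_GAMES - 1)

p_bar = 0.46
binom_var = N_SHOTS * p_bar * (1 - p_bar)       # the np(1-p) benchmark

# Constant-ability shooter: variance should sit near the binomial benchmark.
const_var = game_variance([p_bar] * N_GAMES)

# Shooter whose true p swings game to game (hot/cold nights): with n shots,
# the game-to-game variance inflates by roughly n(n-1)*Var(p) above np(1-p).
swings = [random.choice([0.36, 0.56]) for _ in range(N_GAMES)]
swing_var = game_variance(swings)

print(binom_var, const_var, swing_var)
```

With these numbers the benchmark is about 3.7; the constant-p shooter lands near it, while the hot/cold shooter is visibly overdispersed, which is exactly the signal the question asks about.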

---

Nicely put. I think this gets back to other discussions about how we view science and what a published paper really means. Some people seem to take printed results as absolute truth and feel as though they must defend them. Others are more forthcoming about the uncertainty associated with their findings (e.g., evidence based upon a single sample, small sample, noisy samples, coding mistakes, etc.). When viewed from the latter perspective, new or contradictory results are not threatening but simply updating our understanding of the world. Scholars are going to make mistakes, particularly when studying complex things.

---

Gilovich et al. consider three scenarios: field goal shooting, free throw shooting, and shots in practice, which would seem to be in decreasing order of variance within players over time.

---

That is EXACTLY the finding in this paper: http://www.sloansportsconference.com/wp-content/uploads/2014/02/2014_SSAC_The-Hot-Hand-A-New-Approach.pdf

Once accounting for this and adjusting for variation in shot difficulty — which the papers Andrew cites above do not — they find evidence of a substantively large hot hand.

---

Besides all the variation caused by the different shots taken during a game (some difficult, some easy, forced by the opposition), it could also be that players who believe they have hot hands try more difficult shots, because they think they have a greater chance of success while hot. For example, a player may attempt 3-point shots when hot but never attempt them when not. The overall rate of success is the same, but more risky shots may be made.

---

Except one day. That day, for some reason, for a while, I had the rhythm and accuracy. I shot and got a basket, then another. I tried not to think about it, as I was convinced that it’d never work. I got another. By the fifth basket, my team didn’t even bother joining me in the opponents’ half, and I still scored. At that point I did think about it, pissed myself laughing, and didn’t score for the rest of the day.

I’m sure if you could control for all the physical and mental variation you’d disprove the hot hand, but I am also sure those factors influence shooting ability.

---

But now the NBA releases amazing data that uses camera tracking of every game to create a geo-coded dataset that accounts for the locations of all players, the game situation, etc., when each shot is taken. This allows researchers to actually account for key factors that change the underlying difficulty of the shots. If you use this data, there appears to be a large-magnitude hot hand:

http://www.sloansportsconference.com/wp-content/uploads/2014/02/2014_SSAC_The-Hot-Hand-A-New-Approach.pdf

“We then turn to the Hot Hand itself and show that players who are outperforming will continue to do so, conditional on the difficulty of their present shot. Our estimates of the Hot Hand effect range from 1.2 to 2.4 percentage points in increased likelihood of making a shot.”

I don’t know why this isn’t the paper Andrew is highlighting in his posts. It actually accounts for the in-game variation in situational difficulty of shots in a way that the older papers he’s discussing cannot.

---

If it is a tiny, subliminal effect, it becomes akin to asking whether a tree falling makes a sound when no one is around to listen.

---

This is just so reasonable that I don’t understand how this is still a point of debate. Fundamentally, if you’ve ever actually played basketball (and I have), you realize that the act of shooting a basketball is not of the same type as flipping a coin, drawing a card, spinning a roulette wheel, or pulling the lever of a slot machine. It’s a completely different category of activity. In one set of activities, the probability of success for each draw equals the average probability of success across a large N of draws. In the other, the probability of success for each draw has high variance, so that on any particular draw it need not equal the average probability of success across a large N of draws.

Shooting a basketball is a complicated physiological motion that you don’t do the same way each time. For a basketball shot, the probability of the shot going in on any one instance is not the probability of the shot going in on average. Why? Some days your legs are tired and you don’t jump; other days you jump and shoot better. Some days you are super nervous (perhaps it’s a tense, close-game situation); other days you are calm. Some days you are rushing your motion; other days you aren’t. Some days there’s a man right on you; some days there isn’t. All of these things affect the probability of the shot going in, creating high variance in probability across shots. None of this applies to rolling dice or flipping a coin: there the per-draw probability of the event is stable, and always equal to the average probability across many draws.
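A toy two-state simulation makes this concrete (the “hot”/“cold” probabilities and switching rate below are all hypothetical): when the per-shot probability drifts over time, makes cluster after makes, even though each shot is an independent draw given the current state.

```python
import random

random.seed(42)
P_HOT, P_COLD = 0.55, 0.40   # hypothetical per-shot make probabilities
STAY = 0.9                   # chance the player stays in the current state

state = "cold"
shots = []
for _ in range(200_000):
    if random.random() > STAY:  # occasionally switch between hot and cold
        state = "hot" if state == "cold" else "cold"
    shots.append(random.random() < (P_HOT if state == "hot" else P_COLD))

# Compare make rates conditional on the previous shot's outcome.
after_make = [b for a, b in zip(shots, shots[1:]) if a]
after_miss = [b for a, b in zip(shots, shots[1:]) if not a]
rate = lambda xs: sum(xs) / len(xs)
print(rate(after_make), rate(after_miss))  # higher make rate after a make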

Given that this is painfully obvious to anyone who has ever shot a basketball, what Andrew writes above just makes so much sense that it remains stunning to me that this is still a thing people debate. Yes, the hot hand almost surely exists. But, yes, it could be pretty small in magnitude even if it’s real. And yes, people make perceptual errors and claim to see the hot hand even when it doesn’t exist. But that they make those perceptual errors, and that it’s not an enormous effect, doesn’t mean it isn’t a real thing.
