High five: “Now if it is from 2010, I think we can make all sorts of assumptions about the statistical methods without even looking.”

Eric Tassone writes:

Have you seen this? “Suns Tracking High Fives to Measure Team Camaraderie.” Key passage:

Although this might make basketball analytic experts scoff, there is actually some science behind the theory.

Dacher Keltner, Professor of Psychology at UC Berkeley, in 2015 took one game of every NBA team at the start of the year and coded all of the fist bumps, embraces and high fives.

“Controlling for how much money they’re making, the expectations that they would do well during that season, how well they were doing in that game,” Keltner said. “Not only did they win more games but there’s really nice basketball statistics of how selfless the play is.”

Keltner found that the teams that made more contact with each other were helping out more on defense, setting more screens, and overall playing more efficiently and cooperatively.

The Suns’ tracking of high fives and the like has been in the news this week. I tried to find recent publications involving Keltner on this topic, and maybe I missed them, but so far I’ve only found this from 2010 and some press coverage from roughly 2010 and 2011. Google Scholar also doesn’t seem to know of any recent NBA-related publications by Keltner on the topic. But there is a two-minute YouTube video (the “Do High Fives Help Sports Teams Win?” from the subject line of this email) from Sept. 2015, on what appears to be a YouTube channel affiliated with the University of California, Berkeley.

(YouTube suggested this—“NBA’s Top 10 Missed High Fives”—as the next video for me to watch! (link) Boom!)

I sent this along to Josh Miller, who wrote:

Sure is interesting.

I’d bet the perfunctory low-fives after a missed free throw don’t predict much.

The thing I’d worry about here is reverse causality; do they address that?


I didn’t read the paper (yet), but my take on the short YouTube video is that it mostly elides the causal issue, while giving some hints that it’s ‘high fives cause winning/good play,’ as opposed to ‘winning/good play causes high fives,’ or some other alternative. But maybe I should watch it again. (It’s not unlike the admittedly complex case in baseball, where you might argue that winning causes high payrolls, as opposed to the conventional wisdom that it’s the other way around.)


Just took a quick look: It says he studied only one game at “the start of the year” as a predictor for an *entire* season, so in that case reverse causality shouldn’t be too much of an issue. It also says he controlled for salary, team expectations (not sure how; betting odds?), and how well they were doing in that particular game. Now the crucial issue is that it was only 2015 data and there are only 30 teams. Somehow I am skeptical. I bet he has some degrees of freedom in his controls.
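To get a feel for how little it takes to find a “significant” predictor with only 30 teams, here’s a quick simulation. This is purely illustrative and not the paper’s analysis: the outcome is pure noise, and the number of candidate specifications an analyst might try (10) is invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_teams = 30        # one observation per NBA team
n_sims = 2000
n_candidates = 10   # hypothetical analyst degrees of freedom

# two-sided t critical value at alpha = 0.05 with df = n_teams - 2 = 28
t_crit = 2.048

hits = 0
for _ in range(n_sims):
    wins = rng.normal(size=n_teams)          # pure-noise "season outcome"
    for _ in range(n_candidates):
        x = rng.normal(size=n_teams)         # predictor unrelated to wins
        r = np.corrcoef(x, wins)[0, 1]
        t = r * np.sqrt((n_teams - 2) / (1 - r ** 2))
        if abs(t) > t_crit:                  # "significant" purely by luck
            hits += 1
            break

print(f"share of simulations with a 'significant' predictor: {hits / n_sims:.0%}")
```

With 10 independent tries at the 5% level you’d expect roughly 1 − 0.95^10 ≈ 40% of simulations to turn up at least one “significant” correlate of a season that is, by construction, pure noise.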

Even so, let’s say it is robust and true every year, given his controls. If this measure of camaraderie has additional predictive power beyond his controls, is it camaraderie per se, or is it excitement about insider information that betting markets don’t yet have? How good are his controls for expectations and performance in that game?

I just googled; it looks like the paper is from 2010, not 2015:

And Eric did provide the link to a version of the paper. We can probably assume the published version isn’t much different.

Now if it is from 2010, I think we can make all sorts of assumptions about the statistical methods without even looking. I browsed quickly. Andrew, you would have a field day with this paper! Higher-paid players touch each other more, because, you know, status! But seriously, it’s not in Science, Nature, or PPNAS, so it’s a bit too easy to tear this apart on your blog, no? The only way I see it working is if you do a self-aware post with a picture of some fish swimming around in a barrel.

I spent just 2-5 min reading how they coded the data and analyzed it. They control for some obvious confounds, but one at a time, and not all at once, and then pile on one significant p-value after another, in an accumulating evidence sort of way, even though each test has an obvious confound. At the end they do perform an all-at-once regression, but there are bigger issues, aside from the fact that the estimates are not precise (remember, we have just 30 teams; chasing noise, anyone?). The big problem (among many) is measurement error: e.g., they control for expected team performance during the season using a binary variable, 1 = some executive thinks they will make the playoffs or something, and -1 = they don’t. No mention of using betting odds, and no mention that some of these games are already 2 months into the season, so camaraderie could reflect the team chemistry and performance up to that point (they control for team performance only in a single game); so yes, reverse causality is an issue.
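The measurement-error point can be made concrete with a small simulation, again purely illustrative and with made-up numbers: suppose continuous team expectations drive both touching and winning, but the analyst controls only for a coarsened +1/−1 version of expectations. The residual confounding leaves a spurious positive “touch effect.”

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30_000  # large n so the result reflects bias, not sampling noise

expectations = rng.normal(size=n)              # true (continuous) confound
touch = expectations + rng.normal(size=n)      # touching tracks expectations
wins = expectations + rng.normal(size=n)       # winning tracks expectations
binary_ctrl = np.where(expectations > 0, 1.0, -1.0)  # coarse +1/-1 control

# regression of wins on touch, controlling only for the binary proxy
X = np.column_stack([np.ones(n), touch, binary_ctrl])
beta = np.linalg.lstsq(X, wins, rcond=None)[0]
print(f"touch coefficient, binary control: {beta[1]:.3f}")

# same regression controlling for the true continuous confound
X_true = np.column_stack([np.ones(n), touch, expectations])
beta_true = np.linalg.lstsq(X_true, wins, rcond=None)[0]
print(f"touch coefficient, true control:   {beta_true[1]:.3f}")
```

With the true confound in the model the touch coefficient is essentially zero; with only the coarse binary proxy it stays clearly positive (around 0.27 under these made-up variances), even though touch has no causal effect here at all.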

There is no way to replicate their analysis without their data because they don’t say which games they coded and they don’t give precise details about their coding procedure.


I also noticed the researcher in question, Dacher Keltner, apparently had something to do with the Oscar-winning film “Inside Out” (!!!).


  1. zbicyclist says:

    A “handy” way to improve team performance :)

  2. Terry says:

    They control for some obvious confounds, but one at a time, and not all at once, and then pile on one significant p-value after another, in an accumulating evidence sort of way, even though each test has an obvious confound.

    Nifty little significance generator they’ve got there. Sounds like it could turn almost any data set with some healthy correlations into a “publishable” paper.

    What do you think the odds are that the authors know this?

  3. Eric Tassone says:

    Deadspin released a video on 23 October 2017 about Dr. Keltner’s more recent consulting work in this domain with the NBA champions, the Golden State Warriors:

    • Eric

Whoa. Well, I love the Warriors, so if he’s consulting for them and is on panels with the Dalai Lama, then whatever works.

      The interviewer in the video looked dubious the entire time. Her facial expressions were hilarious. I like how it ended: “You know this is sounding less like bulls**t.”

      My prior is that touch is important for team chemistry, but this evidence didn’t move it. Then there was the way the abstract ended:

      Consistent with hypotheses, early season touch predicted greater performance for individuals as well as teams later in the season. Additional analyses confirmed that touch predicted improved performance even after accounting for player status, preseason expectations, and early season performance. Moreover, coded cooperative behaviors between teammates explained the association between touch and team performance. Discussion focused on the contributions touch makes to cooperative groups and the potential implications for other group settings.

      This is too much.
