Archive of posts filed under the Miscellaneous Statistics category.

Hot hand explanation again

I guess people really do read the Wall Street Journal . . . Edward Adelman sent me the above clipping and calculation and writes: What am I missing? I do not see the 60%. And Richard Rasiej sends me a longer note making the same point: So here I am, teaching another statistics class, this […]

How to use lasso etc. in political science?

Tom Swartz writes: I am a graduate student at Oxford with a background in economics and on the side am teaching myself more statistics and machine learning. I’ve been following your blog for some time and recently came across this post on lasso. In particular, the more I read about the machine learning community, the […]

Low-power pose

“The samples were collected in privacy, using passive drool procedures, and frozen immediately.” Anna Dreber sends along a paper, “Assessing the Robustness of Power Posing: No Effect on Hormones and Risk Tolerance in a Large Sample of Men and Women,” which she published in Psychological Science with coauthors Eva Ranehill, Magnus Johannesson, Susanne Leiberg, Sunhae […]

What was the worst statistical communication experience you’ve ever had?

In one of the jitts for our statistical communication class we asked, “What was the worst statistical communication experience you’ve ever had?” And here were the responses (which I’m sharing with permission from the students): Not sure if this counts, but I used to work with a public health researcher who published a journal article […]

“I do not agree with the view that being convinced an effect is real relieves a researcher from statistically testing it.”

Florian Wickelmaier writes: I’m writing to tell you about my experiences with another instance of “the difference between significant and not significant.” In a lab course, I came across a paper by Costa et al. [Cognition 130 (2) (2014) 236-254]. In several experiments, they compare the effects in two two-by-two tables by comparing the […]

“The frequentist case against the significance test”

Richard Morey writes: I suspect that like me, many people didn’t get a whole lot of detail about Neyman’s objections to the significance test in their statistical education besides “Neyman thought power is important”. Given the recent debate about significance testing, I have gone back to Neyman’s papers and tried to summarize, for the modern […]


Leonid Schneider writes: I am a cell biologist turned science journalist after 13 years in academia. Despite my many years of experience as a scientist, I shamefully admit to being largely incompetent in statistics. My request to you is as follows: A soon-to-be-published psychology study set out to reproduce 100 randomly picked earlier studies and […]


Reflecting on the recent psychology replication study (see also here), journalist Megan McArdle writes an excellent column on why we fall for bogus research: The problem is not individual research papers, or even the field of psychology. It’s the way that academic culture filters papers, and the way that the larger society gets their results. […]

P-values and statistical practice

What is a p-value in practice? The p-value is a measure of discrepancy of the fit of a model or “null hypothesis” H to data y. In theory the p-value is a continuous measure of evidence, but in practice it is typically trichotomized approximately into strong evidence, weak evidence, and no evidence (these can also […]
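The excerpt’s idea — a p-value as a tail-area measure of discrepancy that practice then collapses into three bins — can be sketched in a few lines. This is a minimal illustration, not from the post itself; the simulation setup and the cutoff values (0.05 and 0.10) are my assumptions, chosen only because they match the conventional “significant / marginally significant / not significant” labels the post mentions.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_value(y, null_sim, stat, n_sim=10_000):
    """Tail probability of the observed statistic under a simulated null model."""
    t_obs = stat(y)
    t_null = np.array([stat(null_sim(rng)) for _ in range(n_sim)])
    return np.mean(t_null >= t_obs)

def trichotomize(p):
    # Illustrative cutoffs only; the post names the labels, not these numbers.
    if p < 0.05:
        return "strong evidence (statistically significant)"
    if p < 0.10:
        return "weak evidence (marginally significant)"
    return "no evidence (not significant)"

# Toy example: is the mean of y discrepant from a N(0, 1) null?
y = rng.normal(0.5, 1, size=50)
p = p_value(y, lambda g: g.normal(0, 1, size=50), lambda x: abs(x.mean()))
print(p, trichotomize(p))
```

The continuous number `p` carries the evidence; `trichotomize` is exactly the lossy step the excerpt describes happening in practice.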

To understand the replication crisis, imagine a world in which everything was published.

John Snow points me to this post by psychology researcher Lisa Feldman Barrett who reacted to the recent news on the non-replication of many psychology studies with a contrarian, upbeat take, entitled “Psychology Is Not in Crisis.” Here’s Barrett: An initiative called the Reproducibility Project at the University of Virginia recently reran 100 psychology experiments […]

Uri Simonsohn warns us not to be falsely reassured

I agree with Uri Simonsohn that you don’t learn much by looking at the distribution of all the p-values that have appeared in some literature. Uri explains: Most p-values reported in most papers are irrelevant for the strategic behavior of interest. Covariates, manipulation checks, main effects in studies testing interactions, etc. Including them we underestimate […]

My 2 classes this fall

Stat 6103, Bayesian Data Analysis Modern Bayesian methods offer an amazing toolbox for solving science and engineering problems. We will go through the book Bayesian Data Analysis and do applied statistical modeling using Stan, using R (or Python or Julia if you prefer) to preprocess the data and postprocess the analysis. We will also discuss […]

Neither time nor stomach

Mark Palko writes: Thought you might be interested in an EngageNY lesson plan for statistics. So far no (-2)x(-2) = -4 (based on a quick read), but still kind of weak. It bothers me that they keep talking about randomization but only for order of test; they assigned treatment A to the first ten of […]

Dan Kahan doesn’t trust the Turk

Dan Kahan writes: I [Kahan] think serious journals should adopt policies announcing that they won’t accept studies that use M Turk samples for types of studies they are not suited for. . . . Here is my proposal: Pending a journal’s adoption of a uniform policy on M Turk samples, the journal should oblige […]

If you leave your datasets sitting out on the counter, they get moldy

I received the following in the email: I had a look at the dataset on speed dating you put online, and I found some big inconsistencies. Since a lot of people are using it, I hope this can help to fix them (or hopefully I made a mistake in interpreting the dataset). Here are the […]

“We can keep debating this after 11 years, but I’m sure we all have much more pressing things to do (grants? papers? family time? attacking 11-year-old papers by former classmates? guitar practice?)”

Someone pointed me to this discussion by Lior Pachter of a controversial claim in biology. The statistics The statistical content has to do with a biology paper by M. Kellis, B. W. Birren, and E.S. Lander from 2004 that contains the following passage: Strikingly, 95% of cases of accelerated evolution involve only one member of […]

Ira Glass asks. We answer.

The celebrated radio quiz show star says: There’s this study done by the Pew Research Center and Smithsonian Magazine . . . they called up one thousand and one Americans. I do not understand why it is a thousand and one rather than just a thousand. Maybe a thousand and one just seemed sexier or […]

Measurement is part of design

The other day, in the context of a discussion of an article from 1972, I remarked that the great statistician William Cochran, when writing on observational studies, wrote almost nothing about causality, nor did he mention selection or meta-analysis. It was interesting that these topics, which are central to any modern discussion of observational studies, […]

Survey weighting and regression modeling

Yphtach Lelkes points us to a recent article on survey weighting by three economists, Gary Solon, Steven Haider, and Jeffrey Wooldridge, who write: We start by distinguishing two purposes of estimation: to estimate population descriptive statistics and to estimate causal effects. In the former type of research, weighting is called for when it is needed […]
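The first purpose Solon, Haider, and Wooldridge distinguish — weighting to estimate population descriptive statistics — can be shown with a toy example. This sketch is my own illustration, not from their article: a hypothetical sample in which one group is oversampled, so the unweighted mean is biased and weights proportional to population share over sample share recover the population quantity.

```python
import numpy as np

# Hypothetical population: a 50/50 split of groups A and B.
# Hypothetical sample: 80/20, so group A is oversampled.
rng = np.random.default_rng(1)
y_a = rng.normal(10, 1, size=800)   # group A, oversampled
y_b = rng.normal(20, 1, size=200)   # group B, undersampled
y = np.concatenate([y_a, y_b])

# Survey weight for each unit: population share / sample share of its group.
w = np.concatenate([np.full(800, 0.5 / 0.8), np.full(200, 0.5 / 0.2)])

unweighted = y.mean()                 # pulled toward group A, near 12
weighted = np.average(y, weights=w)   # near the population mean of 15
print(unweighted, weighted)
```

For the second purpose, causal effects, their point is precisely that this logic does not automatically carry over, which is what makes the distinction worth drawing.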

Don’t do the Wilcoxon

The Wilcoxon test is a nonparametric rank-based test for comparing two groups. It’s a cool idea because, if data are continuous and there is no possibility of a tie, the reference distribution depends only on the sample size. There are no nuisance parameters, and the distribution can be tabulated. From a Bayesian point of view, […]
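The excerpt’s key property — with continuous data and no ties, the reference distribution depends only on the sample sizes — can be verified by brute force: under the null, every assignment of ranks to the first group is equally likely, whatever the data values are. A minimal enumeration (my own sketch, not from the post):

```python
from itertools import combinations

def rank_sum_null(n, m):
    """Exact null distribution of the rank sum of the first group (size n)
    when the pooled sample has no ties: each n-subset of the ranks
    1..n+m is equally likely, so the table never involves the data values."""
    N = n + m
    sums = [sum(c) for c in combinations(range(1, N + 1), n)]
    dist = {}
    for s in sums:
        dist[s] = dist.get(s, 0) + 1
    total = len(sums)
    return {s: count / total for s, count in sorted(dist.items())}

# With n = m = 3 the rank sum runs from 1+2+3 = 6 to 4+5+6 = 15,
# and this same table applies to *any* continuous dataset of these sizes.
print(rank_sum_null(3, 3))
```

This is why the distribution could be tabulated once and for all, with no nuisance parameters, which is the “cool idea” the post then goes on to examine from a Bayesian point of view.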