Archive of posts filed under the Miscellaneous Statistics category.

Kaiser’s beef

The Numbersense guy writes in: Have you seen this? It has one of your pet peeves… let’s draw some data-driven line in the categorical variable and show significance. To make it worse, he adds a final paragraph saying essentially this is just a silly exercise that I hastily put together and don’t take it seriously! […]

Can talk therapy halve the rate of cancer recurrence? How to think about the statistical significance of this finding? Is it just another example of the garden of forking paths?

James Coyne (who we last encountered in the sad story of Ellen Langer) writes: I’m writing to you now about another matter about which I hope you will offer an opinion. Here is a critique of a study, as well as the original study that claimed to find an effect of group psychotherapy on time […]

The connection between varying treatment effects and the well-known optimism of published research findings

Jacob Hartog writes: I thought this article [by Hunt Allcott and Sendhil Mullainathan], although already a couple of years old, fits very well into the themes of your blog—in particular the idea that the “true” treatment effect is likely to vary a lot depending on all kinds of factors that we can and cannot observe, […]

My talk at MIT this Thursday

When I was a student at MIT, there was no statistics department. I took a statistics course from Stephan Morgenthaler and liked it. (I’d already taken probability and stochastic processes back at the University of Maryland; my instructor in the latter class was Prof. Grace Yang, who was super-nice. I couldn’t follow half of what […]

There’s something about humans

An interesting point came up recently. In the abstract to my psychology talk, I’d raised the question: If we can’t trust p-values, does experimental science involving human variation just have to start over? In the comments, Rahul wrote: Isn’t the qualifier about human variation redundant? If we cannot trust p-values we cannot trust p-values. My […]

There’s No Such Thing As Unbiased Estimation. And It’s a Good Thing, Too.

Following our recent post on econometricians’ traditional privileging of unbiased estimates, there were a bunch of comments echoing the challenge of teaching this topic, as students as well as practitioners often seem to want the comfort of an absolute standard such as best linear unbiased estimate or whatever. Commenters also discussed the tradeoff between bias […]

“The general problem I have with noninformatively-derived Bayesian probabilities is that they tend to be too strong.”

We interrupt our usual programming of mockery of buffoons to discuss a bit of statistical theory . . . Continuing from yesterday’s quotation of my 2012 article in Epidemiology: Like many Bayesians, I have often represented classical confidence intervals as posterior probability intervals and interpreted one-sided p-values as the posterior probability of a positive effect. […]
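The correspondence mentioned in that excerpt can be sketched numerically. Under a flat prior and a normal likelihood, the posterior probability of a positive effect equals one minus the classical one-sided p-value. The numbers below (estimate 1.2, standard error 0.8) are hypothetical, chosen only for illustration:

```python
from statistics import NormalDist

est, se = 1.2, 0.8  # hypothetical estimate and standard error
z = est / se

p_one_sided = 1 - NormalDist().cdf(z)     # classical one-sided p-value
post_prob_positive = NormalDist().cdf(z)  # flat-prior Pr(effect > 0 | data)

# The two quantities are complements: Pr(effect > 0 | data) = 1 - p
assert abs(post_prob_positive - (1 - p_one_sided)) < 1e-12
print(round(post_prob_positive, 3))  # → 0.933
```

This is exactly the sense in which a noninformative prior "converts" a one-sided p-value into a posterior probability; the point of the post is that such probabilities tend to be too strong.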

Good, mediocre, and bad p-values

From my 2012 article in Epidemiology: In theory the p-value is a continuous measure of evidence, but in practice it is typically trichotomized approximately into strong evidence, weak evidence, and no evidence (these can also be labeled highly significant, marginally significant, and not statistically significant at conventional levels), with cutoffs roughly at p=0.01 and 0.10. […]
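The trichotomy described in that excerpt, with cutoffs at roughly p = 0.01 and p = 0.10, can be written out as a short sketch (the function name and labels are mine, not from the article):

```python
def trichotomize(p):
    """Bin a p-value into the conventional three evidence categories,
    using the approximate cutoffs p = 0.01 and p = 0.10 from the excerpt."""
    if p < 0.01:
        return "strong evidence (highly significant)"
    elif p < 0.10:
        return "weak evidence (marginally significant)"
    else:
        return "no evidence (not statistically significant)"

for p in [0.004, 0.03, 0.25]:
    print(f"p = {p}: {trichotomize(p)}")
```

The point of the post, of course, is that this binning discards the continuous evidence the p-value was supposed to carry.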

Carl Morris: Man Out of Time [reflections on empirical Bayes]

I wrote the following for the occasion of his recent retirement party but I thought these thoughts might be of general interest: When Carl Morris came to our department in 1989, I and my fellow students were so excited. We all took his class. The funny thing is, though, the late 1980s might well have been […]

What’s the most important thing in statistics that’s not in the textbooks?

As I wrote a couple years ago: Statistics does not require randomness. The three essential elements of statistics are measurement, comparison, and variation. Randomness is one way to supply variation, and it’s one way to model variation, but it’s not necessary. Nor is it necessary to have “true” randomness (of the dice-throwing or urn-sampling variety) […]