From my 2012 article in Epidemiology: In theory the p-value is a continuous measure of evidence, but in practice it is typically trichotomized approximately into strong evidence, weak evidence, and no evidence (these can also be labeled highly significant, marginally significant, and not statistically significant at conventional levels), with cutoffs roughly at p=0.01 and 0.10. […]
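The three-way classification described in that excerpt can be sketched in a few lines. This is a hypothetical helper, not anything from the article itself; the cutoffs p = 0.01 and p = 0.10 are the ones quoted above.

```python
def trichotomize(p):
    """Map a p-value to the conventional three-way evidence label,
    using the rough cutoffs p = 0.01 and p = 0.10 from the passage above."""
    if p < 0.01:
        return "strong evidence (highly significant)"
    elif p < 0.10:
        return "weak evidence (marginally significant)"
    else:
        return "no evidence (not statistically significant)"

print(trichotomize(0.003))  # strong evidence (highly significant)
print(trichotomize(0.04))   # weak evidence (marginally significant)
print(trichotomize(0.25))   # no evidence (not statistically significant)
```

Of course, the whole point of the article is that this trichotomization discards the continuous information in p, which is why it is "in practice" rather than "in theory."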


## Carl Morris: Man Out of Time [reflections on empirical Bayes]

I wrote the following for the occasion of his recent retirement party, but I thought these thoughts might be of general interest: When Carl Morris came to our department in 1989, I and my fellow students were so excited. We all took his class. The funny thing is, though, the late 1980s might well have been […]

## What’s the most important thing in statistics that’s not in the textbooks?

As I wrote a couple years ago: Statistics does not require randomness. The three essential elements of statistics are measurement, comparison, and variation. Randomness is one way to supply variation, and it’s one way to model variation, but it’s not necessary. Nor is it necessary to have “true” randomness (of the dice-throwing or urn-sampling variety) […]

## Statistical analysis on a dataset that consists of a population

This is an oldie but a goodie. Donna Towns writes: I am wondering if you could help me solve an ongoing debate? My colleagues and I are discussing (disagreeing) on the ability of a researcher to analyze information on a population. My colleagues are sure that a researcher is unable to perform statistical analysis on […]

## Statistical significance, practical significance, and interactions

I’ve said it before and I’ll say it again: interaction is one of the key underrated topics in statistics. I thought about this today (OK, a couple months ago, what with our delay) when reading a post by Dan Kopf on the exaggeration of small truths. Or, to put it another way, statistically significant but […]

## Go to PredictWise for forecast probabilities of events in the news

I like it. Clear, transparent, no mumbo jumbo about their secret sauce. But . . . what’s with the hyper-precision: C’mon. “27.4%”? Who are you kidding?? (See here for explication of this point.)
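To see why "27.4%" is spurious precision, consider the sampling uncertainty behind any such estimate. A hedged illustration (the sample size n = 1000 is purely hypothetical, not anything PredictWise reports): even under that generous assumption, the binomial standard error alone swamps the reported 0.1-percentage-point resolution.

```python
import math

# Hypothetical: suppose the 27.4% figure were backed by n = 1000
# independent observations. The binomial standard error of the
# estimated proportion is sqrt(p*(1-p)/n).
p_hat, n = 0.274, 1000
se = math.sqrt(p_hat * (1 - p_hat) / n)
print(f"estimate: {p_hat:.1%} +/- {se:.1%}")  # estimate: 27.4% +/- 1.4%
```

An uncertainty of a point or more makes the last digit of "27.4%" pure noise, and real-world forecast uncertainty (model error, nonstationarity) only makes it worse.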

## Perhaps the most contextless email I’ve ever received

Date: February 3, 2015 at 12:55:59 PM EST Subject: Sample Stats Question From: ** Hello, I hope all is well and trust that you are having a great day so far. I hate to bother you but I have a stats question that I need help with: How can you tell which group has the […]

## A message I just sent to my class

I wanted to add some context to what we talked about in class today. Part of the message I was sending was that there are some stupid things that get published and you should be careful about that: don’t necessarily believe something, just cos it’s statistically significant and published in a top journal. And, sure, […]

## “For better or for worse, academics are fascinated by academic rankings . . .”

I was asked to comment on a forthcoming article, “Statistical Modeling of Citation Exchange Among Statistics Journals,” by Cristiano Varin, Manuela Cattelan and David Firth. Here’s what I wrote: For better or for worse, academics are fascinated by academic rankings, perhaps because most of us reached our present positions through a series of tournaments, starting […]

## But when you call me Bayesian, I know I’m not the only one

Textbooks on statistics emphasize care and precision, via concepts such as reliability and validity in measurement, random sampling and treatment assignment in data collection, and causal identification and bias in estimation. But how do researchers decide what to believe and what to trust when choosing which statistical methods to use? How do they decide the […]