
Handy statistical lexicon

These are all important methods and concepts related to statistics that are not as well known as they should be. I hope that by giving them names, we will make the ideas more accessible to people:

Mister P: Multilevel regression and poststratification.
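
A minimal sketch of the idea, using hypothetical numbers: partially pool noisy cell-level survey estimates (a crude shrinkage factor stands in here for the multilevel regression, which in practice would be fit in Stan or similar), then reweight the cells by their population sizes rather than their sample sizes.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical survey: binary responses from 4 demographic cells,
    # sampled very unevenly (the last cell is barely represented).
    true_p = np.array([0.30, 0.40, 0.50, 0.60])
    n_cell = np.array([120, 60, 15, 5])
    y_cell = [rng.binomial(1, p, size=n) for p, n in zip(true_p, n_cell)]

    # Stand-in for the multilevel regression: shrink each raw cell mean
    # toward the overall mean, with more shrinkage for smaller cells.
    # k is an assumed prior pseudo-sample size, not a fitted quantity.
    overall = np.concatenate(y_cell).mean()
    k = 10.0
    cell_est = np.array([(y.sum() + k * overall) / (len(y) + k) for y in y_cell])

    # Poststratification: weight the cell estimates by hypothetical census
    # counts, so the estimate reflects who lives in the population,
    # not who happened to be surveyed.
    census_n = np.array([4e6, 3e6, 2e6, 1e6])
    print("MRP-style estimate:", np.sum(cell_est * census_n) / census_n.sum())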

The Secret Weapon: Fitting a statistical model repeatedly on several different datasets and then displaying all these estimates together.
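
A sketch with simulated data: the same regression fit separately to each year's data, with the eleven slope estimates then displayed side by side (numpy and matplotlib assumed; any modeling package would do).

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(1)
    years = np.arange(2000, 2011)
    slopes, ses = [], []

    for t, year in enumerate(years):
        # Simulated dataset for this year; the true slope drifts upward over time.
        x = rng.normal(size=100)
        y = 0.5 + (0.2 + 0.05 * t) * x + rng.normal(size=100)

        # Ordinary least squares on this year's data only.
        X = np.column_stack([np.ones_like(x), x])
        beta, rss, _, _ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = rss[0] / (len(y) - 2)
        cov = sigma2 * np.linalg.inv(X.T @ X)
        slopes.append(beta[1])
        ses.append(np.sqrt(cov[1, 1]))

    # The "secret weapon" display: all the estimates together, +/- 1 standard error.
    plt.errorbar(years, slopes, yerr=ses, fmt="o")
    plt.xlabel("year")
    plt.ylabel("estimated slope")
    plt.show()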

The Superplot: Line plot of estimates in an interaction, with circles showing group sizes and a line showing the regression of the aggregate averages.

The Folk Theorem: When you have computational problems, often there’s a problem with your model.

The Pinch-Hitter Syndrome: People whose job it is to do just one thing are not always so good at that one thing.

Weakly Informative Priors: What you should be doing when you think you want to use noninformative priors.
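
A toy illustration of why, assuming a tiny logistic-regression example with complete separation: under a flat "noninformative" prior the slope runs off to infinity, while a weakly informative normal(0, 2.5) prior keeps it in a plausible range (grid approximation with numpy/scipy; the data and priors here are made up).

    import numpy as np
    from scipy.stats import norm

    # Tiny separated dataset: x perfectly predicts y, so the maximum-likelihood
    # logistic slope is infinite.
    x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
    y = np.array([0, 0, 0, 1, 1, 1])

    beta = np.linspace(-20, 20, 4001)     # grid over the slope (intercept fixed at 0)
    z = np.outer(beta, x)
    log_lik = (y * (-np.logaddexp(0, -z)) + (1 - y) * (-np.logaddexp(0, z))).sum(axis=1)

    def posterior_mean(log_prior):
        w = np.exp(log_lik + log_prior - (log_lik + log_prior).max())
        return np.sum(beta * w) / np.sum(w)

    flat = np.zeros_like(beta)            # flat "noninformative" prior
    weak = norm.logpdf(beta, scale=2.5)   # weakly informative prior

    print("flat prior:", posterior_mean(flat))  # piles up against the grid boundary
    print("weak prior:", posterior_mean(weak))  # stays in a plausible range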

P-values and U-values: They’re different.

Conservatism: In statistics, the desire to use methods that have been used before.

WWJD: What I think of when I’m stuck on an applied statistics problem.

Theoretical and Applied Statisticians, how to tell them apart: A theoretical statistician calls the data x, an applied statistician says y.

The Fallacy of the One-Sided Bet: Pascal’s wager, lottery tickets, and the rest.

Alabama First: Howard Wainer’s term for the common error of plotting in alphabetical order rather than based on some more informative variable.

The USA Today Fallacy: Counting all states (or countries) equally, forgetting that many more people live in larger jurisdictions, and so you’re ignoring millions and millions of Californians if you give their state the same space you give Montana and Delaware.

Second-Order Availability Bias: Generalizing from correlations you see in your personal experience to correlations in the population.

The “All Else Equal” Fallacy: Assuming that everything else is held constant, even when it’s not gonna be.

The Self-Cleaning Oven: A good package should contain the means of its own testing.

The Taxonomy of Confusion: What to do when you’re stuck.

The Blessing of Dimensionality: It’s good to have more data, even if you label this additional information as “dimensions” rather than “data points.”

Scaffolding: Understanding your model by comparing it to related models.

Ockhamite Tendencies: The irritating habit of trying to get other people to use oversimplified models.

Bayesian: A statistician who uses Bayesian inference for all problems even when it is inappropriate. I am a Bayesian statistician myself.

Multiple Comparisons: Generally not an issue if you’re doing things right but can be a big problem if you sloppily model hierarchical structures non-hierarchically.

Taking a model too seriously: Really just another way of not taking it seriously at all.

God is in every leaf of every tree: No problem is too small or too trivial if we really do something about it.

As they say in the stagecoach business: Remove the padding from the seats and you get a bumpy ride.

Story Time: When the numbers are put to bed, the stories come out.

The Foxhole Fallacy: There are no X’s in foxholes (where X = people who disagree with me on some issue of faith).

The Pinocchio Principle: A model that is created solely for computational reasons can take on a life of its own.

The Statistical Significance Filter: If an estimate is statistically significant, it’s probably an overestimate.
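
A quick simulation of the point, with made-up numbers: estimates of a small true effect are unbiased overall, but conditional on reaching statistical significance they are badly exaggerated.

    import numpy as np

    rng = np.random.default_rng(2)

    true_effect, se = 0.2, 0.5                       # small effect, noisy measurement
    est = rng.normal(true_effect, se, size=100_000)  # unbiased estimates
    significant = np.abs(est / se) > 1.96            # the significance filter

    print("mean of all estimates:        ", round(est.mean(), 2))               # about 0.2
    print("mean of significant estimates:", round(est[significant].mean(), 2))  # far larger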

Arrow’s Other Theorem (weak form): Any result can be published no more than five times.

Arrow’s Other Theorem (strong form): Any result will be published five times.

The Ramanujan Principle: Tables are read as crude graphs.

The Paradox of Philosophizing: If philosophy is outlawed, only outlaws will do philosophy.

Defaults: What statistics is the science of.

The Methodological Attribution Problem: The many useful contributions of a good statistical consultant, or collaborator, will often be overly attributed to the statistician’s methods or philosophy.

The Chris Rock Effect: Some graphs give the pleasant feature of visualizing things we already knew, shown so well that we get a shock of recognition, the joy of relearning what we already know, but seeing it in a new way that makes us think more deeply about all sorts of related topics.

The Freshman Principle: Just because a freshman might raise a question, that does not make the issue irrelevant.

The Garden of Forking Paths: Multiple comparisons can be a problem, even when there is no “fishing expedition” or “p-hacking” and the research hypothesis was posited ahead of time.
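
A stylized simulation of the statistical consequence, with all numbers made up: the hypothesis is fixed in advance and only one test is ever run per study, but which of three reasonable outcome measures gets tested is decided after looking at the data, and the false-positive rate climbs well above the nominal 5%.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n, n_sims, n_outcomes = 50, 5000, 3
    false_positives = 0

    for _ in range(n_sims):
        # Null world: the treatment does nothing for any of the 3 outcome measures.
        treat = rng.normal(size=(n, n_outcomes))
        control = rng.normal(size=(n, n_outcomes))

        # Data-dependent path: test the outcome where the effect "looks clearest".
        k = np.argmax(np.abs(treat.mean(axis=0) - control.mean(axis=0)))
        _, p = stats.ttest_ind(treat[:, k], control[:, k])
        false_positives += p < 0.05

    print("nominal 5% test, actual false-positive rate:", false_positives / n_sims)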

The One-Way Street Fallacy: Considering only one possibility of a change that can go in either direction.

The Pluralist’s Dilemma: How to recognize that my philosophy is just one among many, that my own embrace of this philosophy is contingent on many things beyond my control, while still expressing the reasons why I believe this philosophy to be preferable to the alternatives (at least for the problems I work on).

I know there are a bunch I’m forgetting; can you all refresh my memory, please? Thanks.

P.S. No, I don’t think I can ever match Stephen Senn in the definitions game.

Comments

  1. marcel says:

    In WWJD, you say, "My quick answer is, Yeah, I think it would be excellent for an econometrics class if the students have applied interests. Probably I'd just go through chapter 10 (regression, logistic regression, glm, causal inference), with the later parts being optimal."

    So just skip the earlier parts?

  2. Andrew Gelman says:

    Marcel: When I say "through chapter 10," I mean, "from chapters 1 through 10." And in the last sentence above, I meant "optional," not "optimal." I'll fix that.

  3. jonathan says:

    Mister P, huh? Isn't that reflective of the old male dominant paradigm?

  4. Ken Williams says:

    I'm not grokking what "WWJD" stands for. "What Would Jennifer Do"?

  5. […] There’s something that fascinates me about this aggressive anti-Bayesians: it’s not enough for them to simply restrict their own practice to non-Bayesian methods; they have to go the next step and put down Bayesian methods that they don’t even understand. This topic comes up from time to time on this blog, for example in discussing the uninformed rants of David Hendry (“I don’t know why he did this, but maybe it’s part of some fraternity initiation thing, like TP-ing the dean’s house on Halloween”), John DiNardo (“if philosophy is outlawed, only outlaws will do philosophy”), and various others (the Foxhole Fallacy). […]

  6. […] data, which is the #1 goal of an infographic—if it does work, it’s doing so using the Chris Rock effect, in which we enjoy the shock of recognition of a familiar idea presented in an unfamiliar […]
