John Cook writes:
When I hear someone say “personalized medicine” I want to ask “as opposed to what?”
All medicine is personalized. If you are in an emergency room with a broken leg and the person next to you is lapsing into a diabetic coma, the two of you will be treated differently.
The aim of personalized medicine is to increase the degree of personalization, not to introduce personalization. . . .
This to me is a statistical way of thinking, to change an “Is it or isn’t it?” question into a “How much?” question. This distinction arises in many settings but particularly in discussions of causal inference, for example here and here, where I use the “statistical thinking” approach of imagining everything as being on some continuous scale, in contrast to computer scientist Elias Bareinboim and psychology researcher Steven Sloman, both of whom prefer what might be called the “civilian” or “common sense” idea that effects are either real or not, or that certain data can or can’t be combined, etc.
My preference for continuous models is closely connected to the partial pooling of Bayesian inference, but I don’t think I have this attitude because I’m a Bayesian. [Hey, look, when you're not paying attention, you slip into non-statistical discrete thinking! -- ed. Yup, guilty as charged. -- AG.] To put this more formally, I think that my training and experience with Bayesian methods has reinforced my preference for continuity, but in turn my taste in modeling has affected what methods I use. [There you go again! -- ed. I know, I know, I can't help it, thinking like a human. -- AG.] After all, there are lots of discrete Bayesian models out there (Bayes factors, etc.) and I don’t like them—in fact, I have a problem with their discreteness.
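The continuity of partial pooling can be sketched in a few lines. This is a minimal illustration, not anyone's production method: the groups, the within-group sd `sigma_y`, and the between-group sd `tau` are all made-up values, and the precision-weighted formula is the standard normal-normal shrinkage estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: three groups with very different sample sizes.
group_sizes = [3, 10, 50]
true_means = [1.0, 0.0, -0.5]
groups = [rng.normal(mu, 1.0, size=n) for mu, n in zip(true_means, group_sizes)]

grand_mean = np.mean(np.concatenate(groups))
sigma_y = 1.0  # assumed within-group sd (invented for this sketch)
tau = 0.5      # assumed between-group sd (a modeling choice)

for y in groups:
    n = len(y)
    # Precision-weighted compromise between the group's own mean and the
    # grand mean: the estimate moves continuously between the two extremes,
    # with more data pulling it toward the group's own mean.
    w = (n / sigma_y**2) / (n / sigma_y**2 + 1 / tau**2)
    pooled = w * y.mean() + (1 - w) * grand_mean
    print(n, round(y.mean(), 2), round(pooled, 2))
```

The point is that no group is classified as "different" or "not different" from the others; every estimate is shrunk by some continuous amount.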
Another example is the use of a numerical measure rather than a yes/no summary (for example, a depression inventory scale rather than a cutoff yielding the distinction “is or is not depressed”).
Or consider decision making. Lots of theory and evidence, from Paul Meehl onward, suggests that people tend to think lexicographically (making decisions by first considering factor A, then using factor B to break the tie if necessary, then using factor C to break the tie after that, and so on) rather than continuously (for example, constructing a numerical weighted average of A, B, C, etc.). Sure, lexicographic rules are clean and easy to understand, and there are settings where they can approximate or even outperform a weighted average, but we can also get the reverse, a lexicographic rule that’s a complicated mess (see, for example, Figure 6 of this article).
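The two styles of rule can be contrasted in a toy sketch (the candidates, factor values, and weights below are all invented for illustration):

```python
# Each candidate is scored on three factors (A, B, C).
candidates = {
    "x": (3, 1, 9),
    "y": (3, 2, 0),
    "z": (2, 9, 9),
}

# Lexicographic rule: compare on A first; only on a tie move to B, then C.
# Python tuples already compare lexicographically, so max() does this directly.
lex_choice = max(candidates, key=lambda k: candidates[k])

# Continuous rule: a weighted average that considers all factors at once.
weights = (0.5, 0.3, 0.2)

def score(vals):
    return sum(w * v for w, v in zip(weights, vals))

avg_choice = max(candidates, key=lambda k: score(candidates[k]))

print(lex_choice, avg_choice)  # the two rules pick different candidates here
```

Here the lexicographic rule picks "y" (it wins the tie on factor A via factor B and never looks at its terrible factor C), while the weighted average picks "z", whose strength on B and C outweighs its slightly lower A.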
And yet another example is our acceptance of uncertainty. One of the big themes of statistics is that we should be more comfortable admitting what we don’t know, and one of the big problems with many statistical methods as they are applied in practice is that they are taken as a way of denying uncertainty. For example, you conduct an experiment, analyze your data, and conclude the results are statistically significant, or not. The implied (and often explicitly stated) conclusion is that the effect is real, or it is not.
To put it another way, I have two problems with the formulation of statistical tests and conclusions as “true positive,” “true negative,” “false positive,” “false negative.” Here are my problems:
1. “true”/“false”: In almost all cases of interest, I don’t think the underlying claim is true or false (at least, not in a way that can be directly mapped into a particular statistical model of zero effect, as is generally done).
2. “positive”/“negative”: To me, this one’s the biggie. Even if you were to accept the idea that the null hypothesis might be true, I don’t think it’s a good idea to summarize scientific conclusions in this yes/no, significant/not-significant way. Sure, sometimes you really have to make a decision (apply policy A or policy B), but that’s a decision problem. At the inferential stage, I’d prefer to acknowledge my uncertainty.
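A quick sketch of why the yes/no summary discards information: two hypothetical studies (numbers invented) with nearly identical estimates and standard errors land on opposite sides of the 0.05 threshold, and so would be reported as opposite conclusions.

```python
import math

def p_value_two_sided(estimate, se):
    # Two-sided p-value under a normal approximation, zero-effect null.
    z = abs(estimate / se)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

# Two hypothetical studies with nearly identical results:
p1 = p_value_two_sided(0.20, 0.101)  # z is about 1.98
p2 = p_value_two_sided(0.20, 0.103)  # z is about 1.94

sig1 = p1 < 0.05  # "positive"
sig2 = p2 < 0.05  # "negative"
print(round(p1, 3), round(p2, 3), sig1, sig2)
```

The continuous summaries (estimates, standard errors, p-values) say the studies agree almost exactly; the dichotomized summaries say they disagree.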
P.S. See also p.76 of this article.