OK, OK. If four people email me about something, I’ll blog on it.

Following up on our recent discussion of p-values, let me link to this recent news article by Tom Siegfried, who interviewed me a bit over half a year ago on the topic. Some of my suggestions may have made their way into his article.

The main reason why I’m linking to this is that four different people emailed me about it! When I get four emails on the same topic, I’ll blog it. (With one exception, of course: as you know, there’s one topic I’m never blogging on again.)

I agree with most of what Siegfried wrote. But to keep my correspondents happy, I’ll mention the few places where I’d amend his article:

1. Siegfried describes prior probability as “an informed guess about the expected probability of something in advance of the study.” He immediately qualifies this: “Often this prior probability is more than a mere guess — it could be based, for instance, on previous studies.” Still, I disagree with his first sentence. I agree that sometimes (often!) a prior distribution is not constructed using previous studies. But when it’s not, I’d call it a model or an assumption, not a guess.

Why does this matter? Mere semantics? Not quite. I put the prior distribution on the same philosophical dimension as the likelihood. I have no problem with you calling my prior distribution an “informed guess” if you’ll also describe your normal distribution or your logistic regression as “informed guesses.” My point: the prior distribution and, in most cases, the likelihood are assumptions; they’re mathematical models, not really “guesses” at the truth so much as useful approximations to it. Or, more to the point, approximations to the truth that give useful inferences for quantities of interest.
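To make that concrete, here’s a minimal sketch (invented data, nothing from any real study) in which the prior and the likelihood are swapped equally easily; both are just modeling assumptions:

    import numpy as np
    from scipy import stats

    # Hypothetical data; the point is that the prior AND the likelihood
    # below are both modeling assumptions, exchanged equally easily.
    y = np.array([1.2, 0.8, 1.5, 1.1, 0.9])
    theta = np.linspace(-2, 4, 2001)   # grid over the unknown mean
    d = theta[1] - theta[0]

    def grid_posterior(prior_pdf, lik_pdf):
        """Grid approximation: prior(theta) * prod_i lik(y_i | theta), normalized."""
        post = prior_pdf(theta) * np.prod(lik_pdf(y[:, None], theta), axis=0)
        return post / (post.sum() * d)

    priors = {"flat":   lambda t: np.ones_like(t),
              "N(0,1)": lambda t: stats.norm.pdf(t, 0, 1)}
    liks   = {"normal": lambda yy, t: stats.norm.pdf(yy, loc=t, scale=0.5),
              "t(4)":   lambda yy, t: stats.t.pdf(yy - t, 4, scale=0.5)}

    for pn, p in priors.items():
        for ln, l in liks.items():
            post = grid_posterior(p, l)
            print(f"prior={pn:6s} lik={ln:6s} posterior mean = "
                  f"{(theta * post).sum() * d:.3f}")

Changing the prior and changing the likelihood are the same kind of move here, which is the sense in which neither is more of a “guess” than the other.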

2. Siegfried writes: “Standard or ‘frequentist’ statistics treat probabilities as objective realities; Bayesians treat probabilities as ‘degrees of belief’ based in part on a personal assessment or subjective decision about what to include in the calculation.”

Ummm . . . I completely disagree with this. Bayesians (at least, followers of Bayesian Data Analysis) think of probabilities as objective realities too, or at least as much as any other statisticians do. Some probabilities are more objective than others. The probability that the die sitting in front of me now will come up “6” if I roll it . . . that’s about 1/6. But not exactly, because it’s not a perfectly symmetric die. The probability that I’ll be stopped by exactly three traffic lights on the way to school tomorrow morning: that’s . . . well, I don’t know exactly, but it is what it is. Some probabilities are more objective and real than others, but I don’t see this as having anything to do with Bayes.

See chapter 1 of BDA for more on where probabilities come from. To put it another way, Bayesian statistics (as I practice it) is no more subjective than any other approach to statistics.

3. Siegfried quotes my former Berkeley colleague Juliet Shaffer as saying:

Replication is vital. . . . But in the social sciences and behavioral sciences, replication is not common. This is a sad situation.

Really? Maybe, maybe not. Replication is costly. Is it worth the effort? That depends on the setting. Shaffer, like me, has worked extensively in the social and behavioral sciences. How often has she followed her own advice and replicated somebody else’s study, or even her own? I haven’t done this very often myself.

It’s easy to say that something is “vital” and that other people should do it. It’s not so easy to devote the time and effort to doing it oneself. Which suggests that the benefits of said activity do not necessarily exceed its costs.

I agree with Siegfried’s larger point, though, which is that statistical methods can often be used to give a misleading sense of scientific-ness to messy collections of data. And he makes some other good points along the way, for example that estimates of average treatment effects–even if based on perfect randomized experiments–can obscure potentially important variation.
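To illustrate that last point, here’s a toy simulation (all numbers invented): a perfectly randomized experiment whose estimated average treatment effect is near zero even though the treatment strongly affects every unit:

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical population: the treatment helps one (unobserved)
    # subgroup by +1 and hurts the other by -1.
    n = 10_000
    subgroup = rng.integers(0, 2, n)
    true_effect = np.where(subgroup == 1, 1.0, -1.0)

    treated = rng.integers(0, 2, n).astype(bool)   # perfect randomization
    y = rng.normal(0, 1, n) + treated * true_effect

    ate_hat = y[treated].mean() - y[~treated].mean()
    print(f"estimated average treatment effect: {ate_hat:+.3f}")   # near zero

    m = subgroup == 1
    print(f"effect in subgroup 1: {y[treated & m].mean() - y[~treated & m].mean():+.3f}")

The average is estimated correctly; it just isn’t the whole story.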

P.S. If there were a stat blogosphere like there’s an econ blogosphere, Siegfried’s article would’ve spurred a ping-ponging discussion, bouncing from blog to blog. Unfortunately, there’s just me out here and a few others (see blogroll), not enough of a critical mass to keep discussion in the air.

18 thoughts on “OK, OK. If four people email me about something, I’ll blog on it.”

  1. On your amendments:

    1. Most people don't understand "models" or "assumptions," but do understand "informed guesses." I might agree with you on the semantics, but for the purposes of presentation I think using your terminology would require more background explanation.

    2. Siegfried's statement on degrees of belief is true in some circumstances, but it does not have to be true (which is, I guess, part of the lesson in BDA?)

    From my point of view, I think we need to move away from treating parameters (such as the effect of a drug on a population) as unknown and unknowable constants, and thus away from p-values, which test whether these unknowable constants (which are usually really a probability distribution, whether from a subjective or objective point of view) are different from 0. Perhaps the only exception to this is simulation.

  2. How about the example in Box 2 of the article:

    Consider this simplified example. Suppose a certain dog is known to bark constantly when hungry. But when well-fed, the dog barks less than 5 percent of the time. So if you assume for the null hypothesis that the dog is not hungry, the probability of observing the dog barking (given that hypothesis) is less than 5 percent. If you then actually do observe the dog barking, what is the likelihood that the null hypothesis is incorrect and the dog is in fact hungry?

    Answer: That probability cannot be computed with the information given. The dog barks 100 percent of the time when hungry, and less than 5 percent of the time when not hungry. To compute the likelihood of hunger, you need to know how often the dog is fed, information not provided by the mere observation of barking.

    Is he really saying that we can't compute the likelihood of hunger given the stated model and a single observation? Either I've no intuition for p-values or this is a poor example.

  3. As in my other comments, I am no expert here, but…

    When you read a book on Bayesian inference (Kendall's, for example), you do see people saying probability is subjective.

    For a more recent example, take a look at the interview with prof. Ron Howard (http://www.scienceofbetter.org/podcast/howard.html)

    See, for example, this quote from the cited interview:

    Int: By “correct interpretation,” do you mean subjective probability?
    RH: What do you mean by subjective probability?
    Int: A probability based on one’s own opinion.
    RH: Is there any other kind of probability? If you read Jaynes, you will never again entertain the notion of an objective probability. In other words, it is a mistake that is built into the words themselves. You cannot get probability from data.

  4. Jimmy: Dan's comments are reasonable.

    John: Fair enough.

    David: I'm not a big fan of these sort of pet examples. I prefer something more realistic, typically with continuous rather than discrete parameters.

    Manoel: That's what Ron Howard says; it's not what I say. I say that Bayesian probability is as objective as anything else that's out there.

  5. Models/ideas are only real because of this:

    Replication is costly. Is it worth the effort?

    Perhaps that's why you enjoyed re-reading that 1998 paper of yours with that figure on experts' opinions.

    Those that don't replicate become extinct – and some subjects evolve faster than others ;-)

    K

  6. I think he might be saying that we can't compute P(H|B) based on P(B|H) without knowing P(H) (H: hungry, B: bark); i.e., we need the prior probability of the dog being hungry.

    With P(B=1|H=0) = 1/20 we have
    P(H=1|B=1) = 20*P(H=1) / ( 19*P(H=1) + 1 )
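    A minimal sketch of that calculation (prior values invented for illustration):

        # Posterior that the dog is hungry given a bark, for the model
        # P(bark | hungry) = 1 and P(bark | not hungry) = 1/20.
        def p_hungry_given_bark(p_hungry):
            return 20 * p_hungry / (19 * p_hungry + 1)

        for p in [0.01, 0.1, 0.5, 0.9]:
            print(f"P(hungry) = {p:.2f} -> P(hungry | bark) = {p_hungry_given_bark(p):.3f}")

    So the answer depends strongly on the prior, which is exactly the information the article says is missing.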

  7. I couldn't get past the first page the first time I read it! The language is so inflammatory, and the accusations so imprecise, it sounds very sloppy. Statistics is a "mutant form of math" and science was "seduced by statistics", blah blah.
    I will give it another try this weekend.

  8. I thought K?'s comment was spot on: "those that don't replicate become extinct". Unrolling that a bit, if it's important, it'll be replicated.

    But why pick on the social sciences when there are bigger fish to fry budget-wise and public interest-wise, such as biology and physics? Here's a link to the most widely read paper from PLOS:

    John P. A. Ioannidis. 2005. Why Most Published Research Findings Are False. PLoS Medicine.

    There was no irony or self-referentiality, unfortunately, but lots of discussion of bias in publishing and fishing for significance.

    Who's going to replicate the LHC experiments right away if they ever get the thing debugged? But I'm guessing high energy physicists might still be interested in the results.

    In biology, there are many notions of replication in play. For instance, there are technical replicates (reanalyzing the same biological sample), biological replicates (taking the same kind of sample from multiple individuals), equipment replicates (replicating experiments on different kinds of equipment), and then lab replicates (turns out the "lab hands" of the experimenter and how they use their tools makes a huge difference).

    Hierarchical models would be great if there actually were replicates.
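    As a sketch of what that could look like (a pure simulation, all variance parameters invented), here is the kind of lab-vs.-technical-replicate variance decomposition a hierarchical model would formalize:

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy replicate structure: J labs, K technical replicates per lab.
        J, K = 8, 5
        mu, sigma_lab, sigma_tech = 10.0, 2.0, 0.5
        lab_effect = rng.normal(0, sigma_lab, J)
        y = mu + lab_effect[:, None] + rng.normal(0, sigma_tech, (J, K))

        # Classical one-way ANOVA (method-of-moments) decomposition:
        lab_means = y.mean(axis=1)
        within = y.var(axis=1, ddof=1).mean()           # technical-replicate variance
        between = lab_means.var(ddof=1) - within / K    # lab-to-lab variance
        print(f"within-lab (technical) sd: {np.sqrt(within):.2f}")
        print(f"between-lab sd:            {np.sqrt(max(between, 0)):.2f}")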

  9. "Bayesian probability is as objective as anything else that's out there."

    This is a conditional assertion: if objective probabilities are out there, then Bayesian probabilities are (or can be) objective; if objective probability is an incoherent notion, then Bayesian probabilities aren't objective (but neither is any other probability people are actually using). But RH is responding directly to the notion of "objective probability", whereas AG is leaving it aside.

  10. Andrew, I agree with you when you say "Bayesian probability is as objective as anything else that's out there". But the issue of whether probability is inherently subjective is tricky: should we define it as long-run frequency, as a measure of uncertainty, or as degree of belief? I personally like de Finetti's betting interpretation ("probability does not exist"; see also the excellent book by Richard Jeffrey, "Subjective Probability: The Real Thing"). The interpretation that works for you does not need to work for other Bayesians. For some Bayesians, probability reflects degree of belief; for other Bayesians, it reflects uncertainty (I'm not sure whether any Bayesian is comfortable with the long-run frequency interpretation, though).

  11. Hank:

    We were never able to get trackback working on the blog, but I do see the comments above and will post something in response to keep the discussion going.

    Corey:

    The terms "objective" and "subjective" aren't defined clearly enough for me to want to make a claim that Bayesian statistics is or is not an objective scientific method. In general, there seem to me to be objective and subjective aspects to just about all scientific procedures. As I wrote, I think Bayesian probabilities are as objective as anything else out there. The extent to which a probability is "objective" or "subjective" depends, I think, more on the nature of the application than on the form of inference used. See my article on the boxer, the wrestler, and the coin flip. Or, for some more serious examples, see chapter 1 of BDA.

    EJ:

    I agree with you that "the interpretation that works for you does not need to work for other Bayesians."

    I'm fine with people using subjective Bayesian methods; my objection is to people who say that Bayesian methods are subjective, thus basically saying that I don't exist.

  12. Andrew: on WordPress blogs there is an automated "pingback" system. I believe that Movable Type has implemented this pingback feature recently. Perhaps you want to look into that.
