Larry Wasserman’s (positive) review of “The Search for Certainty” by Krzysztof Burdzy

Larry sent me this review of a book on the philosophy of statistics that Christian and I reviewed recently, which I’ll paste in below. Then I’ll offer a few comments of my own.

Larry writes:

After reading the reviews of Kris Burdzy's book "The Search for
Certainty" that appeared on the blogs of Andrew Gelman and Christian
Robert, I was tempted to dismiss the book without reading it. However,
curiosity got the best of me and I ordered the book and read it. I am
glad I did. I think this is an interesting and important book.

Both Gelman and Robert were disappointed that Burdzy's criticism of
philosophical work on the foundations of probability did not seem to
have any bearing on their work as statisticians. But that was
precisely the author’s point. In practice, statisticians completely
ignore (or misrepresent) the philosophical foundations espoused by de
Finetti (subjectivism) and von Mises (frequentism). This is itself a
damning criticism of the supposed foundational edifice of statistics.
Burdzy makes a convincing case that the philosophy of probability is a
complete failure.

He criticizes von Mises because his theory, based on defining limits
of sequences (or collectives), does not assign a probability to a
given event. (There are also technical issues with the mathematical
definition of a collective that von Mises was unable to resolve, but
these can be fixed rigorously using the modern theory of algorithmic
randomness. But that doesn't blunt the force of Burdzy's main criticism.)

His criticism of de Finetti is more thorough. There is the usual
criticism, namely, that subjective probability is unscientific as it
is not falsifiable. Moreover, there is no guidance on how to actually
set probabilities. Nor is there anything in de Finetti to suggest that
probabilities should be based on informed prior opinion, as many
Bayesians would argue. More surprising is Burdzy’s claim that
subjective probability has the same problem as von Mises' frequency
theory: it does not provide a probability for an individual event. This
claim will raise the hackles of die-hard Bayesians. But he is right:
de Finetti’s coherence argument requires that you bet on several
events. The rules of probability arise from the demand that you avoid
a sure losing bet (a Dutch book) on the collection of bets. The
argument does not work if we supply a probability for only a single
event. The criticisms of de Finetti's subjectivism go beyond this and
I will not attempt to summarize them.

Burdzy provides his own foundation for probability. His idea is that
probability should be a science, not a philosophy, and that, as such,
it should be falsifiable. Allow me to make an analogy. Open any
elementary book on quantum mechanics and you will find a set of
axioms. These axioms can be used to make very specific predictions.
If the predictions were wrong (and so far they never have been), the
axioms would be rejected. But to use the axioms, one must inject some
specifics. In particular, one must supply the Hamiltonian for the
problem. If the resulting predictions fail to agree with reality, we
can reject that Hamiltonian.

To make probability scientific, Burdzy proposes laws that lead to
certain predictions that are vulnerable to falsification. More
importantly, the specific probability assignments we make are open to
being falsified. Before stating his laws, let me emphasize a
crucial aspect of Burdzy's approach. Probability, he claims, is the
search for certainty; hence the title of the book. That might seem
counter to how we think of probability, but I think his idea is
correct. In frequentist theory, we make deterministic predictions
about limits of sequences. In subjectivist theory, we make the
deterministic claim that if we assign probabilities consistent with
the rules of probability then we are certain to be immune to a Dutch
book. A philosophy of probability, according to Burdzy, is the search
for what claims we can make for certain.

Burdzy’s proposal is to have laws — not axioms — of probability.
Axioms, he points, merely encode fact we regard as uncontroversial.
Laws instead, are proposals for a scientific theory that are open to
falsification. Here are his five proposed laws (paraphrased):

(L1) Probabilities are numbers between 0 and 1.

(L2) If A and B are disjoint then P(A or B) = P(A) + P(B).

(L3) If A and B are physically independent then they are
mathematically independent meaning that P(A and B) = P(A)P(B).

(L4) If there exists a symmetry on the space of possible outcomes
which maps an event A onto an event B then P(A)=P(B).

(L5) P(A)=0 if and only if A cannot occur. P(A)=1 if and only if it must occur.

Some comments are in order. (L1) and (L2) are standard, of course.
(L4) refers to ideas like independent and identically distributed
sequences, or exchangeability. It is not an appeal to the principle of
indifference. Quite the opposite. Burdzy argues that introducing
symmetry requires information, not lack of information.

(L3) and (L4) are taught in every probability course as add-ons. But
in fact they are central to how we actually construct probabilities in
practice. The author asks: Why treat them as follow-up ideas? They
are so central to how we use probability that we should elevate them
to the status of fundamental laws.

(L5) is what makes the theory testable. Here is how it works. Based
on our probability assignments, we can construct events A that have
probability very close to 0 or 1. For example, A could be the event
that the proportion of heads in many tosses is within .00001 of 1/2.
If this doesn’t happen, then we have falsified the probability
assignment. Of course P(A) will rarely be exactly 0 or 1, rather, it
will be close to 0 or 1. But this is precisely what happens in all
sciences. We can test predictions of general relativity or quantum
mechanics to a level of essential certainty, but never exact
certainty. Thus Burdzy's approach puts probability on the same level
as other scientific theories.
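
To make this concrete, here is a minimal sketch (not from the book or the review) of such a falsification test in Python, assuming a fair-coin model; the number of tosses and the tolerance are chosen for illustration so that the event A has probability very close to 1:

    import math
    import random

    # Sketch of an (L5)-style falsification test for a fair-coin model.
    # Event A: the observed proportion of heads is within eps of 1/2.
    # With n = 1,000,000 tosses the standard error is sqrt(0.25/n) = 0.0005,
    # so eps = 0.002 is four standard errors and P(A) is roughly 0.9999.
    def falsification_test(n_tosses=1_000_000, true_p=0.5, eps=0.002, seed=0):
        rng = random.Random(seed)
        heads = sum(rng.random() < true_p for _ in range(n_tosses))
        prop = heads / n_tosses
        print(f"proportion of heads = {prop:.5f}")
        if abs(prop - 0.5) < eps:
            print("event A occurred: the fair-coin assignment survives")
        else:
            print("event A failed to occur: the assignment is falsified")

    falsification_test()              # data from a fair coin: survives
    falsification_test(true_p=0.51)   # data from a biased coin: falsified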

To summarize, Burdzy’s approach is to treat probability as a
scientific theory. It has rules for making probability assignments
and the resulting probabilities can be falsified. Not only is this
simple, it is devoid of the murkiness of subjectivism and the weakness
of von Mises’ frequentism. And, perhaps most importantly, it reflects
how we use probability. It also happens to be easy to teach. My only
criticism is that I think the implications of (L1)-(L5) could be
fleshed out in more detail. It seems to me that they work well for
providing a foundation for testable frequency probability. That is,
they provide a convincing link between probability and frequency. But
that could reflect my own bias towards frequency probability. More
detail would have been nice.

My short summary of this book does not do justice to the author’s
arguments. In particular, there is much more to his critique of
subjective probability than I have presented in this review. The best
thing about this book is that it will offend and annoy both
frequentists and subjectivists. I implore my friends on both sides of
the philosophical divide to read the book with an open mind.

My reply:

1. Whatever von Mises’s merits (or lack thereof) in general, I can’t take him seriously as a philosopher of statistical practice (see pages 3-4 of this article).

2. As I wrote earlier, Burdzy's comments about subjectivism may or may not be accurate, but they have nothing to do with the Bayesian data analysis that I do. In that sense, I don't think that Larry's comment about "both sides of the philosophical divide" is particularly helpful. I see no reason to choose between two discredited philosophies, and in fact in chapter 1 of BDA we are very clear about the position we take, which indeed is completely consistent with Popper's ideas of refutation and falsifiability.

As I wrote before, “My guess is that Burdzy would differ very little from Christian Robert or myself when it comes to statistical practice. . . . but I suppose that different styles of presentation will be effective with different audiences.” Larry’s review suggests that there are such audiences out there.

33 thoughts on "Larry Wasserman's (positive) review of 'The Search for Certainty' by Krzysztof Burdzy"

  1. I am a political scientist, so I am not the expert here. But I thought that the modern foundation of statistics was laid down in the work of Kolmogorov. Thus, I'd expect to hear criticism of Kolmogorov, not von Mises…

    Are my expectations wrong? Shouldn't he criticise measure theory?

    Manoel Galdino

  2. It seems to me that Wasserman, and perhaps Burdzy, have made a fundamental mistake here. Consider the section:

    "We can test prediction of general relativity of quantum mechanics to a level of essential certainty, but never exact certainty. Thus Burdzy's approach puts probability on a level the same as other scientific theories. based upon being able to "predict" almost certain events."

    Let us assume that I believe with very high probability that all swans are white, and I collect a large sample of swans, all of which are white, updating my probabilities appropriately. Now I collect some more swan data. Whoops! Actually, many of them are black. I have now falsified probability theory, apparently, just as discovering that light did not bend around stars would have falsified relativity.

    It seems that somebody has taken the word "theory" in the phrase "probability theory" to mean (Webster's New World Dictionary, 3rd ed.) def. 4, "a formulation of apparent relationships or underlying principles of certain observed phenomena," instead of def. 3, "a systematic statement of principles involved" (as in "the theory of equations" in mathematics).

    For some reason, this does not encourage me to read either Wasserman's review or Burdzy's book.

  3. It's funny (both amusing and peculiar) to see quantum mechanics used as an example in the context above. If I understand correctly, the argument is that since it's impossible to verify a probability prediction for a single event, statistics isn't a "science." But since quantum mechanics is entirely concerned with probability statements, it's impossible to verify a quantum mechanical prediction for a single event, too! Suppose I use quantum mechanics to calculate, say, the probability that at least one atom in a particular cluster of uranium atoms will decay within the next 10 minutes, and I calculate a probability of exactly 0.1. I set up radiation detectors, wait 10 minutes… and I do detect a decay. Was my calculation correct, or not? That particular prediction is no more falsifiable than any other probability statement about a single event.
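
    For what it's worth, the 0.1 in this example would itself come from a deterministic calculation. A sketch under standard exponential-decay assumptions (the isotope, half-life, and atom count are made up for illustration, since the comment leaves them unspecified):

        import math

        # P(at least one of n atoms decays within time t) = 1 - exp(-n*lam*t),
        # where lam = ln(2) / half_life is the per-atom decay rate.
        def p_at_least_one_decay(n_atoms, half_life_min, t_min):
            lam = math.log(2) / half_life_min
            return 1.0 - math.exp(-n_atoms * lam * t_min)

        # Hypothetical numbers: choose n_atoms so that the probability is
        # exactly 0.1 for U-235 (half-life ~7.04e8 years) over 10 minutes.
        half_life = 7.04e8 * 525960.0          # half-life in minutes
        t = 10.0
        lam = math.log(2) / half_life
        n = -math.log(0.9) / (lam * t)         # solve 1 - exp(-n*lam*t) = 0.1
        print(f"n_atoms ~ {n:.2e}")            # ~5.6e12 atoms
        print(f"P(at least one decay) = {p_at_least_one_decay(n, half_life, t):.3f}")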

  4. (Disclaimer: haven't read the book…)

    I find it ironic that the example he gives (proportion of heads in a number of fair tosses) is the most deeply damning example for any straightforward proposal that probability assertions are falsifiable.

    The probabilistic claim "T" that "p(heads) = 1/2, tosses are independent" is very special in that it, in itself, gives no grounds for preferring any one sequence of N predictions over another: HHHHHH…, HTHTHT…, etc.: all have identical probability .5^N, and indeed this equality-of-all-possibilities is the very content of "T". There is simply nothing inherent in theory "T" that could justify saying that HHHHHH… 'falsifies' T in some way that some other observed sequence HTHTHT… doesn't, because T gives no basis (and in fact explicitly denies that it could give any basis) for differentiating them.

    You can construct more elaborate falsification theories, e.g. falsify "T + X" for some meta X (I've never seen this done interestingly myself), or a theory of relative falsification (if one has some alternate hypothesis T' in mind, one can stumble towards an idea of 'falsifying' T-versus-T', though arguably this is a funny way of talking about it). But "T" is falsifiable, simpliciter? The fair coin case demolishes this hope utterly, IMO.

  5. > I am a political scientist, so I am not the
    > expert here. But I thought that the modern
    > foundation of statistics was laid down in the
    > work of Kolmogorov. Thus, I'd expect to hear
    > criticism of Kolmogorov, not von Mises…
    >
    > Are my expectations wrong? Shouldn't he
    > criticise measure theory?

    I believe you are wrong. So far as I know, Kolmogorov created some axioms for mathematicians to play with, and perhaps because in the discrete setting proportions would follow his axioms, he used the word 'probability'. But we aren't interested in some area of pure mathematics just because someone repurposed a word from everyday language, we are concerned with the real world.

    An analogy, unfair in some ways but totally on target in others, is if someone thought we understood all there was to know about usenet (remember that?) "groups" because there was a branch of mathematics called group theory.
    It's unfair because in that case the theory couldn't even be of use in practice; it's fair in that theoreticians can't just claim relevance because they reuse a word, but would actually have to argue beyond that about why their mathematical theory says anything about reality. If Kolmogorov did so with any nontrivial sophistication or usefulness, and I'd love to learn otherwise, it's news to me.

    – P.S. Either you restrict Kolmogorov probabilities to finite sets, where the theory is trivial, or you do not, where it is actually wrong even as an abstract mathematical model for any of the many proposed real philosophies/concepts of probability that could have relevance in the actual world.

  6. Phil

    No. Your second statement has it wrong.
    No one is saying that statistics isn't a science.
    The claim is the exact opposite.
    It is better to read his book rather than rely on my review.

    Best wishes
    Larry

  7. ajg: I think you've gone a bit farther than I would here… Kolmogorov's measure theory is a method for formalizing probability. A formal theory can't tell us anything from first principles about the real world, true, but it can tell us which mathematical deductions are meaningful for a given model. Then IF the model is a model of reality, the deductions should be predictive…

    It's how we get the model in the first place that keeps the statisticians and scientists employed. If we could just play with the math we wouldn't need them, but we decidedly DO need them.

  8. Ajg: You're making a mistake here in ignoring the choice required in any model checking. You think there's no reason ahead of time to consider #heads (that is, unordered sequences) as a test summary, but there's also no reason ahead of time to consider ordered sequences as a test summary either. Perhaps I should write a longer entry on this–I've seen this mistake before–but in the meantime let me refer you to chapter 6 of BDA.

  9. Dear ajg,

    You say

    "(Disclaimer: haven't read the book…) I find it ironic that the example he gives (proportion of heads in a number of fair tosses) is the most deeply damning example …"

    I do not understand why people who did not read my book think that they can participate in this discussion. My book has a subsection called "Multiple predictions" starting on page 48. You might not like any of my arguments. But if you had read my book, your blog entry would make sense – it would be an argument against my book. The argument that you have given is against what you think is in my book.

    Chris Burdzy

  10. Professor Burdzy,
    I am sorry I annoyed you so, but a large point of putting a disclaimer in my comment as the first sentence was to make the context very clear should someone want to read on (which is obviously that my comment concerns content in the review, not the book), and furthermore to allow someone like yourself — who thinks this is illegitimate — to read no further. For that matter, it provides the clearest guidance possible to our moderator, who actually determines who "can participate in this discussion."

    As to what I was thinking: simply that the lengthy, clear, review had enough self-contained ideas to be of interest in itself. I do regret that the use of "he" in my comment was unfortunately ambiguous.

    But if we could put this aside, can I ask you a related question about what you do actually say in your book? I would be very interested were a serious thinker such as yourself or Professor Gelman to suggest that an assertion "These observations X provide evidence against the hypothesis that this is a fair coin" [*,**] can be given any 'absolute' [***] meaning whatsoever.

    [*]: or any near equivalent one might wish, possibly involving the word 'falsification'

    [**]: fair coin is critical here; p(heads) = something-other-than-0.5 is a different and vastly less interesting case to me

    [***]: by which I mean, without having to contextualize further about the specific beliefs and desires of the person asking the question, or about which alternative hypotheses are of particular interest to that person.

    Professor Gelman seems to be saying I am making a mistake wondering this, so thanks to his suggestion I already have some reading material lined up!

    I have no idea whether you or your book would say we can make any useful sense of the claim above. But if you tell me here that your book does so, then I promise you've sold another copy.

  11. It's worth pointing out, since this will clear up some of the confusion, that Burdzy is a mathematician whose research area is probability. So, for example, he knows what the word "theory" means in the subject name "probability theory."

    The discussion of the book has been oddly disappointing. I haven't read his book, so I don't know if his argument is convincing, but for some reason everyone in the comment threads has decided that he _must_ be wrong, without really engaging with his argument.

  12. Walt: If people want to read Burdzy's book, that's fine, but I don't think that should be a requirement to participate in the discussion. I read the book and I found it pretty remote from statistical practice. Also, as I noted in my earlier blog entry, I don't think any of the people involved in this discussion would've heard of Burdzy's book had Christian not somehow obtained a copy of it. We can feel free to discuss the larger issues of statistical philosophy without having read Burdzy's book (or, for that matter, my book or Berger's or Carlin and Louis).

  13. Dear ajg,

    You write

    "I have no idea whether you or your book would say we can make any useful sense of the claim above. But if you tell me here that your book does so, then I promise you've sold another copy."

    All I can say is that I tried to elaborate on that claim although I did not state it in your terms. Of course, I could ask you to read Popper because my own theory is just an elaboration of Popper's theory but I do not know of an accessible text on this issue by Popper (but my knowledge of philosophical literature is limited – someone else may be able to point out an accessible text by Popper).

    I suggest that you read the introduction to my book, posted for free on the publisher's Web site. This will give you a better idea of whether to read my book or not than anything that I can say here.

    Chris Burdzy

  14. ajg – to restate what Andrew said in different terms – if you want to check T, you shouldn't try to compare sequences of tosses, but numbers of heads. This actually isn't a deep philosophical issue but a very pragmatic statistical issue – i.e., setting up the right test for your hypothesis:

    If you do that, the chance of getting 6 heads is .5^6, i.e., ~.016, whereas the chance of getting 3 heads (in any order) is n!/(k!(n-k)!)*.5^6 with n=6, k=3, i.e., 20 times as large, or about .31.
    So getting 6 heads is actually quite unlikely if you have a fair coin – by most standards we would say it falsifies T.

    I would doubt that any statistician or mathematician would have much disagreement on this – although I'm sure there may be nuances between Bayesians and Frequentists in how they express the results. I would also say that if you struggle with this, Andrew's BDA book might not be the best place to start… it's not exactly a "low-tech" book.
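
    A quick numerical check of these figures – a sketch using only Python's standard library (math.comb needs Python 3.8+):

        from math import comb

        # Verifying the binomial arithmetic above for n = 6 fair tosses.
        n, k = 6, 3
        p_six_heads = 0.5 ** n                  # one specific sequence: HHHHHH
        p_three_heads = comb(n, k) * 0.5 ** n   # any of the C(6,3) = 20 orderings
        print(f"P(6 heads in 6 tosses) = {p_six_heads:.4f}")    # 0.0156
        print(f"P(3 heads in 6 tosses) = {p_three_heads:.4f}")  # 0.3125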

  15. Andrew,

    You write "I read the book and I found it pretty remote from statistical practice." I think that you are mixing up two considerably different lines of criticism.

    If I had written a Chinese cookbook in Italian, for Italian audience, you could have remarked that there is no demand for Chinese cookbooks in Italy. This may be true or false but this claim makes sense. Does the statement "I read this Chinese cookbook and was disappointed because it contained no Italian recipes" make much sense?

    If you want to say that there is no demand among statisticians for a book that discusses in detail philosophical theories of von Mises and de Finetti (and related issues) then this claim makes sense (although I hope that it is false, not true). I do not see a point in criticizing my book for having little impact on statistical practice – it was never intended to have such an impact.

    Chris

  16. Thanks Sebastian, but – and I know I'm about to put more weight on your words than you probably intended – I do think you are wrong to call this "the right test". It's _a_ test, certainly, but I might choose others as well or instead (perhaps one oriented more towards finding departures from independence than towards proportions). Among all possible tests – and note that we can't apply them all – some will have HHHHHH as disconfirming and some will not.

    So "HHHHHH disconfirms T" is not a remotely self-contained statement let alone being true: it requires context about the tests that were run (and perhaps about why these ones were chosen). Whereas in a non-probabilitistic context, "Observation O falsifies T" can be a self-contained truth if you know that T predicts that O cannot be. I find the fair coin example interesting because (I now say very hesitantly…) it seems such an extreme case: any test of this hypothesis seems to be _all_ about context and the experimenter's choices and purposes, and nothing about the attitude of the hypothesis towards individual observations (and I really mean observations, not information-losing test statistics into whose selection we have already embedded context-specific choices.)

    Anyway, thanks again. I hope I get more out of BDA than you expect.

  17. Chris: In your book, you (mildly) criticized Bayesian Data Analysis. I think if you'd fully understood our Chapter 1, you wouldn't have made these criticisms. Applied statistics can be hard for people to understand, especially if their main experience is mathematical.

  18. ajg/axg – I would say that if you're interested in testing p=.5, the test I suggest is indeed _the_ right test (although I guess you might be able to test other moments, not sure, but that would essentially be the same test). p is a statement about an expected frequency – not about an expected order of events – so you should look at a test statistic concerning frequency. Just as, if you're interested in measuring the gravitational acceleration g, you shouldn't care about the mass of the falling object – sure, it's additional information, definitely relevant in some other context – but not for the question you're trying to answer.

    If you want to test for independence _and_ p=.5 at the same time (and upon second reading that might be your T) you won't have clear results in many cases (e.g., HHHHHH could be the result of p=1 or of a completely dependent process where throw k fully determines throw k+1).
    That's the same in other sciences, too.
    If you measure the acceleration of a falling object you can _either_ see if g=9.81m/s^2 holds _or_ if the object is falling in a vacuum, but not both at once using the same observation.

    Also, your insistence that testing a fair coin is in any relevant sense fundamentally different from testing a coin with, e.g., p=.4 suggests (with all due respect) a fundamental misunderstanding of statistics.
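
    To illustrate the identifiability point with a quick simulation – a sketch, where the three generating processes are made up purely to make the point:

        import random

        # Three different processes can all produce HHHHHH, so that one
        # observed sequence cannot separate "p != .5" from "the tosses
        # are dependent". All processes here are hypothetical.

        def fair_independent(n, rng):
            return ["H" if rng.random() < 0.5 else "T" for _ in range(n)]

        def always_heads(n, rng):            # independent with p = 1
            return ["H"] * n

        def fully_dependent(n, rng):         # toss k determines toss k+1
            first = "H" if rng.random() < 0.5 else "T"
            return [first] * n

        rng = random.Random(1)
        for process in (fair_independent, always_heads, fully_dependent):
            seqs = ["".join(process(6, rng)) for _ in range(100_000)]
            freq = seqs.count("HHHHHH") / len(seqs)
            print(f"{process.__name__:>16}: P(HHHHHH) ~ {freq:.3f}")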

  19. Walt: I do not think either Andrew or I said that Chris Burdzy was "wrong". As Andrew puts it, we simply find his book remote from our statistical concerns. (Larry's review makes this clear in his introduction as well, even though he sees some appeal in the exercise.) Or, for myself, from the philosophical foundations of Statistics. As I tried to explain in several past comments, I have nothing against the book per se (and I certainly do not wish it burned at the stake!) and evidently not against Chris Burdzy: my original comments were posted as a result of having read the book and finding nothing there relevant for my statistical training (thinking?), while the book review was written a day later to make my arguments clearer (?) and take full advantage of the time spent on the book. My overly critical (venomous?!) tone came from the fact that I did not think the philosophical arguments in the book were particularly deep, but the on-going discussion building up on this entry and on earlier ones evidently shows that this is a matter for debate. Fine, this is the very point of philosophy!

  20. Footnote to the discussion for latecomers like me: Burdzy asked for references to Popper's discussion of the propensity interpretation of probability. The precursors are in Popper's The Logic of Scientific Discovery (German 1934, English translation 1959). Popper did not formulate the propensity view until 1953, but the 1959 translation contains frequent footnotes referring to that view and also referring to sections of a "Postscript" to be published subsequently. The Postscript, apparently written in 1956, circulated in proof form but was not published until 1983 as the 3-volume Postscript to the Logic of Scientific Discovery. The discussion of the propensity view is in the first volume, "Realism and the Aim of Science."

    I have not read Burdzy's book, but I did read the introduction available on his web site. Its tone in rejecting other theories is way too overstated for my tastes ("This book is about one of the greatest intellectual failures of the twentieth century," "one of the most confused theories in all of science and philosophy," "an embarrassment for the scientific community," "complete intellectual failures," etc.)

    That being said, I do think (I'm no expert) that Popper's propensity interpretation is the most viable bridge between what we really tend to mean by probability and the mathematical formalization. These interpretational issues won't alter anyone's statistical practice, but many of us might find exploration and elaboration of the issues rewarding. And, after all, von Mises's explorations of infinite collectives helped sharpen our notions of probability and laid the groundwork for Popper's further explorations.

  21. David: Sorry, in my short blog entry I did not make it clear that my reference to Popper's theory (in an answer to a blog entry by ajg) was about the "falsification" idea of Popper. I referenced "The Logic of Scientific Discovery" in my book. I consider Popper's idea that probability is an objective (scientific and physical) property of objects or experiments a separate (independent) philosophical position and I do not personally support it. My feeling is that the question of ajg was related to "falsification", not "propensity". Thank you for references to Popper's publications. But is any one of them "accessible" so that it is fair to ask ajg to read it?

    Concerning the aggressive language of my book, think about scholarly publications in the field of political science on one hand and direct political propaganda on the other. I believe that there is a place and a need for both. Direct propaganda does not have to be mindless. My book is not a professional philosophical treatise. It is direct propaganda.

    Chris

  22. Professor Burdzy,
    I really appreciate your helpfulness in trying to have people give me pointers on Popper!

    I would like to say some things that probably sound insulting, but that I hope will be ultimately helpful. Background: whatever you or other readers may infer about my intellect from these comments, I am intensely interested in philosophies-of-probability and philosophies-of-statistics, and I feel it likely true that I have read more and spent more time thinking on such matters (read != comprehend, of course!) than all but a very small number of people for whom neither probability nor statistics nor philosophy is at the center of their (presumably academic) careers.

    So here goes:
    I did read your introduction chapter and found it, well, entirely uninteresting. I saw no hint of anything insightful or even provocative; it seemed very generic. I got no sense I would learn anything by reading further that had not been said countless times before, or be challenged by anything new no matter how ridiculous. And your book is expensive (why, why, do people choose such publishers?), so no sale. [Except, to be honest, for me there will someday be a sale: yours is the type of book that on principle I cannot resist buying and reading in some moment (of weakness), but at this moment I have so far resisted.]

    Anyway, my point is to be constructive: I recommend that you refine the intro to be far sharper and clearer about what your point is, what you will show, why it is new, and why it is nonobvious. There are issues with von Mises? (laugh, laugh, laugh) and de Finetti (more hesitant single laugh): we are in the 21st century! Say more! And condition all of these on "|the type of person who would pick up this book in the first place". You will have more impact this way. You may have a huge philosophical and practical contribution on your hands. But – to cite your own words from the previous comment – "propaganda"?! Your freely available intro chapter shows no such thing, but frankly more explicit, effective propaganda would be a good idea here.

  23. Chris, you wrote above: "You write 'I read the book and I found it pretty remote from statistical practice.' I think that you are mixing up two considerably different lines of criticism. If I had written a Chinese cookbook in Italian, for an Italian audience, you could have remarked that there is no demand for Chinese cookbooks in Italy. This may be true or false but this claim makes sense. Does the statement 'I read this Chinese cookbook and was disappointed because it contained no Italian recipes' make much sense?"

    The sentence below, taken from the introduction to your book, shows that the 'remoteness from statistical practice' is a valid and meaningful criticism, by the intentions you yourself declared for the book.

    "I do not see myself as a philosopher trying to uncover deep philosophical secrets of probability but as an anthropologist visiting a community of statisticians and reporting back home what statisticians do."

  24. Jack,

    You quote my sentence:

    "I do not see myself as a philosopher trying to uncover deep philosophical secrets of probability but as an anthropologist visiting a community of statisticians and reporting back home what statisticians do."

    Yes, you have a point. But I do not think that I claim anywhere in the book that it is my intention to reform statistics. I have no proposals for new statistical techniques. I claim that the theories of von Mises and de Finetti completely fail to describe what statisticians actually do. I think that this agrees, more or less, with the sentence that you cited, and with my earlier posts on this blog.

    Chris Burdzy

  25. Chris:

    I am in complete agreement with your claim that the theories of von Mises and de Finetti completely fail to describe what statisticians actually do. At least, these theories completely fail to describe what I do! That's one reason Christian and I didn't see your book as relevant to our work: you're criticizing something that we don't do and that we don't recommend in our own work. That's also why I thought Larry's comments were a little off-base.

    Also, I think that your (mild) criticisms of Bayesian Data Analysis were misplaced, but that represents only a very small part of your book. Overall I felt the book might be helpful in communicating some things to a mathematical audience but that it was far removed from my own concerns as an applied statistician.

  26. Andrew,

    I am delighted to see your statement

    "I am in complete agreement with your claim that the theories of von Mises and de Finetti completely fail to describe what statisticians actually do."

    This is not logically equivalent to the claim that I made in my book that the two theories are complete intellectual failures. But your statement is the most that I could have reasonably hoped to hear from any statistician, Bayesian or not.

    Concerning the question of "relevance", let me try to clarify a few things both for you and for other readers of this blog who do not know my background. My research is in the area of theoretical probability and mathematical analysis. It is not applied at all. But I do a lot of applied probability. Yes, I do! I teach undergraduate probability courses. Graduate probability textbooks (at least many of them) spend little or no time discussing practical interpretations of probability. At the other extreme, undergraduate textbooks are (typically) at least 50% applied. What this means depends on the author, but you will find examples and homework problems concerned with coins, dice, public opinion polls and traffic.

    So when I teach undergraduate probability and I present two popular "interpretations" of probability, frequency and subjective, what am I supposed to tell the students? It seems to me that you would not advise me to teach them the philosophical theories of von Mises and de Finetti. Should I tell them about the law of large numbers and Bayes' theorem? I have no problem with these theorems, but all statisticians, frequentists and Bayesians, believe in all standard theorems (as far as I can tell). So telling the students only about the two theorems seems to be missing something. If all statisticians believe in both theorems, why are some of them fighting over foundational issues?

    Or suppose that a student tells me that she read that the probability of global warming is 85%. What does it mean, she asks? Should I tell her some version of the frequency interpretation and tell her that this is the one and only scientific interpretation of the statement? Or should I tell her that the claim about the probability of global warming is no more than a subjective opinion of a single scientist?

    I am not trying to start a discussion of any of the above philosophical questions in this blog. My point is that the questions that I discussed in my book are relevant to all people who teach undergraduate probability. Perhaps this does not include you. But I am sure that quite a number of statisticians teach undergraduate probability.

    Chris

  27. Chris: I recommend chapter 1 of Bayesian Data Analysis, which discusses the connections between long-run frequencies and the probabilities of individual events. As I've said a few thousand times on this blog and elsewhere, yes, subjective probability is Bayesian, but, no, Bayesian probability does not have to be subjective. Your frequentist/subjective (or Mises/Finetti) duality is missing a lot.

  28. Chris: to me (and some others) teaching undergraduate probability courses is _not_ applying probability.

    Working with others on current empirical problems with important unknown aspects utilizing probability in some useful way – would be applying probability.

    In most if not all applied fields – law, engineering, medicine, sports, etc. – the philosophical basis is widely ignored by – and even annoying to – most of those active in the field.

    Need to keep in mind that no one has the correct answer and no one ever will. As Kadane once nicely put it – it is just rhetoric.

    K?
    p.s. the Ian Hacking undergrad course I sat in on at U of T in the 1990s did seem to work well – but he well understood and emphasized that there was not a correct answer, and presented more than one wrong answer as being somewhat helpful

  29. Andrew,

    Once again, I have a feeling that one of us says "Two and two makes four" and the other one says "No, you are wrong. TWO AND TWO MAKES FOUR". You say "Bayesian probability does not have to be subjective". I could not agree more. With a dose of exaggeration, I could say that this is precisely what my book is about.

    I wonder why we seem to disagree if we seem to agree. My guess is that you think that certain ideas are well known and widely accepted while I think that they are not and that my book was needed to make them better known.

    The article in Wikipedia on "Bayesian probability" has the URL http://en.wikipedia.org/wiki/Subjective_probabili
    To be fair, the body of the article does distinguish between objective and subjective interpretations of the Bayesian method.

    Another article in Wikipedia says "Bayesian Analysis produces a probability-like number which measures the subjective degree of belief in a proposition (including conjunctions of propositions)."

    I doubt that Wikipedia is an accurate representation of philosophical views of professional statisticians. But people who write for Wikipedia must have learned their ideas somewhere and we have to blame the scientific community for the dissemination of such ideas because there is nobody else to blame. So there is work to be done to clarify the ideas in the minds of scientists, including statisticians.

    By the way, you suggest that my book is for mathematicians. Most mathematicians (probabilists) do not care about interpretations of probability because their field is not split along the philosophical lines the way that statistics is.

    Chris

  30. Keith,

    (i) Your remark that

    "Need to keep in mind that no one has the correct answer and no one ever will."

    is similar to your earlier remark in a different thread in this blog (you quoted George Box):

    "All models are false but some are useful."

    The two ideas are formally true but at the same time they are highly misleading. Astrology is false. Newton's physics is false, according to Einstein. Nevertheless, it is important that we do not give them identical labels of "false model". We should say that astrology is complete nonsense. We should say that Newton's physics is very accurate in most practical situations. Similarly, it is important that we do not give the same label of "false model" to all philosophical theories.

    (ii) You write "In most if not all applied fields – law, engineering, medicine, sports, etc. – the philosophical basis is widely ignored by – and even annoying to – most of those active in the field." This is true, I am afraid. But if you ignore a hurricane, that does not mean that the hurricane will ignore you. If (some) applied scientists use frequency-style hypothesis tests then they are affected by the foundational issues whether they ignore them or not.

    Chris

  31. Largely agree, Chris, but in addition to hurricanes, there are snakes, crocodiles, deans and review committees, etc.

    So I am not sure of the optimal percentage of practitioners that should spend considerable time and effort on the philosophical basis – but it's likely far less than 100% or even 10% …

    K?
