Articles on the philosophy of Bayesian statistics by Cox, Mayo, Senn, and others!

Deborah Mayo, Aris Spanos, and Kent Staley edited a special issue on the philosophy of Bayesian statistics for the journal Rationality, Markets and Morals.

Here are the contents:

David Cox and Deborah G. Mayo, “Statistical Scientist Meets a Philosopher of Science: A Conversation”

Deborah G. Mayo, “Statistical Science and Philosophy of Science: Where Do/Should They Meet in 2011 (and Beyond)?”

Stephen Senn, “You May Believe You Are a Bayesian But You Are Probably Wrong”

Andrew Gelman, “Induction and Deduction in Bayesian Data Analysis”

Jan Sprenger, “The Renegade Subjectivist: Jose Bernardo’s Objective Bayesianism”

Aris Spanos, “Foundational Issues in Statistical Modeling: Statistical Model Specification and Validation”

David F. Hendry, “Empirical Economic Model Discovery and Theory Evaluation”

Larry Wasserman, “Low Assumptions, High Dimensions”

For some reason, not all the articles are yet online, but it says they’re coming soon. In the meantime, you can check out what Senn and I have to say.

Once all the articles are up, I’ll read them and write something in response.

10 Comments

  1. Joseph says:

    The road to happiness in statistics seems to be the following: first, ignore every philosophy paper that doesn’t advance technique. Second, ignore every techniques paper that doesn’t advance philosophy.

  2. K? O'Rourke says:

    On the other hand, more philosophy to try to understand
    (the scientific method is supposed to help us stop ignoring things that don’t quite make sense)

    Had a glance at Senn’s, and though I’ll read it more fully later, some of his speculation about Fisher being inspired by evolution is likely via Peirce (yes, going on about him again).

    After trying to find out from Fisher’s son whether his dad had anything by Peirce in his library, and getting nowhere, I discussed it with a statistical historian who had worked with Fisher.

    What he said was that it would be unbelievable that Fisher had not read Peirce, as any student studying at Cambridge at the same time as Fisher would have had few other authors to read (and Peirce had heated exchanges with Bertrand Russell and Karl Pearson). But there won’t be any record to prove it.

    • Nick Cox says:

      Keith would like to believe that Fisher was influenced by C.S. Peirce. On the evidence of the indexes, Fisher does not quote C.S. in his three main statistical books or in any of the papers in the five volumes of Collected Papers. In his statistical correspondence he does refer once to B. Peirce on outliers, which is consistent I suppose with a clear awareness that Peirce father and Peirce son were different persons with different ideas.

      The scarcity of reading matter in Cambridge circa 1910 has not I believe been remarked hitherto in the history of ideas.

      Nothing above rules out that Fisher read C.S. Peirce or quoted him in other books or in uncollected papers, but the influence seems to be elusive.

  3. MAYO says:

    To our great perplexity, the journal RMM failed to publish the opening paper and Cox-Mayo exchange. Gelman must have some special pull to have gotten his out before others (Senn too). I may post the Cox-Mayo dialogue (a live “interview” actually) on my blog if it doesn’t come around soon.

  4. […] Links StatisticsAndrew Gelman's article about Bayesian Statistics references […]

  5. […] other ways of reasoning about quantitative models in the social sciences, see here and there. The scientific news on this point is very dense. For me, it all began […]

  6. Simon says:

    Ok, time to show what little I know and ask a dumb question.

    I’ve read a little philosophy (esp. epistemology) but not yet Popper (time is limited, reading list is long!).
    When I read Andrew’s article I found I agreed with practically all of it, and am a big fan of the posterior predictive p-value idea.
    But, I couldn’t get past the use of ‘deduction’ and ‘induction’.

    By my understanding of these terms, isn’t all statistical inference from finite data to some general conclusion inductive inference? My Oxford Companion to Philosophy says this about induction: “Inductive inferences are those that project beyond the known data…” and “Induction has traditionally been defined as the inference from particular to general.” By my reckoning this is what we all do when we say anything about a model (general) based on data, whether frequentist or Bayesian. We just use different approaches. (A Bayesian “fundamentalist” might argue induction is built into the scheme; a frequentist might be happy to add on induction at the end of the analysis by interpreting the p-value or other diagnostic and using it to draw conclusions.)

    If I swap ‘deduction’ for ‘uses only direct probabilities’ and ‘induction’ for ‘uses indirect probability too’, then it makes sense to me. But I don’t see how statistical inference (Bayesian or not) is not inductive at some point. What have I missed?
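    [For readers unfamiliar with the posterior predictive p-value idea mentioned above, here is a minimal sketch under toy assumptions (a normal model with known variance, a flat prior on the mean, and the sample maximum as the test statistic — all of these choices are illustrative, not from the article):]

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Observed data: pretend these came from the field.
    y = rng.normal(0.3, 1.0, size=50)
    n = len(y)

    # Model: y_i ~ Normal(mu, 1) with known sd = 1.
    # Flat prior on mu -> posterior is Normal(ybar, 1/sqrt(n)).
    ybar = y.mean()
    post_mu = rng.normal(ybar, 1.0 / np.sqrt(n), size=4000)

    # Test statistic: the sample maximum (probes the model's upper tail).
    T_obs = y.max()

    # For each posterior draw, simulate a replicated dataset and
    # compute the same statistic.
    T_rep = np.array([rng.normal(mu, 1.0, size=n).max() for mu in post_mu])

    # Posterior predictive p-value: fraction of replications at least
    # as extreme as the observed statistic. Values near 0 or 1 flag misfit.
    ppp = (T_rep >= T_obs).mean()
    print(ppp)
    ```

    Here the data were generated from the model itself, so the p-value should be moderate; refitting the check to real data with a statistic chosen to probe a suspected failure mode is where the idea earns its keep.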

    • Andrew says:

      Simon:

      You might try reading my article with Shalizi which goes deeper into the philosophy jargon.

      • Simon says:

        In fact I did read that article a few weeks back and noticed the same thing.

        I guess maybe what I mean by induction is sufficiently general to include just about all cases of inferring general conclusions (whether assessing models or estimating parameters), regardless of how it’s actually done. I think this does at least match the definitions I usually find in philosophy books. In this case I don’t see how, e.g., (provisionally) rejecting a model based on a p-value in a Fisher significance test is not induction — all the more so if, using some imagination and insight, you can construct a plausible model that does fit (and test it against future data).

        Some make the case that Bayesian reasoning is the ‘correct’ (TBD!) calculus of inductive reasoning. Others might be happy using other procedures (you gave some examples in your article with Shalizi). In the broad sense these are inductive. Whether or not they are useful or successful is a different issue.

        Maybe statisticians use ‘inductive’ to mean a specific formal system for induction, rather than the more general definition I gave above, which only aims to describe the problem, not prescribe the solution. Or maybe I’ve interpreted ‘induction’ too generally.