Last post on Popper (I hope)?

Jasjeet writes,

Hi Andrew,

I saw your recent exchange on falsification on your blog. I mostly agree with you, but I think the view of falsification presented is a little too simple—this is an issue with Popper’s stance. I say this even though I’m very sympathetic to Popper’s position. I suggest that you check out Quine’s “Two Dogmas of Empiricism,” originally published in The Philosophical Review 60 (1951): 20-43 and reprinted in W.V.O. Quine, From a Logical Point of View (Harvard University Press, 1953). This is generally considered one of the canonical articles on the issue.

You may be interested to know that Kuhn tried to distance himself from the dominant reading of his work. The historian of science Silvan Schweber, who knew Kuhn, tells wonderfully funny stories about this at dinner parties. BTW, if you are interested in this stuff, you should check out Schweber’s _QED and the Men Who Made It_. It is a great *history* of science book which also engages many philosophical issues. Philosophers of science generally bore me now. I say this as someone who spent many years reading this stuff. Philosophers of science became boring once there arose a sharp division between them and actual scientists. This was not true of earlier philosophers such as the logical positivists and people like Russell. But the second half of the 20th century was hard on philosophy…on this issue you should check out the work of your Columbia colleague Jacques Barzun (“From Dawn to Decadence” etc.).

But if you do read some of these people, I would really like to get your thoughts on what Richard Miller says about Bayesians in his “Fact and Method”. Are they the modern logical positivists? Alas, I sometimes think so. One would think that the failure of Russell’s Principia Mathematica, Gödel and all of that would have killed logical positivism, but it hasn’t…

Cheers,
Jas.

From reading the comments on my earlier entries, I get the impression that my Popperianism follows the ideas of an idealized Popper. From Bob O’Hara’s quotes, I see that the actual Popper didn’t want to recognize probabilistic falsification. I remember when I read (or attempted to read) The Logic of Scientific Discovery 20 years or so ago, I skipped over the probability parts because they seemed full of utopian and probably unrealizable ideas such as philosophical definitions of randomness.

But the idea of falsification–and the dismissal of “inductive inference”–well, that part resonated with me. I’m on Popper’s side in not believing in “induction” as a mode of inference. I don’t mind “induction” in the sense of prediction and minimum description length (with more data in a time series, we should be able to form a more accurate prediction rule using fewer bits to describe the rule), but “induction” doesn’t fit my understanding of scientific (or social-scientific) inference.

I followed rjw’s pointer to the “Quine-Duhem problem” and I agree that, in Bayesian predictive model checking, we check the whole model at once, not its individual parts. Hal Stern, Xiao-Li Meng, and I thought a lot about this when doing our work on posterior predictive checks–are there settings in which we can test just part of a model–but didn’t come up with any general rules.
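To make concrete what “checking the whole model at once” means, here is a minimal sketch of a posterior predictive check, with hypothetical simulated data and an arbitrary test statistic. It assumes a normal model with known variance and a flat prior on the mean, so the posterior has a simple closed form; none of these specifics come from the post itself.

```python
# Minimal sketch of a posterior predictive check (hypothetical data,
# hypothetical test statistic). Model: y_i ~ Normal(mu, sigma^2) with
# sigma known and a flat prior on mu.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(0, 1, size=50)          # "observed" data (simulated here)
n, sigma = len(y), 1.0

# Posterior for mu under a flat prior: Normal(ybar, sigma^2 / n).
post_mean, post_sd = y.mean(), sigma / np.sqrt(n)

def T(data):
    # Test statistic; the maximum is a common choice for tail checks.
    return data.max()

# Draw replicated datasets from the posterior predictive distribution
# and see how often T(y_rep) exceeds T(y).
n_sims = 2000
exceed = 0
for _ in range(n_sims):
    mu = rng.normal(post_mean, post_sd)      # draw a parameter value
    y_rep = rng.normal(mu, sigma, size=n)    # replicate the dataset
    exceed += T(y_rep) >= T(y)

p_value = exceed / n_sims   # posterior predictive p-value
print(p_value)
```

Note how the Quine-Duhem point shows up in the code: each `y_rep` is generated from the prior, the likelihood, and the posterior draw of `mu` together, so an extreme p-value flags the model as a whole, not any one component.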

General agreement

Dan Navarro points out that real scientific progress can have a Kuhnian flavor: a paradigm is set up, researchers work within that paradigm, estimating parameters and falsifying little models within the paradigm, but it takes a big jump to move to the new paradigm where the data can be fit more reasonably (without the equivalent of epicycles).

This sounds reasonable to me.

But some disputes remain

The main point where I disagree with many Bayesians is that I do not think that Bayesian methods are generally useful for giving the posterior probability that a model is true, or the probability for preferring model A over model B, or whatever. Bayesian inference is good for deductive inference within a model, but for evaluating a model, I prefer to compare it to data (what Cox and Hinkley call “pure significance testing”) without requiring that a new model be there to beat it.
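As an illustration of pure significance testing in this sense, here is a small sketch (with hypothetical simulated data) that asks only whether the data are consistent with a single null model; no alternative model is specified, and rejecting the model does not require having a better one in hand. The use of a Kolmogorov-Smirnov test here is my choice for the example, not something from the post.

```python
# Minimal sketch of a pure significance test (hypothetical data):
# compare the data to one fully specified model, with no alternative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(0, 1, size=100)   # data to be checked (simulated here)

# Null model: standard normal. The Kolmogorov-Smirnov test compares the
# empirical distribution of y with the model's CDF; a small p-value
# signals misfit with the model itself.
stat, p = stats.kstest(y, "norm")
print(p)
```

The design point is the one in the text: the test can falsify the model on its own terms, without a model B standing by to beat model A.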