P-p-p-p-popper

Seth writes:

I [Seth] have always been anti-Karl Popper. His ideas seemed to point in the wrong direction if you wanted to do good science. For example, his emphasis on falsification. In practice, quite often, I don’t “test” theories, I assess their value — their value in finding solutions to problems, for example. When I use evolutionary ideas to suggest treatments to try, I’m not testing evolutionary theory. Nothing I know of Popper’s work shows any sign he understood this basic point. As someone has said, all theories are wrong but some are useful.

I’ve discussed Popper quite a bit on this blog already (starting here) but wanted to add one thing to clarify, in response to Seth’s remark.

What’s relevant to me is not what Popper “understood.” Based on my readings, I think Lakatos understood things much better, and in fact when I speak of Popperian ideas I’m generally thinking of Lakatos’s interpretation. (Lakatos himself did this, referring to constructs such as Popper_1 and Popper_2 to correspond to different, increasingly sophisticated versions of Popperianism.)

What’s relevant to me is not what Popper “understood” but what he contributed. I think his ideas, including his emphasis on falsification, have contributed a huge amount to our understanding of the scientific process and have also served as a foundation for more sophisticated ideas such as those of Lakatos.

When considering contributors to human knowledge, I think it’s best to take an Earl Weaver-esque approach, focus on their strengths rather than their weaknesses, and put them in the lineup when appropriate. (As the publisher of two theorems, one of which is true, I have a natural sympathy for this attitude.)

Regarding the specific question of how Popper’s ideas of falsification relate to applied statistics (including the quote at the end of Seth’s comment), you can take a look at my 2003 and 2004 papers and my recent talk. The basic idea is that, yes, we know our models are wrong before we start. The point of falsification is not to discover that which we already know, but rather to reveal the directions in which our models have problems.
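The logic of such a check can be sketched in a few lines of Python. This is my own illustration, not code from the papers mentioned above: the heavy-tailed "data," the fitted normal model, and the max-|x| test statistic are all invented for the example.

```python
import random
import statistics

random.seed(1)

# Invented "data": mostly N(0, 1) with occasional draws from N(0, 5),
# standing in for a process with heavier tails than the model allows.
data = [random.gauss(0, 1) if random.random() < 0.9 else random.gauss(0, 5)
        for _ in range(200)]

# Fit a model we know is wrong before we start: a plain normal.
mu = statistics.fmean(data)
sigma = statistics.stdev(data)

# Predictive check: simulate replicated datasets under the fitted model
# and compare a test statistic (here, max |x|) against the observed data.
obs_stat = max(abs(x) for x in data)
rep_stats = [max(abs(random.gauss(mu, sigma)) for _ in range(len(data)))
             for _ in range(1000)]

p_value = sum(s >= obs_stat for s in rep_stats) / len(rep_stats)
print(f"observed max |x| = {obs_stat:.2f}, predictive p-value = {p_value:.3f}")
```

An extreme p-value here does not "discover" that the model is wrong (we knew that); the choice of test statistic, tail behavior in this sketch, is what points to the direction of the misfit.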

12 thoughts on “P-p-p-p-popper”

  1. Depends on who is the master – the writer or the reader (with apologies to Humpty-Dumpty).

    Outside math, we should know everyone will be wrong and try to learn from their particular _wrongness_.

    K

  2. Thanks, now I'll think of Lady Gaga every time someone mentions Popper. Ironically, I apply to Popper the same motto Seth used: Popper may have gotten everything wrong, but some of his ideas are useful. It bears noting that Vapnik reported that Popper greatly inspired him in conceiving Statistical Learning Theory. More recently he concluded that SLT is not exactly Popperian, but I think this story stands as a sign that Popper's ideas are still fruitful.

  3. Popper's ideas are so widely quoted and seem to allow such easy paraphrase that many seem to regard it as unnecessary to read him.

    Two ideas are often less appreciated than they deserve to be. First, Popper's emphasis on formulating theories in their strongest possible form for discussion undermines the image of him as advocating destructive criticism to the exclusion of other kinds of thinking. Second, that is also true of his emphasis on the best kind of refutation being one in which you can reject a theory by putting a better theory in its place.

    "The Logic of Scientific Discovery" is perhaps the best known, or the most widely heard of, of his works as far as many readers of this blog are concerned, but later works such as "Objective Knowledge" are often more rewarding.

    I'm surprised that the similarities between Popper and C.S. Peirce have not been mentioned yet.

  4. Popper simply talks about theories and methods being able to "divide" a dataset enough to be able to say something clearly. We encounter this in machine learning theory all the time; being able to "shatter" a hypothesis space, for example. I guess most of what he says comes naturally to most people now, but when he talked about these things, how many people truly understood how science worked? Basically, what I got from Popper is that a theory should not weasel its way out of every problem; it should take a clear stand and face the consequence of being false. That is the test of a good theory.
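    The shattering idea can be made concrete with a tiny sketch (my own illustration, using an invented one-sided-threshold hypothesis class): two points on the line admit four labelings, but thresholds achieve only three, and the unachievable labeling is exactly the "risky prediction" that makes the class falsifiable.

```python
from itertools import product

# Invented hypothesis class: one-sided thresholds h_t(x) = 1 if x > t else 0.
# Candidate thresholds below, between, and above these sample points
# cover every labeling the class can produce.
def achievable_labelings(points):
    thresholds = [min(points) - 1] + [p + 0.5 for p in points]
    return {tuple(1 if x > t else 0 for x in points) for t in thresholds}

points = [1.0, 2.0]
all_labelings = set(product([0, 1], repeat=len(points)))
missing = sorted(all_labelings - achievable_labelings(points))
print(missing)  # prints [(1, 0)]: the one labeling the class rules out
```

    A class that could shatter every sample would never take the "clear stand" described above; ruling out (1, 0) is what gives the class something to be wrong about.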

  5. "I think it's best to take an Earl Weaver-esque approach, focus on their strengths rather than their weaknesses, and put them in the lineup when appropriate."

    That's a great line!

  6. I have read both The Logic of Scientific Discovery and Objective Knowledge and enjoyed both.

    I have read Lakatos.
    But I do think Popper understood everything very well, although I agree with Andrew that this does not matter. What matters is the step forward we take after his ideas.

    I think the point Seth is making is really to do with pure science vs technological science.

    It is the difference between getting an estimate of an evolutionary rate and saying that this estimate is not significantly different from that of 500 BC, and that therefore evolution is not true. Popper's point is then to say: fine, but now you need to look for a fact (an observation with a 0/1 truth value) that one theory predicts and the other doesn't.

    A lot of science (especially biology) is about the relative sizes of possible causes, and this is much more useful than proving that in species X such-and-such never happens.
    This gives the more complex sciences less chance for paradigm changes of the kind produced by, say, the theory of relativity.

    In principle such measurements often *could* falsify a theory, but most of the time they do not and we lose sight of that fact.

  7. Nick – I believe Brent's biography of C.S. Peirce has a quote from Popper acknowledging Peirce's influence on his work.

    I would not be surprised if most of what Popper did was already in Peirce's writings (though Popper would be unlikely to have had the access to them that we have today).

    Two other points

    1. As Andrew points out here, people often find deficiencies in someone's work (or model) and then dismiss it almost totally. Peirce's admission to being a fallibilist did not protect his work from that.

    2. When, for instance, Ian Hacking claims that Peirce developed a philosophically sound theory of Neyman-Pearson confidence intervals, or Stephen Stigler (perhaps mostly in conversation) claims that Peirce developed Fisher's randomization test, others are flustered at not being pointed to a single well-developed paper. Unfortunately, Peirce's works are scattered and incomplete, so many different papers, and different versions of the same paper, need to be read carefully (an error-prone process).

    K
    p.s. Nick would you know of an accessible introduction to Peirce you could post?

  8. I am not very well read on Peirce. My impression is that his semi-popular essays are just as clear and engaging as any second-hand introduction or survey of his work.

    Beyond that, I am not clear that accepting Popper or Peirce as having something useful to say about the importance of criticism implies that the creative and the critical need be balanced in the same proportions in each of us. Manifestly, critical skills vary just about as much as anything else. But socially, someone who is very talented at spotting weaknesses in just about anything usually has a hard job making an intellectual career or maintaining good relationships with others in their own profession.

    More optimistically, there is perhaps something to the idea that everyone pushing their own ideas as hard as possible is precisely the best way for everyone to find out the limitations of those ideas, in due course, except that this can take a long while! I guess that's part of what Kuhn and Lakatos argued.

  9. I can recommend some beginning Peirce and also give you some guidance on where Peirce anticipates confidence intervals (for binomials). The best simple collection of Peirce materials is The Essential Peirce in two volumes (the link is to the first on Amazon). There are better collections and some hard to find pieces scattered about, but this is a great start for only about $25 a volume. What you'll want to read first is Peirce's book Illustrations of the Logic of Science, which was serialized in six papers/chapters (in Popular Science of all places) in 1877-1878. Specifically, consider Section IV of the paper/chapter "The Probability of Induction," where Peirce considers drawing balls from an urn filled with (a finite number of) white and black balls. He writes:

    "As we cannot have an urn with an infinite number of balls to represent the inexhaustibleness of Nature, let us suppose one with a finite number, each ball being thrown back into the urn after being drawn out, so that there is no exhaustion of them … It is found that, if the true proportion of white balls is p, and s balls are drawn, then the error of the proportion obtained by the induction will be–

    half the time within 0.477*sqrt(2p(1-p)/s)
    9 times out of 10 within 1.163*sqrt(2p(1-p)/s)
    99 times out of 100 within 1.821*sqrt(2p(1-p)/s)
    999 times out of 1,000 within 2.328*sqrt(2p(1-p)/s)" (EP vol. 1, 165-166).

    Peirce actually carries the table out a little further, but you get the idea.
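    If I'm reading the table right, Peirce's coefficients are the standard normal quantiles divided by sqrt(2), since he writes the bound with 2p(1-p)/s where modern notation uses p(1-p)/s. A quick check with the Python standard library (my own sketch, not from Peirce):

```python
from math import sqrt
from statistics import NormalDist

# Peirce's coefficients from the quoted table, keyed by coverage level.
peirce = {0.5: 0.477, 0.9: 1.163, 0.99: 1.821, 0.999: 2.328}

for coverage, coeff in peirce.items():
    # Two-sided standard normal quantile, divided by sqrt(2) because Peirce
    # writes the bound as sqrt(2p(1-p)/s) rather than sqrt(p(1-p)/s).
    z = NormalDist().inv_cdf((1 + coverage) / 2)
    print(f"{coverage}: Peirce {coeff}, z/sqrt(2) = {z / sqrt(2):.3f}")
```

    The four quantiles come out to about 0.477, 1.163, 1.821, and 2.327, so the 1877-1878 table agrees with the modern normal approximation to three decimal places.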

  10. Nick: nicely put "someone who is very talented at spotting weaknesses in just about anything usually has a hard job"

    Someone recently put it this way: with some individuals, when you first hear some of their questions, you think "they are very thoughtful," but after hearing a few more, "you don't want them anywhere near anything you are working on!"

    K

  11. Thanks Jonathan – would you happen to know of any entries that involve comparisons (especially randomized ones) – other than the Jastrow perception experiment?

    K

  12. K,

    I'm not entirely sure that I understand your question. But I'll attempt an answer anyway. Peirce makes comparisons in a lot of places, but usually in the scientific work he did (mostly gravimetric and photometric) for the U.S. Coast and Geodetic Survey (now the NOAA). Specifically, see "On the Ghosts in Rutherford's Diffraction-Spectra" and "Measurements of Gravity at Initial Stations in America and Europe," both of which are in Volume 4 of a new-ish edition of Peirce's Writings. I don't have the other volumes on hand to look for further examples, but my guess is that any of the later volumes will have examples of what you are looking for.

    As to randomization, Peirce says a good deal about it in theoretical contexts. You might look at Section III in "The Order of Nature," which is the fifth chapter/paper in Peirce's Illustrations and can be found in the first volume of the Essential Peirce.

    That said, I recommend looking at a very nice paper titled "<a href="http://books.google.com/books?id=V7oIAAAAQAAJ&printsec=frontcover&dq=peirce+johns+hopkins+logic&cd=1#v=onepage&q=&f=false" rel="nofollow">A Theory of Probable Inference</a>," which is free through Google Books. The paper is part of a collection titled Studies in Logic by Members of the Johns Hopkins University, which Peirce edited and contributed to in 1883. Section VII is especially instructive, and I think it bears on your question.

    If that doesn't turn out to be helpful — doesn't answer your question — or if you want to talk through something specific, or if you just want to share something interesting that you find, feel free to send me an email at jlive2003 at gmail dot com.

Comments are closed.