Where do theories come from?

Lee Sechrest sends along this article by Brian Haig and writes that it “presents what seems to me a useful perspective on much of what scientists/statisticians do and how science works, at least in the fields in which I work.” Here’s Haig’s abstract:

A broad theory of scientific method is sketched that has particular relevance for the behavioral sciences. This theory of method assembles a complex of specific strategies and methods that are used in the detection of empirical phenomena and the subsequent construction of explanatory theories. A characterization of the nature of phenomena is given, and the process of their detection is briefly described in terms of a multistage model of data analysis. The construction of explanatory theories is shown to involve their generation through abductive, or explanatory, reasoning, their development through analogical modeling, and their fuller appraisal in terms of judgments of the best of competing explanations. The nature and limits of this theory of method are discussed in the light of relevant developments in scientific methodology.

I found this very difficult to read and forwarded it to Cosma Shalizi, who writes:

Like a lot of what I read about abduction, it seems much more a theory (or sketch of a theory) of how scientists think, than of scientific method. Put another way, the H-D account of scientific method has always tended to “black-box” the issue of where hypotheses come from, in favor of what to do with them once you have them. I think this is usually helpful, but there’s no reason not to try to open up the black box, and study the origin of hypotheses; if there’s a role for abduction, it’s there, in explicating the “generate” part of generate-and-test. (In fact, if memory serves, Peirce later repented of his term “abduction”, and just called it “hypothesis” or “hypothesizing”.) If one could show, or even plausibly suggest, that certain modes of hypothesizing are systematically more reliable or fruitful than others, that would be extremely valuable. This paper in particular seems to have some odd confusions of levels between what are presumably fairly permanent parts of how scientists think (analogy), and current technological artifacts — I love bootstrapping, but it hardly belongs in the same category as a component of scientific method. (And as for stem-and-leaf plots…)

Sechrest wrote:

Too much research seems to be addressed to determining whether “it is” or “it isn’t.” The more important question very often is “Why is (or isn’t) it?” To me, abduction seems more likely to occur in the aftermath of having seen something. Isaac Asimov once said, “The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’”

And that is when abduction begins: the attempt to identify explanations and reason toward the best one. For example, at least some drug trials begin with seemingly sensible expectations that a drug will work; those expectations are often wrong. Usually that is the end of the matter. But, to me, an important question may well be “Why didn’t the drug work as expected?” (a “why isn’t it?” question). The abductive process cannot lead directly to a clear-cut answer, but it can get us closer. And that is what, I think, good scientists do. Ineffective scientists (and I have seen them many times) say, “Well, that didn’t work. Anybody got another idea?”

I have nothing to add to the above discussion, except to point to our recent discussion of the challenges of systematizing model building. As I see it, new ideas arise from anomalies in data with respect to existing theories.

12 thoughts on “Where do theories come from?”

  1. There are two parts to finding theories:

    (I) Find a set of relevant variables A, B, C, … .
    (II) Then find how they are related to each other: F(A, B, C, …) = 0.

    The second step is almost always many orders of magnitude easier than the first. If, for example, economists had variables C, D, … such that “GDP, inflation, C, D, …” were a good set of relevant variables, I have no doubt they would find the function F(GDP, inflation, C, D, …) = 0 connecting them in short order, and the subject wouldn’t be so ridiculous. Macroeconomists’ major unsolved problem is that they haven’t found C, D, … yet.

    So step (I), finding the set of relevant variables, is where all the theory-formation action is. And statistics has much more to say about this step than is commonly realized. In a way it’s amazing that you can ever find a small set of variables that are related to each other functionally. The only way a relation like F(A, B, C, …) = 0 can hold in practice is if almost any state of the universe leads to such a connection. If that weren’t true, then as the state of the universe changed you’d see that sometimes the relation held but sometimes it didn’t (i.e., the theory would be non-reproducible).

    In other words, if you think of the probability of a relation like “F(A, B, C, …) = 0” as the ratio of the number of states which make this relation hold to the total number of states, then this relation must be “highly probable.” So finding a set of relevant variables A, B, C, … amounts to finding variables that lead to “highly probable” relations; see the simulation sketch below. That’s one way to answer step (I). Note that the sum and product rules of probability theory are the key tools needed for counting the number of states and propagating those counts correctly.

    • Stereotypes aren’t hypotheses, they’re predictions. A stereotype doesn’t explain the outcome. And no, “some gene that might exist determines IQ” is not a useful explanation, since it doesn’t specify a specific mechanism that can be used to design a study which can separate the causal effect from confounding. Some people have trouble differentiating hypotheses and predictions, and what they engage in is not science (social or otherwise).
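
    Returning to the state-counting argument in the comment above, here is a minimal simulation sketch of that idea: draw many random “states,” and estimate the fraction in which a candidate relation F(A, B, C) = 0 holds. The variables, distributions, and tolerance are hypothetical choices for illustration, not anything taken from the comment. A “law-like” choice of the third variable satisfies the relation in essentially every state; an arbitrary choice almost never does.

    ```python
    # Minimal sketch (hypothetical setup): estimate the fraction of randomly
    # drawn "states" in which a candidate relation F(A, B, C) = 0 holds,
    # i.e. the state-count ratio the comment treats as a probability.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states = 100_000
    tol = 1e-6  # how close to zero counts as "the relation holds"

    # Each state is a random draw of the underlying quantities.
    x = rng.normal(size=(n_states, 3))
    A, B = x[:, 0], x[:, 1]
    C_lawlike = A + B      # chosen so that F(A, B, C) = A + B - C = 0 exactly
    C_arbitrary = x[:, 2]  # an unrelated third variable

    def fraction_holding(F_values, tol=tol):
        """Fraction of states in which the candidate relation is (numerically) zero."""
        return np.mean(np.abs(F_values) < tol)

    print(fraction_holding(A + B - C_lawlike))    # ~1.0: holds in essentially every state
    print(fraction_holding(A + B - C_arbitrary))  # ~0.0: holds only by coincidence
    ```

    On this reading, step (I) is the search for variables whose relation is insensitive to how the rest of the state is drawn; the tolerance stands in for measurement precision.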

  2. Peirce’s triad of abduction : deduction : induction is awkward and he once referred to himself as a triadomaniac.

    I often just put it as might be : must be : should be (with should meaning tentatively held as least wrong), or just 1, 2, 3.

    And it is always, as someone put it, done in a dance of 1, 2, 3 -> 1, 2, 3 -> 1, 2, 3 … (and on different levels: one level of might be, two levels of must be, and three levels of should be – level two of 3 being statistics with random sampling).

    As for Cosma’s point, Peirce often changed his terminology, but what might be interesting is that he claimed that once you thought about a might be (abduction) it’s gone (because you have started to think about what that might be would imply – the must be – and to assess whether it’s too wrong – the should be).

    So I think (I have not read this material since I prepared a talk on it for the Oxford Stats Dept in 2002) Peirce ended up thinking it was pre-conscious (something we evolved to do automatically, without awareness of how).

  3. I just came across this. I suppose I’m not too surprised that Peirce is being misunderstood, and that THE most fundamental points Peirce reiterated on these matters are not mentioned. Abduction/inference to the best explanation and the like do not give clues about where theories come from, true. But Peirce’s account of inductive testing does! (My own error statistical account owes much to Peirce.)

    “If one could show, or even plausibly suggest, that certain modes of hypothesizing are systematically more reliable or fruitful than others, that would be extremely valuable.”

    Isn’t this precisely what TRIZ addresses, at least for certain classes of problems?
