“To find out what happens when you change something, it is necessary to change it.”

From the classic Box, Hunter, and Hunter book. The point of the saying is pretty clear, I think: There are things you learn from perturbing a system that you’ll never find out from any amount of passive observation. This is not always true–sometimes “nature” does the experiment for you–but I think it represents an important insight.
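
As a toy illustration of the point (a hypothetical simulation of my own, not anything from Box, Hunter, and Hunter): suppose a hidden factor drives both who ends up treated and the outcome. Passively observing such a system gives a biased answer no matter how much data you collect, while actually changing the treatment recovers the true effect:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    true_effect = 1.0  # the causal effect of the treatment on the outcome

    # A hidden confounder drives both treatment uptake and the outcome.
    u = rng.normal(size=n)

    # Passive observation: treatment is self-selected, so it tracks u.
    x_obs = (u + rng.normal(size=n) > 0).astype(float)
    y_obs = true_effect * x_obs + 2.0 * u + rng.normal(size=n)
    naive = y_obs[x_obs == 1].mean() - y_obs[x_obs == 0].mean()

    # Perturbation: treatment assigned by coin flip, breaking the link to u.
    x_rct = rng.integers(0, 2, size=n).astype(float)
    y_rct = true_effect * x_rct + 2.0 * u + rng.normal(size=n)
    rct = y_rct[x_rct == 1].mean() - y_rct[x_rct == 0].mean()

    print(f"observational estimate: {naive:.2f}")  # about 3.3, badly biased
    print(f"randomized estimate:    {rct:.2f}")    # close to the true 1.0

The bias in the first estimate comes from the design, not the sample size; no amount of passive data collection makes it go away.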

I’m currently writing (yet another) review article on causal inference and am planning to use this quote.

P.S. I find it helpful to write these reviews for a similar reason that I like to blog on certain topics over and over, each time going a bit further (I hope) than the time before. Beyond the benefit of communicating my recommendations to new audiences, writing these sorts of reviews gives me an excuse to explore my thoughts with more rigor.

P.P.S. In the original version of this blog entry, I correctly attributed the quote to Box but I incorrectly remembered it as “No understanding without manipulation.” Karl Broman (see comment below) gave me the correct reference.

15 thoughts on ““To find out what happens when you change something, it is necessary to change it.””

  1. "This is not always true–sometimes "nature" does the experiment for you"

    Of course, once you get down to a small enough scale, it's always true (i.e., there is no such thing as "passive observation"). And I believe that insight is from Heisenberg.

  2. Science Pundit:

    An example of passive observation is our work in Red State, Blue State. We compare the voting patterns of people of different income levels and who live in different states. There is no experimentation or manipulation of any sort.

    An example of an experimental study is Ansolabehere and Iyengar's Going Negative, where they showed different sorts of campaign ads to people and then measured the opinions of those exposed to the different ads.

    An example of a natural experiment is Greg Huber's study where he compared people in different cities and states who were exposed to different ads. There is manipulation but it is not done by an experimenter.

  3. The closest I could find was agronomist MJT Norman:

    However, whereas understanding without manipulation is a legitimate scientific activity (though not, I believe, in agronomy), for agronomists to manipulate without understanding is in most circumstances unproductive.

    It's obscure enough that it seems unlikely to be the right source, but just in case it leads you in the right direction…

  4. It's not so concise (or catchy) as "No understanding without manipulation", but how about:

    "To find out what happens when you change something, it is necessary to change it."

    (Box, Hunter, and Hunter, Statistics for Experimenters; p. 475 in 1st edition; p. 404 in 2nd edition)

    There might be something that can be extracted from this brief note on "purposefulness" in empirical research:
    http://www.cspeirce.com/menu/library/bycsp/whatis

    Science Pundit: I liked de Saussure's quip better – "Economics is the science of how people value things – and if you write about that – you change the way people value things"

    He then left economics to study linguistics.

    K?

  6. There are earlier variants also, e.g.:

    "To find out what happens to a system when you interfere with it you have
    to interfere with it (not just passively observe it)."

    George E. P. Box
    Use and Abuse of Regression
    Technometrics, Vol. 8, No. 4 (Nov. 1966), pp. 625-629 (quotation on p. 629)

  7. "There are things you learn from perturbing a system that you'll never find out from any amount of passive observation. This is not always true–sometimes "nature" does the experiment for you–but I think it represents an important insight."

    I don't think I agree. 'Natural experiments' and treating observational studies as if they were experiments are basically just analogies, which are useful for seeing how you can apply certain mathematics to problems. But I do think that if you design an intervention a priori and implement it, you are in a very different epistemic situation than if you observe something and pretend post hoc that it was actually an experiment. The tendency is to conflate the two situations, but I think they're very distinct.

  8. Then there's that quip by (I believe) Mosteller, which I paraphrase: "The alternative to experimenting on people is fooling around with people."

  9. Alex:

    What about fairly clean natural experiments such as the Vietnam draft lottery or various regression discontinuities? In these cases, the treatment assignment is known and is based on known factors. I think that a purely algorithmic treatment rule, even if it is not assigned by a researcher, still has a lot in common with an experiment in which treatments are explicitly assigned. (A toy sketch of such a rule appears at the end of this comment.)

    I agree that in other instrumental-variable settings the "natural experiment" interpretation is not so clear. But it seems to me to be a sliding scale, or a slippery slope, not a sharp division between experiments and non-experiments.

    Mike:

    I like that Mosteller quote. I'd never heard it before.
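
    To make the "algorithmic treatment rule" point concrete, here is a minimal simulation sketch (a hypothetical illustration, not from the thread; the cutoff rule and all names are assumptions) of a sharp regression discontinuity, where treatment is assigned deterministically by a known score rather than by a researcher:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 50_000
        true_effect = 0.5

        # Known running variable (e.g., a lottery number or eligibility score).
        score = rng.uniform(-1, 1, size=n)
        treated = (score >= 0.0).astype(float)  # deterministic assignment rule
        # The outcome varies smoothly with the score, plus a jump at the cutoff.
        y = 2.0 * score + true_effect * treated + rng.normal(scale=0.5, size=n)

        # Units just on either side of the cutoff are nearly identical except
        # for treatment status, so local linear fits on each side, evaluated
        # at the cutoff, recover the effect.
        h = 0.2  # bandwidth around the cutoff
        left = (score >= -h) & (score < 0)
        right = (score >= 0) & (score < h)
        y_left = np.polyval(np.polyfit(score[left], y[left], 1), 0.0)
        y_right = np.polyval(np.polyfit(score[right], y[right], 1), 0.0)
        print(f"RD estimate at cutoff: {y_right - y_left:.2f} (true {true_effect})")

    No one randomized anything here, but because the rule is known and depends only on a known score, the comparison near the cutoff behaves much like an experiment.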

  10. Mike: I also like the Mosteller quote but could imagine him saying "The alternative to experimenting on people is fooling people into believing you did not have to."

    Also "a purely algorithmic treatment rule" if its _purpose_ was similar to random treatment assignment (e.g. fair assignment) would _pragmatically_ be the _same_.

    There is a large discussion of this (treatment alternation versus random assignment), echoing the arguments between Student and Fisher, at the James Lind Library – with a recent note by David Cox.

    K?

  11. Then there is this:

    What Social Science Does—and Doesn’t—Know: Our scientific ignorance of the human condition remains profound. Jim Manzi, City Journal 20(3), 2010
    http://www.city-journal.org/2010/20_3_social-scie

    . . .
    But clinical trials place an enormous burden on being sure that the treatment under evaluation is the only difference between the two groups. And as experiments began to move from fields like classical physics to fields like therapeutic biology, the number and complexity of potential causes of the outcome of interest—what I term “causal density”—rose substantially. It became difficult even to identify, never mind actually hold constant, all these causes.
    . . .

    Gary Jones writes of this in his "Muck and Mystery" blog (http://www.garyjones.org/mt/):

    "This is an overly optimistic account due to ignorance. For example, the assumption of uniform biological response is deeply mistaken. It fails in static trials since variation is so high and utterly falls apart when systems dynamics are considered since unlike physical system biological systems rapidly adapt and/or evolve. The deadly dietary guidelines discussed recently are an example. Obviously, when free agency is layered on top of biological systems the even more rapid adaptation and evolution of intelligence utterly explodes experimental analysis. We will deal with this in physical systems too when they gain intelligence. Cat herding."

  12. Harold Jeffreys's statistical work grew out of deep and extensive studies in observational geophysics. His remarks on experiments express a complementary view:

    "[The] possibility of control over the initial conditions constitutes the difference between experiment and observation. It is a difference of technique, and not of principle." Scientific inference (Cambridge University Press, London, 1931 edition, p.208)

    "The control of the experiment is not a matter of principle; it is essential that, once the experiment is started, nature is left to take its course, and this is true whether the initial conditions are set by man, or whether he has to do his best with what nature provides." Scientific inference (Cambridge University Press, London, 1957 edition, p.192; 1973 edition, p.204)

  13. Anon: "being sure that the treatment under evaluation is the only difference between the two groups"

    That's the purpose of _randomized_ clinical trials.

    And "uniform biological response" was an overly convenient/hopeful assumption that is "is deeply mistaken" and more and more being recognized as such (at least in some clinical trails circles)

    K?
