The intervention and the checklist: two paradigms for improvement

I’m working on a project involving the evaluation of social service innovations, and the other day one of my colleagues remarked that in many cases we really know what works; the issue is getting it done. This reminded me of a fascinating article by Atul Gawande on the use of checklists for medical treatments, which in turn made me think about two different paradigms for improving a system, whether it be health, education, services, or whatever.

The first paradigm–the one we’re taught in statistics classes–is of progress via “interventions” or “treatments.” The story is that people come up with ideas (perhaps from fundamental science, as we non-biologists imagine is happening in medical research, or maybe from exploratory analysis of existing data, or maybe just from somebody’s brilliant insight), and then these get studied (possibly through randomized clinical trials, but that’s not really my point here; my real focus is on the concept of the discrete “intervention”), and then some ideas are revealed to be successful and some are not (with allowances taken for multiple testing or hierarchical structure in the studies), and the successful ideas get dispersed and used widely. There’s then a secondary phase in which interventions can get tested and modified in the wild.

The second paradigm, alluded to by my colleague above, is that of the checklist. Here the story is that everyone knows what works, but for logistical or other reasons, not all these things always get done. Improvement occurs when people are required (or encouraged or bribed or whatever) to do the 10 or 12 things that, together, are known to improve effectiveness. This “checklist” paradigm seems much different than the “intervention” approach that is standard in statistics and econometrics.

The two paradigms are not mutually exclusive. For example, the items on a checklist might have had their effectiveness individually demonstrated via earlier clinical trials–in fact, maybe that’s what got them on the checklist in the first place. Conversely, the procedure of “following a checklist” can itself be seen as an intervention and be evaluated as such.

And there are other paradigms out there, such as the self-experimentation paradigm (in which the generation and testing of new ideas go together) and the “marketplace of ideas” paradigm (in which more efficient systems are believed to evolve and survive through competitive pressures).

I just think it’s interesting that the intervention paradigm, which is so central to our thinking in statistics and econometrics (not to mention NIH funding), is not the only way to think about process improvement. A point that is obvious to nonstatisticians, perhaps.

5 thoughts on “The intervention and the checklist: two paradigms for improvement”

  1. Well, one reason for this approach in economics is the assumption in the discipline that there are no free lunches, so a "management consulting" approach shouldn't get you much.

  2. Or it's all put together, as in
    "Looking inside the black box: a theory-based process evaluation alongside a randomised controlled trial of printed educational materials (the Ontario printed educational message, OPEM) to improve referral and prescribing practices in primary care in Ontario, Canada"

    In particular, studying how to get physicians to do what the evidence suggests they do is currently a hot area of research involving all methods/approaches.

    My favourite case was a pilot for an RCT on a method to reduce surgery "no shows" where they telephoned prospective patients about their willingness to be in the study – and none of them missed their surgeries (i.e. a telephone reminder eradicated the problem). Now I am unsure whether the funds for the RCT were returned, but the trial was deemed unnecessary.

  3. The checklist seems related to the business concept of "best practice" [ see http://en.wikipedia.org/wiki/Best_practice ].

    Common in both cases is the idea that if we applied existing knowledge in some coherent way, with a bit of monitoring whether we were actually doing stuff (checking it off as we do it), we could achieve substantial improvement even before further investigation.

    As the wiki article notes, "best practice" has seldom been demonstrated to be best; "better practice" would be a better name.
