Why do we never see a full decision analysis for a clinical trial?

Peter Thall writes:

Some years ago, after I gave a talk at Columbia that you attended, you told me that you would like to see a decision-theoretic analysis formulated and carried out to completion by Donald Berry, who had been quite vocal for some time about the importance of such a “fully Bayesian” analysis.

I do not work with Berry. But, in recent years I have begun to do utility-based clinical trial design. The trial described in the attached paper [by Thall and Hoang Nguyen] enrolled its first child very recently. While the methodology is not terribly sophisticated, I consider this to be one of the most ethical trials that I have designed. The utility was elicited from the two trial PIs. When we had the pre-trial start-up meeting, a third oncologist looked hard at the utility table, and he said that he agreed completely with the numerical utilities.

I doubt that this trial will cure this type of brain tumor, but I do think that the design gives the children a better chance than they would get otherwise.

Still, I think we can improve this methodology by adding some refinements, and we are working on them now.

I indeed have long been interested in seeing a formal analysis balancing the goals of medical research, reducing future mortality and morbidity, saving cost, and reducing the risk to experimental subjects. The risks to present patients are balanced against the potential future gains over the lifetime of the new procedure. But these tradeoffs always seem to be left implicit; it’s hard to find an example where the costs and benefits are actually quantified. So I appreciate that Thall sent me this note.

8 thoughts on “Why do we never see a full decision analysis for a clinical trial?”

  1. One reason decision theory is not used more often is lack of agreement on utilities. I’ve never seen anything bring up such petty squabbles. And when people cannot agree on an explicit utility function, they settle by default on an implicit utility function chosen for mathematical convenience in the derivation of a standard method. It may be that the explicit utilities were all closer to each other than to the implicit utility that wins.

    In short, the problem with decision theory is political, not technical. Peter did well to not ask for too many opinions, and he was fortunate to get such agreement.

  2. I thought Peter Thall had been doing utility-based trials for years, with EffTox trials depending on a tradeoff between safety and efficacy.

    At any rate, the main resistance I’ve run into against utility-based decision-making is sponsors wanting not to be held to some decision process. I guess Peter deals more with investigators than sponsors, though.

  3. > formal analysis balancing the goals of medical research, reducing future mortality and morbidity, saving cost, and reducing the risk to experimental subjects

    That was the first NIH grant I was on and why the group worked on meta-analysis methodology (to get empirical priors for effect sizes). The PI for some reason never published much from it.
    We did provide the NIH with a program (it might still be available) that calculated cost-effectiveness based on, for example, the impact of various sample sizes on the costs and effects both in the experiments and in the eventual total treated population, using assumptions about adoption rate, the life span of the treatment, etc.

    The main finding was that what happened in the experimental trials was swamped by the impacts on the treated population, so it was always cost-effective to use large sample sizes (an initial paper title was “Clinical trials are cheap!”). A stylized version of that calculation, with made-up numbers, is sketched at the end of this comment.

    I can’t remember if we put (ethical) bounds on the risks patients in the trials could be exposed to (maximum allowable negative utilities in the experiments), but that would have been a good idea (and that’s where all those arguments would come up!).

    One thing that is very tricky is that if the methods are perceived as too novel and not well understood, the adoption of an effective treatment can be slowed considerably, with a huge negative impact on patients and costs in the population that should by then be receiving it.
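
    Here is a minimal sketch, in Python, of the kind of calculation that program performed; every number below (costs, effect size, adoption horizon) is an assumption for illustration, not a value from the original grant.

    ```python
    # Stylized cost-effectiveness calculation: trial cost grows with sample size,
    # but the expected value in the eventual treated population dwarfs it.
    # All numbers below are illustrative assumptions.
    import math

    def prob_correct_adoption(true_effect, sd, n_per_arm):
        """P(a two-arm trial detects the better treatment), one-sided z-test at 1.96 (assumed rule)."""
        se = sd * math.sqrt(2.0 / n_per_arm)
        z = true_effect / se
        return 1.0 - 0.5 * (1.0 + math.erf((1.96 - z) / math.sqrt(2.0)))

    cost_per_patient    = 20_000    # trial cost per enrolled patient (assumed)
    benefit_per_patient = 0.3       # QALYs gained per future patient if the better arm is adopted (assumed)
    value_per_qaly      = 50_000    # monetary value per QALY (assumed)
    treated_per_year    = 50_000    # future patients per year (assumed)
    adoption_years      = 10        # "life span" of the treatment (assumed)
    true_effect, sd     = 0.2, 1.0  # standardized effect and outcome SD (assumed)

    for n in (100, 500, 2_000, 10_000):
        trial_cost = 2 * n * cost_per_patient
        p_adopt = prob_correct_adoption(true_effect, sd, n)
        population_value = (p_adopt * treated_per_year * adoption_years
                            * benefit_per_patient * value_per_qaly)
        print(f"n/arm={n:6d}  trial cost=${trial_cost/1e6:6.1f}M  "
              f"P(correct adoption)={p_adopt:.3f}  "
              f"expected population value=${population_value/1e9:5.2f}B")
    ```

    Even at 10,000 patients per arm, the trial cost is a small fraction of the expected value in the treated population, which is the “clinical trials are cheap” point.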

  4. Is it perhaps the over-emphasized (and often fallacious) demand that science remain objective that makes people squeamish about doing decision analyses?

    It seems to me that traditional significance testing actually serves as an absurdly crude substitute for decision analysis. Logically, science wants to update probabilities. The idea of using some sharp criterion to decide whether or not a discovery has been made has no logical basis. Binary statements such as those that significance tests want to provide are useful only for determining courses of action. (And by the way, those statements should not be of the form ‘A is true,’ but rather ‘we should act as if A is true.’)

    It seems that in more ways than one, the frequentist pioneers wanted to have their cake and eat it: while they claimed that the probability of a hypothesis being true does not exist, they proceeded to use probabilistic techniques anyway. Similarly, they insisted that science must be fully objective (ruling out the possibility of specifying a loss function), yet they tried (very unsuccessfully) to bake the decision statement directly into their analysis, by means of the significance criterion. Is the lack of decision analysis a legacy of this deep philosophical confusion?

  5. Decision analysis, often based on a (reasonably) “full” Bayesian analysis with costs and utilities, forms the basis for many reimbursement decisions by the National Institute for Health and Clinical Excellence (NICE) in the UK. (The NICE Appraisals Committees have been described as “death panels” by certain politicians in the US …). Some of these analyses are built around single trials, although it is more usual to see an “evidence synthesis” approach.

    There is an accepted method for assigning utilities to health states. This is recognized as imperfect, but “squeamishness” has long been overcome.

    Furthermore, there is a growing literature on the use of Expected Value of Information theory applied to research prioritisation and to trial design, which builds on the same approach to utilities. This was introduced into the environmental health risk literature by Kim Thompson and then into medical decision making especially by Karl Claxton.

    True, not many trials have been designed based on these decision-theoretic ideas, perhaps because the ethical basis for randomisation founded on clinical “equipoise” can no longer apply. But there are certainly cases where proposed trials have NOT been funded because EVI analyses had shown there would be no Expected Net Benefit; a stylized sketch of that kind of calculation follows below.
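
    As a minimal sketch of the EVI idea (with assumed numbers, not an actual NICE analysis): the Expected Value of Perfect Information is the gap between the net benefit of deciding under current uncertainty and the net benefit of deciding with that uncertainty resolved, and a trial whose cost exceeds the population-scaled EVPI cannot have a positive Expected Net Benefit.

    ```python
    # Monte Carlo sketch of Expected Value of Perfect Information (EVPI).
    # All numbers are illustrative assumptions.
    import random

    random.seed(1)
    value_per_qaly = 20_000   # willingness to pay per QALY (assumed)
    extra_cost     = 5_000    # incremental cost of the new treatment (assumed)
    n_sims         = 200_000

    def net_benefit(qaly_gain):
        """Incremental net monetary benefit of the new treatment vs. standard care."""
        return value_per_qaly * qaly_gain - extra_cost

    # Prior uncertainty about the incremental QALY gain (assumed; harm is possible).
    draws = [random.gauss(0.2, 0.3) for _ in range(n_sims)]

    # Decide now: adopt the new treatment only if its expected net benefit is positive.
    nb_adopt   = sum(net_benefit(d) for d in draws) / n_sims
    nb_current = max(nb_adopt, 0.0)

    # With perfect information we could choose the better option for each realization.
    nb_perfect = sum(max(net_benefit(d), 0.0) for d in draws) / n_sims

    evpi = nb_perfect - nb_current
    print(f"EVPI per patient: {evpi:,.0f}")
    print(f"Population EVPI (10 years x 20,000 patients/year): {evpi * 10 * 20_000 / 1e6:,.0f}M")
    ```

    The Expected Value of Sample Information for a specific trial design would be smaller than this EVPI bound; comparing that value with the trial’s cost is what drives the funded/not-funded decisions mentioned above.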

    • Thanks Tony.

      Here, though, it might be worth distinguishing “getting evidence that something is positive rather than negative” from “having evidence that something is positive but asking whether it is worth the costs/side effects.” (I believe NICE focusses mostly on the second.)

      Once you are asking whether something that has evidence of a benefit is worth it, decision analysis comes to be seen as a necessary evil that has to be endured (except for certain US defined benefits, where such a cost/benefit analysis is prohibited by law).

      On a historical note, David Cox said that Yates did this kind of sample-size decision-utility analysis before the 1950s to try to get more funding for Rothamsted Experimental Station (an outline is given in Cox and Hinkley, Theoretical Statistics, starting on page 451). So I think Andrew has good reason to wonder why it has not been used that much.
