Medical decision making under uncertainty

Gur Huberman writes:

The following crossed my mind after a recent panel discussion on evidence-based medicine in which David Madigan participated. The panel—especially John Ioannidis—sang the praises of clinical trials.

You may have nothing wise to say about it—or pose the question to your blog followers.

Suppose there’s a standard clinical procedure to address a certain problem. (Say, a particular surgery.)

A physician has an idea for an alternative which he conjectures will deliver better results. But of course he doesn’t know before he has tried it. And he cannot possibly be aware of all the possible costs to the patient (failures, complications etc.). But then, when experienced, this physician and others may improve the procedure in the future.

How should he go about suggesting the alternative procedure to a patient?

The question applies to pilot patients—the first ones—and, assuming that the procedure was successful on a handful of pilot patients (by what criteria?), the question applies to setting up a clinical trial.

My reply: One thing I’ve written about occasionally is that I’d like to see some formal decision analyses balancing the costs and benefits to the existing patients of trying an experimental treatment against the larger costs and benefits to the population if the new treatment is deemed effective and used more generally. I’d think this sort of calculation would be essential in deciding things such as rules for when to approve new treatments, but I’ve never really seen it done. I’ve seen some decision analyses regarding screening for diseases (whether screening should be done, who should be screened, how often it should be done, etc.) but not on the question of when to approve a procedure, when to declare victory and say everyone should get it, or when to declare defeat and stop trying it on people.

35 thoughts on “Medical decision making under uncertainty”

  1. Not to give too much away, but this is the general type of decision I’m working on modeling for my dissertation. (Different applications.)

    If you have a structural model of the effect, with reasonable priors on the various factors, you can do preposterior analysis. Using standard techniques from statistical decision theory, following DeGroot, you can find optimal decision policies. I’m hoping to use Stan, similar to what Diaz and Frances did in BUGS (doi:10.1287/ited.2013.0120).

    The tricky part, especially in this type of situation, is defining your value function and your priors. By picking those unfairly, you could easily dictate the decision. In medical work, given heightened sensitivities around patient-centered decision making and power imbalances, I’m uncertain how this would be navigated.
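
    A minimal sketch of what such a preposterior calculation can look like (every number here is hypothetical, not from any actual dissertation model): simulate study data from the prior predictive, take the optimal decision given each simulated dataset, and compare the resulting expected utility against deciding now.

    ```python
    # Minimal preposterior sketch; all numbers are hypothetical.
    import numpy as np

    rng = np.random.default_rng(1)

    u_standard = 0.0                  # utility of the standard procedure (known)
    prior_mean, prior_sd = -0.1, 1.0  # prior on the new procedure's net effect
    sigma = 2.0                       # known per-patient outcome noise
    study_cost = 0.05                 # cost of running the study, in utility units

    def value_of_study(n, sims=20_000):
        """Expected utility of deciding after a hypothetical study of size n."""
        theta = rng.normal(prior_mean, prior_sd, sims)   # plausible true effects
        ybar = rng.normal(theta, sigma / np.sqrt(n))     # simulated study means
        # Conjugate normal model, so the posterior mean has a closed form.
        w = (n / sigma**2) / (n / sigma**2 + 1 / prior_sd**2)
        post_mean = w * ybar + (1 - w) * prior_mean
        # Policy: adopt the new procedure iff its posterior mean beats standard.
        utility = np.where(post_mean > u_standard, theta, u_standard)
        return utility.mean() - study_cost

    value_now = max(u_standard, prior_mean)  # best we can do with no new data
    for n in (10, 50, 200):
        print(n, value_of_study(n) - value_now)  # net gain from running the study
    ```

    (If the model weren’t conjugate, the post_mean line is where draws from Stan or BUGS would slot in.)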

  2. I can’t remember where I read it, but I recently came across an article describing a paper suggesting that the evidence needed for new treatments to be used (or maybe approved) should be linked to the prognosis for the disease. Currently the FDA, or whichever body it is, sticks to the usual ‘we want p<.05’-type rule, but the authors argue that treatments for something like pancreatic cancer, which has a 6% survival rate, should be approved under a much less stringent p-value threshold, like 0.3. Other diseases with better outcomes would need stronger evidence before moving forward. The article said that the FDA does take this into account somewhat in its process, and I’m sure they can do better than p values, but at least it sounds like someone is trying to run numbers on the issue.

    • I think the better framing is how we construct a value function for, say, QALYs in cases of terminal disease; we may be less risk averse in those cases. Perhaps a 1% chance of a 10-QALY improvement can justify a 99% chance of −1 QALY from failed treatment, despite a negative expected value. (This justifies the treatment for an individual; we still need to account for the value of the information gained, and for optimal stopping as we form better Bayesian estimates of effect sizes over the set of cases.)
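
      To make those numbers concrete (a toy calculation, everything hypothetical): the expected value is 0.01 × 10 + 0.99 × (−1) = −0.89 QALYs, yet a patient with little to lose may still prefer the gamble, which a threshold-style utility over outcomes can capture:

      ```python
      # Toy version of the comment's numbers; all values hypothetical.
      p_success, gain, loss = 0.01, 10.0, -1.0

      expected_qalys = p_success * gain + (1 - p_success) * loss
      print(expected_qalys)  # -0.89: negative in expectation

      # A patient who only values a large improvement (e.g., a real cure)
      # has a threshold-like utility, under which the gamble beats
      # declining treatment (utility 0).
      def utility(qalys, threshold=5.0):
          return 1.0 if qalys >= threshold else 0.0

      eu_treat = p_success * utility(gain) + (1 - p_success) * utility(loss)
      print(eu_treat)        # 0.01 > 0.0, the utility of no treatment
      ```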

      • QALYs are kind of irrelevant if your medicine works like a “parachute”. For prostate cancer, QALYs are very important, since treatment is invasive and the time from onset of disease to death by prostate cancer is long (so you could die of other causes first). In pancreatic or lung cancer, intervention vs. no intervention is almost like “jumping out of a plane with a parachute vs. no parachute”. In such a situation QALYs are not that important, although they always matter if you are comparing different drugs with similar efficacies…

  3. There is a lot of interest and work on this kind of question under the general rubric of “Value of Information analysis”. Those interested may want to look at the journal Medical Decision Making (published by the Society for Medical Decision Making; full disclosure: I am the editor-in-chief of this journal) or Value in Health (published by the International Society for Pharmacoeconomics and Outcomes Research).

    • See for example Losina E et al. Defining the Value of Future Research to Identify the Preferred Treatment of Meniscal Tear in the Presence of Knee Osteoarthritis. PLoS One. 2015 Jun 18;10(6):e0130256. doi: 10.1371/journal.pone.0130256. PMID: 26086246.

    • Alan: I always find this sort of thing interesting.

      An early instance of what is now called “Value of Information analysis” was Yates, F. (1952), “Principles governing the amount of experimentation in developmental work,” which David Cox once said was Yates’s attempt to successfully argue for more funding for the Rothamsted Experimental Station.

      Doing this same work for clinical trials was my first statistical job, under the direction of https://en.wikipedia.org/wiki/Allan_S._Detsky, and although the work was sophisticated for the time (1985) – using empirically based priors and decision analyses that included adoption rates of “proven” treatments – very little was published.

      When I pointed out the Yates paper to him, he did seem disappointed.

      At some point it got re-branded as “Value of Information analysis,” which, judging from your comment, is still popular among some academics.

      Finally, the interesting point (which I think Andrew is pointing to): why aren’t these considerations top of mind among statisticians and researchers? They should be, but for some reason the topic remains neglected.

      • Speaking as a young professional statistician:

        From my coursework, I’ve only encountered decision analysis when it came to assessing estimators: minimax, Bayes risk, etc.

        In a non-academic setting, formal risk analysis at least has some sort of assignable monetary value that comes into play.

        These types of cost-outcome scenarios just didn’t find a place in the statistics curriculum, and my stats department is quite progressive.

      • I think there are two reasons why there are not very many VOI analyses:

        1. Until recently it was pretty difficult computationally. To calculate the quantity we would like to know – the improvement in expected wellbeing generated by a hypothetical study (known within the field as the expected value of sample information, EVSI) – you were running a huge number of simulations, generally a Monte Carlo simulation in which each step required you to update the posterior, which you might not have in closed form, so you had an MCMC going on in each iteration of your outer loop. Today’s computing power can help, but your model might be pretty complicated, so you can still hit a wall. However, in the last 12 months a number of approximation approaches have been published.
        This is mine: http://mdm.sagepub.com/content/early/2015/04/24/0272989X15583495.abstract
        This is another which is even more straightforward to implement: http://mdm.sagepub.com/content/35/5/570.abstract
        So computation is no longer the bottleneck.

        2. In many cases it is hard to describe all the possible ways in which a particular research result might affect decision-making (and in so doing influence wellbeing). The standard approach is to focus on the implications of one or several different study designs for a single policy decision. But in reality we should be considering the influence of our candidate study design(s) on *all* the policy decisions they might influence. And that would be a hell of a job – could you even create the list of decisions that might be affected? Maybe for some select examples, but I think it is a huge challenge.

        If it were just the first problem, the field of VOI might change in the same way Bayesian stats has over the past 30 years –> from a bunch of theoretical papers when the calculations were just too awful to consider, to a bunch of applied papers after computing power relaxed that constraint. However, until the second problem is resolved I don’t think current VOI approaches answer the questions we want them to answer.
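
        For concreteness, here is a minimal nested-Monte-Carlo EVSI sketch (a hypothetical beta-binomial adoption decision, not from either linked paper). With a conjugate model the inner posterior update is closed form; with a non-conjugate model each outer draw would need its own MCMC run, which is exactly the wall described above:

        ```python
        # EVSI by nested Monte Carlo for a hypothetical two-arm adoption decision.
        import numpy as np

        rng = np.random.default_rng(7)

        p_standard = 0.5     # assumed known success rate of standard care
        a, b = 2.0, 2.0      # Beta prior on the new treatment's success rate

        def evsi(n_trial, outer=50_000):
            theta = rng.beta(a, b, outer)             # prior draws of the true rate
            successes = rng.binomial(n_trial, theta)  # hypothetical trial results
            # Conjugate update, so no inner MCMC is needed here.
            post_mean = (a + successes) / (a + b + n_trial)
            # Value with sample information: adopt iff posterior mean beats standard.
            v_sample = np.where(post_mean > p_standard, theta, p_standard).mean()
            # Value of deciding now, on the prior mean alone.
            v_now = max(a / (a + b), p_standard)
            return v_sample - v_now

        for n in (20, 100, 500):
            print(n, round(evsi(n), 4))  # information value grows with trial size
        ```

        The approximation papers linked above exist precisely to avoid paying for that inner posterior update at every iteration of the outer loop.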

        • > 2. In many cases it is hard to describe all the possible ways
          Just leave out the stuff that’s not important and make sure to include what is important ;-)

          This does seem a general problem of representing reality – the value of representing reality as best one can depends on many things.

  4. As Alan notes, these questions are what ‘Value of Information’ analysis is intended to address.

    A ‘lay’ introduction from Health Affairs: http://content.healthaffairs.org/content/24/1/93.full

    An application of VOI: http://www.sciencedirect.com/science/article/pii/S014067360209832X

    Influential article from 2004: http://mdm.sagepub.com/content/24/2/207

    This editorial touches on some of the barriers to uptake: http://mdm.sagepub.com/content/35/5/564.long

    • I disagree that this is simply VOI – though that is the place to start. At the very least, this is also an embedded optimal stopping problem, for deciding when we should no longer try the treatment. Further, in this type of decision we need to account for multiple factors, such as alternatives and prior beliefs. This requires a more general preposterior analysis framework using statistical decision theory, as explained by Pratt and Raiffa. Unfortunately, these analyses become mathematically intractable with the closed-form methods they discuss, especially when we have model uncertainty and structural uncertainty – and they would benefit from a numerical Bayesian inference engine like BUGS, JAGS, or Stan, which are not typically used for this type of work. (Yet – growth mindset!)

      As an additional problem, the general VOI framework doesn’t address ethical constraints. To explain: we might have a risky treatment for a minor ailment that may have side effects. We can find that there is positive VOI from trying the treatment on an individual, but that the individual has a negative expected gain from it. That makes the trial unethical, even if it has a net societal gain.
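
      A toy version of the embedded stopping problem (a myopic one-step-lookahead heuristic, not the full dynamic program; all numbers hypothetical): keep treating patients while the information value of one more observation, scaled by the future population, exceeds the expected cost to the next patient.

      ```python
      # Beta-Bernoulli sketch of "when should we stop trying the new procedure?"
      # Myopic one-step-lookahead rule; every number here is hypothetical.
      import numpy as np

      rng = np.random.default_rng(3)

      p_std = 0.6        # known success rate of the standard procedure
      a, b = 1.0, 1.0    # Beta prior on the new procedure's success rate
      true_p = 0.7       # used only to simulate outcomes; unknown in practice
      horizon = 1000     # future patients affected by the adoption decision

      n_treated = 0
      while n_treated < 200:
          m = a / (a + b)                 # current posterior mean
          m_succ = (a + 1) / (a + b + 1)  # posterior mean after one more success
          m_fail = a / (a + b + 1)        # ... after one more failure
          # Myopic information value of one more observation:
          evsi_1 = (m * max(m_succ, p_std) + (1 - m) * max(m_fail, p_std)
                    - max(m, p_std))
          # Expected cost to this patient of getting the experimental procedure:
          patient_cost = max(p_std - m, 0.0)
          if horizon * evsi_1 < patient_cost:
              break                       # information no longer worth the risk
          outcome = rng.random() < true_p # treat the patient and observe
          a += outcome
          b += 1 - outcome
          n_treated += 1

      print(n_treated, "patients treated; adopt:", a / (a + b) > p_std)
      ```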

      • I’m not sure it matters whether we call ‘it’ VOI or something else, but this is what VOI is attempting to do. There is no shortage of papers using BUGS/JAGS in this field – see, e.g., some of Mark Strong’s work at Sheffield. The article linked above by Thom and Welton touches on some of the computational issues.

        I don’t follow your ethical problem. If we knew that individuals receiving a treatment would have a negative expected benefit, how would there be a societal benefit? VOI is about the value of obtaining more information, part of which is about the type of information and the means by which we would collect it – I don’t believe VOI would lead anyone to perform an unethical RCT (an intervention very likely to cause serious detriment to participants) purely for the sake of conclusively confirming that the intervention is harmful.

        • I hope you don’t mind me jumping in midstream.

          The assumed societal benefit would be for those suffering from the disease in the future who might be helped by the research being conducted now. I am not sure I understand why an incentive to increase the information gained from a study would not lead to decisions to gather additional information, even if there are negative effects on participants.

        • I was guessing at the ethical problem being described by David, and perhaps I’ve misrepresented it. The strawman I put forward to try to understand it was a case where VOI would be low because we were confident, ex ante, that the intervention would be detrimental, so any individual participating in, e.g., an RCT (David mentioned trials) would be harmed. A different issue, one commonly encountered in RCTs (to stick with that means of collecting more information), is that participants are exposed to some risk and some patients won’t benefit. This in itself does not constitute a set of reasons not to proceed with an RCT, so again I don’t see how VOI would be inconsistent with ‘ethical’ decision making. But I’m sure David will clarify the confusions I’ve probably just caused by trying to think about that point!

        • “This in itself does not constitute a set of reasons not to proceed with an RCT”

          This is a strong statement and one on which I do not think there is unanimous agreement. At what point does the level of negative effect become unethical? I cannot imagine it is ‘never’.

        • Many RCTs in healthcare involve a degree of risk to participants. I doubt there has ever been a pharmacological intervention in humans evaluated in an RCT that was without risk. Setting aside the issue of risk, there is no rationale for undertaking an RCT if the effects are known with certainty. RCTs are therefore about balancing risks of harm to individuals, and accepting that many people randomised to the intervention will receive no benefit.

          Ethics committees have the task of deciding these limits. The risk that a committee might accept in a trial of a late-stage metastatic cancer treatment is likely higher than the risk acceptable in a trial in a less consequential disease area.

        • It sometimes seems that those trying to minimize the potential for harm prefer using the term ‘no benefit’ rather than acknowledging that there are two distinct sets of participants who receive ‘no benefit’ : 1. Those who receive no benefit and are unharmed, and 2. Those who receive no benefit and are harmed. There is a reason ‘side effect’ reporting is mandatory for RCTs.

        • James said: “It sometimes seems that those trying to minimize the potential for harm prefer using the term ‘no benefit’ rather than acknowledging that there are two distinct sets of participants who receive ‘no benefit’: 1. Those who receive no benefit and are unharmed, and 2. Those who receive no benefit and are harmed. There is a reason ‘side effect’ reporting is mandatory for RCTs.”

          +1

        • If I have a weak prior that the net effect of a medication is slightly negative (reasonable, given that most don’t make it through clinical trials), I can still have a positive VOI from confirming or eliminating the potential that it is beneficial.

        • OK, but in what practical circumstance is that kind of reasoning likely to be relevant?

          “Hey guys, I have a drug I (weakly) believe to be harmful, so let’s try it out just to be sure”

          I don’t think that is justifiable ethically, and I don’t think it reflects the decision-making process during drug development. If most drugs don’t make it through a full set of staged trials, there is nevertheless a belief at the start of the process that there is potentially some merit. This belief will persist until there is a reason to abandon it. After the event it might be known that the net effect is slightly negative, but ex ante I don’t see how the situation you describe would ever arise.

        • It arises when the Bayesian prior on the net effect is broad. If you have a narrow prior that on some scale the net effect is normal(-1, 0.2), for example, you’re not likely to do much with that drug; but if your prior is normal(-1, 15), then you can interpret that as saying that on average you think the effect is negative, while being willing to entertain possibilities from very negative (-30 or so) to very positive (+30). The value of information comes in when you narrow things down so that, with better information, you are able to either drop the drug or pursue it further.
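
          A small calculation makes this concrete (pure illustration, using the two priors above): for an adopt-or-not decision whose payoff is the net effect theta, the value of perfect information is E[max(theta, 0)] − max(E[theta], 0), essentially zero under the narrow prior and large under the broad one:

          ```python
          # EVPI for "adopt the drug (payoff theta) vs. don't (payoff 0)"
          # under a normal(mu, sd) prior on the net effect.
          from math import erf, exp, pi, sqrt

          def Phi(x):   # standard normal CDF
              return 0.5 * (1.0 + erf(x / sqrt(2.0)))

          def phi(x):   # standard normal density
              return exp(-x * x / 2.0) / sqrt(2.0 * pi)

          def evpi(mu, sd):
              # E[max(theta, 0)] for theta ~ normal(mu, sd),
              # minus the value of the best decision made now.
              e_max = mu * Phi(mu / sd) + sd * phi(mu / sd)
              return e_max - max(mu, 0.0)

          print(evpi(-1.0, 0.2))   # ~0: narrow prior, nothing to learn
          print(evpi(-1.0, 15.0))  # ~5.5: broad prior, information is valuable
          ```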

  5. One place where formal cost/benefit analyses are seldom done, and where we unfortunately rely on heuristics with a dubious connection to the Cs and Bs, is the corporate world. Why, you might ask? Because in corporate analytics time is of the essence, and defining Cs and Bs takes time. So there is often a tradeoff between a rigorous estimate of the effects of treatments on the key performance indicators (KPIs) on the one hand, and a rigorous definition of the links between KPIs and utility on the other. In my experience, we’re rigorous about one or the other, rarely both. I really wish the norm were to be minimally rigorous about both. I think this tradeoff is analogous to the one in academia between measurement and parameter estimation as the technical focus.

    • Here’s the problem with the corporate tendency to worship at the altar of ‘speed’: we end up doing the wrong things and making the wrong decisions really fast, which accelerates and exacerbates the very problems we are trying to solve. Speed is important in many areas of business operation, and even in time-sensitive decision making. That said, getting the analytics right is essential, and there should be people working on the difficult but ‘slow’ parts as well as people working on the ‘fast’ parts. The real problem is thinking the people working on the daily processes can be the same people working on getting the analytics right; there are only so many hours in the day. Companies need both, and simply giving new software to more people will not solve the analytic problems most companies would benefit from solving.

      • To expand a bit: I do think the tradeoff mentioned by Brash is correct, but it is also important to remember that if we think we are measuring ‘a’ but are actually measuring ‘b’, we should not be surprised when we do not get the expected effect from our implementation of ‘a’.

  6. “A physician has an idea for an alternative which he conjectures will deliver better results. But of course he doesn’t know before he has tried it. And he cannot possibly be aware of all the possible costs to the patient (failures, complications etc.). But then, when experienced, this physician and others may improve the procedure in the future.

    How should he go about suggesting the alternative procedure to a patient?”

    Medical pilot studies have always worked something like this: the physician has an idea, which is sometimes kept secret but hopefully shared and discussed with colleagues, preferably in the public literature. It is generally frowned upon for an idea to go straight from one physician’s mind into general practice. The novel procedure is then attempted when the standard procedure cannot be used on a patient for some other reason (lack of supplies in an emergency, some co-morbidity, abnormal physiology, etc.).

    Clearly this pilot study will not be representative. However, it will allow preliminary assessment of failure modes, build experience implementing the procedure, alert us to possible harmful side effects, and give a sense of what degree of effectiveness may be possible. Essentially all the goals of a pilot study are achieved with no ethical issue. At least we know the procedure doesn’t kill everyone right away, etc.

    • I believe this is often referred to as an “n of 1” study. However, I’m not convinced that there is “no ethical issue” — such “pilot studies” can be done in ways that are ethical (e.g., fully informed consent) or unethical (e.g., no or inadequate informed consent).

        • I must have written something in a strange way. Of course, where possible, the procedure must be signed off on by the institution; this is true whether it is standard or not. However, there are emergency situations where the physician is given more leeway (battlefield, scene of an accident, etc.). I don’t understand why what I described is related to that.

      • It is essentially waiting for natural experiments to occur. I think of n-of-1 studies as following one person over time and assessing what happens after a series of alternative interventions. Informed consent is either no more of an issue than for standard care (discussing treatments for someone with some abnormal physiology) or not a factor (a soldier on a battlefield).

  7. Answer to the question in the main post: money! Give the patient honest odds and then buy his risk aversion. Usually, I understand, the medical profession does not pay patients for experimental surgeries, as opposed to drug trials, but it is always possible to find some uninsured or underinsured patient for whom free surgery and post-surgical care will be payment enough.
