Hey—here are some tools in R and Stan for designing more effective clinical trials! How cool is that?

In statistical work, design and data analysis are often considered separately. Sometimes we do all sorts of modeling and planning in the design stage, only to analyze data using simple comparisons. Other times, we design our studies casually, even thoughtlessly, and then try to salvage what we can using elaborate data analyses.

It would be better to integrate design and analysis. My colleague Sebastian Weber works at Novartis (full disclosure: they have supported my research too), where they want to take some of the sophisticated multilevel modeling ideas that have been used in data analysis to combine information from different experiments, and apply these to the design of new trials.

Sebastian and his colleagues put together an R package wrapping some Stan functions so they can directly fit the hierarchical models they want to fit, using the prior information they have available, and evaluating their assumptions as they go.

Sebastian writes:

Novartis was so kind as to grant permission to publish the RBesT (R Bayesian evidence synthesis Tools) R library on CRAN. It landed there two days ago. We [Sebastian Weber, Beat Neuenschwander, Heinz Schmidli, Baldur Magnusson, Yue Li, and Satrajit Roychoudhury] have invested a lot of effort into documenting (and testing) that thing properly, so if you follow our vignettes you get an in-depth tutorial on what we have crafted, how, and why. The main goal is to reduce the sample size in our clinical trials. To that end, the library performs a meta-analytic-predictive (MAP) analysis using MCMC. That MAP prior is then turned into a parametric representation, which we usually recommend "robustifying": adding a non-informative mixture component to ensure that we still get valid inferences if things go wrong. In fact, robustification is critical when we use this approach to extrapolate from adults to pediatrics. The reason to go parametric is that it makes the MAP prior much easier to communicate. Moreover, we use conjugate representations, so the library computes operating characteristics with high precision and at high speed (no more tables of type I error/power, but graphs!). So you see, RBesT does the job of forging a prior and then evaluating it before you use it. The library is a huge help to our statisticians at Novartis in applying the robust MAP approach in clinical trials.

Here are the vignettes:

Getting started with RBesT (binary)

RBesT for a Normal Endpoint

Using RBesT to reproduce Schmidli et al. “Robust MAP Priors”

Customizing RBesT plots
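
To give a flavor of the workflow Sebastian describes, here is a minimal sketch along the lines of the binary vignette, using the AS example data that ships with RBesT. The prior settings, sample sizes, and decision threshold below are illustrative choices on my part, not recommendations:

    library(RBesT)

    # MAP analysis: borrow strength across the historical control studies
    # in the AS dataset that ships with RBesT (columns: study, n, r).
    set.seed(34563)
    map_mcmc <- gMAP(cbind(r, n - r) ~ 1 | study,
                     data = AS, family = binomial,
                     tau.dist = "HalfNormal", tau.prior = 1,  # between-trial heterogeneity
                     beta.prior = 2)                          # prior sd of the intercept (logit scale)

    # Parametric mixture approximation of the MAP prior, then "robustify" it by
    # adding a weakly informative component (20% weight here, purely for illustration).
    map        <- automixfit(map_mcmc)
    map_robust <- robustify(map, weight = 0.2, mean = 0.5)
    ess(map_robust)   # effective sample size contributed by the prior

    # Operating characteristics of a two-arm trial: flat prior on the treatment arm,
    # robust MAP prior on the (smaller) control arm. Decision rule: declare success
    # when Pr(p_treatment - p_control > 0) > 0.975.
    success <- decision2S(pc = 0.975, qc = 0, lower.tail = FALSE)
    design  <- oc2S(mixbeta(c(1, 1, 1)), map_robust, 40, 20, success)
    design(0.25, 0.25)   # type I error when both response rates equal 0.25
    design(0.50, 0.25)   # power for a 25-percentage-point improvement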

4 thoughts on "Hey—here are some tools in R and Stan for designing more effective clinical trials! How cool is that?"

    • Thanks Ben – I was just coming here to add that link!

      trialr is my cookbook of Bayesian clinical trial designs implemented in Stan. It is small right now but I plan to grow it over time. I presented trialr at the International Society for Clinical Biostatistics (ISCB) Conference last week. I hope that by cataloguing trial designs I will increase the usage of both Stan and Bayesian stats in general in clinical trials.

      I love Stan and I make frequent use of rstantools, so thanks for publishing that.

      Kristian

  1. I love reading this level of model description: it has so much clarity. I'm drawn to descriptions of groups of equations and how they are constrained to create a specific response character (which we can then argue is accurate under the presumptions contained in the equations). It's much more interesting to me than, for example, pulling apart some economic terms strung together from which a relation is extracted and then applied backwards and/or forwards to test for predictive power. I haven't thought this through, but a group which generates a minimum is going to miss and to miss in various dimensions, so if you take raw n then you may add some factor to reach yet another level of assurance – black swan set aside in this case – but that's assuming raw n and so if you look at the quality of n, which in medical trials is very important, then you could be wrong enough without realizing you're wrong, unless you then apply this kind of analysis across a field parameterized using each of the measured internal variabilities of n, assuming those measures are recognizably accurate. Without looking at any code, any group of equations so aimed is going to have the issues inherent in groups, such as 'count of n' and 'variability within n' so the labeling in the 'count of n' is affected. It's hard to contain 2nd order iterative variability, which is of course where the black swan idea occurs. E.g., as the variability extends, that which can't be labeled reduces to something which can be counted in the context you are counting. This is a simple extension of Cantor combined with Goedel; you diagonalize the statements of incompleteness not to prove incompleteness but to show that incompleteness reduces to a statement that can be counted and is counted in the exact manner of the number line, meaning that which clearly counts and which is clearly countably infinite and uncountably infinite. I could say context renormalizes but then I'd have to explain the inherent description of context and why it renormalizes, which requires understanding the inherent characteristics of layers of contexts (that are layered within contexts and which include contexts) and describing both their inherent existence states and also the inherent process statements, which leads all the way back to understanding the ancient paradoxes of motion, and really it gets to be too much.

    Well, anyway, I love reading model descriptions. And I love the ideas behind bootstrapping or robustifying or whatever you call adding data so the process might, maybe should pick something other than your points as mapped. I know you've talked about parameterization a lot. It is really interesting: how do you know your assumptive space is actually set out properly, that the points you're using are not biased – like a mentalist's illusions can appear to be out of the blue when there is absolute clarity not visible to you (another reduction of the unknown statement) – or that you've biased the algorithm? We always bias the algorithm because any selection is a choice, but the closer one gets to determining whether a specific effect is real the more important that bias becomes, which is true not only for small effects but large ones as well. Somewhat off track but interesting: for a very large effect, consider the difference between predators and non-predators; there is a binary type switch which identifies 'that which is not growing out of the earth' up through things like 'those that move around unattached' to 'those not of your kin' so this group of equations generates the answer 'eat that'. You can point out that eating stuff goes down to bacterial levels and you can even say molecules eat just as black holes eat, but that's making the point that the binary relation of 'eat' and 'not eat' extends through creation. Note that mathematically this is merely a statement that the set of equations which generates 'eat' must be accompanied by a set of equations that generates 'not eat' because there must be a context in which you're counting at least one or the other. That result can be achieved by diagonalizing too, but again it spirals into much more complicated stuff, including how contexts count in coordinate planes across an implied Riemann-style zeta axis in which you can see the complexity develop as you count 'distance', all the way to the stuff that really interests me these days, which is the issue of directionality in layered contexts within layered contexts. This involves a lot of rotations that involve tricky perspectives, as for example the distortions that occur as you hold a perspective line or as you generate one and how that idealizes at each implied counting point – meaning scale determinability – and then how you can compare these forms with idealizations so you can rip stuff apart and calculate and comprehend within rational schema like graphs and algorithmic steps. I find manipulating the idealizations extremely taxing even using little pictures to keep track; the complexity is like figuring j potential without knowing the answer is what you observe as j.

    Enough for today. I'm trying to state a simple concept today, something like vitality – which I'm arguing in my head is a rational proxy for energy contained so I can use me as the model for the groups of equations that generate or don't generate my vitality level (as I appreciate it, as I appreciate all the many layered orders of what contributes to my working definition of vitality). The point is to see if I can do some pure Bayesian thinking: can I create a prior which generates a better posterior which then creates a better prior, more or less, for the next iterations when the first prior in that chain is generated by a model separate from the 'energy contained' label vitality? That is, I can state this as evolution being a count of results measured in Darwin's context, meaning simply that the 'target' appears within a model that has no target but which generates targets at each layer as these layers reduce to layers that count within contexts. These can be immaterial, such as the flutter of a butterfly, or material, like the need for sustenance, and they rise to existential levels, whether that's pure relative context – as in, you mean nothing to me! – or a momentary statement or expression of existence – like a pulsar or supernova across a million light years, meaning space-time 'away' or 'near' – or the processes that matter in that context – like 'Mongo just pawn in great game of life'. I can state this as a field, in various existence and process representations, and have lots of examples of what it explains but I get really stuck on a bunch of applications. I'm having trouble visualizing certain spatial transformations, such as 'when an x-y plane generates, how many model layers are visible in that plane in the ideal'. I know how that converts into a value in a count that shifts across the field, meaning across the unknown, meaning for someone who's really into deep mathematics, the gap over any count in the real line, but I have trouble with the overlapping squares and circles in the ideal drawings, let alone with when the field consists of multiple iterations of these layered fields because it's super hard to see into the gaps and valleys – into the near field – while still seeing the terrain that encompasses the larger field. This is true even when I assume the larger field, which extends through diagonalized incompleteness, has the exact same contour and that – almost wrote THAT – is why I'm working on directionality and in particular its relationship to inherent attractive and repulsive model forces. It's the most fun I've had in my life, which has been pretty long: I'm actually considering the 'model values' of relative groups in an unspecified field with known, describable processes, all fully idealized and labeled for manipulation, where 'model values' literally takes on meaning. In other words, an investigation of how meaning occurs, from the iota to the fully comprehensive and how these count as x-y coordinate planes along a z axis, and – much cooler but much harder – how each coordinate plane's count is treated as made up of the counts of these coordinate planes. Out of simple depictions using an idealized axis in which z is not visible, you see how meaning occurs when there was none and how it exists over time and how it moves in relation to other 'meanings' and how we can talk about 'when'.
Where I'm at right now is difficulty comprehending – and I mean that in the set-theoretical sense that I have to create a comprehensible set – directional valuations at the peripheries of any given context into that context's unknown. I'm very hard on myself and find it difficult to accept my conclusions – again, 'so what am I not accepting?' and 'what am I accepting that I don't see?', which you recognize is the same form of statement methodology as I've been using – though I can fully state them. You can see, I hope, why working with a reduction concept is just a way of declaring a variable and running it through various declared functions to see which simulations generate a better result that is then taken as the new variable – seen also as a vector of matrices or as an ordered array, etc. – and how this process generates separate complex iterations that can be compared in a number of ways including by a set composed of 'best' (which, because it is a set of fields, is itself relative to its own existence, meaning that even at the level of 'set of best', relative change will affect ordering and existence in the counting of 'set of best'). Same field of fields stuff. Same complexities. I know the answer but saying it at that level of expression is tricky.

  2. Dear all,
    Thanks for your very useful post. Since you are talking about Stan, is there a quick tool that can implement the work of "A Bayesian Phase I/II Trial Design for Immunotherapy" by Suyu Liu, rather than spending more than 21 hours to run the simulation?

    He proposed a Bayesian phase I/II dose-finding design that incorporates the unique features of immunotherapy by simultaneously considering three outcomes: immune response, toxicity, and efficacy. The objective is to identify the biologically optimal dose, defined as the dose with the highest desirability in the risk-benefit tradeoff. An Emax model is used to describe the marginal distribution of the immune response. Conditional on the immune response, toxicity and efficacy are jointly modeled using a latent variable approach.
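
    For orientation, the Emax part is a saturating dose-response curve for the mean immune response. A generic version in R, with made-up parameter values rather than anything estimated in the paper, looks like this:

    # Generic Emax dose-response curve for the mean immune response.
    # Parameter values below are illustrative only, not estimates from the paper.
    emax_mean <- function(dose, e0 = 0.1, emax = 2, ed50 = 0.4, h = 1) {
      e0 + emax * dose^h / (ed50^h + dose^h)
    }
    emax_mean(c(.1, .3, .5, .7, .9))   # mean immune response at the five study doses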

    After implementing this work in Stan, I want to select the best dose in the first stage and then continue in a second stage with only two arms and the best dose.

    Yuan’s work:

    https://www.tandfonline.com/doi/suppl/10.1080/01621459.2017.1383260?scroll=top

    The code attached there is not complete. I ran it for 50 iterations; it took 15 hours and I got two txt files (attached), but I do not know how he produces the results from these files.
    He starts with doses = c(.1, .3, .5, .7, .9)
    and
    # Utility scores, indexed as (toxicity, efficacy, immune response)
    Uti <- array(0, c(2, 3, 2))   # order: tox, eff, immuno
    Uti[,,1] <- matrix(c(0, 0, 50, 10, 80, 35), nrow = 2)    # 2 x 3 slice for the first immuno level
    Uti[,,2] <- matrix(c(5, 0, 70, 20, 100, 45), nrow = 2)   # 2 x 3 slice for the second immuno level
    with oh.size = 3 and N = 60

    How can I produce the figures and Table 2? Could anyone help, please?
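
    For what it's worth, the usual way a utility array like Uti is turned into a dose-selection criterion is to average it against the estimated joint outcome probabilities at each dose. A minimal sketch, with made-up probabilities standing in for the posterior estimates from the fitted model, is:

    # Expected utility of one dose: sum over the 2 x 3 x 2 (tox, eff, immuno) cells of
    # P(outcome combination | dose) * utility of that combination.
    Uti <- array(0, c(2, 3, 2))
    Uti[,,1] <- matrix(c(0, 0, 50, 10, 80, 35), nrow = 2)
    Uti[,,2] <- matrix(c(5, 0, 70, 20, 100, 45), nrow = 2)

    expected_utility <- function(p_joint, utility = Uti) {
      # p_joint: 2 x 3 x 2 array of joint outcome probabilities at one dose,
      # e.g. posterior means from the fitted model.
      sum(p_joint * utility)
    }

    # Made-up joint probabilities for one dose, just to show the computation:
    set.seed(1)
    p_example <- array(runif(12), c(2, 3, 2))
    p_example <- p_example / sum(p_example)   # normalize to a proper joint distribution
    expected_utility(p_example)

    # The biologically optimal dose is then the admissible dose maximizing this expected utility.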
