Sebastian Ueckert and France Mentré are swinging by to visit the Stan team at Columbia and Sebastian’s presenting the following talk, to which everyone is invited.
Improved confidence intervals and p-values by sampling from the normalized likelihood
Sebastian Ueckert (1,2), Marie-Karelle Riviere (1), France Mentré (1)
(1) IAME, UMR 1137, INSERM and University Paris Diderot, Paris, France; (2) Pharmacometrics Research Group, Department of Pharmaceutical Biosciences, Uppsala University, Uppsala, Sweden
10:30 AM, Thursday 8 October
1025 School of Social Work Building (CUSSW); 1255 Amsterdam Ave (122 St & Amsterdam)
Asymptotic theory-based statistics such as confidence intervals (CI) and p-values (PVAL) are the basis for most model-driven decisions in drug development. For small sample sizes these approximations do not hold and resampling methods are employed. Sampling from the normalized likelihood function represents an alternative, which has become computationally attractive with the development of Hamiltonian Monte Carlo (HMC) methods. In this presentation, the results of a comparison between HMC-based sampling and existing approaches for the calculation of CI and PVAL are presented.
The comparison was performed with a simulation study using a one-compartment model and different study sizes. For CI, evaluation was based on runtime, median CI, and coverage, in comparison to CI obtained via the covariance matrix, log-likelihood profiling, and non-parametric bootstrap. For PVAL, evaluation was based on runtime, type-I error, and power, in comparison to PVAL obtained via the Wald test, log-likelihood ratio test, and permutation test. The HMC-based methods were implemented using Stan with improper or uniform priors for sampling. All asymptotic theory and resampling-based results were obtained in NONMEM 7.3.
The simulations showed good agreement between approaches for large sample sizes and increasing differences for smaller sample sizes. In contrast to most other methods, HMC showed nominal coverage and type-I error at all study sizes. In terms of computation time, the HMC-based methods were between 10 and 60 times faster than the resampling methods.
In conclusion, CI and PVAL through sampling from the normalized likelihood using HMC yielded results with good theoretical properties at a drastically shorter runtime than resampling methods.
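To make the recipe concrete, here is a minimal sketch of the idea for a toy problem: draw samples whose density is proportional to the likelihood (a flat prior), then read a CI off the sample percentiles and a two-sided p-value off the tail fractions. The paper uses HMC via Stan on a one-compartment PK model; this sketch substitutes a normal-mean model and a plain random-walk Metropolis sampler so it runs without dependencies, and the tail-fraction p-value construction is an illustrative assumption, not necessarily the paper's exact PVAL definition.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated data (stand-in for the one-compartment PK data in the abstract)
y = rng.normal(loc=2.0, scale=1.0, size=20)

def log_lik(mu):
    # Normal log-likelihood with known sd = 1; with a flat prior this is,
    # up to a constant, the log of the "normalized likelihood"
    return -0.5 * np.sum((y - mu) ** 2)

# Random-walk Metropolis targeting the normalized likelihood
# (the paper uses HMC; Metropolis keeps this sketch self-contained)
draws = []
mu, ll = 0.0, log_lik(0.0)
for _ in range(20000):
    prop = mu + rng.normal(scale=0.5)
    ll_prop = log_lik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        mu, ll = prop, ll_prop
    draws.append(mu)
draws = np.array(draws[5000:])  # discard burn-in

# 95% CI from percentiles of the sampled normalized likelihood
ci_low, ci_high = np.percentile(draws, [2.5, 97.5])

# Illustrative two-sided p-value for H0: mu = 0 from tail fractions
p_val = 2 * min(np.mean(draws <= 0.0), np.mean(draws >= 0.0))
```

For this conjugate toy example the percentile interval matches the classical CI closely; the interesting cases in the talk are the small-sample nonlinear models where the asymptotic intervals break down.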
Is a “normalized likelihood” what Bayesians would call a “posterior distribution with flat prior”?
Good question.
Yes, exactly. That’s how we implemented it. We were struggling to find an appropriate name and first presented it as “sampling from the posterior with flat prior”; however, we felt that this created confusion about whether the method is frequentist or Bayesian. Therefore we changed it to “normalized likelihood”.
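The equivalence is worth spelling out: the "normalized likelihood" is just the likelihood rescaled to integrate to one, which is exactly what Bayes' rule returns when the prior is flat, $p(\theta) \propto 1$:

```latex
\tilde{L}(\theta \mid y)
  = \frac{L(\theta \mid y)}{\int L(\theta' \mid y)\, d\theta'}
  = \frac{L(\theta \mid y)\, p(\theta)}{\int L(\theta' \mid y)\, p(\theta')\, d\theta'}
  = p(\theta \mid y)
  \quad \text{when } p(\theta) \propto 1.
```

So computationally the two are identical; the naming question is purely about how the resulting intervals are interpreted (frequentist CI vs. Bayesian credible interval).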
Not sure if your model has nuisance parameters; if so, then this dude Severini has done related work on integrated likelihood that might be of interest to you.
How would the coverage properties of *parametric* (not nonparametric) bootstrapping CIs compare? (I’m sure they would be computationally much slower than Stan.)
Good question. We have not looked into this yet, but based on the results of other work (http://page-meeting.org/default.asp?abstract=2688) one could indeed expect the parametric bootstrap to perform better. (You’re certainly right about the runtime.)