R packages interfacing with Stan: brms

Over on the Stan users mailing list I (Jonah) recently posted about our new document providing guidelines for developing R packages interfacing with Stan. As I say in the post and guidelines, we (the Stan team) are excited to see the emergence of some very cool packages developed by our users. One of these packages is Paul Bürkner’s brms. Paul is currently working on his PhD in statistics at the University of Münster, having previously studied psychology and mathematics at the universities of Münster and Hagen (Germany). Here is Paul writing about brms:

The R package brms implements a wide variety of Bayesian regression models using an extended lme4 formula syntax and Stan for the model fitting. It has been on CRAN for about one and a half years now and has grown into probably one of the most flexible R packages for regression modeling.
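
To give a flavor of the interface, here is a minimal sketch of fitting a multilevel Poisson regression with brms; the data frame dat and the variables y, x, and group are hypothetical placeholders:

    library(brms)

    # A multilevel Poisson regression with a varying intercept per group.
    # brms translates the lme4-style formula into Stan code and fits the
    # model by MCMC; dat, y, x, and group are placeholder names.
    fit <- brm(y ~ x + (1 | group), data = dat, family = poisson())
    summary(fit)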

A wide range of distributions is supported, allowing users to fit, among others, linear, robust linear, count data, response time, survival, ordinal, and zero-inflated models. You can incorporate multilevel structures, smooth terms, autocorrelation, and measurement error in predictor variables, to mention only a few key features. Furthermore, non-linear predictor terms can be specified in a way similar to the nlme package, and, on top of that, all parameters of the response distribution can be predicted at the same time.
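
As a rough sketch of these formula extensions (again with made-up data and variable names), a distributional model with a smooth term and a non-linear model might look like this:

    library(brms)

    # Distributional regression: the mean gets a smooth term and a
    # group-level intercept, and the residual standard deviation sigma
    # gets its own linear predictor. All names here are placeholders.
    fit_dist <- brm(
      bf(y ~ s(x1) + (1 | group), sigma ~ x2),
      data = dat, family = gaussian()
    )

    # Non-linear model in the spirit of nlme: an exponential decay whose
    # parameters a and b get intercept-only models and explicit priors.
    fit_nl <- brm(
      bf(y ~ a * exp(-b * x), a + b ~ 1, nl = TRUE),
      data = dat,
      prior = c(
        prior(normal(0, 5), nlpar = "a"),
        prior(normal(0, 1), nlpar = "b")
      )
    )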

After model fitting, you have many post-processing and plotting methods to choose from. For instance, you can investigate and compare model fit using leave-one-out cross-validation and posterior predictive checks, or predict responses for new data.
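
For example, assuming a fitted model object fit like the one sketched above and a new data frame new_dat, these post-processing calls look roughly like this:

    # Approximate leave-one-out cross-validation for assessing model fit.
    loo(fit)

    # Graphical posterior predictive check (built on the bayesplot package).
    pp_check(fit)

    # Posterior predictions for new observations.
    predict(fit, newdata = new_dat)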

If you are interested and want to learn more about brms, please use the following links:

  • GitHub repository (for source code, bug reports, feature requests)
  • CRAN website (for vignettes with guidance on how to use the package)
  • Wayne Folta’s blog posts (for interesting brms examples)

Also, a paper about brms will be published soon in the Journal of Statistical Software.

My thanks go to the Stan Development Team for creating Stan, which is probably the most powerful and flexible tool for performing Bayesian inference, and for allowing me to introduce brms here on this blog.

9 thoughts on “R packages interfacing with Stan: brms”

    • I agree, Paul deserves a prize, or at least praise, for brms. It does so much more than is listed here, for example meta-analysis, unequal variances, hurdle models, etc.

    • I second shravan and Donald here. Paul has done exceptional work in building out BRMS over the last few releases and he has been extraordinarily responsive on the BRMS github to requests and questions. Paul’s work on BRMS (and the rstanarm team’s work) has made it easier and easier to recommend Bayesian analyses to colleagues who wouldn’t be willing to do it otherwise.

    • Stan is an incredible piece of work, but it is brms (and rstanarm to a degree) that really makes Bayesian inference in a regression context available to the masses.

      For beginners, brms is so easy to get started with, and learning is more fun and effective when you can actually estimate the models taught in stats classes. For more advanced applied users, brms is so flexible that it makes implementing multiple models really fast, which of course ends up saving a lot of time. Also, the brms R package is very well written, produces good Stan code, and the helper functions (such as obtaining regression lines with credible intervals and interfacing with bayesplot, another great package) are fantastic.

      So yeah, I second Shravan’s suggestion that Paul deserves a prize for his work.

  1. Completely agree that brms is fantastic. I’m new to Stan but brms has really helped in this introduction. Not to mention that when I emailed Paul with a question he replied within five minutes with an incredibly lucid answer.

  2. I’ve switched almost-finished projects to brms, because it’s such a great package and using anything else felt like a chore afterwards.
    I prefer brms to rstanarm, because I’ve been able to fit some large models that did not work out in rstanarm (i.e., they didn’t converge), but also because it feels like I can learn one package’s interface and extend my formulae as needed (e.g., using splines). This is great for the learning curve, and I feel like I’ve made leaps and bounds compared to earlier progress.
    I can vouch that Paul is extremely responsive to questions and bug reports, often fixing them within minutes.

    • I feel like I should amend this comment:
      a) “using anything else felt like a chore” I realised afterwards that this might also be taken to imply rstanarm, but I was strictly talking about the tools I used before (lme4, gamm4, MCMCglmm, JAGS, etc.).
      b) I can now also vouch that the rstanarm authors react as fast as Paul (I’m now re-running my large models in new rstanarm versions at their behest and finding that they now start sampling, where they previously stopped with a “Cholmod error: problem too large”).
      c) even if rstanarm works, I like a lot of brms’ niceties:
      – marginal_effects
      – I learnt one call, and now I can add or remove group-level effects and splines at will, without looking up a different function.

      Because my models usually take a cluster to fit, I don’t mind the compilation time. Other people’s mileage may vary :-)
