Stan and RStan 1.1.0

We’re happy to announce the availability of Stan and RStan versions 1.1.0, general tools for performing model-based Bayesian inference using the no-U-turn sampler, an adaptive form of Hamiltonian Monte Carlo. Information on downloading, installing, and using them is available, as always, from

Stan Home Page: http://mc-stan.org/

Let us know if you have any problems, either on the mailing lists or at the e-mail addresses linked from the home page (please don’t report problems on this page). The full release notes follow.

(R)Stan Version 1.1.0 Release Notes
===================================
-- Backward Compatibility Issue
   * Categorical distribution recoded to match documentation; it
     now has support {1,...,K} rather than {0,...,K-1}.
   * (RStan) changed default value of the permuted flag from FALSE to
     TRUE for the stanfit extract() method (see the sketch below)
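
Both compatibility changes can affect existing code: Stan programs must now use categorical outcomes in 1,...,K, and R scripts that relied on the old extract() layout need permuted = FALSE. A minimal sketch of the latter (the one-parameter model here is made up purely for illustration):

    library(rstan)

    # Any stanfit object will do; this toy model is illustrative only.
    fit <- stan(model_code =
      "parameters { real mu; } model { mu ~ normal(0, 1); }")

    draws <- extract(fit)                  # 1.1.0 default: permuted = TRUE,
    str(draws$mu)                          # a named list of merged draws

    arr <- extract(fit, permuted = FALSE)  # old layout: an
                                           # iterations x chains x parameters array
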
-- New Features
   * Conditional (if-then-else) statements
   * While statements
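
These can appear anywhere statements are allowed. A contrived sketch exercising both (the variable names are made up), written as a Stan program passed through RStan:

    library(rstan)

    model_code <- "
    transformed data {
      int n;
      real x;
      n <- 0;
      x <- 1.0;
      while (n < 10) {     // new: while statement
        n <- n + 1;
        if (n > 5) {       // new: if-then-else statement
          x <- x * 2;
        } else {
          x <- x + 1;
        }
      }
    }
    parameters { real y; }
    model { y ~ normal(0, 1); }
    "
    fit <- stan(model_code = model_code)
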
-- New Functions
   * generalized multiply_lower_tri_self_transpose() to non-square
     matrices
   * special functions: log_inv_logit(), log1m_inv_logit()
   * matrix special functions: cumulative_sum()
   * probability functions: poisson_log_log() for log-rate 
     parameterized Poisson
   * matrix functions: block(), diag_pre_multiply(), diag_post_multiply()
   * comparison operators (<, >, <=, >=, ==, !=)
   * boolean operators (!, ||, &&)
   * allow +/- inf values in variable declaration constraints
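
A sketch tying a few of these together in a hypothetical random-walk count model (illustrative only, not a recommended model): cumulative_sum() builds a log-rate path, and the sampling statement y[t] ~ poisson_log(...) corresponds to the new poisson_log_log() density:

    library(rstan)

    model_code <- "
    data {
      int<lower=1> T;
      int<lower=0> y[T];                  // counts
    }
    parameters {
      vector[T] delta;                    // log-rate increments
    }
    transformed parameters {
      vector[T] log_rate;
      log_rate <- cumulative_sum(delta);  // new: cumulative_sum()
    }
    model {
      delta ~ normal(0, 1);
      for (t in 1:T)
        y[t] ~ poisson_log(log_rate[t]);  // new: log-rate Poisson
    }
    "
    fit <- stan(model_code = model_code,
                data = list(T = 20L, y = rpois(20, 3)))
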
-- RStan Improvements
   * get_posterior_mean() method for Stan fit objects
   * replaced RcppEigen dependency with include of Eigen source
   * added read_stan_csv() to create Stan fit object from CSV files of
     the form written to disk by the command-line version of Stan
   * as.data.frame() S3 method for Stan fit objects
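
A sketch of the new conveniences together (the CSV file names are hypothetical; they stand for output written by the command-line version of Stan):

    library(rstan)

    # Assemble a stanfit object from command-line Stan output files.
    fit <- read_stan_csv(c("samples1.csv", "samples2.csv"))

    df    <- as.data.frame(fit)        # draws as a data frame
    means <- get_posterior_mean(fit)   # per-chain and overall posterior means
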
-- Bug Fixes
   * fixed bug in diagonal NUTS resulting in too-small step sizes
   * fixed bug introduced in 1.0.3 that suppressed line and column
     numbers in error reports
   * added checks that data dimensions match as well as sizes
   * removed non-symmetric versions of eigenvalues() and eigenvectors()
   * added tests that identifiers are not reserved words in C++ or Stan
   * now trapping and reporting the locations of errors in data and
     init reads
   * improvements in dump data format reader for more R compatibility
     and more generality
   * fixed bug in the tail density of the Bernoulli-logit distribution
-- Code Improvements
   * templated out matrix libs to reduce code duplication
   * vectorized auto-diff for tcrossprod() and crossprod()
   * optimizations in Wishart
   * vectorization with efficiency improvements in probability distributions
-- Libraries Updated
   * Eigen version 3.1.1 replaced with version 3.1.2
   * Boost version 1.51.0 replaced with version 1.52.0
-- Manual Improvements
   * New chapter on univariate and multivariate variable transforms
   * Many consistency improvements and typo corrections
   * Information on running command line in parallel from shell

3 thoughts on “Stan and RStan 1.1.0”

  1. I didn’t see a direct update on this in the documentation or the Google group: did the speed comparisons to block Gibbs for medium-sized linear mixed and logit or probit mixed models end up resolving in Stan’s favor with the updates?

  2. We’re not doing anything specific for those models or really for any specific models.

    We are gradually vectorizing the probability density functions (this is just for expressive power) and unfolding their gradient log density calculations (this is where the efficiency comes from, as the gradients are where most of Stan’s time is spent). This is more an implementation issue than anything else, but it can lead to order-of-magnitude or larger speedups.
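
    As a toy sketch of what that vectorization looks like at the language level (Stan model fragments held as R strings, purely illustrative):

        # Two equivalent Stan model fragments: the vectorized form lets the
        # gradient calculation share work across all N terms instead of
        # redoing it N times.
        looped     <- "for (n in 1:N) y[n] ~ normal(mu, sigma);"
        vectorized <- "y ~ normal(mu, sigma);"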

    Inverse logit is faster than the cumulative normal, so with lots of data, that could become a factor.

    Theoretically speaking, Gibbs should do better than NUTS (or other HMC variations) when the model’s conjugate, so you can do proper Gibbs rather than a slice-sampled approximation, and there is low correlation among the blocked posterior parameters.

    NUTS/HMC will do better when there is high correlation among the parameters in the posterior or when the lack of conjugacy makes the Gibbs update slow mixing or slow to compute.

    Practically speaking, Stan implements HMC more efficiently than BUGS/JAGS implement Gibbs because it’s compiled rather than interpreted, but those systems are still a bit faster than Stan for some of the simple conjugate models in the BUGS examples.

    I’d love to see comparative results in practical cases.

    • Well, GLMM Gibbs can have very high autocorrelation even with a conjugate structure, because the conditional update steps for variance components, random effects, and latent liabilities can be quite sticky. The top-level parameters (variance components) can of course be highly correlated conditional on the data. My understanding was that these kinds of random-coefficient models were one of the motivations for developing Stan. I’ve seen order-of-magnitude implementation differences tossed around a number of times (JAGS, BUGS, and MCMCglmm diverge notably on implementation alone), so yes, practical test-case timings would be good so that outsiders can know when the startup cost of learning new software is worth it.
