The Stan Development Team is happy to announce that Stan 1.3.0 and RStan 1.3.0 are available for download. Follow the links on:

- Stan home page: http://mc-stan.org/

Please let us know if you have problems updating.

Here’s the full set of release notes.

v1.3.0 (12 April 2013)
======================================================================

Enhancements
----------------------------------

Modeling Language
* forward sampling (random draws from distributions) in generated quantities
* better error messages in parser
* new distributions:
  + exp_mod_normal
  + gumbel
  + skew_normal
* new special functions:
  + owens_t
* new broadcast (repetition) functions for vectors, arrays, matrices:
  + rep_array
  + rep_matrix
  + rep_row_vector
  + rep_vector

Command-Line
* added option to display autocorrelations in the command-line program to print output
* changed default point estimation routine from the command line to use Nesterov's accelerated gradient method; added option for point estimation with Newton's method

RStan
* added method as.mcmc.list()
* compatibility with R 3.0.0

C++/Internal
* refactored math/agrad libs in C++ into separate files/includes, removed redundant code, added more unit tests for existing code
* added chainable_alloc class for caching solver results
* generalized VectorView with seq_view
* templated out generated code for efficient double-only operation on model log probs without gradients

Doc
* additions to user's guide with sample models:
  + stochastic volatility example with source, optimized source, simulation
  + time series, moving average, standardization for linear regression, and hidden Markov models, with examples
* manual's index is now hyperlinked
* added additional acknowledgements to manual
* added full description of differences between sampling statements and lp__
* fixed general normal mixture model example

Testing
* split unit tests from distribution tests

Bug Fixes
----------------------------------
* fixed derivative in multi_normal_prec distribution function
* double-based log_prob functions return the same value as var-based log_prob_grad functions
* calls to lgamma now use Boost's lgamma function
* patched transform to work with Eigen 3.2 beta
* all probability distribution functions and cumulative distribution functions behave properly with 0-length vector arguments
* fixed error in definition of hypergeometric pmf
* fixed arguments to Nesterov optimization ctor in command
* fixed issue with initialization matrices being read improperly
* use fabs() instead of abs() in unit_vector_constrain
* fixed typos in the manual
* rstan:
  + fixed crash in R when index is out of bounds using set_cppo("fast")
  + io_context fix for skipping len=0
  + fixed typo in manual (dims -> dim)
  + added require(inline) to fix the problem with loading sysdata.rda
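To make two of the new modeling-language features concrete, here is a minimal sketch (not from the release notes) of a normal model using `rep_vector` broadcasting and forward sampling via `normal_rng` in a generated quantities block. The model itself is hypothetical, and it uses the `<-` assignment operator current in Stan at the time:

```stan
data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real mu;
  real<lower=0> sigma;
}
model {
  // rep_vector(mu, N) broadcasts the scalar mu to a length-N vector
  y ~ normal(rep_vector(mu, N), sigma);
}
generated quantities {
  // forward sampling: one posterior predictive draw per iteration
  real y_rep;
  y_rep <- normal_rng(mu, sigma);
}
```

Each saved iteration then carries a `y_rep` draw from the posterior predictive distribution, which previously had to be computed outside Stan.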

We have to work on our public relations and not release all our exciting news on late Friday afternoon!

Are you kidding? (That wasn’t rhetorical, it’s hard to tell in writing.)

Our strategy’s been to release as soon as the release is ready. Would you prefer we hold the release and/or announcement until a more auspicious time for PR? What do you think that is? At your request, I’ve been avoiding posting in the A.M. so as not to push down your regularly scheduled posts.

Should we get our own blog for Stan so posts like this aren't buried by the deluge of your regularly scheduled posts? I like that these announcements go out to so many readers. But then maybe they're all just getting annoyed (perhaps like the next commenter, depending on how one interprets the smiley emoticon).

Bob:

Yup, I was just being silly. I think it’s fine to post Stan updates whenever they arise.

Your release cycles are waaaay too fast. 1.2.0 to 1.3.0 in just one month? :)

You can call it 0.0.7 if you’d prefer. It’s our seventh release.

I’m afraid to tell you that 1.4.0 is already in the works on the hmc_refactor branch. We’ve been doing a major overhaul of the basic infrastructure to make it easier to code against going forward.

Speaking of which, 2.0.0 is also queued up, with Riemann Manifold HMC (and associated auto-diff extensions), adaptive Metropolis, and some ensemble samplers (differential evolution, DREAM, Goodman-Weare walkers), which will in turn require some rethinking of our basic command-line and RStan argument structures. We’re also going to provide some runtime-based config options instead of number-of-iterations configurations.

This leaves my head spinning, though my heart is glad. One thing I ran into with the update was forgetting to exit R (which is always running on my laptop) after compiling the new stan and the new rstan. You really do have to exit or R gets confused and you get obscure C++ compilation errors in your `stan` call.

Congrats and thanks!

Do users manage to keep up with your fast release cycle? Is there a risk of “update fatigue”?

If an update includes things that I have been needing/wanting/hoping for then I am happy to get it as soon as it’s available. Even in a really extreme case, if there were a new version every week for the next five weeks, and each one addressed another one of my current top-5 desires, I would upgrade every week and would prefer to do that than to wait five weeks for one improvement that wrapped it all up.

But once the main things that affect me are resolved, I will probably start ignoring updates and just do them every six months or so. That’s what I do with R, for example.

Basically, I think that if people don’t want to upgrade so often, that’s fine, nobody is making them. Those of us who are still seeing (or at least hoping for) specific major improvements appreciate the rapid cycle.