Stan (quietly) passes 512 people on the users list

Stan is alive and well. We’re up to 523 people on the users list. [We’re sure there are many more than 523 actual users, since it’s easy to download and use Stan directly without joining the list.]

We’re working on v2.1.0 now and hope to release it within the next couple of weeks.

11 thoughts on “Stan (quietly) passes 512 people on the users list”

  1. Wow. I just approved a spate of new users group applicants. I hope everyone feels free to post. We’re trying to be helpful on our mailing lists and also trying to build up Stan expertise to the point where our users can start helping each other.

    P.S. In case it’s not obvious, the significance of 512 is that it’s a round number in base 2 (1000000000) and base 16 (200), and hence more meaningful than finger-based measures to a computer scientist.
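
    (A quick check in a Python interpreter, for anyone who wants to see it:)

        print(512 == 2 ** 9)   # True
        print(bin(512))        # 0b1000000000
        print(hex(512))        # 0x200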

  2. Congrats to the whole Stan team!

    P.S. I finally upgraded to Stan 2.0 today and have been eagerly testing it with PyStan. I’m looking forward to debugging using the improved error reporting!

      • A comment for anyone who reads the abstract but not the paper: the authors present versions of their idea that use gradient and Hessian information, which makes it a natural fit for a software system that’s already implemented automatic differentiation.
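
        Not specific to that paper, but to illustrate why autodiff makes this cheap: once a log density is written as code, the gradient and Hessian come essentially for free. Here is a toy sketch using the JAX library in Python (Stan’s autodiff is C++, and this log density is arbitrary; it’s purely an illustration):

            import jax
            import jax.numpy as jnp

            # Arbitrary toy log density: an unnormalized 3-d standard normal.
            def log_density(theta):
                return -0.5 * jnp.sum(theta ** 2)

            theta = jnp.array([1.0, -2.0, 0.5])
            print(jax.grad(log_density)(theta))     # gradient, no hand-derived math
            print(jax.hessian(log_density)(theta))  # full Hessian the same way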

  3. Andrew has long noted that I am easily surprised, and here I am again, surprised that there are “only” 512 Stan users. Does this mean there are only 512 people in the world fitting Bayesian hierarchical models? That seems very unlikely. I think people must be fitting such models using tools other than Stan. But why would they do that? Sure, I realize there are people out there who already have models in BUGS and they keep on modifying them; I wouldn’t expect the use of other tools to drop to zero overnight. But this seems like such distressingly slow market penetration… I just don’t understand it at all. I’m surprised!

    Stan is great.

  4. I’m really excited about using Stan’s automatic differentiation for optimization.

    My understanding is that BFGS is currently the only optimization algorithm supported.

    I was hoping to put two other options on your radar for possible future support:

    * Stochastic gradient descent is an extremely quick way to get unbiased estimates of the log-likelihood gradient from small subsamples of the data; sometimes second-derivative information isn’t even needed, especially early in the optimization process (a rough sketch appears after this list).

    * “Hessian-free” second-order methods, which combine some of the best properties of Newton-type methods and conjugate gradients by using only Hessian-vector products rather than the full Hessian, as described here: http://machinelearning.wustl.edu/mlpapers/paper_files/icml2010_Martens10.pdf (also sketched after this list).
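
    Here is a rough sketch of the SGD idea on a toy logistic regression, in plain NumPy. Nothing here is Stan’s API; the data, names, and tuning constants are all made up for illustration:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy data: logistic regression with three coefficients.
        N, D = 10_000, 3
        X = rng.normal(size=(N, D))
        beta_true = np.array([1.0, -2.0, 0.5])
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

        def minibatch_grad(beta, idx):
            # Unbiased estimate of the full-data log-likelihood gradient:
            # the gradient on a random subsample, rescaled by N / batch size.
            Xb, yb = X[idx], y[idx]
            p = 1.0 / (1.0 + np.exp(-Xb @ beta))
            return (N / len(idx)) * (Xb.T @ (yb - p))

        beta = np.zeros(D)
        for t in range(1, 5001):
            idx = rng.choice(N, size=50, replace=False)
            step = 0.5 / (N * np.sqrt(t))              # decaying step size
            beta += step * minibatch_grad(beta, idx)   # ascend the log-likelihood

        print(beta)   # should land close to beta_true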
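
    And here is a rough sketch of the Hessian-free idea, using SciPy’s Newton-CG as a stand-in for a Martens-style optimizer: the solver only ever asks for Hessian-vector products, never the full Hessian. Again this is just an illustration on the same kind of toy logistic regression, not anything from Stan:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        # Toy logistic-regression data.
        N, D = 10_000, 3
        X = rng.normal(size=(N, D))
        beta_true = np.array([1.0, -2.0, 0.5])
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))

        def negloglik(beta):
            z = X @ beta
            return np.sum(np.logaddexp(0.0, z) - y * z)   # negative Bernoulli log-likelihood

        def grad(beta):
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            return X.T @ (p - y)

        def hessp(beta, v):
            # Hessian-vector product H @ v without ever forming the D x D Hessian.
            p = 1.0 / (1.0 + np.exp(-X @ beta))
            w = p * (1.0 - p)
            return X.T @ (w * (X @ v))

        res = minimize(negloglik, np.zeros(D), jac=grad, hessp=hessp, method="Newton-CG")
        print(res.x)   # close to beta_true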

    Thanks to all of the developers for all your hard work on Stan!
