Archive of posts filed under the Statistical computing category.

One simple trick to make Stan run faster

Did you know that Stan automatically runs in parallel (and caches compiled models) from R if you do this: source("http://mc-stan.org/rstan/stan.R")? It comes from Stan core developer Ben Goodrich. This simple line of code has changed my life. A factor-of-4 speedup might not sound like much, but, believe me, it is!
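For the record, the two rstan settings behind that trick can also be set by hand. A minimal sketch, assuming a recent version of rstan (the sourced script may do more than this):

library(rstan)
rstan_options(auto_write = TRUE)             # cache compiled models on disk
options(mc.cores = parallel::detectCores())  # run the MCMC chains in parallel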

Introducing shinyStan

As a project for Andrew’s Statistical Communication and Graphics graduate course at Columbia, a few of us (Michael Andreae, Yuanjun Gao, Dongying Song, and I) had the goal of giving RStan’s print and plot functions a makeover. We ended up getting a bit carried away and instead we designed a graphical user interface for interactively exploring virtually […]
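To give it a spin, a minimal usage sketch, assuming you already have a fitted stanfit object named fit:

library(shinystan)
launch_shinystan(fit)  # opens the interactive explorer in your browser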

VB-Stan: Black-box black-box variational Bayes

Alp Kucukelbir, Rajesh Ranganath, Dave Blei, and I write: We describe an automatic variational inference method for approximating the posterior of differentiable probability models. Automatic means that the statistician only needs to define a model; the method forms a variational approximation, computes gradients using automatic differentiation and approximates expectations via Monte Carlo integration. Stochastic gradient […]
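This method is what later shipped in Stan as ADVI. A minimal sketch of invoking it from rstan, where model.stan and the data list d are placeholders:

library(rstan)
m <- stan_model("model.stan")  # compile the differentiable model once
fit_vb <- vb(m, data = d)      # automatic variational inference (ADVI)
print(fit_vb)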

Stan Down Under

I (Bob, not Andrew) am in Australia until April 30. I’ll be giving some Stan-related and some data annotation talks, several of which have yet to be concretely scheduled. I’ll keep this page updated with what I’ll be up to. All of the talks other than summer school will be open to the public (the […]

This has nothing to do with the Super Bowl

Joshua Vogelstein writes: The Open Connectome Project at Johns Hopkins University invites outstanding candidates to apply for a postdoctoral or assistant research scientist position in the area of statistical machine learning for big brain imaging data. Our workflow is tightly vertically integrated, ranging from raw data to theory to answering neuroscience questions and back again. […]

Six quick tips to improve your regression modeling

It’s Appendix A of ARM. A.1: Fit many models. Think of a series of models, starting with the too-simple and continuing through to the hopelessly messy. Generally it’s a good idea to start simple. Or start complex if you’d like, but prepare to quickly drop things out and move to the simpler model to help […]
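As a hypothetical illustration of A.1, with a data frame d containing an outcome y and predictors x1 and x2:

fit1 <- lm(y ~ x1, data = d)       # too simple: one predictor
fit2 <- lm(y ~ x1 + x2, data = d)  # add a predictor
fit3 <- lm(y ~ x1 * x2, data = d)  # add an interaction; drop it if it buys nothing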

Github cheat sheet

Mike Betancourt pointed us to this page. Maybe it will be useful to you too.

Lewis Richardson, father of numerical weather prediction and of fractals

Lee Sechrest writes: If you get a chance, Wiki this guy. I [Sechrest] did and was gratifyingly reminded that I read some bits of his work in graduate school 60 years ago. Specifically, about his math models for predicting wars and his work on fractals to arrive at better estimates of the lengths of common […]

Stan comes through . . . again!

Erikson Kaszubowski writes in: I missed your call for Stan research stories, but the recent post about stranded dolphins mentioned it again. When I read about the Crowdstorming project in your blog, I thought it would be a good project to apply my recent studies in Bayesian modeling. The project coordinators shared a big dataset […]

Expectation propagation as a way of life

Aki Vehtari, Pasi Jylänki, Christian Robert, Nicolas Chopin, John Cunningham, and I write: We revisit expectation propagation (EP) as a prototype for scalable algorithms that partition big datasets into many parts and analyze each part in parallel to perform inference of shared parameters. The algorithm should be particularly efficient for hierarchical models, for which the […]
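A toy sketch of the structure the abstract describes, for the mean of normal data split into shards. Everything here is Gaussian, so one pass is exact and the loop converges immediately; the point is the cavity/tilted/site bookkeeping, with each shard’s update parallelizable:

# Toy EP for theta in y ~ N(theta, sigma^2), prior theta ~ N(0, tau^2).
# Natural parameters: precision lam, precision-times-mean eta.
set.seed(1)
sigma <- 1; tau <- 10
shards <- split(rnorm(1000, mean = 3, sd = sigma), rep(1:10, each = 100))
K <- length(shards)
lam0 <- 1 / tau^2; eta0 <- 0            # prior site
lam_k <- rep(0, K); eta_k <- rep(0, K)  # site approximations, initialized flat
for (iter in 1:5) {
  lam <- lam0 + sum(lam_k); eta <- eta0 + sum(eta_k)  # global approximation
  for (k in 1:K) {  # in the paper, these site updates run in parallel
    lam_cav <- lam - lam_k[k]; eta_cav <- eta - eta_k[k]  # cavity: remove site k
    n_k <- length(shards[[k]])
    lam_tilt <- lam_cav + n_k / sigma^2               # tilted = cavity times the
    eta_tilt <- eta_cav + sum(shards[[k]]) / sigma^2  #   shard likelihood (exact here)
    lam_k[k] <- lam_tilt - lam_cav  # new site: whatever the shard adds
    eta_k[k] <- eta_tilt - eta_cav  #   beyond the cavity
  }
}
lam <- lam0 + sum(lam_k); eta <- eta0 + sum(eta_k)
c(mean = eta / lam, sd = sqrt(1 / lam))  # matches the exact conjugate posterior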