We told Will Farr, a professor at the University of Birmingham who is part of LIGO, the recent headline-grabbing experiment that corroborated Einstein's theory of general relativity by detecting gravitational waves, that we had blurbed Stan's involvement in that project, and Farr wrote:

We used PyStan pretty extensively in the rates group—we have some simple analytic posteriors for the rates stuff, but with constrained variables (rates are positive, etc), and wanted to be able to flip various switches easily to explore different effects in the analysis (what about a different prior? what about if we include more calibration uncertainty? what if we change the model to have a third class of event? etc). It was really nice to just be able to write the model down, and push “go” to sample from it. Fast, too—it takes more time to compile the model than it does to generate a few x 10k samples (these are pretty low-dimensional models).
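The workflow Farr describes, write the model down, push "go", is worth seeing concretely. Here is a minimal sketch of a constrained-rate model of the general kind he mentions; the Poisson likelihood, the prior, and all the data names are my illustrative assumptions, not Farr's actual model:

```stan
// Hypothetical sketch: a rate model where the <lower=0> constraint
// keeps the rate positive, as Farr describes. Changing the prior or
// adding another event class is a few lines of edits, then recompile.
data {
  int<lower=0> N;            // number of observation intervals
  int<lower=0> counts[N];    // events observed in each interval
  vector<lower=0>[N] T;      // exposure time per interval
}
parameters {
  real<lower=0> rate;        // constrained to be positive
}
model {
  rate ~ lognormal(0, 1);    // easy to swap for a different prior
  counts ~ poisson(rate * T);
}
```

In PyStan (version 2, current at the time), the model is compiled once with `pystan.StanModel(model_code=...)` and sampled with `.sampling(data=...)`, which is consistent with Farr's point that compilation takes longer than drawing tens of thousands of samples from a low-dimensional model.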

Anyway, I’m preaching to the choir, I guess. If you really push me to make a feature request: built-in N-dimensional GP kernels. I’ve been playing around a lot with GP priors on distributions, like what Foreman-Mackey did in http://arxiv.org/abs/1406.3020, and Stan samples them great (at least when you use the non-centred parameterisation trick to avoid Neal’s funnel), but it’s annoying to have to roll my own covariance function every time. (But the various positive-definite covariance matrix parameterisations are awesome for parameterising an arbitrary kernel metric, so Stan is already better than just about anything else for this!)
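To make the feature request concrete: the kernel Farr has to roll by hand is something like a squared-exponential with an arbitrary positive-definite metric M = L·Lᵀ built from a lower-triangular factor, which is the parameterisation trick he praises. A minimal pure-Python sketch (all function names here are mine, not Stan's):

```python
import math

def metric_from_cholesky(L):
    """Build a positive-definite metric M = L * L^T from a
    lower-triangular factor L (given as a list of row lists)."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sq_exp_kernel(x, y, M):
    """N-dimensional squared-exponential kernel:
    k(x, y) = exp(-0.5 * (x - y)^T M (x - y))."""
    d = [xi - yi for xi, yi in zip(x, y)]
    quad = sum(d[i] * M[i][j] * d[j]
               for i in range(len(d)) for j in range(len(d)))
    return math.exp(-0.5 * quad)

# Example: a 2-D kernel whose metric couples the two dimensions.
L = [[1.0, 0.0],
     [0.5, 2.0]]
M = metric_from_cholesky(L)
print(sq_exp_kernel([0.0, 0.0], [0.0, 0.0], M))  # 1.0 at zero distance
```

Because M comes from a Cholesky-style factor, any unconstrained values in L yield a valid positive-semidefinite metric, which is exactly why Stan's covariance-matrix parameterisations make this convenient to put a prior on.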

That’s what I’m talking about!
