Archive of posts filed under the Stan category.

Upcoming Stan-related talks

If you’re in NYC or Sydney, there are some Stan-related talks in the next few weeks. New York, 25 February. Jonah Gabry: shinyStan: a graphical user interface for exploring Bayesian models after MCMC. Register Now: New York Open Statistical Programming Meetup. 12 March. Rob Trangucci: #5: Non-centered parameterization, aka the “Matt trick.” Register Now: Stan […]
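For readers curious about the talk's topic before attending: the non-centered parameterization samples a standardized auxiliary variable and rescales it, instead of sampling the hierarchical parameter directly. A minimal Stan sketch along eight-schools lines (the names `J`, `theta_raw`, and the data block are hypothetical, not from the talk):

```stan
data {
  int<lower=1> J;
  vector[J] y;
  vector<lower=0>[J] sigma;
}
parameters {
  real mu;
  real<lower=0> tau;
  vector[J] theta_raw;            // standardized auxiliary variable
}
transformed parameters {
  vector[J] theta;
  theta = mu + tau * theta_raw;   // implies theta ~ normal(mu, tau)
}
model {
  theta_raw ~ normal(0, 1);       // the "Matt trick": the prior moves here
  y ~ normal(theta, sigma);
}
```

Moving the prior onto `theta_raw` decouples the funnel-shaped dependence between `theta` and `tau`, which is what makes the sampler's life easier in hierarchical models with weak data.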

VB-Stan: Black-box black-box variational Bayes

Alp Kucukelbir, Rajesh Ranganath, Dave Blei, and I write: We describe an automatic variational inference method for approximating the posterior of differentiable probability models. Automatic means that the statistician only needs to define a model; the method forms a variational approximation, computes gradients using automatic differentiation, and approximates expectations via Monte Carlo integration. Stochastic gradient […]
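To make the excerpt concrete, here is a hand-rolled sketch of the kind of gradient step involved: a Gaussian variational approximation fit to a toy one-dimensional target, posterior N(3, 1), using the reparameterization trick and Monte Carlo gradient estimates. The paper's point is to automate the gradient computation with automatic differentiation; in this sketch the gradients are written out by hand, and all names and settings are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, log_sigma = 0.0, 0.0          # variational q(z) = N(mu, exp(log_sigma)^2)
lr, S = 0.05, 64                  # step size, Monte Carlo samples per step

for _ in range(500):
    eps = rng.standard_normal(S)
    sigma = np.exp(log_sigma)
    z = mu + sigma * eps          # reparameterization: z ~ q as a function of eps
    dlogp = -(z - 3.0)            # d/dz log p(z) for the N(3, 1) target
    grad_mu = np.mean(dlogp)                      # ELBO gradient w.r.t. mu
    grad_ls = np.mean(dlogp * sigma * eps) + 1.0  # w.r.t. log_sigma (+1 from entropy)
    mu += lr * grad_mu
    log_sigma += lr * grad_ls
```

After a few hundred noisy gradient steps, `mu` and `exp(log_sigma)` settle near the target's mean and standard deviation; swapping the hand-coded `dlogp` for automatic differentiation is what makes the method "black box."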

Bayesian survival analysis with horseshoe priors—in Stan!

Tomi Peltola, Aki Havulinna, Veikko Salomaa, and Aki Vehtari write: This paper describes an application of Bayesian linear survival regression . . . We compare the Gaussian, Laplace and horseshoe shrinkage priors, and find that the last has the best predictive performance and shrinks strong predictors less than the others. . . . And here’s […]
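For flavor, here is what a horseshoe prior on regression coefficients typically looks like in Stan. This is a generic sketch with a stand-in Gaussian likelihood, not the paper's survival model (which adds a Weibull observation model and censoring); the names `P`, `X`, `y` are hypothetical:

```stan
data {
  int<lower=1> N;
  int<lower=1> P;
  matrix[N, P] X;
  vector[N] y;
}
parameters {
  vector[P] z;
  vector<lower=0>[P] lambda;      // local shrinkage scales
  real<lower=0> tau;              // global shrinkage scale
  real<lower=0> sigma;
}
transformed parameters {
  vector[P] beta;
  beta = z .* lambda * tau;       // beta[j] ~ normal(0, lambda[j] * tau)
}
model {
  z ~ normal(0, 1);
  lambda ~ cauchy(0, 1);          // half-Cauchy via the lower bound
  tau ~ cauchy(0, 1);
  y ~ normal(X * beta, sigma);    // stand-in likelihood, not the survival model
}
```

The heavy-tailed half-Cauchy local scales are what let the horseshoe shrink weak predictors hard while leaving strong ones nearly untouched, which is the behavior the excerpt reports.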

Stan Down Under

I (Bob, not Andrew) am in Australia until April 30. I’ll be giving some Stan-related and some data annotation talks, several of which have yet to be concretely scheduled. I’ll keep this page updated with what I’ll be up to. All of the talks other than summer school will be open to the public (the […]

Stan 2.6.0 Released

We’re happy to announce the release of Stan 2.6, including RStan, PyStan, and CmdStan; it will also work with the existing Stan.jl and MatlabStan. Although there is some new functionality (hence the minor version bump), this is primarily a maintenance release. It fixes all of the known memory issues with Stan 2.5.0 and improves overall speed […]

Cross-validation, LOO and WAIC for time series

This post is by Aki. Jonah asked on the Stan users mailing list: Suppose we have J groups and T time periods, so y[t,j] is the observed value of y at time t for group j. (We also have predictors x[t,j].) I’m wondering if WAIC is appropriate in this scenario, assuming that our interest in predictive accuracy is for […]
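For reference, the standard WAIC computation from a matrix of pointwise log-likelihood draws looks like this. A minimal sketch with synthetic numbers; in a real Stan fit the `log_lik` matrix would come from a generated quantities block, and the question in the post is precisely when this pointwise construction is appropriate for time series:

```python
import numpy as np

def waic(log_lik):
    """WAIC on the deviance scale from an S x N matrix of pointwise
    log-likelihoods (S posterior draws, N observations)."""
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))  # effective number of parameters
    return -2.0 * (lppd - p_waic)

# Synthetic stand-in for posterior log-likelihood draws
rng = np.random.default_rng(0)
log_lik = rng.normal(-1.0, 0.1, size=(1000, 50))
w = waic(log_lik)
```

Note that both terms sum over observations, which implicitly treats each y[t,j] as exchangeable for prediction; that assumption is what breaks down when the goal is forecasting future time points.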

Stan comes through . . . again!

Erikson Kaszubowski writes in: I missed your call for Stan research stories, but the recent post about stranded dolphins mentioned it again. When I read about the Crowdstorming project in your blog, I thought it would be a good project to apply my recent studies in Bayesian modeling. The project coordinators shared a big dataset […]

Artist needed!

We have some great ideas but none of us can draw. We need your help with designs and art for any or all of these projects: 1. “Gone Fishing” T-shirt. A person is standing in a boat, fishing. The lake is full, not of fish but of little numbers: “.14”, “.31”, “.08”, etc etc. And […]

Planning my class for this semester: Thinking aloud about how to move toward active learning?

I’m teaching two classes this semester: – Design and Analysis of Sample Surveys (in the political science department, but the course has lots of statistics content); – Statistical Communication and Graphics (in the statistics department, but last time I taught it, many of the students were from other fields). I’ve taught both classes before. I […]

Expectation propagation as a way of life

Aki Vehtari, Pasi Jylänki, Christian Robert, Nicolas Chopin, John Cunningham, and I write: We revisit expectation propagation (EP) as a prototype for scalable algorithms that partition big datasets into many parts and analyze each part in parallel to perform inference of shared parameters. The algorithm should be particularly efficient for hierarchical models, for which the […]
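To illustrate the partition-and-combine idea in the excerpt, here is a toy EP sweep: a Gaussian prior, data split into shards, and one Gaussian "site" approximation per shard, tracked in natural parameters (precision, precision-times-mean). Everything in this sketch is conjugate, so a single pass recovers the exact posterior; the paper is about the non-conjugate case, where each tilted-distribution step requires its own inference. All names and numbers here are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
theta_true, noise_sd = 1.5, 2.0
y = theta_true + noise_sd * rng.standard_normal(120)
shards = np.array_split(y, 4)                   # partition the data

prior_lam, prior_eta = 1.0 / 10.0**2, 0.0       # N(0, 10^2) prior, natural params
site_lam = np.zeros(len(shards))                # one Gaussian site per shard
site_eta = np.zeros(len(shards))

for k, yk in enumerate(shards):
    glob_lam = prior_lam + site_lam.sum()       # global approximation
    glob_eta = prior_eta + site_eta.sum()
    cav_lam = glob_lam - site_lam[k]            # cavity: global minus site k
    cav_eta = glob_eta - site_eta[k]
    tilt_lam = cav_lam + len(yk) / noise_sd**2  # tilted = cavity * shard likelihood
    tilt_eta = cav_eta + yk.sum() / noise_sd**2
    site_lam[k] = tilt_lam - cav_lam            # new site = tilted / cavity
    site_eta[k] = tilt_eta - cav_eta

post_lam = prior_lam + site_lam.sum()
post_mean = (prior_eta + site_eta.sum()) / post_lam
```

The site updates for different shards touch disjoint data, which is what makes the scheme natural to run in parallel; the shared parameters are combined only through the cheap sums of natural parameters.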