
Stan Weekly Roundup, 21 July 2017

It was another productive week in Stan land. The big news is that Jonathan Auerbach, Tim Jones, Susanna Makela, Swupnil Sahai, and Robin Winstanley won first place in a New York City competition for predicting elementary school enrollment. Jonathan told me, “I heard 192 entered, and there were 5 finalists….Of course, we used Stan (RStan […]

Animating a spinner using ggplot2 and ImageMagick

It’s Sunday, and I [Bob] am just sitting on the couch peacefully ggplotting to illustrate basic sample spaces using spinners (a trick I’m borrowing from Jim Albert’s book Curve Ball). There’s an underlying continuous outcome (i.e., where the spinner lands) and a quantization into a number of regions to produce a discrete outcome (e.g., “success” […]

Stan Weekly Roundup, 14 July 2017

Another week, another bunch of Stan updates. Kevin Van Horn and Elea McDonnell Feit put together a tutorial on Stan [GitHub link] that covers linear regression, multinomial logistic regression, and hierarchical multinomial logistic regression. Andrew has been working on writing up our “workflow”. That includes Chapter 1, Verse 1 of Bayesian Data Analysis of (1) […]

Stan Weekly Roundup, 7 July 2017

Holiday weekend, schmoliday weekend. Ben Goodrich and Jonah Gabry shipped RStan 2.16.2 (their numbering is a little beyond base Stan, which is at 2.16.0). This reintroduces error reporting that got lost in the 2.15 refactor, so please upgrade if you want to debug your Stan programs! Joe Haupt translated the JAGS examples in the second […]

Stan Weekly Roundup, 30 June 2017

Here are some things that have been going on with Stan since last week’s roundup. Stan® and the logo were granted U.S. Trademark Registration No. 5,222,891 and U.S. Serial Number 87,237,369, respectively. Hard to feel special when there were millions of products ahead of you. Trademarked names are case insensitive, and they required […]

Stan®

Update: usage guidelines. See the Stan trademark usage guide; we basically just followed Apache’s lead. It’s official: “Stan” is now a registered trademark. For those keeping score, it’s U.S. Trademark Registration No. 5,222,891 [USPTO]. The Stan logo (see image below) is also official: U.S. Trademark Serial No. 87,237,369 [USPTO]. No idea why there are serial numbers […]

Stan Weekly Roundup, 23 June 2017

Lots of activity this week, as usual. Lots of people got involved in pushing Stan 2.16 and interfaces out the door: Sean Talts got the math library, Stan library (that’s the language, inference algorithms, and interface infrastructure), and CmdStan out, while Allen Riddell got PyStan 2.16 out and Ben Goodrich and Jonah Gabry are […]

Stan Weekly Roundup, 16 June 2017

We’re going to be providing weekly updates for what’s going on behind the scenes with Stan. Of course, it’s not really behind the scenes, because the relevant discussions are at:
stan-dev GitHub organization: this is the home of all of our source repos; design discussions are on the Stan Wiki
Stan Discourse Groups: this is […]

Hello, world! Stan, PyMC3, and Edward

Being a computer scientist, I like to see “Hello, world!” examples of programming languages. Here, I’m going to run down how Stan, PyMC3 and Edward tackle a simple linear regression problem with a couple of predictors. No, I’m not going to take sides—I’m on a fact-finding mission. We (the Stan development team) have been trying […]
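To give a flavor of the kind of model being compared, here is a minimal sketch in Stan of a linear regression with two predictors. This is my own illustration, not the exact program from the post; the variable names and the weakly informative priors are assumptions made for the example.

// Minimal sketch (not the post's exact program): linear regression
// with two predictors, the "hello world" model compared across
// Stan, PyMC3, and Edward.
data {
  int<lower=0> N;        // number of observations
  vector[N] x1;          // first predictor
  vector[N] x2;          // second predictor
  vector[N] y;           // outcome
}
parameters {
  real alpha;            // intercept
  real beta1;            // coefficient for x1
  real beta2;            // coefficient for x2
  real<lower=0> sigma;   // residual standard deviation
}
model {
  alpha ~ normal(0, 10);
  beta1 ~ normal(0, 10);
  beta2 ~ normal(0, 10);
  sigma ~ cauchy(0, 5);
  y ~ normal(alpha + beta1 * x1 + beta2 * x2, sigma);
}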

Design top down, Code bottom up

Top-down design means designing from the client application programmer interface (API) down to the code. The API lays out a precise functional specification, which says what the code will do, not how it will do it. Coding bottom up means coding the lowest-level foundations first, testing them, then continuing to build. Sometimes this requires dropping […]

Fitting hierarchical GLMs in package X is like driving car Y

Given that Andrew started the Gremlin theme, I thought it would only be fitting to link to the following amusing blog post: Chris Brown, “Choosing R packages for mixed effects modelling based on the car you drive” (on the seascape models blog). It’s exactly what it says on the tin. I won’t spoil the punchline, […]

Bayesian Posteriors are Calibrated by Definition

Time to get positive. I was asking Andrew whether it’s true that I have the right coverage in Bayesian posterior intervals if I generate the parameters from the prior and the data from the parameters. He replied that yes indeed that is true, and directed me to: Cook, S.R., Gelman, A. and Rubin, D.B. 2006. […]
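As a concrete illustration of the generative procedure, here is a toy Stan program that draws a parameter from its prior and then data given that parameter. This is an assumption-laden sketch of my own (a simple normal model), not the example from the Cook, Gelman, and Rubin paper.

// Toy sketch (illustrative model, not from the paper): draw mu from
// its prior, then draw data given mu.
data {
  int<lower=0> N;   // number of observations to simulate
}
generated quantities {
  real mu = normal_rng(0, 1);     // parameter drawn from its prior
  vector[N] y;
  for (n in 1:N)
    y[n] = normal_rng(mu, 1);     // data drawn given the parameter
}

Repeating this simulation many times, fitting the posterior to each simulated data set, and checking that (say) central 50% posterior intervals contain the drawn mu about 50% of the time is exactly the calibration property the post is talking about.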

Ensemble Methods are Doomed to Fail in High Dimensions

Ensemble methods [cat picture] By ensemble methods, I (Bob, not Andrew) mean approaches that scatter points in parameter space and then make moves by interpolating or extrapolating among subsets of them. Two prominent examples are Ter Braak’s differential evolution and Goodman and Weare’s walkers. There are extensions and computer implementations of these algorithms. For example, […]

A fistful of Stan case studies: divergences and bias, identifying mixtures, and weakly informative priors

Following on from his talk at StanCon, Michael Betancourt just wrote three Stan case studies, all of which are must-reads:
Diagnosing Biased Inference with Divergences: this case study discusses the subtleties of accurate Markov chain Monte Carlo estimation and how divergences can be used to identify biased estimation in practice.
Identifying Bayesian Mixture […]

Stan Language Design History

Andrew’s proposal: At our last Stan meeting, Andrew proposed allowing priors to be defined for parameters near where they are declared, as in:
parameters {
  real mu;
  mu ~ normal(0, 1);
  real sigma;
  sigma ~ lognormal(0, 1);
  …
I can see the pros and cons. The pro is that it’s easier to line things up […]
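For comparison, here is a minimal sketch (mine, not from the post) of how those same priors are written in Stan as it stands, with declarations in the parameters block and sampling statements in the model block; the lower bound on sigma is my addition so the lognormal prior matches its support.

parameters {
  real mu;
  real<lower=0> sigma;   // lognormal prior implies sigma > 0
}
model {
  mu ~ normal(0, 1);
  sigma ~ lognormal(0, 1);
}

The proposal trades this separation for locality: each parameter’s prior would sit next to its declaration.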

HMMs in Stan? Absolutely!

Yesterday I was having a conversation with Andrew that went like this: Andrew: Hey, someone’s giving a talk today on HMMs (that someone was Yang Chen, who was giving a talk based on her JASA paper Analyzing single-molecule protein transportation experiments via hierarchical hidden Markov models). Maybe we should add some specialized discrete modules to […]

Stan JSS paper out: “Stan: A probabilistic programming language”

As a surprise welcome to 2017, our paper on how the Stan language works, along with an overview of how the MCMC and optimization algorithms work, hit the stands this week. Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. Stan: […]

Stan 2.14 released for R and Python; fixes bug with sampler

Stan 2.14 is out, and it fixes the sampler bug in Stan versions 2.10 through 2.13. Critical update: it’s critical to update to Stan 2.14. See RStan 2.14.1, PyStan 2.14.0.0, and CmdStan 2.14.0. The other interfaces will update when you update CmdStan. The process: after Michael Betancourt diagnosed the bug, it didn’t take long for him […]

How to include formulas (LaTeX) and code blocks in WordPress posts and replies

It’s possible to include LaTeX formulas like $\int e^x \, \mathrm{d}x$. I entered it as $latex \int e^x \, \mathrm{d}x$. You can also generate code blocks like this:
for (n in 1:N)
  y[n] ~ normal(0, 1);
The way to format them is to use <pre> to open the code block and </pre> to close it. You can create […]

Michael found the bug in Stan’s new sampler

Gotcha! Michael found the bug! That was a lot of effort, during which time he produced ten pages of dense LaTeX to help Daniel and me understand the algorithm enough to help debug (we’re trying to write a bunch of these algorithmic details up for a more general audience, so stay tuned). So what was […]