
Stan Weekly Roundup, 16 June 2017

We’re going to be providing weekly updates on what’s going on behind the scenes with Stan. Of course, it’s not really behind the scenes, because the relevant discussions happen in the open: the stan-dev GitHub organization is the home of all of our source repos, design discussions are on the Stan Wiki, and the Stan Discourse Groups are […]

Hello, world! Stan, PyMC3, and Edward

Being a computer scientist, I like to see “Hello, world!” examples of programming languages. Here, I’m going to run down how Stan, PyMC3 and Edward tackle a simple linear regression problem with a couple of predictors. No, I’m not going to take sides—I’m on a fact-finding mission. We (the Stan development team) have been trying […]
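
For concreteness, here is a minimal Stan version of the kind of model the post walks through: a linear regression with two predictors. This is my own sketch with illustrative names (x1, x2, alpha, beta1, beta2), not the exact program from the post or from the PyMC3 and Edward comparisons.

data {
  int<lower=0> N;        // number of observations
  vector[N] x1;          // first predictor
  vector[N] x2;          // second predictor
  vector[N] y;           // outcome
}
parameters {
  real alpha;            // intercept
  real beta1;            // slope for x1
  real beta2;            // slope for x2
  real<lower=0> sigma;   // residual scale
}
model {
  y ~ normal(alpha + beta1 * x1 + beta2 * x2, sigma);
}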

Design top down, Code bottom up

Top-down design means designing from the client application programmer interface (API) down to the code. The API lays out a precise functional specification, which says what the code will do, not how it will do it. Coding bottom up means coding the lowest-level foundations first, testing them, then continuing to build. Sometimes this requires dropping […]

Fitting hierarchical GLMs in package X is like driving car Y

Given that Andrew started the Gremlin theme, I thought it would only be fitting to link to the following amusing blog post: Chris Brown: Choosing R packages for mixed effects modelling based on the car you drive (on the seascape models blog) It’s exactly what it says on the tin. I won’t spoil the punchline, […]

Bayesian Posteriors are Calibrated by Definition

Time to get positive. I was asking Andrew whether it’s true that I have the right coverage in Bayesian posterior intervals if I generate the parameters from the prior and the data from the parameters. He replied that yes indeed that is true, and directed me to: Cook, S.R., Gelman, A. and Rubin, D.B. 2006. […]
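
The claim can be written out in one line. If C_alpha(y) is a posterior interval with posterior probability alpha, and we average over the joint distribution p(theta) p(y | theta) used to simulate, then (sketching the standard argument, not quoting Cook, Gelman, and Rubin):

\begin{aligned}
\Pr[\theta \in C_\alpha(y)]
  &= \iint \mathbb{1}\{\theta \in C_\alpha(y)\}\, p(y \mid \theta)\, p(\theta)\, d\theta\, dy \\
  &= \mathbb{E}_y\big[\Pr(\theta \in C_\alpha(y) \mid y)\big]
   = \mathbb{E}_y[\alpha]
   = \alpha.
\end{aligned}

That is, coverage holds on average over parameters drawn from the prior and data drawn from those parameters, which is exactly the simulation setup described above.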

Ensemble Methods are Doomed to Fail in High Dimensions

Ensemble methods [cat picture] By ensemble methods, I (Bob, not Andrew) mean approaches that scatter points in parameter space and then make moves by interpolating or extrapolating among subsets of them. Two prominent examples are Ter Braak’s differential evolution and Goodman and Weare’s walkers. There are extensions and computer implementations of these algorithms. For example, […]
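
For reference, the two update rules mentioned have standard forms (the notation here is mine, not the post’s). Differential evolution proposes a move for walker x_i using the difference of two other walkers, and the Goodman–Weare stretch move interpolates or extrapolates along the line through two walkers:

% Differential evolution proposal (Ter Braak), with j, k distinct walkers
x_i^{\ast} = x_i + \gamma\,(x_j - x_k) + \epsilon

% Affine-invariant stretch move (Goodman and Weare)
x_i^{\ast} = x_j + z\,(x_i - x_j), \qquad z \sim g(z) \propto 1/\sqrt{z} \ \text{on } [1/a,\, a]

Both moves interpolate or extrapolate among current ensemble members, which is the behavior the post examines in high dimensions.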

A fistful of Stan case studies: divergences and bias, identifying mixtures, and weakly informative priors

Following on from his talk at StanCon, Michael Betancourt just wrote three Stan case studies, all of which are must-reads: Diagnosing Biased Inference with Divergences: This case study discusses the subtleties of accurate Markov chain Monte Carlo estimation and how divergences can be used to identify biased estimation in practice. Identifying Bayesian Mixture […]
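
As background for the divergences case study, the classic trigger is a hierarchical model written in its centered parameterization. A generic sketch of that well-known pattern (my illustration, not necessarily the case study’s exact model) will typically produce divergence warnings when tau gets small:

data {
  int<lower=0> J;            // number of groups
  vector[J] y;               // group-level estimates
  vector<lower=0>[J] sigma;  // their standard errors
}
parameters {
  real mu;                   // population mean
  real<lower=0> tau;         // population scale
  vector[J] theta;           // group effects, centered parameterization
}
model {
  mu ~ normal(0, 5);
  tau ~ cauchy(0, 5);
  theta ~ normal(mu, tau);   // funnel-shaped posterior as tau -> 0
  y ~ normal(theta, sigma);
}

The usual fix, discussed in the Stan manual, is the non-centered parameterization, theta = mu + tau * theta_raw with theta_raw ~ normal(0, 1).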

Stan Language Design History

Andrew’s proposal: At our last Stan meeting, Andrew proposed allowing priors to be defined for parameters near where they are declared, as in:

parameters {
  real mu;
  mu ~ normal(0, 1);
  real sigma;
  sigma ~ lognormal(0, 1);
  …

I can see the pros and cons. The pro is that it’s easier to line things up […]

HMMs in Stan? Absolutely!

I was having a conversation with Andrew yesterday that went like this: Andrew: Hey, someone’s giving a talk today on HMMs (that someone was Yang Chen, who was giving a talk based on her JASA paper Analyzing single-molecule protein transportation experiments via hierarchical hidden Markov models). Maybe we should add some specialized discrete modules to […]
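
For readers wondering what an HMM in Stan looks like in practice, the standard approach is to marginalize out the discrete hidden states with the forward algorithm. Below is a minimal sketch for a K-state HMM with Gaussian emissions, along the lines of the pattern in the Stan manual; the names (theta, mu, gamma) are illustrative and this is not code from the post.

data {
  int<lower=1> N;        // number of observations
  int<lower=1> K;        // number of hidden states
  vector[N] y;           // observed sequence
}
parameters {
  simplex[K] theta[K];   // theta[j] = transition probabilities out of state j
  ordered[K] mu;         // state means, ordered for identifiability
  real<lower=0> sigma;   // shared emission scale
}
model {
  real acc[K];
  real gamma[N, K];      // gamma[n, k] = log p(y[1:n], state at n = k)
  mu ~ normal(0, 5);
  sigma ~ normal(0, 2);
  for (k in 1:K)
    gamma[1, k] = normal_lpdf(y[1] | mu[k], sigma);  // uniform initial states (up to a constant)
  for (n in 2:N)
    for (k in 1:K) {
      for (j in 1:K)
        acc[j] = gamma[n - 1, j] + log(theta[j, k]) + normal_lpdf(y[n] | mu[k], sigma);
      gamma[n, k] = log_sum_exp(acc);
    }
  target += log_sum_exp(gamma[N]);  // marginal likelihood, summing over final states
}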

Stan JSS paper out: “Stan: A probabilistic programming language”

As a surprise welcome to 2017, our paper on how the Stan language works, along with an overview of how the MCMC and optimization algorithms work, hit the stands this week. Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. Stan: […]

Stan 2.14 released for R and Python; fixes bug with sampler

Stan 2.14 is out and it fixes the sampler bug in Stan versions 2.10 through 2.13. Critical update: it’s critical to update to Stan 2.14. See RStan 2.14.1, PyStan 2.14.0.0, and CmdStan 2.14.0. The other interfaces will update when you update CmdStan. The process: after Michael Betancourt diagnosed the bug, it didn’t take long for him […]

How to include formulas (LaTeX) and code blocks in WordPress posts and replies

It’s possible to include LaTeX formulas like $\int e^x \, \mathrm{d}x$. I entered it as $latex \int e^x \, \mathrm{d}x$. You can also generate code blocks like this:

for (n in 1:N)
  y[n] ~ normal(0, 1);

The way to format them is to use <pre> to open the code block and </pre> to close it. You can create […]
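
Concretely, following the post’s own instructions, the code block above would be typed into the WordPress editor as:

<pre>
for (n in 1:N)
  y[n] ~ normal(0, 1);
</pre>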

Michael found the bug in Stan’s new sampler

Gotcha! Michael found the bug! That was a lot of effort, during which time he produced ten pages of dense LaTeX to help Daniel and me understand the algorithm enough to help debug (we’re trying to write a bunch of these algorithmic details up for a more general audience, so stay tuned). So what was […]

Stan 2.10 through Stan 2.13 produce biased samples

[Update: bug found! See the follow-up post, Michael found the bug in Stan’s new sampler] [Update: rolled in info from comments.] After all of our nagging of people to use samplers that produce unbiased samples, we are mortified to have to announce that Stan versions 2.10 through 2.13 produce biased samples. The issue Thanks to […]

Mathematica, now with Stan

Vincent Picaud developed a Mathematica interface to Stan: MathematicaStan You can find everything you need to get started by following the link above. If you have questions, comments, or suggestions, please let us know through the Stan user’s group or the GitHub issue tracker. MathematicaStan interfaces to Stan through a CmdStan process. Stan programs are […]

A book on RStan in Japanese: Bayesian Statistical Modeling Using Stan and R (Wonderful R, Volume 2)

Wonderful, indeed, to have an RStan book in Japanese: Kentarou Matsuura. 2016. Bayesian Statistical Modeling Using Stan and R. Wonderful R Series, Volume 2. Kyoritsu Shuppan Co., Ltd. Google Translate renders the description posted on Amazon Japan (linked from the title above) as follows: In recent years, understanding of the phenomenon by fitting a […]

Stan users group hits 2000 registrations

Of course, there are bound to be duplicate emails, dead emails, and people who picked up Stan, joined the list, and never came back. But still, that’s a lot of people who’ve expressed interest! It’s been an amazing ride that’s only going to get better as we learn more and continue to improve Stan’s speed […]

Who owns your code and text and who can use it legally? Copyright and licensing basics for open-source

I am not a lawyer (“IANAL” in web-speak); but even if I were, you should take this with a grain of salt (same way you take everything you hear from anyone). If you want the straight dope for U.S. law, see the U.S. government Copyright FAQ; it’s surprisingly clear for government legalese. What is copyrighted? […]

Free workshop on Stan for pharmacometrics (Paris, 22 September 2016); preceded by (non-free) three-day course on Stan for pharmacometrics

So much for one post a day… Workshop: Stan for Pharmacometrics Day. If you are interested in a free day of Stan for pharmacometrics in Paris on 22 September 2016, see the registration page: Stan for Pharmacometrics Day (free workshop). Julie Bertrand (statistical pharmacologist from Paris-Diderot and UCL) has finalized the program: When / Who / What […]

Stan Course up North (Anchorage, Alaska) 23–24 Aug 2016

Daniel Lee’s heading up to Anchorage, Alaska to teach a two-day Stan course at the Alaska chapter of the American Statistical Association (ASA) meeting. Here’s the rundown: Information and Free Registration. I hear Alaska’s beautiful in the summer—16-hour days in August and high temps of 17 degrees Celsius. Plus Stan! More Upcoming […]