Archive of posts filed under the Stan category.

State-space modeling for poll aggregation . . . in Stan!

Peter Ellis writes: As part of familiarising myself with the Stan probabilistic programming language, I replicate Simon Jackman’s state space modelling with house effects of the 2007 Australian federal election. . . . It’s not quite the model that I’d use—indeed, Ellis writes, “I’m fairly new to Stan and I’m pretty sure my Stan programs […]

Big Data Needs Big Model

Big Data are messy data, available data not random samples, observational data not experiments, available data not measurements of underlying constructs of interest. To make relevant inferences from big data, we need to extrapolate from sample to population, from control to treatment group, and from measurements to latent variables. All these steps require modeling. At […]

How we productized Bayesian revenue estimation with Stan

Markus Ojala writes: Bayesian modeling is becoming mainstream in many application areas. Applying it still requires a lot of knowledge about distributions and modeling techniques, but recent developments in probabilistic programming languages have made it much more tractable. Stan is a promising language that suits single analysis cases well. With the improvements in approximation […]

We were measuring the speed of Stan incorrectly—it’s faster than we thought in some cases due to antithetical sampling

Aki points out that in cases of antithetical sampling, our effective sample size calculations were unduly truncated above at the number of iterations. It turns out the effective sample size can be greater than the number of iterations if the draws are anticorrelated. And all we really care about for speed is effective sample size […]
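The excerpt's point can be sketched numerically: with the usual autocorrelation-based estimate ESS = N / (1 + 2 Σ ρ_k), anticorrelated draws make the sum of autocorrelations negative, so ESS comes out larger than the number of draws. Below is a minimal Python illustration using an AR(1) chain with a negative coefficient as a stand-in for antithetic MCMC output; this is not Stan's actual ESS implementation, just the basic formula.

```python
import numpy as np

rng = np.random.default_rng(0)

# Antithetic-style chain: an AR(1) with negative coefficient, so
# successive draws are anticorrelated (rho_k = phi**k alternates in sign).
n, phi = 100_000, -0.5
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

def ess(draws, max_lag=50):
    """Naive ESS estimate: N / (1 + 2 * sum of autocorrelations)."""
    d = draws - draws.mean()
    var = d @ d / len(d)
    rho = [(d[:-k] @ d[k:]) / (len(d) * var) for k in range(1, max_lag + 1)]
    return len(draws) / (1.0 + 2.0 * sum(rho))

# For phi = -0.5 the theoretical ratio is (1 - phi) / (1 + phi) = 3,
# i.e. the effective sample size is about triple the number of draws.
print(ess(x) / n)
```

The same mechanism explains why truncating ESS at the number of iterations understates speed for antithetic chains.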

StanCon 2018 Helsinki, 29-31 August 2018

Photo (c) Visit Helsinki / Jussi Hellsten StanCon 2018 Asilomar was so much fun that we are organizing StanCon 2018 Helsinki August 29-31, 2018 at Aalto University, Helsinki, Finland (location chosen using antithetic sampling). Full information is available at StanCon 2018 Helsinki website Summary of the information What: One day of tutorials and two days […]

Static sensitivity analysis: Computing robustness of Bayesian inferences to the choice of hyperparameters

Ryan Giordano wrote: Last year at StanCon we talked about how you can differentiate under the integral to automatically calculate quantitative hyperparameter robustness for Bayesian posteriors. Since then, I’ve packaged the idea up into an R library that plays nice with Stan. You can install it from this github repo. I’m sure you’ll be pretty […]

StanCon 2018 Live Stream — bad news…. not enough bandwidth

Breaking news: no live stream. We’re recording, so we’ll put the videos online after the fact. We don’t have enough bandwidth to live stream today. StanCon 2018 starts today! We’re going to try our best to live stream the event on YouTube. We have the same video setup as last year, but may […]

Three new domain-specific (embedded) languages with a Stan backend

One is an accident. Two is a coincidence. Three is a pattern. Perhaps it’s no coincidence that there are three new interfaces that use Stan’s C++ implementation of adaptive Hamiltonian Monte Carlo (currently an updated version of the no-U-turn sampler). ScalaStan embeds a Stan-like language in Scala. It’s a Scala package largely (if not entirely […]

StanCon is next week, Jan 10-12, 2018

It looks pretty cool! Wednesday, Jan 10 Invited Talk: Predictive information criteria in hierarchical Bayesian models for clustered data. Sophia Rabe-Hesketh and Daniel Furr (U California, Berkeley) 10:40-11:30am Does the New York City Police Department rely on quotas? Jonathan Auerbach (Columbia U) 11:30-11:50am Bayesian estimation of mechanical elastic constants. Ben Bales, Brent Goodlet, Tresa Pollock, […]

“Handling Multiplicity in Neuroimaging through Bayesian Lenses with Hierarchical Modeling”

Donald Williams points us to this new paper by Gang Chen, Yaqiong Xiao, Paul Taylor, Tracy Riggins, Fengji Geng, Elizabeth Redcay, and Robert Cox: In neuroimaging, the multiplicity issue may sneak into data analysis through several channels . . . One widely recognized aspect of multiplicity, multiple testing, occurs when the investigator fits a separate […]

Workflow, baby, workflow

Bob Carpenter writes: Here’s what we do and what we recommend everyone else do: 1. code the model as straightforwardly as possible 2. generate fake data 3. make sure the program properly codes the model 4. run the program on real data 5. *If* the model is too slow, optimize *one step at a time* […]
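Steps 1–3 of Bob's recipe (code the model, generate fake data, check the program recovers what it was simulated from) can be sketched with a toy example. Here a linear regression fit by least squares stands in for a Stan program; the parameter values and tolerances are illustrative, not from the post.

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 2: generate fake data from known parameters.
true_alpha, true_beta, sigma = 1.0, 2.5, 0.5
x = rng.uniform(0, 10, size=1_000)
y = true_alpha + true_beta * x + rng.normal(0, sigma, size=x.size)

# Step 3: fit the model to the fake data and check that the program
# recovers the parameters the data were simulated from.
X = np.column_stack([np.ones_like(x), x])
alpha_hat, beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]

assert abs(alpha_hat - true_alpha) < 0.1
assert abs(beta_hat - true_beta) < 0.05
print(alpha_hat, beta_hat)
```

Only after this check passes does it make sense to move to step 4, fitting real data.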

StanCon2018: one month to go, schedule finalized, over 20 talks, 6 tutorials… and flights are cheap

StanCon2018 is shaping up nicely as a unique opportunity to immerse oneself in all things Stan and to meet Stan developers and fellow users. Registration is still open, but spots are filling up fast. We’re at 130 registrants and counting! The draft schedule is now up. We have 16 accepted talks and 6 invited talks. Posters are […]

How not to compare the speed of Stan to something else

Someone’s wrong on the internet. And I have to do something about it. Following on from Dan’s post on Barry Gibb statistical model evaluation, here’s an example inspired by a paper I found on Google Scholar searching for Stan citations. The paper (which there is no point in citing) concluded that JAGS was faster than […]

Computational and statistical issues with uniform interval priors

There are two anti-patterns* for prior specification in Stan programs that can be traced directly to idioms developed for BUGS. One is the diffuse gamma priors that Andrew’s already written about at length. The second is interval-based priors. Which brings us to today’s post. Interval priors An interval prior is something like this in Stan […]
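A rough numerical sketch of why hard interval priors misbehave: when the data favor a value outside the interval, the posterior doesn't degrade gracefully, it piles up against the boundary. The uniform(0, 3) prior on a normal scale parameter below is a made-up example, computed by grid approximation in plain Python rather than in Stan.

```python
import numpy as np

rng = np.random.default_rng(2)

# Data actually generated with sigma = 5, outside the prior interval.
y = rng.normal(0, 5, size=200)

def log_lik(sigma):
    # Normal(0, sigma) log likelihood, up to an additive constant.
    return -y.size * np.log(sigma) - np.sum(y**2) / (2 * sigma**2)

grid = np.linspace(0.01, 10, 2_000)

# Hard interval prior: uniform(0, 3), i.e. zero density above 3.
log_post = np.array([log_lik(s) if s <= 3 else -np.inf for s in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

# The posterior mode slams into the upper bound at 3.
print(grid[np.argmax(post)])
```

A soft prior (e.g. a half-normal or exponential on the scale) would instead let the data pull the posterior toward 5 while still expressing the prior preference.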

Stan is a probabilistic programming language

See here: Stan: A Probabilistic Programming Language. Journal of Statistical Software. (Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, Allen Riddell) And here: Stan is Turing Complete. So what? (Bob Carpenter) And, the pre-stan version: Fully Bayesian computing. (Jouni Kerman and Andrew Gelman) Apparently […]

Wine + Stan + Climate change = ?

Pablo Almaraz writes: Recently, I published a paper in the journal Climate Research in which I used RStan to conduct the statistical analyses: Almaraz P (2015) Bordeaux wine quality and climate fluctuations during the last century: changing temperatures and changing industry. Clim Res 64:187-199.

Custom Distribution Solutions

I (Aki) recently made a case study that demonstrates how to implement user-defined probability functions in the Stan language (case study, git repo). As an example I use the generalized Pareto distribution (GPD) to model extreme values of geomagnetic storm data from the World Data Center for Geomagnetism. Stan has had support for user defined […]
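For reference, here is a plain-Python version of the generalized Pareto log density, the quantity a user-defined `_lpdf` function in Stan would compute. The function name, signature, and parameterization (location mu, scale sigma, shape xi) are the textbook ones, not necessarily those used in the case study.

```python
import math

def gpd_lpdf(y, mu, sigma, xi):
    """Log density of the generalized Pareto distribution
    with location mu, scale sigma > 0, and shape xi."""
    if sigma <= 0:
        raise ValueError("sigma must be positive")
    z = (y - mu) / sigma
    # Support: z >= 0, and additionally z <= -1/xi when xi < 0.
    if z < 0 or (xi < 0 and z > -1.0 / xi):
        return -math.inf
    if abs(xi) < 1e-12:
        # Exponential limit as xi -> 0.
        return -math.log(sigma) - z
    return -math.log(sigma) - (1.0 + 1.0 / xi) * math.log1p(xi * z)

print(gpd_lpdf(1.0, 0.0, 1.0, 0.5))  # -3 * log(1.5), about -1.216
```

In Stan the equivalent function would live in the `functions` block with a name ending in `_lpdf` so it can be used in sampling statements.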

Computing marginal likelihoods in Stan, from Quentin Gronau and E. J. Wagenmakers

Gronau and Wagenmakers write: The bridgesampling package facilitates the computation of the marginal likelihood for a wide range of different statistical models. For models implemented in Stan (such that the constants are retained), executing the code bridge_sampler(stanfit) automatically produces an estimate of the marginal likelihood. Full story is at the link.

Stan Roundup, 10 November 2017

We’re in the heart of the academic season and there’s a lot going on. James Ramsey reported a critical performance regression bug in Stan 2.17 (this affects the latest CmdStan and PyStan, not the latest RStan). Sean Talts and Daniel Lee diagnosed the underlying problem as being with the change from char* to std::string arguments—you […]

Using Stan to improve rice yields

Matt Espe writes: Here is a new paper citing Stan and the rstanarm package. Yield gap analysis of US rice production systems shows opportunities for improvement. Matthew B. Espe, Kenneth G. Cassman, Haishun Yang, Nicolas Guilpart, Patricio Grassini, Justin Van Wart, Merle Anders, Donn Beighley, Dustin Harrell, Steve Linscombe, Kent McKenzie, Randall Mutters, Lloyd T. […]