## Michael found the bug in Stan’s new sampler

Gotcha! Michael found the bug! That was a lot of effort, during which time he produced ten pages of dense LaTeX to help Daniel and me understand the algorithm enough to help debug (we’re trying to write a bunch of these algorithmic details up for a more general audience, so stay tuned). So what was […]

## Stan 2.10 through Stan 2.13 produce biased samples

[Update: bug found! See the follow-up post, Michael found the bug in Stan’s new sampler] [Update: rolled in info from comments.] After all of our nagging of people to use samplers that produce unbiased samples, we are mortified to have to announce that Stan versions 2.10 through 2.13 produce biased samples. The issue: Thanks to […]

## Using Stan in an agent-based model: Simulation suggests that a market could be useful for building public consensus on climate change

Jonathan Gilligan writes: I’m writing to let you know about a preprint that uses Stan in what I think is a novel manner: Two graduate students and I developed an agent-based simulation of a prediction market for climate, in which traders buy and sell securities that are essentially bets on what the global average temperature […]

## Interesting epi paper using Stan

Jon Zelner writes: Just thought I’d send along this paper by Justin Lessler et al. Thought it was both clever & useful and a nice ad for using Stan for epidemiological work. Basically, what this paper is about is estimating the true prevalence and case fatality ratio of MERS-CoV [Middle East Respiratory Syndrome Coronavirus Infection] […]

## Kaggle Kernels

Anthony Goldbloom writes: In late August, Kaggle launched an open data platform where data scientists can share data sets. In the first few months, our members have shared over 300 data sets on topics ranging from election polls to EEG brainwave data. It’s only a few months old, but it’s already a rich repository for […]

## Stan Webinar, Stan Classes, and StanCon

This post is by Eric. We have a number of Stan-related events in the pipeline. On 22 Nov, Ben Goodrich and I will be holding a free webinar called Introduction to Bayesian Computation Using the rstanarm R Package. Here is the abstract: The goal of the rstanarm package is to make it easier to use Bayesian […]

## What if NC is a tie and FL is a close win for Clinton?

On the TV they said that they were guessing that Clinton would win Florida in a close race and that North Carolina was too close to call. Let’s run the numbers, Kremp: `> update_prob2(clinton_normal=list("NC"=c(50,2), "FL"=c(52,2)))` Pr(Clinton wins the electoral college) = 95% That’s good news for Clinton. What if both states are tied? `> update_prob2(clinton_normal=list("NC"=c(50,2),` […]

## Election updating software update

When going through Pierre-Antoine Kremp’s election-forecasting updater program, we saw that it ran into difficulties when we started to supply information from lots of states. It was a problem with the program’s rejection sampling algorithm. Kremp updated the program to allow an option where you could specify the winner in each state, and […]
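The rejection-sampling idea is easy to sketch. Here is a minimal, self-contained Python illustration (my own toy model with made-up states, means, and correlations, not Kremp’s actual R/Stan code): draw correlated state vote shares, keep only the simulations consistent with the called results, and read conditional probabilities off the survivors. It also shows why the acceptance rate collapses as constraints pile up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mini-model: correlated Clinton vote shares in three states.
# States, means, and the correlation structure are illustrative only.
states = ["NC", "FL", "VA"]
mean = np.array([0.50, 0.51, 0.52])
# shared national swing induces correlation across states
cov = 0.02**2 * (0.5 * np.eye(3) + 0.5)
sims = rng.multivariate_normal(mean, cov, size=100_000)

# Rejection step: keep only simulations consistent with the called results,
# e.g. "Clinton wins FL" and "NC somewhere between 45% and 50%".
keep = (sims[:, 1] > 0.5) & (sims[:, 0] > 0.45) & (sims[:, 0] < 0.50)
conditioned = sims[keep]

# Each extra constraint shrinks the accepted set further -- the difficulty
# with supplying information from lots of states at once.
print(f"acceptance rate: {keep.mean():.3f}")
print(f"Pr(Clinton wins VA | info) = {(conditioned[:, 2] > 0.5).mean():.2f}")
```

Specifying a hard winner per state (as in Kremp’s updated option) is just a simpler version of the same filter, with a single threshold per constrained state.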

## Now that 7pm has come, what do we know?

(followup to this post) On TV they said that Trump won Kentucky and Indiana (no surprise), Clinton won Vermont (really no surprise), but South Carolina, Georgia, and Virginia were too close to call. I’ll run Pierre-Antoine Kremp’s program conditioning on this information, coding states that are “too close to call” as being somewhere between 45% […]

## What is the chance that your vote will decide the election? Ask Stan!

I was impressed by Pierre-Antoine Kremp’s open-source poll aggregator and election forecaster (all in R and Stan with an automatic data feed!) so I wrote to Kremp: I was thinking it could be fun to compute probability of decisive vote by state, as in this paper. This can be done with some not difficult but […]

## Why I prefer 50% rather than 95% intervals

I prefer 50% to 95% intervals for 3 reasons: 1. Computational stability, 2. More intuitive evaluation (half the 50% intervals should contain the true value), 3. A sense that in applications it’s best to get a sense of where the parameters and predicted values will be, not to attempt an unrealistic near-certainty. This came up […]
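Reason 2 is a checkable calibration claim: across repeated datasets, roughly half of the 50% intervals should cover the truth. A quick simulation (my own toy example in Python; the blog’s usual tooling is R/Stan, but the point is language-agnostic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: estimate a normal mean from n draws; the 50% interval is
# mean +/- 0.6745 * se, since 0.6745 is the 75th percentile of the
# standard normal.
true_mu, sigma, n, reps = 0.0, 1.0, 100, 20_000
z50 = 0.6745
covered = 0
for _ in range(reps):
    y = rng.normal(true_mu, sigma, n)
    se = y.std(ddof=1) / np.sqrt(n)
    lo, hi = y.mean() - z50 * se, y.mean() + z50 * se
    covered += lo <= true_mu <= hi

# Empirical coverage should land near 0.50 -- easy to eyeball, unlike
# checking that only 1-in-20 95% intervals miss.
print(f"empirical coverage of 50% intervals: {covered / reps:.3f}")
```

The evaluation advantage is exactly this: misses are common enough under 50% intervals that miscalibration shows up quickly, whereas verifying 95% coverage needs many more replications.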

## Michael Betancourt has made NUTS even more awesome and efficient!

In a beautiful new paper, Betancourt writes: The geometric foundations of Hamiltonian Monte Carlo implicitly identify the optimal choice of [tuning] parameters, especially the integration time. I then consider the practical consequences of these principles in both existing algorithms and a new implementation called Exhaustive Hamiltonian Monte Carlo [XMC] before demonstrating the utility of these […]

## Some modeling and computational ideas to look into

Can we implement these in Stan? Marginally specified priors for non-parametric Bayesian estimation (by David Kessler, Peter Hoff, and David Dunson): Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of […]

## “It’s not reproducible if it only runs on your laptop”: Jon Zelner’s tips for a reproducible workflow in R and Stan

Jon Zelner writes: Reproducibility is becoming more and more a part of the conversation when it comes to public health and social science research. . . . But comparatively little has been said about another dimension of the reproducibility crisis, which is the difficulty of re-generating already-complete analyses using the exact same input data. But […]

## Yes, despite what you may have heard, you can easily fit hierarchical mixture models in Stan

There was some confusion on the Stan list that I wanted to clear up, having to do with fitting mixture models. Someone quoted this from John Kruschke’s book, Doing Bayesian Data Analysis: The lack of discrete parameters in Stan means that we cannot do model comparison as a hierarchical model with an indexical parameter at […]
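For readers wondering how Stan fits mixtures without discrete parameters: you marginalize the component indicator out of the likelihood, which is what Stan’s `log_mix` function computes for the two-component case. A Python sketch of that arithmetic (illustrative only, not Stan code):

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

# Marginalizing the discrete indicator z_n out of a two-component normal
# mixture: instead of sampling z_n, sum over its two values inside the
# likelihood, computed stably on the log scale:
#   log p(y_n) = log( theta * N(y_n|mu1,s1) + (1-theta) * N(y_n|mu2,s2) )
def mixture_loglik(y, theta, mu, sigma):
    lp1 = np.log(theta) + norm.logpdf(y, mu[0], sigma[0])
    lp2 = np.log1p(-theta) + norm.logpdf(y, mu[1], sigma[1])
    return logsumexp(np.stack([lp1, lp2]), axis=0).sum()

# Simulate data from the mixture and evaluate the marginalized likelihood.
rng = np.random.default_rng(2)
z = rng.random(500) < 0.3
y = np.where(z, rng.normal(-2, 1, 500), rng.normal(2, 1, 500))
print(mixture_loglik(y, 0.3, [-2, 2], [1, 1]))
```

Because this marginal likelihood is a smooth function of the continuous parameters, Stan’s HMC sampler handles it directly; no discrete sampling is needed.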

## Practical Bayesian model evaluation in Stan and rstanarm using leave-one-out cross-validation

Our (Aki, Andrew and Jonah) paper Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC was recently published in Statistics and Computing. In the paper we show why it’s better to use LOO instead of WAIC for model evaluation, how to compute LOO quickly and reliably using the full posterior sample, and how Pareto smoothed importance […]
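For intuition, the importance-sampling identity behind LOO fits in a few lines. This Python sketch (my own toy example) uses the raw importance weights, which can be badly behaved when the posterior is sensitive to single observations; Pareto smoothing (PSIS) is the paper’s fix for exactly that instability.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import norm

rng = np.random.default_rng(3)

# Plain importance-sampling LOO from a matrix of pointwise log-likelihoods
# log p(y_i | theta^(s)): S posterior draws by N observations.
def loo_elpd(log_lik):
    # raw importance weights r_i^(s) = 1 / p(y_i | theta^(s)), so
    #   elpd_i = log( sum_s r_i^s p(y_i|theta^s) / sum_s r_i^s )
    #          = -log( mean_s 1 / p(y_i|theta^s) )
    log_r = -log_lik
    S = log_lik.shape[0]
    return -(logsumexp(log_r, axis=0) - np.log(S))

# Fake "posterior": draws of a normal mean, and the data it was fit to.
y = rng.normal(0.3, 1.0, size=50)
theta = rng.normal(y.mean(), 1.0 / np.sqrt(len(y)), size=4000)
log_lik = norm.logpdf(y[None, :], loc=theta[:, None], scale=1.0)
print(f"elpd_loo = {loo_elpd(log_lik).sum():.1f}")
```

The full estimate in the paper replaces `log_r` with Pareto-smoothed weights and uses the fitted Pareto shape parameter as a reliability diagnostic; the `loo` R package wraps all of this for Stan and rstanarm fits.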

## Mathematica, now with Stan

Vincent Picaud developed a Mathematica interface to Stan: MathematicaStan You can find everything you need to get started by following the link above. If you have questions, comments, or suggestions, please let us know through the Stan user’s group or the GitHub issue tracker. MathematicaStan interfaces to Stan through a CmdStan process. Stan programs are […]