Archive of posts filed under the Statistical computing category.

Lasso regression etc in Stan

Someone on the users list asked about lasso regression in Stan, and Ben replied: In the rstanarm package we have stan_lm(), which is sort of like ridge regression, and stan_glm() with family = gaussian and prior = laplace() or prior = lasso(). The latter estimates the shrinkage as a hyperparameter while the former fixes it […]
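
If you'd rather write the model yourself than call rstanarm, here is a minimal Stan sketch of the idea Ben describes: a linear regression with a Laplace (double-exponential) prior on the coefficients, with the shrinkage scale lambda estimated as a hyperparameter. The half-Cauchy hyperprior is just an illustrative choice for this sketch, not what rstanarm does internally.

data {
  int<lower=0> N;          // number of observations
  int<lower=0> K;          // number of predictors
  matrix[N, K] x;          // predictor matrix
  vector[N] y;             // outcome
}
parameters {
  real alpha;              // intercept
  vector[K] beta;          // regression coefficients
  real<lower=0> sigma;     // residual scale
  real<lower=0> lambda;    // shrinkage scale, estimated as a hyperparameter
}
model {
  lambda ~ cauchy(0, 1);                  // illustrative hyperprior, not rstanarm's
  beta ~ double_exponential(0, lambda);   // Laplace prior: the Bayesian lasso
  y ~ normal(alpha + x * beta, sigma);    // likelihood
}

Under this prior the posterior mode matches the classical lasso estimate, but full Bayes averages over the posterior rather than optimizing, so coefficients are shrunk toward zero without being set exactly to zero.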

HMMs in Stan? Absolutely!

Yesterday I was having a conversation with Andrew that went like this: Andrew: Hey, someone’s giving a talk today on HMMs (that someone was Yang Chen, who was giving a talk based on her JASA paper Analyzing single-molecule protein transportation experiments via hierarchical hidden Markov models). Maybe we should add some specialized discrete modules to […]

You can fit hidden Markov models in Stan (and thus, also in Stata! and Python! and R! and Julia! and Matlab!)

You can fit finite mixture models in Stan; see section 12 of the Stan manual. You can fit change point models in Stan; see section 14.2 of the Stan manual. You can fit mark-recapture models in Stan; see section 14.2 of the Stan manual. You can fit hidden Markov models in Stan; see section 9.6 […]
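
To give a flavor of what the HMM chapter does, here is a condensed version of the forward-algorithm pattern from the manual (Stan 2.x syntax; flat implicit priors on the transition and emission simplexes). The key move is marginalizing out the discrete hidden states so that only continuous parameters remain for Stan's sampler.

data {
  int<lower=1> K;                  // number of hidden states
  int<lower=1> V;                  // size of the emission alphabet
  int<lower=1> T;                  // length of the observed sequence
  int<lower=1, upper=V> y[T];      // observed emissions
}
parameters {
  simplex[K] theta[K];             // theta[j] = transition probabilities from state j
  simplex[V] phi[K];               // phi[k] = emission probabilities in state k
}
model {
  // forward algorithm: gamma[t, k] = log Pr(y[1:t], state at t = k)
  real acc[K];
  real gamma[T, K];
  for (k in 1:K)
    gamma[1, k] = log(phi[k, y[1]]);   // uniform initial state, constant dropped
  for (t in 2:T) {
    for (k in 1:K) {
      for (j in 1:K)
        acc[j] = gamma[t - 1, j] + log(theta[j, k]) + log(phi[k, y[t]]);
      gamma[t, k] = log_sum_exp(acc);
    }
  }
  target += log_sum_exp(gamma[T]);     // total log likelihood of the sequence
}

The same marginalize-out-the-discrete-parameters trick is what makes the mixture, change point, and mark-recapture examples work.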

Thanks for attending StanCon 2017!

Thank you all for coming and making the first Stan Conference a success! The organizers were blown away by how many people came to the first conference. We had over 150 registrants this year! StanCon 2017 Video The organizers managed to get a video stream on YouTube: https://youtu.be/DJ0c7Bm5Djk. We have over 1900 views since StanCon! (We lost […]

Come and work with us!

Stan is an open-source, state-of-the-art probabilistic programming language with a high-performance Bayesian inference engine written in C++. Stan has been successfully applied to modeling problems with hundreds of thousands of parameters in fields as diverse as econometrics, sports analytics, physics, pharmacometrics, recommender systems, political science, and many more. Research using Stan has been featured in […]

Stan is hiring! hiring! hiring! hiring!

[insert picture of adorable cat entwined with Stan logo] We’re hiring postdocs to do Bayesian inference. We’re hiring programmers for Stan. We’re hiring a project manager. How many people we hire depends on what gets funded. But we’re hiring a few people for sure. We want the best best people who love to collaborate, who […]

Stan JSS paper out: “Stan: A probabilistic programming language”

As a surprise welcome to 2017, our paper on how the Stan language works along with an overview of how the MCMC and optimization algorithms work hit the stands this week. Bob Carpenter, Andrew Gelman, Matthew D. Hoffman, Daniel Lee, Ben Goodrich, Michael Betancourt, Marcus Brubaker, Jiqiang Guo, Peter Li, and Allen Riddell. 2017. Stan: […]

“A Conceptual Introduction to Hamiltonian Monte Carlo”

Michael Betancourt writes: Hamiltonian Monte Carlo has proven a remarkable empirical success, but only recently have we begun to develop a rigorous understanding of why it performs so well on difficult problems and how it is best applied in practice. Unfortunately, that understanding is confined within the mathematics of differential geometry which has limited […]

Michael found the bug in Stan’s new sampler

Gotcha! Michael found the bug! That was a lot of effort, during which time he produced ten pages of dense LaTeX to help Daniel and me understand the algorithm enough to help debug (we’re trying to write a bunch of these algorithmic details up for a more general audience, so stay tuned). So what was […]

“The Fundamental Incompatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsampling”

Here’s Michael Betancourt writing in 2015: Leveraging the coherent exploration of Hamiltonian flow, Hamiltonian Monte Carlo produces computationally efficient Monte Carlo estimators, even with respect to complex and high-dimensional target distributions. When confronted with data-intensive applications, however, the algorithm may be too expensive to implement, leaving us to consider the utility of approximations such as […]

Some U.S. demographic data at zipcode level conveniently in R

Ari Lamstein writes: I chuckled when I read your recent “R Sucks” post. Some of the comments were a bit … heated … so I thought to send you an email instead. I agree with your point that some of the datasets in R are not particularly relevant. The way that I’ve addressed that is […]

Deep learning, model checking, AI, the no-homunculus principle, and the unitary nature of consciousness

Bayesian data analysis, as my colleagues and I have formulated it, has a human in the loop. Here’s how we put it on the very first page of our book: The process of Bayesian data analysis can be idealized by dividing it into the following three steps: 1. Setting up a full probability model—a joint […]

Only on the internet . . .

I had this bizarrely escalating email exchange. It started with this completely reasonable message: Professor, I was unable to run your code here: https://www.r-bloggers.com/downloading-option-chain-data-from-google-finance-in-r-an-update/ Besides a small typo [you have a 1 after names (options)], the code fails when you actually run the function. The error I get is a lexical error: Error: lexical error: […]

Kaggle Kernels

Anthony Goldbloom writes: In late August, Kaggle launched an open data platform where data scientists can share data sets. In the first few months, our members have shared over 300 data sets on topics ranging from election polls to EEG brainwave data. It’s only a few months old, but it’s already a rich repository for […]

Stan Webinar, Stan Classes, and StanCon

This post is by Eric. We have a number of Stan related events in the pipeline. On 22 Nov, Ben Goodrich and I will be holding a free webinar called Introduction to Bayesian Computation Using the rstanarm R Package. Here is the abstract: The goal of the rstanarm package is to make it easier to use Bayesian […]

Stan Case Studies: A good way to jump in to the language

Wanna learn Stan? Everybody’s talking bout it. Here’s a way to jump in: Stan Case Studies. Find one you like and try it out. P.S. I blogged this last month but it’s so great I’m blogging it again. For this post, the target audience is not already-users of Stan but new users.

Recently in the sister blog and elsewhere

Why it can be rational to vote (see also this by Robert Wiblin, “Why the hour you spend voting is the most socially impactful of all”) Be skeptical when polls show the presidential race swinging wildly The polls of the future will be reproducible and open source Testing the role of convergence in language acquisition, […]

Why I prefer 50% rather than 95% intervals

I prefer 50% to 95% intervals for 3 reasons: 1. Computational stability, 2. More intuitive evaluation (half the 50% intervals should contain the true value), 3. A sense that in applications it's best to get a sense of where the parameters and predicted values will be, not to attempt an unrealistic near-certainty. This came up […]
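
To make reason 2 concrete, here is the calibration arithmetic for a hypothetical batch of m simulation replications with known true values. If the intervals are calibrated, the coverage count C satisfies

  C \sim \mathrm{Binomial}(m, 1/2), \qquad \mathbb{E}[C] = m/2, \qquad \mathrm{sd}(C) = \sqrt{m}/2,

so with m = 100 replications roughly 50 ± 5 of the 50% intervals should cover the truth, which is easy to eyeball; the corresponding check for 95% intervals, about 95 ± 2, leaves far less room to detect miscalibration.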

Michael Betancourt has made NUTS even more awesome and efficient!

In a beautiful new paper, Betancourt writes: The geometric foundations of Hamiltonian Monte Carlo implicitly identify the optimal choice of [tuning] parameters, especially the integration time. I then consider the practical consequences of these principles in both existing algorithms and a new implementation called Exhaustive Hamiltonian Monte Carlo [XMC] before demonstrating the utility of these […]

Some modeling and computational ideas to look into

Can we implement these in Stan? Marginally specified priors for non-parametric Bayesian estimation (by David Kessler, Peter Hoff, and David Dunson): Prior specification for non-parametric Bayesian inference involves the difficult task of quantifying prior knowledge about a parameter of high, often infinite, dimension. A statistician is unlikely to have informed opinions about all aspects of […]