Archive of posts filed under the Statistical computing category.

4 for 4.0 — The Latest JAGS

This post is by Bob Carpenter. I just saw over on Martyn Plummer’s JAGS News blog that JAGS 4.0 is out. Martyn provided a series of blog posts highlighting the new features: 1. Reproducibility: Examples will now be fully reproducible draw-for-draw and chain-for-chain with the same seed. (Of course, compiler, optimization level, platform, CPU, and […]

2 new thoughts on Cauchy priors for logistic regression coefficients

Aki noticed this paper, On the Use of Cauchy Prior Distributions for Bayesian Logistic Regression, by Joyee Ghosh, Yingbo Li, and Robin Mitra, which begins: In logistic regression, separation occurs when a linear combination of the predictors can perfectly classify part or all of the observations in the sample, and as a result, finite maximum […]
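For readers who want to experiment, here is a minimal Stan sketch of a logistic regression with the usual weakly informative Cauchy priors (scale 2.5 on coefficients, 10 on the intercept, following the Gelman et al. 2008 defaults); the data names are mine, for illustration only:

data {
  int<lower=0> N;                    // number of observations
  int<lower=0> K;                    // number of predictors
  matrix[N, K] x;                    // predictors, assumed standardized
  array[N] int<lower=0, upper=1> y;  // binary outcomes
}
parameters {
  real alpha;
  vector[K] beta;
}
model {
  alpha ~ cauchy(0, 10);   // weak prior on the intercept
  beta ~ cauchy(0, 2.5);   // heavy-tailed priors on the coefficients
  y ~ bernoulli_logit(alpha + x * beta);
}

Because the priors are proper, the posterior stays proper even under complete separation; what else can go wrong with those heavy tails in separated data is exactly what the paper examines.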

3 reasons why you can’t always use predictive performance to choose among models

A couple years ago Wei and I published a paper, Difficulty of selecting among multilevel models using predictive accuracy, in which we . . . well, we discussed the difficulty of selecting among multilevel models using predictive accuracy. The paper happened as follows. We’d been fitting hierarchical logistic regressions of poll data and I had […]

3 postdoc opportunities you can’t miss—here in our group at Columbia! Apply NOW, don’t miss out!

Hey, just once, the Buzzfeed-style hype is appropriate. We have 3 amazing postdoc opportunities here, and you need to apply NOW. Here’s the deal: we’re working on some amazing projects. You know about Stan and associated exciting projects in computational statistics. There’s the virtual database query, which is the way I like to describe our […]

You’ll never guess what’s been happening with PyStan and PyMC—Click here to find out.

PLEASE NOTE: This is a guest post by Llewelyn Richards-Ward. When there are two packages that appear to do the same thing, let’s return to the Zen of Python, which suggests that: There should be one—and preferably only one—obvious way to do it. Why is this particular mantra important? I think because the majority of users […]

Stan intro in Amherst, Mass.

Krzysztof Sakrejda writes: I’m doing a brief intro to Stan Thursday 4:30pm in Amherst at the University of Massachusetts. As the meetup blurb indicates I’m not going to attempt a full tour but I will try to touch on all the pieces required to make it easier to build on models from the manual and […]

PMXStan: an R package to facilitate Bayesian PKPD modeling with Stan

From Yuan Xiong, David A. James, Fei He, and Wenping Wang at Novartis. Full version of the poster here.

Comparing Waic (or loo, or any other predictive error measure)

Ed Green writes: I have fitted 5 models in Stan and computed WAIC and its standard error for each. The standard errors are all roughly the same (all between 209 and 213). If WAIC_1 is within one standard error (of WAIC_1) of WAIC_2, is it fair to say that WAIC is inconclusive? My reply: No, […]
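A sketch of the arithmetic behind that "No," following the approach in the LOO/WAIC paper discussed below: to compare two models you want the standard error of the paired difference of the pointwise contributions, not the two marginal standard errors, because the pointwise terms are typically strongly correlated across models:

\[
\widehat{\mathrm{elpd}}_{\mathrm{diff}} = \sum_{i=1}^{n}\left(\widehat{\mathrm{elpd}}_{1,i} - \widehat{\mathrm{elpd}}_{2,i}\right),
\qquad
\mathrm{se}\left(\widehat{\mathrm{elpd}}_{\mathrm{diff}}\right) = \sqrt{n\,\mathrm{Var}_{i}\left(\widehat{\mathrm{elpd}}_{1,i} - \widehat{\mathrm{elpd}}_{2,i}\right)}.
\]

The standard error of the difference can be far smaller than either marginal standard error (the 209 to 213 in Ed's question), so overlapping one-standard-error intervals need not make the comparison inconclusive.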

Stan PK/PD Tutorial at the American Conference on Pharmacometrics, 8 Oct 2015

Bill Gillespie, of Metrum, is giving a tutorial next week at ACoP: Getting Started with Bayesian PK/PD Modeling Using Stan: Practical use of Stan and R for PK/PD applications Thursday 8 October 2015, 8 AM — 5 PM, Crystal City, VA This is super cool for us, because Bill’s not one of our core developers […]

Fitting models with discrete parameters in Stan

This book, “Bayesian Cognitive Modeling: A Practical Course,” by Michael Lee and E. J. Wagenmakers, has a bunch of examples of Stan models with discrete parameters—mixture models of various sorts—with Stan code written by Martin Smira! It’s a good complement to the Finite Mixtures chapter in the Stan manual.
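Stan cannot sample discrete parameters directly, so these translations work by marginalizing the discrete quantities out of the likelihood. As a minimal illustration of the trick (mine, not from the book), here is a two-component normal mixture with the component indicators summed out via log_mix:

data {
  int<lower=1> N;
  vector[N] y;
}
parameters {
  real<lower=0, upper=1> theta;  // mixing proportion
  ordered[2] mu;                 // ordered means to break label switching
  vector<lower=0>[2] sigma;
}
model {
  theta ~ beta(2, 2);
  mu ~ normal(0, 10);
  sigma ~ cauchy(0, 2.5);
  for (n in 1:N)
    target += log_mix(theta,
                      normal_lpdf(y[n] | mu[1], sigma[1]),
                      normal_lpdf(y[n] | mu[2], sigma[2]));
}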

The Final Bug, or, Please please please please please work this time!

I’ve been banging my head against this problem, on and off, for a couple months now. It’s an EP-like algorithm that a collaborator and I came up with for integrating external aggregate data into a Bayesian analysis. My colleague tried a simpler version on an example and it worked fine; since then I’ve been playing around […]

Stan Puzzle #1: Inferring Ability from Streaks

Inspired by X’s blog’s Le Monde puzzle entries, I have a little Stan coding puzzle for everyone (though you can solve the probability part of the coding problem without actually knowing Stan). This almost (heavy emphasis on “almost” there) makes me wish I was writing exams. Puzzle #1: Inferring Ability from Streaks Suppose a player […]
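The excerpt cuts off before the full statement, so here is only a guess at the flavor, not the actual puzzle or its solution: if what you observe are lengths of success streaks, each streak ending in a failure, then a streak of length k has probability theta^k * (1 - theta), and the Stan model is nearly a one-liner:

data {
  int<lower=0> M;                // number of completed streaks
  array[M] int<lower=0> streak;  // observed streak lengths
}
parameters {
  real<lower=0, upper=1> theta;  // per-attempt success probability
}
model {
  theta ~ beta(1, 1);            // uniform prior on ability
  for (m in 1:M)
    target += streak[m] * log(theta) + log1m(theta);  // geometric likelihood
}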

PK/PD Talk with Stan — Thu 8 Oct, 10:30 AM at Columbia: Improved confidence intervals and p-values by sampling from the normalized likelihood

Sebastian Ueckert and France Mentré are swinging by to visit the Stan team at Columbia and Sebastian’s presenting the following talk, to which everyone is invited. Improved confidence intervals and p-values by sampling from the normalized likelihood Sebastian Ueckert (1,2), Marie-Karelle Riviere (1), France Mentré (1) (1) IAME, UMR 1137, INSERM and University Paris Diderot, […]

ShinyStan v2.0.0

For those of you not familiar with ShinyStan, it is a graphical user interface for exploring Stan models (and more generally MCMC output from any software). For context, here’s the post on this blog first introducing ShinyStan (formerly shinyStan) from earlier this year. ShinyStan v2.0.0 released ShinyStan v2.0.0 is now available on CRAN. This is […]

Stan at JSM2015

In addition to Jiqiang’s talk on Stan, 11:25 AM on Wednesday, I’ll also be giving a talk about Hamiltonian Monte Carlo today at 3:20 PM. Stanimals in attendance can come find me to score a sweet Stan sticker. And everyone should check out Andrew’s breakout performance in “A Stan is Born”. Update: Turns out I missed even […]

How Hamiltonian Monte Carlo works

Marco Inacio posted this one on the Stan users list: (Statement 1) If the kinetic energy equation comes from a distribution $L$ which is not a symmetric distribution, then thanks to the “Conservation of the Hamiltonian” property we’ll still be able to accept the proposal with probability 1 if we are computing the Hamiltonian’s […]
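For readers new to the topic, here is the accept step the question is about, as a sketch of standard HMC rather than a resolution of Marco's question:

\[
H(\theta, p) = -\log p(\theta \mid y) + K(p),
\qquad
P(\text{accept}) = \min\left\{1,\; \exp\left(H(\theta, p) - H(\theta^{*}, p^{*})\right)\right\}.
\]

If the Hamiltonian dynamics could be simulated exactly, $H$ would be conserved along the trajectory and every proposal would be accepted; the Metropolis step exists to correct the discretization error of the leapfrog integrator. Whether that accept-with-probability-1 logic survives an asymmetric kinetic energy is what the thread digs into.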

If you leave your datasets sitting out on the counter, they get moldy

I received the following in the email: I had a look at the dataset on speed dating you put online, and I found some big inconsistencies. Since a lot of people are using it, I hope this can help to fix them (or hopefully I made a mistake in interpreting the dataset). Here are the […]

Stan is Turing complete

Stan is Turing complete.
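The post really is that short. For anyone wanting to see why the claim holds: Stan has conditionals, unbounded while loops, and mutable local state, which is all a language needs. Here's a toy function (mine, not from the post) whose termination for a given input is equivalent to the Collatz conjecture holding for that input:

functions {
  // Counts Collatz steps from n0 down to 1. The while loop is
  // unbounded, which is the essence of the Turing-completeness claim.
  int collatz_steps(int n0) {
    int n = n0;
    int steps = 0;
    while (n > 1) {
      if (n % 2 == 0)
        n = n %/% 2;    // integer division
      else
        n = 3 * n + 1;
      steps += 1;
    }
    return steps;
  }
}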

New papers on LOO/WAIC and Stan

Aki, Jonah, and I have released the much-discussed paper on LOO and WAIC in Stan: Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models. We (that is, Aki) now recommend LOO rather than WAIC, especially now that we have an R function to quickly compute LOO using Pareto smoothed importance sampling. In […]
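For a preview of the paper's key formula: the leave-one-out predictive density for point $i$ is estimated from a single set of posterior draws $\theta^{(1)}, \ldots, \theta^{(S)}$ by importance sampling, with the raw weights proportional to $1 / p(y_i \mid \theta^{(s)})$ stabilized by fitting a generalized Pareto distribution to their upper tail:

\[
\widehat{\mathrm{elpd}}_{\mathrm{loo}}
= \sum_{i=1}^{n} \log
\frac{\sum_{s=1}^{S} w_{i}^{(s)}\, p\left(y_{i} \mid \theta^{(s)}\right)}
     {\sum_{s=1}^{S} w_{i}^{(s)}},
\]

where the $w_{i}^{(s)}$ are the Pareto-smoothed importance weights. The estimated shape parameter of the fitted Pareto doubles as a diagnostic: when it is too large, the importance-sampling estimate for that point can't be trusted.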

An Excel add-in for regression analysis

Bob Nau writes: I know you are not particularly fond of Excel, but you might (I hope) be interested in a free Excel add-in for multivariate data analysis and linear regression that I am distributing here: http://regressit.com. I originally developed it for teaching an advanced MBA elective course on regression and time series analysis at […]