Archive of posts filed under the Statistical computing category.

Boston Stan meetup 1 Dec

Here’s the announcement: Using Stan for variational inference, plus a couple lightning talks. Dustin Tran will give a talk on using Stan for variational inference, then we’ll have a couple of lightning (5-minute-ish) talks on projects. David Sparks will talk, I will talk about some of my work, and we’re looking for 1-2 more volunteers. […]

Flatten your abs with this new statistical approach to quadrature

Philipp Hennig, Michael Osborne, and Mark Girolami write: We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. . . . We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. […]
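The core idea (numerical routines that report an uncertainty alongside their answer, rather than a bare point estimate) can be illustrated with plain Monte Carlo integration. This is my toy sketch, far simpler than the Bayesian quadrature and probabilistic ODE solvers the paper actually surveys:

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo integration of f(x) = x**2 over [0, 1], returning an
# uncertainty alongside the estimate -- the spirit of probabilistic
# numerics, though the paper's methods are much more sophisticated.
n = 100_000
x = rng.uniform(0.0, 1.0, size=n)
fx = x**2

estimate = fx.mean()                    # true value is 1/3
std_error = fx.std(ddof=1) / np.sqrt(n)  # the "uncertainty in the calculation"
```

The point is the pair `(estimate, std_error)`: a downstream computation can propagate the numerical error instead of silently treating the estimate as exact.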

Stan Puzzle 2: Distance Matrix Parameters

This puzzle comes in three parts. There are some hints at the end. Part I: Constrained Parameter Definition Define a Stan program with a transformed matrix parameter d that is constrained to be a K by K distance matrix. Recall that a distance matrix must satisfy the definition of a metric for all i, j: […]
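For readers who want to check a candidate solution numerically: the metric axioms referenced in the puzzle are a zero diagonal, symmetry, non-negativity, and the triangle inequality. A hypothetical checker in Python (my helper, not part of the puzzle or its Stan solution):

```python
import numpy as np

def is_distance_matrix(d, tol=1e-9):
    """Check whether a K x K matrix satisfies the metric axioms."""
    d = np.asarray(d, dtype=float)
    K = d.shape[0]
    if d.shape != (K, K):
        return False
    if not np.allclose(np.diag(d), 0.0, atol=tol):  # d[i,i] == 0
        return False
    if not np.allclose(d, d.T, atol=tol):           # symmetry
        return False
    if (d < -tol).any():                            # non-negativity
        return False
    # triangle inequality: d[i,j] <= d[i,k] + d[k,j] for all i, j, k
    for k in range(K):
        if (d > d[:, [k]] + d[[k], :] + tol).any():
            return False
    return True
```

The broadcast `d[:, [k]] + d[[k], :]` builds the K x K matrix of sums d[i,k] + d[k,j] in one shot, so the triangle check costs K matrix comparisons instead of a triple loop.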

Pareto smoothed importance sampling and infinite variance (2nd ed)

This post is by Aki. Last week Xi’an blogged about an arXiv paper by Chatterjee and Diaconis which considers the proper sample size in an importance sampling setting with infinite variance. I commented on Xi’an’s post, and the end result was my guest post on Xi’an’s Og. I made an additional figure below to summarise […]
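For context on why the tail of the importance weights matters: here is a toy example of self-normalized importance sampling with simple weight truncation (Ionides-style capping at sqrt(n) times the mean weight). This is only a crude cousin of the Pareto smoothing the post is about, and all the specifics here are mine:

```python
import numpy as np

rng = np.random.default_rng(1)

# Estimate E[x] under the target N(0, 1) using a too-narrow proposal
# N(0, 0.8), which produces heavy-tailed importance weights.
n = 10_000
x = rng.normal(0.0, 0.8, size=n)
logw = -0.5 * x**2 - (-0.5 * (x / 0.8) ** 2 - np.log(0.8))  # log target - log proposal
w = np.exp(logw - logw.max())  # constants cancel after self-normalization

# Cap the largest weights to tame the tail before normalizing.
w_trunc = np.minimum(w, np.sqrt(n) * w.mean())

est_raw = np.sum(w * x) / np.sum(w)
est_trunc = np.sum(w_trunc * x) / np.sum(w_trunc)
```

Pareto smoothed importance sampling replaces this blunt cap with a fitted generalized Pareto tail, whose shape parameter also diagnoses when the weights are too heavy-tailed to trust at all.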

Bayesian Computing: Adventures on the Efficient Frontier

That’s the title of my forthcoming talk at the NIPS workshop at 9am on 12 Dec.

This is a workshop you can’t miss: DataMeetsViz

This looks like it was a great conference with an all-star lineup of speakers. You can click through and see the talks.

4 for 4.0 — The Latest JAGS

This post is by Bob Carpenter. I just saw over on Martyn Plummer’s JAGS News blog that JAGS 4.0 is out. Martyn provided a series of blog posts highlighting the new features: 1. Reproducibility: Examples will now be fully reproducible draw-for-draw and chain-for-chain with the same seed. (Of course, compiler, optimization level, platform, CPU, and […]

2 new thoughts on Cauchy priors for logistic regression coefficients

Aki noticed this paper, On the Use of Cauchy Prior Distributions for Bayesian Logistic Regression, by Joyee Ghosh, Yingbo Li, and Robin Mitra, which begins: In logistic regression, separation occurs when a linear combination of the predictors can perfectly classify part or all of the observations in the sample, and as a result, finite maximum […]

3 reasons why you can’t always use predictive performance to choose among models

A couple years ago Wei and I published a paper, Difficulty of selecting among multilevel models using predictive accuracy, in which we . . . well, we discussed the difficulty of selecting among multilevel models using predictive accuracy. The paper happened as follows. We’d been fitting hierarchical logistic regressions of poll data and I had […]

3 postdoc opportunities you can’t miss—here in our group at Columbia! Apply NOW, don’t miss out!

Hey, just once, the Buzzfeed-style hype is appropriate. We have 3 amazing postdoc opportunities here, and you need to apply NOW. Here’s the deal: we’re working on some amazing projects. You know about Stan and associated exciting projects in computational statistics. There’s the virtual database query, which is the way I like to describe our […]

You’ll never guess what’s been happening with PyStan and PyMC—Click here to find out.

PLEASE NOTE: This is a guest post by Llewelyn Richards-Ward. When there are two packages appearing to do the same thing, let’s return to the Zen of Python, which suggests that: There should be one—and preferably only one—obvious way to do it. Why is this particular mantra important? I think because the majority of users […]

Stan intro in Amherst, Mass.

Krzysztof Sakrejda writes: I’m doing a brief intro to Stan Thursday 4:30pm in Amherst at the University of Massachusetts. As the meetup blurb indicates I’m not going to attempt a full tour but I will try to touch on all the pieces required to make it easier to build on models from the manual and […]

PMXStan: an R package to facilitate Bayesian PKPD modeling with Stan

From Yuan Xiong, David A James, Fei He, and Wenping Wang at Novartis. Full version of the poster here.

Comparing Waic (or loo, or any other predictive error measure)

Ed Green writes: I have fitted 5 models in Stan and computed WAIC and its standard error for each. The standard errors are all roughly the same (all between 209 and 213). If WAIC_1 is within one standard error (of WAIC_1) of WAIC_2, is it fair to say that WAIC is inconclusive? My reply: No, […]
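One standard way to make such comparisons (the approach implemented in the loo R package) is to compute the standard error of the pointwise differences between two models, rather than comparing the two models' separate standard errors, since the pointwise contributions are typically highly correlated across models. A hypothetical sketch of that computation:

```python
import numpy as np

def elpd_diff(pointwise_1, pointwise_2):
    """Difference in expected log pointwise predictive density between two
    models, with a standard error computed from the pointwise differences
    (as the loo R package does), not from each model's own SE."""
    diff = np.asarray(pointwise_1) - np.asarray(pointwise_2)
    n = diff.size
    return diff.sum(), np.sqrt(n) * diff.std(ddof=1)
```

With strongly correlated pointwise contributions, this SE of the difference can be far smaller than either model's individual SE (209-213 in Ed's example), so the comparison need not be inconclusive.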

Stan PK/PD Tutorial at the American Conference on Pharmacometrics, 8 Oct 2015

Bill Gillespie, of Metrum, is giving a tutorial next week at ACoP: Getting Started with Bayesian PK/PD Modeling Using Stan: Practical use of Stan and R for PK/PD applications Thursday 8 October 2015, 8 AM — 5 PM, Crystal City, VA This is super cool for us, because Bill’s not one of our core developers […]

Fitting models with discrete parameters in Stan

This book, “Bayesian Cognitive Modeling: A Practical Course,” by Michael Lee and E. J. Wagenmakers, has a bunch of examples of Stan models with discrete parameters—mixture models of various sorts—with Stan code written by Martin Smira! It’s a good complement to the Finite Mixtures chapter in the Stan manual.
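The reason these examples need special handling is that Stan has no discrete parameters: mixture indicators must be marginalized out of the likelihood. The same computation in Python, for a toy two-component normal mixture (my example, not one from the book):

```python
import numpy as np

def normal_logpdf(y, mu, sigma):
    return -0.5 * np.log(2 * np.pi) - np.log(sigma) - 0.5 * ((y - mu) / sigma) ** 2

def mixture_loglik(y, theta, mu, sigma):
    """Log-likelihood of a 2-component normal mixture with the discrete
    indicator z summed out, as a Stan model must do:
    log p(y) = log(theta * N(y | mu[0], sigma[0])
                   + (1 - theta) * N(y | mu[1], sigma[1]))."""
    y = np.asarray(y, dtype=float)
    lp0 = np.log(theta) + normal_logpdf(y, mu[0], sigma[0])
    lp1 = np.log1p(-theta) + normal_logpdf(y, mu[1], sigma[1])
    # logaddexp is the stable log-sum-exp over the two components
    return np.logaddexp(lp0, lp1).sum()
```

In Stan itself the same trick is written with `log_mix` or `log_sum_exp` inside the model block; working on the log scale keeps tiny component densities from underflowing.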

The Final Bug, or, Please please please please please work this time!

I’ve been banging my head against this problem, on and off, for a couple months now. It’s an EP-like algorithm that a collaborator and I came up with for integrating external aggregate data into a Bayesian analysis. My colleague tried a simpler version on an example and it worked fine; since then I’ve been playing around […]

Stan Puzzle #1: Inferring Ability from Streaks

Inspired by X’s blog’s Le Monde puzzle entries, I have a little Stan coding puzzle for everyone (though you can solve the probability part of the coding problem without actually knowing Stan). This almost (heavy emphasis on “almost” there) makes me wish I was writing exams. Puzzle #1: Inferring Ability from Streaks Suppose a player […]

PK/PD Talk with Stan — Thu 8 Oct, 10:30 AM at Columbia: Improved confidence intervals and p-values by sampling from the normalized likelihood

Sebastian Ueckert and France MentrĂ© are swinging by to visit the Stan team at Columbia and Sebastian’s presenting the following talk, to which everyone is invited. Improved confidence intervals and p-values by sampling from the normalized likelihood Sebastian Ueckert (1,2), Marie-Karelle Riviere (1), France MentrĂ© (1) (1) IAME, UMR 1137, INSERM and University Paris Diderot, […]

ShinyStan v2.0.0

For those of you not familiar with ShinyStan, it is a graphical user interface for exploring Stan models (and more generally MCMC output from any software). For context, here’s the post on this blog first introducing ShinyStan (formerly shinyStan) from earlier this year. ShinyStan v2.0.0 released ShinyStan v2.0.0 is now available on CRAN. This is […]