Here’s the announcement: Using Stan for variational inference, plus a couple lightning talks Dustin Tran will give a talk on using Stan for variational inference, then we’ll have a couple lightning (5-minute-ish) talks on projects. David Sparks will talk, I will talk about some of my work, and we’re looking for 1–2 more volunteers. […]
Philipp Hennig, Michael Osborne, and Mark Girolami write: We deliver a call to arms for probabilistic numerical methods: algorithms for numerical tasks, including linear algebra, integration, optimization and solving differential equations, that return uncertainties in their calculations. . . . We describe how several seminal classic numerical methods can be interpreted naturally as probabilistic inference. […]
This puzzle comes in three parts. There are some hints at the end. Part I: Constrained Parameter Definition Define a Stan program with a transformed matrix parameter d that is constrained to be a K by K distance matrix. Recall that a distance matrix must satisfy the definition of a metric for all i, j: […]
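The puzzle itself asks for Stan code, but the metric conditions such a transformed parameter must satisfy can be sketched in plain Python. This is not a solution to the puzzle, just an illustrative checker (the function name and tolerance are my own) for the properties a K by K distance matrix needs: zero diagonal, positivity off the diagonal, symmetry, and the triangle inequality.

```python
import numpy as np

def is_distance_matrix(d, tol=1e-9):
    """Check the metric axioms for all i, j: zero diagonal,
    positive off-diagonal, symmetry, triangle inequality."""
    d = np.asarray(d, dtype=float)
    K = d.shape[0]
    if d.shape != (K, K):
        return False
    if not np.allclose(np.diag(d), 0.0, atol=tol):
        return False                      # d[i, i] == 0
    off = ~np.eye(K, dtype=bool)
    if np.any(d[off] <= 0):
        return False                      # d[i, j] > 0 for i != j
    if not np.allclose(d, d.T, atol=tol):
        return False                      # d[i, j] == d[j, i]
    # triangle inequality: d[i, j] <= d[i, k] + d[k, j] for all i, j, k
    for k in range(K):
        if np.any(d > d[:, [k]] + d[[k], :] + tol):
            return False
    return True

# Pairwise Euclidean distances between distinct points form a distance matrix
pts = np.random.default_rng(0).normal(size=(4, 2))
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(is_distance_matrix(d))  # True
```

In a Stan program the analogous move is to build d in the transformed parameters block so these constraints hold by construction, rather than checking them after the fact.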
This post is by Aki Last week Xi’an blogged about an arXiv paper by Chatterjee and Diaconis which considers the proper sample size in an importance sampling setting with infinite variance. I commented Xi’an’s posting and the end result was my guest blog posting in Xi’an’s og. I made an additional figure below to summarise […]
That’s the title of my forthcoming talk at the NIPS workshop at 9am on 12 Dec.
This looks like it was a great conference with an all-star lineup of speakers. You can click through and see the talks.
This post is by Bob Carpenter. I just saw over on Martyn Plummer’s JAGS News blog that JAGS 4.0 is out. Martyn provided a series of blog posts highlighting the new features: 1. Reproducibility: Examples will now be fully reproducible draw-for-draw and chain-for-chain with the same seed. (Of course, compiler, optimization level, platform, CPU, and […]
Aki noticed this paper, On the Use of Cauchy Prior Distributions for Bayesian Logistic Regression, by Joyee Ghosh, Yingbo Li, and Robin Mitra, which begins: In logistic regression, separation occurs when a linear combination of the predictors can perfectly classify part or all of the observations in the sample, and as a result, finite maximum […]
A couple years ago Wei and I published a paper, Difficulty of selecting among multilevel models using predictive accuracy, in which we . . . well, we discussed the difficulty of selecting among multilevel models using predictive accuracy. The paper happened as follows. We’d been fitting hierarchical logistic regressions of poll data and I had […]
Hey, just once, the Buzzfeed-style hype is appropriate. We have 3 amazing postdoc opportunities here, and you need to apply NOW. Here’s the deal: we’re working on some amazing projects. You know about Stan and associated exciting projects in computational statistics. There’s the virtual database query, which is the way I like to describe our […]
PLEASE NOTE: This is a guest post by Llewelyn Richards-Ward. When there are two packages appearing to do the same thing, let’s return to the Zen of Python, which suggests that: There should be one—and preferably only one—obvious way to do it. Why is this particular mantra important? I think because the majority of users […]
Krzysztof Sakrejda writes: I’m doing a brief intro to Stan Thursday 4:30pm in Amherst at the University of Massachusetts. As the meetup blurb indicates I’m not going to attempt a full tour but I will try to touch on all the pieces required to make it easier to build on models from the manual and […]
From Yuan Xiong, David A. James, Fei He, and Wenping Wang at Novartis. Full version of the poster here.
Ed Green writes: I have fitted 5 models in Stan and computed WAIC and its standard error for each. The standard errors are all roughly the same (all between 209 and 213). If WAIC_1 is within one standard error (of WAIC_1) of WAIC_2, is it fair to say that WAIC is inconclusive? My reply: No, […]
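The reply is cut off above, but the usual advice in this situation is that the per-model standard errors are not the relevant quantity: because the two models are scored on the same data points, their pointwise contributions are correlated, and the thing to compute is the standard error of the pointwise difference. A minimal sketch, assuming you have the pointwise WAIC (or elpd) contributions for each model (the function name and toy numbers here are mine):

```python
import numpy as np

def waic_pointwise_diff_se(elpd1, elpd2):
    """Given pointwise contributions from two models on the same
    n data points, return the total difference and its standard
    error, computed from the pointwise differences. This SE is
    often far smaller than either model's own SE because the
    pointwise contributions are highly correlated."""
    elpd1, elpd2 = np.asarray(elpd1), np.asarray(elpd2)
    n = len(elpd1)
    diff = elpd1 - elpd2
    return diff.sum(), np.sqrt(n * diff.var(ddof=1))

# Toy example: two models whose pointwise scores track each other closely
rng = np.random.default_rng(1)
base = rng.normal(-1.0, 0.5, size=500)
e1 = base + rng.normal(0.10, 0.05, size=500)  # model 1 slightly better per point
e2 = base
d, se = waic_pointwise_diff_se(e1, e2)
print(d, se)  # a clear difference with a small SE,
              # despite each model's own SE being large
```

The point of the toy example: each model's own SE here is roughly ten times the SE of the difference, so "the per-model SEs overlap" tells you very little either way.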
Bill Gillespie, of Metrum, is giving a tutorial next week at ACoP: Getting Started with Bayesian PK/PD Modeling Using Stan: Practical use of Stan and R for PK/PD applications Thursday 8 October 2015, 8 AM — 5 PM, Crystal City, VA This is super cool for us, because Bill’s not one of our core developers […]
This book, “Bayesian Cognitive Modeling: A Practical Course,” by Michael Lee and E. J. Wagenmakers, has a bunch of examples of Stan models with discrete parameters—mixture models of various sorts—with Stan code written by Martin Smira! It’s a good complement to the Finite Mixtures chapter in the Stan manual.
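Since Stan has no discrete sampling statements, these mixture examples work by marginalizing the discrete indicator out of the likelihood. A small Python sketch of that marginalization for a two-component normal mixture (this mirrors what Stan's `log_mix` does; the helper names are mine):

```python
import math

def log_sum_exp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def normal_lpdf(y, mu, sigma):
    """Log density of N(y | mu, sigma)."""
    return (-0.5 * math.log(2 * math.pi) - math.log(sigma)
            - 0.5 * ((y - mu) / sigma) ** 2)

def mixture_lpdf(y, lam, mu1, sigma1, mu2, sigma2):
    """log p(y) with the discrete component indicator z summed out:
    p(y) = lam * N(y | mu1, sigma1) + (1 - lam) * N(y | mu2, sigma2)."""
    return log_sum_exp(
        math.log(lam) + normal_lpdf(y, mu1, sigma1),
        math.log1p(-lam) + normal_lpdf(y, mu2, sigma2),
    )
```

Working on the log scale with log-sum-exp, rather than summing densities directly, is what keeps this stable when the component densities are tiny.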
I’ve been banging my head against this problem, on and off, for a couple months now. It’s an EP-like algorithm that a collaborator and I came up with for integrating external aggregate data into a Bayesian analysis. My colleague tried a simpler version on an example and it worked fine; since then I’ve been playing around […]
Inspired by X’s blog’s Le Monde puzzle entries, I have a little Stan coding puzzle for everyone (though you can solve the probability part of the coding problem without actually knowing Stan). This almost (heavy emphasis on “almost” there) makes me wish I were writing exams. Puzzle #1: Inferring Ability from Streaks Suppose a player […]
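The puzzle statement is truncated above, but the title suggests the setup: given observed streaks of successes, infer the player's underlying success probability. As a purely illustrative warm-up (this is my own simulation, not the puzzle's intended solution), here is the forward direction: simulating the distribution of a player's longest streak given an assumed ability theta.

```python
import random

def longest_streak(flips):
    """Length of the longest run of True values in an iterable."""
    best = cur = 0
    for f in flips:
        cur = cur + 1 if f else 0
        best = max(best, cur)
    return best

def simulate_longest_streaks(theta, n_attempts, n_sims, seed=0):
    """Draw the longest-streak statistic n_sims times for a player
    whose per-attempt success probability is theta."""
    rng = random.Random(seed)
    return [
        longest_streak(rng.random() < theta for _ in range(n_attempts))
        for _ in range(n_sims)
    ]

# A 50% player over 100 attempts typically produces a longest streak
# of around 5-8, longer than intuition often suggests.
streaks = simulate_longest_streaks(0.5, 100, 2000)
print(sum(streaks) / len(streaks))
```

The inference direction of the puzzle would invert this: treat theta as a parameter and the observed streak data as the likelihood's input, which is exactly the kind of thing the Stan program would encode.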
PK/PD Talk with Stan — Thu 8 Oct, 10:30 AM at Columbia: Improved confidence intervals and p-values by sampling from the normalized likelihood
Sebastian Ueckert and France Mentré are swinging by to visit the Stan team at Columbia and Sebastian’s presenting the following talk, to which everyone is invited. Improved confidence intervals and p-values by sampling from the normalized likelihood Sebastian Ueckert (1,2), Marie-Karelle Riviere (1), France Mentré (1) (1) IAME, UMR 1137, INSERM and University Paris Diderot, […]
For those of you not familiar with ShinyStan, it is a graphical user interface for exploring Stan models (and more generally MCMC output from any software). For context, here’s the post on this blog first introducing ShinyStan (formerly shinyStan) from earlier this year. ShinyStan v2.0.0 released ShinyStan v2.0.0 is now available on CRAN. This is […]