Here’s the announcement: Using Stan for variational inference, plus a couple of lightning talks Dustin Tran will give a talk on using Stan for variational inference, then we’ll have a couple of lightning (5-minute-ish) talks on projects. David Sparks will talk, I will talk about some of my work, and we’re looking for 1–2 more volunteers. […]
This is really cool. The announcement comes from Joe Cummins: The University of California at Riverside is hiring 4 open rank positions in Design-Based Statistical Inference in the Social Sciences. I [Cummins] think this is a really exciting opportunity for researchers doing all kinds of applied social science statistical work, especially work that cuts across […]
This puzzle comes in three parts. There are some hints at the end. Part I: Constrained Parameter Definition Define a Stan program with a transformed matrix parameter d that is constrained to be a K by K distance matrix. Recall that a distance matrix must satisfy the definition of a metric for all i, j: […]
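For intuition on what the puzzle’s constraint has to enforce, here is a small Python sketch (not part of the puzzle or its solution — the puzzle asks for a Stan program) that checks the standard metric axioms for a candidate K by K matrix: non-negativity, zero diagonal, symmetry, and the triangle inequality. The function name and tolerance are my own illustrative choices.

```python
import numpy as np

def is_distance_matrix(d, tol=1e-9):
    """Check the metric axioms for a square matrix d:
    non-negativity, zero diagonal, symmetry, and the
    triangle inequality d[i,k] <= d[i,j] + d[j,k]."""
    d = np.asarray(d, dtype=float)
    K = d.shape[0]
    if d.shape != (K, K):
        return False
    if np.any(d < -tol):                      # d[i,j] >= 0
        return False
    if np.any(np.abs(np.diag(d)) > tol):      # d[i,i] == 0
        return False
    if np.any(np.abs(d - d.T) > tol):         # d[i,j] == d[j,i]
        return False
    # Triangle inequality for every intermediate point j:
    # the (i,k) entry of d[:,[j]] + d[[j],:] is d[i,j] + d[j,k].
    for j in range(K):
        if np.any(d > d[:, [j]] + d[[j], :] + tol):
            return False
    return True
```

For example, pairwise distances of points on a line always pass, while a matrix violating the triangle inequality fails. The Stan version is harder, since the constraint must be expressed through a transform rather than a rejection check.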
That’s the title of my forthcoming talk at the NIPS workshop at 9am on 12 Dec.
3 new priors you can’t do without, for coefficients and variance parameters in multilevel regression
Partha Lahiri writes, in reference to my 2006 paper: I am interested in finding out a good prior for the regression coefficients and variance components in a multi-level setting. For concreteness, let’s say we have a model like the following: Level 1: Y_ijk | theta_ij ~ (ind) N(theta_ij, sigma^2) Level 2: theta_ij | mu_i ~ (ind) N( […]
Earlier today I discussed a paper by Anne Case and Angus Deaton in which they noted an increase in mortality rates among non-Hispanic white Americans from 1989 to 2013, a pattern that stood in sharp contrast to a decrease in several other rich countries and among U.S. Hispanics as well: Interpretation of this graph is […]
Kevin Van Horn writes: I currently work in a business analytics group at Symantec, and we have several positions to fill. I’d like at least one of those positions to be filled by someone who understands Bayesian modeling and is comfortable using R (or Python) and Stan (or other MCMC tools). The team’s purpose is […]
Aki noticed this paper, On the Use of Cauchy Prior Distributions for Bayesian Logistic Regression, by Joyee Ghosh, Yingbo Li, and Robin Mitra, which begins: In logistic regression, separation occurs when a linear combination of the predictors can perfectly classify part or all of the observations in the sample, and as a result, finite maximum […]
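To see why separation breaks maximum likelihood (the problem the Cauchy prior is meant to regularize), here is a small numerical illustration with made-up data, not taken from the paper: when a predictor perfectly classifies the outcomes, the log-likelihood of a logistic regression keeps increasing as the coefficient grows, so no finite MLE exists.

```python
import numpy as np

def logistic_loglik(beta, x, y):
    """Log-likelihood of a logistic regression with a single
    slope and no intercept: P(y=1) = logit^{-1}(beta * x)."""
    eta = beta * x
    return np.sum(y * eta - np.log1p(np.exp(eta)))

# Completely separated data: y = 1 exactly when x > 0.
x = np.array([-2.0, -1.0, 1.0, 2.0])
y = np.array([0, 0, 1, 1])

# The likelihood improves without bound as beta grows toward infinity.
lls = [logistic_loglik(b, x, y) for b in (1.0, 10.0, 100.0)]
```

Here `lls` is strictly increasing toward 0, which is why a proper prior (Cauchy or otherwise) is needed to get a finite posterior mode.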
Joshua Vogelstein points me to this article by Alexander Franks, Andrew Miller, Luke Bornn, and Kirk Goldsberry and writes: For some reason, I feel like you’d care about this article, and the resulting discussion on your blog would be fun. Hey—label your lines directly! Cool! Ummm . . . no. No. Really, really, really, really […]
Hey, just once, the Buzzfeed-style hype is appropriate. We have 3 amazing postdoc opportunities here, and you need to apply NOW. Here’s the deal: we’re working on some amazing projects. You know about Stan and associated exciting projects in computational statistics. There’s the virtual database query, which is the way I like to describe our […]
Corey Yanofsky pointed me to a paper by Neal Beck, Estimating grouped data models with a binary dependent variable and fixed effects: What are the issues?, which begins: This article deals with a very simple issue: if we have grouped data with a binary dependent variable and want to include fixed effects (group specific intercepts) […]
PLEASE NOTE: This is a guest post by Llewelyn Richards-Ward. When there are two packages appearing to do the same thing, let’s return to the Zen of Python, which suggests that: There should be one—and preferably only one—obvious way to do it. Why is this particular mantra important? I think because the majority of users […]
Krzysztof Sakrejda writes: I’m doing a brief intro to Stan on Thursday at 4:30pm in Amherst at the University of Massachusetts. As the meetup blurb indicates, I’m not going to attempt a full tour, but I will try to touch on all the pieces required to make it easier to build on models from the manual and […]
From Yuan Xiong, David A James, Fei He, and Wenping Wang at Novartis. Full version of the poster here.
Ed Green writes: I have fitted 5 models in Stan and computed WAIC and its standard error for each. The standard errors are all roughly the same (all between 209 and 213). If WAIC_1 is within one standard error (of WAIC_1) of WAIC_2, is it fair to say that WAIC is inconclusive? My reply: No, […]
If you missed it the first time around, here’s a link to: Stan Puzzle 1: Inferring Ability from Streaks First, a hat-tip to Mike, who posted the correct answer as a comment. So as not to spoil the surprise for everyone else, Michael Betancourt (different Mike), emailed me the answer right away (as he always […]
I guess people really do read the Wall Street Journal . . . Edward Adelman sent me the above clipping and calculation and writes: What am I missing? I do not see the 60%. And Richard Rasiej sends me a longer note making the same point: So here I am, teaching another statistics class, this […]
RStan 2.8.0 is available on CRAN! Installation directions can be found on RStan’s Wiki. And since I know a lot of people aren’t patient enough to read through installation instructions, the most important parts are: You (still) need a C++ toolchain. Mac: Xcode. Make sure to open it once after download to accept the license. […]
This book, “Bayesian Cognitive Modeling: A Practical Course,” by Michael Lee and E. J. Wagenmakers, has a bunch of examples of Stan models with discrete parameters—mixture models of various sorts—with Stan code written by Martin Smira! It’s a good complement to the Finite Mixtures chapter in the Stan manual.
Inspired by X’s blog’s Le Monde puzzle entries, I have a little Stan coding puzzle for everyone (though you can solve the probability part of the coding problem without actually knowing Stan). This almost (heavy emphasis on “almost” there) makes me wish I was writing exams. Puzzle #1: Inferring Ability from Streaks Suppose a player […]