Archive of posts filed under the Statistical computing category.

If you leave your datasets sitting out on the counter, they get moldy

I received the following in the email: I had a look at the dataset on speed dating you put online, and I found some big inconsistencies. Since a lot of people are using it, I hope this can help to fix them (or hopefully I made a mistake in interpreting the dataset). Here are the […]

Stan is Turing complete

Stan is Turing complete.

New papers on LOO/WAIC and Stan

Aki, Jonah, and I have released the much-discussed paper on LOO and WAIC in Stan: Efficient implementation of leave-one-out cross-validation and WAIC for evaluating fitted Bayesian models. We (that is, Aki) now recommend LOO rather than WAIC, especially now that we have an R function to quickly compute LOO using Pareto smoothed importance sampling. In […]
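For readers who want to try it, here is a minimal sketch, assuming the loo R package and a Stan program that saves pointwise log-likelihoods in a generated quantities variable named log_lik; the model file, data, and fit object below are placeholders, not the paper's example.

# Hedged sketch: placeholder model and data; assumes rstan and loo are installed.
library(rstan)
library(loo)
# fit <- stan("model.stan", data = my_data)            # hypothetical stanfit
log_lik <- extract_log_lik(fit, parameter_name = "log_lik")
loo_result <- loo(log_lik)     # PSIS-LOO: LOO via Pareto smoothed importance sampling
print(loo_result)
waic_result <- waic(log_lik)   # WAIC, for comparison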

An Excel add-in for regression analysis

Bob Nau writes: I know you are not particularly fond of Excel, but you might (I hope) be interested in a free Excel add-in for multivariate data analysis and linear regression that I am distributing here: http://regressit.com. I originally developed it for teaching an advanced MBA elective course on regression and time series analysis at […]

Short course on Bayesian data analysis and Stan 19-21 July in NYC!

Bob Carpenter, Daniel Lee, and I are giving a 3-day short course in two weeks. Before class, everyone should install R, RStudio, and RStan on their computers. If problems occur, please join the stan-users group and post any questions. It’s important that all participants get Stan running and bring their laptops to the course. Class […]

Introducing StataStan

Thanks to Robert Grant, we now have a Stata interface! For more details, see Robert Grant’s blog post, Introducing StataStan. Jonah and Ben have already kicked the tires, and it works. We’ll be working on it more as time goes on as part of our Institute of Education Sciences grant (turns out education researchers use […]

JuliaCon 2015 (24–27 June, Boston-ish)

JuliaCon is coming to Cambridge, MA, the geek capital of the East Coast: 24–27 June. Here’s the conference site with the program. I (Bob) will be giving a 10-minute “lightning talk” on Stan.jl, the Julia interface to Stan (built by Rob J. Goedman — I’m just pinch hitting because Rob couldn’t make it). The uptake […]

Cross-validation != magic

In a post entitled “A subtle way to over-fit,” John Cook writes: If you train a model on a set of data, it should fit that data well. The hope, however, is that it will fit a new set of data well. So in machine learning and statistics, people split their data into two parts. […]
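For concreteness, here is a minimal sketch of the split John describes, with toy simulated data and an arbitrary 80/20 proportion (none of this comes from his post).

# Hedged sketch: toy data, arbitrary 80/20 train/test split.
set.seed(1)
n <- 100
d <- data.frame(x = rnorm(n))
d$y <- 2 * d$x + rnorm(n)
train_idx <- sample(n, size = 0.8 * n)
train <- d[train_idx, ]
test <- d[-train_idx, ]
fit <- lm(y ~ x, data = train)
mean((predict(fit, newdata = test) - test$y)^2)   # error on the held-out data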

New Alan Turing preprint on arXiv!

Dan Kahan writes: I know you are on 30-day delay, but since the blog version of you will be talking about Bayesian inference in a couple of hours, you might like to look at a paper by Turing, who is on a 70-year delay thanks to the British declassification system, and who addresses the utility of using likelihood ratios for […]

Bob Carpenter’s favorite books on GUI design and programming

Bob writes: I would highly recommend two books that changed the way I thought about GUI design (though I’ve read a lot of them): * Jeff Johnson. GUI Bloopers. I read the first edition in book form and the second in draft form (the editor contacted me based on my enthusiastic Amazon feedback, which was […]

A silly little error, of the sort that I make every day

Ummmm, running Stan, testing out a new method we have that applies EP-like ideas to perform inference with aggregate data—it’s really cool, I’ll post more on it once we’ve tried everything out and have a paper that’s in better shape—anyway, I’m starting with a normal example, a varying-intercept, varying-slope model where the intercepts have population […]

Causal Impact from Google

Bill Harris writes: Did you see http://blog.revolutionanalytics.com/2014/09/google-uses-r-to-calculate-roi-on-advertising-campaigns.html? Would that be something worth a joint post and discussion from you and Judea? I then wrote: Interesting. It seems to all depend on the choice of “control time series.” That said, it could still be a useful method. Bill replied: The good: Bayesian approaches made very approachable […]
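For anyone who wants to poke at the method, here is a rough sketch using the CausalImpact R package, with simulated series and a made-up intervention period (everything below is a placeholder, not Google's example).

# Hedged sketch: simulated data; assumes the CausalImpact package is installed.
library(CausalImpact)
set.seed(1)
x <- 100 + arima.sim(model = list(ar = 0.8), n = 100)   # "control" time series
y <- 1.2 * x + rnorm(100)                               # outcome series
y[71:100] <- y[71:100] + 10                             # pretend intervention effect
impact <- CausalImpact(cbind(y, x), pre.period = c(1, 70), post.period = c(71, 100))
summary(impact)
plot(impact)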

Interactive demonstrations for linear and Gaussian process regressions

Here’s a cool interactive demo of linear regression where you can grab the data points, move them around, and see the fitted regression line changing. There are various such apps around, but this one is particularly clean: (I’d like to credit the creator but I can’t find any attribution at the link, except that it’s […]

Defaults, once set, are hard to change.

So. Farewell then Rainbow color scheme. You reigned in Matlab Far too long. But now that You are no longer The default, Will we miss you? We can only Visualize. E. T. Thribb (17 1/2) Here’s the background.  Brad Stiritz writes: I know you’re a creator and big proponent of open-source tools. Given your strong interest […]

My talk tomorrow (Thurs) at MIT political science: Recent challenges and developments in Bayesian modeling and computation (from a political and social science perspective)

It’s 1pm in room E53-482. I’ll talk about the usual stuff (and some of this too, I guess).

One simple trick to make Stan run faster

Did you know that Stan automatically runs in parallel (and caches compiled models) from R if you do this: source("http://mc-stan.org/rstan/stan.R") (editor: deprecated as of RStan 2.7.0; use at your own peril) It’s from Stan core developer Ben Goodrich. This simple line of code has changed my life. A factor-of-4 speedup might not sound like much, […]
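Since that script is deprecated, here is a sketch of the settings later RStan versions document for the same effect — parallel chains and cached compiled models; the model file and data below are placeholders.

# Hedged sketch for later RStan versions (the source()'d script above is deprecated).
library(rstan)
options(mc.cores = parallel::detectCores())   # run chains in parallel, one per core
rstan_options(auto_write = TRUE)              # cache compiled models on disk
# fit <- stan("model.stan", data = my_data)   # hypothetical model and data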

Introducing shinyStan

As a project for Andrew’s Statistical Communication and Graphics graduate course at Columbia, a few of us (Michael Andreae, Yuanjun Gao, Dongying Song, and I) had the goal of giving RStan’s print and plot functions a makeover. We ended up getting a bit carried away and instead we designed a graphical user interface for interactively exploring virtually […]
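If you already have a stanfit object, launching the interface is a one-liner; a sketch, with fit as a placeholder name:

# Hedged sketch: assumes the shinystan package and an existing stanfit object `fit`.
library(shinystan)
launch_shinystan(fit)   # opens the interactive explorer in a web browser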

VB-Stan: Black-box black-box variational Bayes

Alp Kucukelbir, Rajesh Ranganath, Dave Blei, and I write: We describe an automatic variational inference method for approximating the posterior of differentiable probability models. Automatic means that the statistician only needs to define a model; the method forms a variational approximation, computes gradients using automatic differentiation and approximates expectations via Monte Carlo integration. Stochastic gradient […]
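A rough sketch of trying this flavor of automatic variational inference from R, via rstan's vb() function; the Stan program and data below are placeholders, not the paper's examples.

# Hedged sketch: placeholder model and data; assumes rstan is installed.
library(rstan)
mod <- stan_model("model.stan")        # hypothetical Stan program
fit_vb <- vb(mod, data = my_data)      # automatic (ADVI-style) variational approximation
print(fit_vb)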

Stan Down Under

I (Bob, not Andrew) am in Australia until April 30. I’ll be giving some Stan-related and some data annotation talks, several of which have yet to be concretely scheduled. I’ll keep this page updated with what I’ll be up to. All of the talks other than summer school will be open to the public (the […]

This has nothing to do with the Super Bowl

Joshua Vogelstein writes: The Open Connectome Project at Johns Hopkins University invites outstanding candidates to apply for a postdoctoral or assistant research scientist position in the area of statistical machine learning for big brain imaging data. Our workflow is tightly vertically integrated, ranging from raw data to theory to answering neuroscience questions and back again. […]