Archive of posts filed under the Bayesian Statistics category.

Postdoc opportunity at AstraZeneca in Cambridge, England, in Bayesian Machine Learning using Stan!

Here it is: Predicting drug toxicity with Bayesian machine learning models We’re currently looking for talented scientists to join our innovative academic-style Postdoc. From our centre in Cambridge, UK you’ll be in a global pharmaceutical environment, contributing to live projects right from the start. You’ll take part in a comprehensive training programme, including a focus […]

Psychometrics corner: They want to fit a multilevel model instead of running 37 separate correlation analyses

Anouschka Foltz writes: One of my students has some data, and there is an issue with multiple comparisons. While trying to find out how to best deal with the issue, I came across your article with Martin Lindquist, “Correlations and Multiple Comparisons in Functional Imaging: A Statistical Perspective.” And while my student’s work does not […]

You better check yo self before you wreck yo self

We (Sean Talts, Michael Betancourt, me, Aki, and Andrew) just uploaded a paper (code available here) that outlines a framework for verifying that an algorithm for computing a posterior distribution has been implemented correctly. It is easy to use, straightforward to implement, and ready to be incorporated into a Bayesian workflow. This type of […]
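The core of the check can be sketched in a few lines: draw a parameter from the prior, simulate data from it, run the algorithm, and record the rank of the true parameter among the posterior draws; a correctly implemented algorithm yields uniformly distributed ranks. A minimal sketch, assuming a toy conjugate normal model and standing in an exact conjugate draw for a real sampler (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def sbc_ranks(n_sims=1000, n_draws=99, seed=0):
    # Toy conjugate model: theta ~ N(0, 1), y | theta ~ N(theta, 1),
    # so the exact posterior is N(y/2, 1/2). A correct algorithm should
    # yield ranks of theta among posterior draws that are uniform on 0..n_draws.
    rng = np.random.default_rng(seed)
    ranks = np.empty(n_sims, dtype=int)
    for s in range(n_sims):
        theta = rng.normal(0.0, 1.0)                 # draw from the prior
        y = rng.normal(theta, 1.0)                   # simulate data
        post = rng.normal(y / 2.0, np.sqrt(0.5), size=n_draws)  # "algorithm" output
        ranks[s] = np.sum(post < theta)              # rank statistic
    return ranks
```

A histogram of these ranks that deviates visibly from uniform flags a bug in the sampler or the model implementation.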

Using partial pooling when preparing data for machine learning applications

Geoffrey Simmons writes: I reached out to John Mount/Nina Zumel over at Win Vector with a suggestion for their vtreat package, which automates many common challenges in preparing data for machine learning applications. The default behavior for impact coding high-cardinality variables had been a naive Bayes approach, which I found problematic due to its multi-modal output (assigning […]
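The partial-pooling alternative amounts to shrinking each level's mean outcome toward the grand mean, with rare levels shrunk hardest. A minimal sketch (the `pooled_impact_code` name and the fixed pseudo-count `k` are illustrative assumptions, not vtreat's actual implementation):

```python
import numpy as np

def pooled_impact_code(levels, y, k=5.0):
    # Partial-pooling impact code: each level's effect is its mean outcome
    # shrunk toward the grand mean with weight n / (n + k), so levels with
    # few observations are pulled strongly toward the grand mean.
    levels = np.asarray(levels)
    y = np.asarray(y, dtype=float)
    grand = y.mean()
    codes = {}
    for lev in np.unique(levels):
        mask = levels == lev
        n = mask.sum()
        w = n / (n + k)
        codes[lev] = w * y[mask].mean() + (1 - w) * grand
    return codes
```

The single shrunken value per level avoids the multi-modal output the naive Bayes default produced.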

loo 2.0 is loose

This post is by Jonah and Aki. We’re happy to announce the release of v2.0.0 of the loo R package for efficient approximate leave-one-out cross-validation (and more). For anyone unfamiliar with the package, the original motivation for its development is in our paper: Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation […]
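For readers unfamiliar with the method: leave-one-out cross-validation can be approximated from a single posterior fit by importance sampling, weighting each draw for point i by 1/p(y_i | theta). A minimal sketch of the raw importance-sampling version (the loo package additionally stabilizes these weights with Pareto smoothing; names here are illustrative):

```python
import numpy as np

def loo_elpd(log_lik):
    # log_lik: (S, N) matrix of pointwise log-likelihoods at S posterior draws.
    # Raw importance-sampling LOO: weight draw s for point i by 1/p(y_i | theta_s).
    S, N = log_lik.shape
    lw = -log_lik                   # raw log importance weights
    lw -= lw.max(axis=0)            # stabilize before exponentiating
    w = np.exp(lw)
    w /= w.sum(axis=0)              # self-normalize per data point
    # elpd_loo_i = log sum_s w_si * p(y_i | theta_s)
    elpd_i = np.log(np.sum(w * np.exp(log_lik), axis=0))
    return elpd_i.sum()
```

The Pareto smoothing in the package matters in practice: raw weights like these can have infinite variance when the posterior is far from the leave-one-out posterior.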

Generable: They’re building software for pharma, with Stan inside.

Daniel Lee writes: We’ve just launched our new website. Generable is where precision medicine meets statistical machine learning. We are building a state-of-the-art platform to make individual, patient-level predictions for safety and efficacy of treatments. We’re able to do this by building Bayesian models with Stan. We currently have pilots with AstraZeneca, Sanofi, and University […]

The Millennium Villages Project: a retrospective, observational, endline evaluation

Shira Mitchell et al. write (preprint version here if that link doesn’t work): The Millennium Villages Project (MVP) was a 10-year, multisector, rural development project, initiated in 2005, operating across ten sites in ten sub-Saharan African countries to achieve the Millennium Development Goals (MDGs). . . . In this endline evaluation of the MVP, […]

Fitting a hierarchical model without losing control

Tim Disher writes: I have been asked to run some regularized regressions on a small-N, high-p situation, which for the primary outcome has led to more realistic coefficient estimates and better performance in cross-validation (yay!). Rstanarm made this process very easy for me, so I am grateful for it. I have now been […]

“The Internal and External Validity of the Regression Discontinuity Design: A Meta-Analysis of 15 Within-Study-Comparisons”

Jag Bhalla points to this post by Alex Tabarrok pointing to this paper, “The Internal and External Validity of the Regression Discontinuity Design: A Meta-Analysis of 15 Within-Study-Comparisons,” by Duncan Chaplin, Thomas Cook, Jelena Zurovac, Jared Coopersmith, Mariel Finucane, Lauren Vollmer, and Rebecca Morris, which reports that regression discontinuity (RD) estimation performed well in these […]

Justify my love

When everyone starts walking around the chilly streets of Toronto looking like they’re cosplaying the last 5 minutes of Call Me By Your Name, you know that Spring is in the air. Let’s celebrate the end of winter by pulling out our Liz Phair records, our slightly less-warm coats, and our hunger for long reads […]

Judgment Under Uncertainty: Heuristics and Biases

There are some people I’ve never met who send me scientific papers to comment on for the blog. The other day one of these people sent me one of these: it was a published paper covering several topics on which I am an expert, and it seemed like it could be interesting but at the […]

Mitzi’s talk on spatial models in Ann Arbor, Thursday 5 April 2018

Mitzi returns to her alma mater to give a talk at a joint meeting of the Ann Arbor useR and ASA Meetups: Spatial models in Stan Abstract This case study shows how to efficiently encode and compute an intrinsic conditional autoregressive (ICAR) model in Stan. When data have a neighborhood structure, ICAR models provide spatial smoothing […]
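The pairwise-difference form of the ICAR prior is what makes the efficient encoding possible: up to a constant, the log density is just a sum of squared differences over neighboring pairs. A minimal sketch (function name illustrative; the sum-to-zero constraint needed for identifiability is noted but not enforced here):

```python
import numpy as np

def icar_lpdf(phi, edges):
    # Intrinsic CAR prior (up to an additive constant), pairwise form:
    #   log p(phi) = -0.5 * sum over edges (i, j) of (phi_i - phi_j)^2.
    # The prior is improper; in a model, phi also needs a sum-to-zero
    # constraint for identifiability.
    phi = np.asarray(phi, dtype=float)
    i, j = np.asarray(edges).T
    return -0.5 * np.sum((phi[i] - phi[j]) ** 2)
```

Because the density only involves differences across edges, its cost scales with the number of neighbor pairs rather than with a dense precision matrix.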

Combining Bayesian inferences from many fitted models

Renato Frey writes: I’m curious about your opinion on combining multi-model inference techniques with rstanarm: On the one hand, screening all (theoretically meaningful) model specifications and fully reporting them seems to make a lot of sense to me — in line with the idea of transparent reporting, your idea of the multiverse analysis, or akin […]

Heuristics and Biases? Laplace was there, 200 years ago.

In an article entitled Laplace’s Theories of Cognitive Illusions, Heuristics, and Biases, Josh “hot hand” Miller and I write: In his book from the early 1800s, Essai Philosophique sur les Probabilités, the mathematician Pierre-Simon de Laplace anticipated many ideas developed in the 1970s in cognitive psychology and behavioral economics, explaining human tendencies to deviate from […]

Bayesian inference for A/B testing: Lauren Kennedy and I speak at the NYC Women in Machine Learning and Data Science meetup tomorrow (Tues 27 Mar) 7pm

Here it is: Bayesian inference for A/B testing Andrew Gelman, Department of Statistics and Department of Political Science, Columbia University Lauren Kennedy, Columbia Population Research Center, Columbia University Suppose we want to use empirical data to compare two or more decisions or treatment options. Classical statistical methods based on statistical significance and p-values break down […]
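The Bayesian alternative in the simplest A/B setting can be sketched with conjugate Beta posteriors: simulate from each arm's posterior and report P(theta_B > theta_A) directly, rather than a p-value for a point null. A minimal sketch, assuming binary outcomes and uniform Beta(1, 1) priors (names illustrative):

```python
import numpy as np

def prob_b_beats_a(succ_a, n_a, succ_b, n_b, draws=100_000, seed=1):
    # With a Beta(1, 1) prior and binomial data, the posterior for each
    # arm's success rate is Beta(successes + 1, failures + 1).
    # Estimate P(theta_B > theta_A | data) by Monte Carlo.
    rng = np.random.default_rng(seed)
    a = rng.beta(succ_a + 1, n_a - succ_a + 1, size=draws)
    b = rng.beta(succ_b + 1, n_b - succ_b + 1, size=draws)
    return np.mean(b > a)
```

The same machinery extends to more than two arms, and to hierarchical priors when many related experiments are run.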

“The problem of infra-marginality in outcome tests for discrimination”

Camelia Simoiu, Sam Corbett-Davies, and Sharad Goel write: Outcome tests are a popular method for detecting bias in lending, hiring, and policing decisions. These tests operate by comparing the success rate of decisions across groups. For example, if loans made to minority applicants are observed to be repaid more often than loans made to whites, […]

An economist wrote in, asking why it would make sense to fit Bayesian hierarchical models instead of frequentist random effects.

An economist wrote in, asking why it would make sense to fit Bayesian hierarchical models instead of frequentist random effects. My reply: Short answer is that anything Bayesian can be done non-Bayesianly: just take some summary of the posterior distribution, call it an “estimator,” and there you go. Non-Bayesian can be regularized, it can use […]
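The "posterior summary as estimator" point can be made concrete with the normal hierarchical model, where the posterior mean of a group effect is a precision-weighted compromise between the group's raw estimate and the population mean. A minimal sketch (names illustrative):

```python
def partial_pool(y_bar, sigma_j, mu, tau):
    # Posterior mean of a group effect in a normal hierarchical model:
    # a precision-weighted average of the group's raw estimate y_bar
    # (standard error sigma_j) and the population mean mu (between-group
    # sd tau). Read non-Bayesianly, this is simply a shrinkage "estimator".
    w = (1 / sigma_j**2) / (1 / sigma_j**2 + 1 / tau**2)
    return w * y_bar + (1 - w) * mu
```

Noisy groups (large sigma_j) get pulled toward mu; precisely estimated groups stay near their raw estimates, which is exactly what a frequentist random-effects predictor does too.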

Last lines of George V. Higgins

Wonderful Years, Wonderful Years ends with this beautiful quote: “Everybody gets just about what they want. It’s just, they don’t recognize it, they get it. It doesn’t look the same as what they had in mind.” The conclusion of Trust: “What ever doesn’t kill us, makes us strong,” Cobb said. “Fuck Nietzsche,” Beale said. “He’s […]

What We Talk About When We Talk About Bias

Shira Mitchell wrote: I gave a talk today at Mathematica about NHST in low power settings (Type M/S errors). It was fun and the discussion was great. One thing that came up is bias from doing some kind of regularization/shrinkage/partial-pooling versus selection bias (confounding, nonrandom samples, etc). One difference (I think?) is that the first […]
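Type S (wrong sign) and Type M (exaggeration) error rates for a low-power design are easy to simulate: generate estimates around the true effect, condition on statistical significance, and summarize. A minimal sketch, assuming a normally distributed estimate with known standard error (names and the z = 1.96 cutoff are illustrative):

```python
import numpy as np

def type_m_s(true_effect, se, sims=100_000, seed=2):
    # Simulate estimates around the true effect and condition on
    # significance. Type S: share of significant results with the wrong
    # sign. Type M: average exaggeration factor |estimate| / |true effect|
    # among significant results.
    rng = np.random.default_rng(seed)
    est = rng.normal(true_effect, se, size=sims)
    z = 1.96  # two-sided alpha = 0.05
    sig = np.abs(est) > z * se
    type_s = np.mean(np.sign(est[sig]) != np.sign(true_effect))
    type_m = np.mean(np.abs(est[sig])) / abs(true_effect)
    return type_s, type_m
```

With a true effect half the size of the standard error, significant estimates exaggerate the effect severalfold and occasionally have the wrong sign; with high power both problems essentially vanish.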

Bob’s talk at Berkeley, Thursday 22 March, 3 pm

It’s at the Institute for Data Science at Berkeley. Hierarchical Modeling in Stan for Pooling, Prediction, and Multiple Comparisons 22 March 2018, 3pm 190 Doe Library. UC Berkeley. And here’s the abstract: I’ll provide an end-to-end example of using R and Stan to carry out full Bayesian inference for a simple set of repeated binary […]