Archive of posts filed under the Bayesian Statistics category.

## N=1 survey tells me Cynthia Nixon will lose by a lot (no joke)

Yes, you can learn a lot from N=1, as long as you have some auxiliary information. The other day I was talking with a friend who’s planning to vote for Andrew Cuomo in the primary. What about Cynthia Nixon? My friend wasn’t even considering voting for her. Now, my friend is, I think, in the […]

## Discussion of effects of growth mindset: Let’s not demand unrealistic effect sizes.

Shreeharsh Kelkar writes: As a regular reader of your blog, I wanted to ask you if you had taken a look at the recent debate about growth mindset [see earlier discussions here and here] that happened on theconversation.com. Here’s the first salvo by Brooke McNamara, and then the response by Carol Dweck herself. The debate […]

## Against Arianism 2: Arianism Grande

“There’s the part you’ve braced yourself against, and then there’s the other part” – The Mountain Goats My favourite genre of movie is Nicole Kidman in a questionable wig. (Part of the sub-genre founded by Sarah Paulson, who is the patron saint of obvious wigs.) And last night I was in the same room* as […]

## “Dynamically Rescaled Hamiltonian Monte Carlo for Bayesian Hierarchical Models”

Aki points us to this paper by Tore Selland Kleppe, which begins: Dynamically rescaled Hamiltonian Monte Carlo (DRHMC) is introduced as a computationally fast and easily implemented method for performing full Bayesian analysis in hierarchical statistical models. The method relies on introducing a modified parameterisation so that the re-parameterised target distribution has close to constant […]

## StanCon 2018 Helsinki tutorial videos online

StanCon 2018 Helsinki tutorial videos are now online on the Stan YouTube channel. List of tutorials at StanCon 2018 Helsinki: Basics of Bayesian inference and Stan, parts 1 + 2, Jonah Gabry & Lauren Kennedy; Hierarchical models, parts 1 + 2, Ben Goodrich; Stan C++ development: Adding a new function to Stan, parts 1 + 2, […]

## Hey—take this psychological science replication quiz!

Rob Wilbin writes: I made this quiz where people try to guess ahead of time which results will replicate and which won’t, in order to give them a more nuanced understanding of replication issues in psych. It’s based on this week’s Nature replication paper. It includes quotes and p-values from the original study if people want […]

## StanCon Helsinki streaming live now (and tomorrow)

We’re streaming live right now! Thursday 08:45-17:30: YouTube Link. Friday 09:00-17:00: YouTube Link. Time zone is Eastern European Summer Time (EEST, UTC+3). Here’s a link to the full program [link fixed]. There have already been some great talks and they’ll all be posted with slides and runnable source code after the conference on the Stan […]

## “To get started, I suggest coming up with a simple but reasonable model for missingness, then simulate fake complete data followed by a fake missingness pattern, and check that you can recover your missing-data model and your complete data model in that fake-data situation. You can then proceed from there. But if you can’t even do it with fake data, you’re sunk.”

Alex Konkel writes on a topic that never goes out of style: I’m working on a data analysis plan and am hoping you might help clarify something you wrote regarding missing data. I’m somewhat familiar with multiple imputation and some of the available methods, and I’m also becoming more familiar with Bayesian modeling like in […]
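The fake-data check described in the title can be sketched in a few lines. This is a minimal illustration, not the model from the post: the normal outcome, the logistic missingness mechanism, and the inverse-probability-weighting correction are all assumptions chosen for the example. The point is just that an estimator ignoring the missingness pattern fails to recover the truth, while one that uses the (known, in this fake-data world) missingness model succeeds.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1. Simulate fake complete data (assumed model: y_i ~ Normal(mu, sigma)).
n, mu_true, sigma_true = 5000, 2.0, 1.0
y = rng.normal(mu_true, sigma_true, size=n)

# 2. Simulate a fake missingness pattern: larger y is more likely missing
#    (an illustrative not-at-random mechanism with a known logistic form).
alpha, beta = -1.0, 1.0  # assumed missingness parameters
p_miss = 1.0 / (1.0 + np.exp(-(alpha + beta * y)))
missing = rng.random(n) < p_miss
y_obs = y[~missing]

# 3. The naive complete-case estimate is biased downward, since
#    large values were preferentially dropped.
mu_naive = y_obs.mean()

# 4. A check that recovery works when the missingness model is known:
#    weight each observed point by its inverse probability of being observed.
w = 1.0 / (1.0 - p_miss[~missing])
mu_ipw = np.sum(w * y_obs) / np.sum(w)
# mu_ipw lands close to mu_true; mu_naive does not.
```

In the Bayesian workflow the post describes, step 4 would instead be fitting the joint model for data and missingness and checking that the posterior recovers the simulated parameters; the weighting estimator here is just a compact stand-in for that recovery check.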

## Bayesian model comparison in ecology

Conor Goold writes: I was reading this overview of mixed-effect modeling in ecology, and thought you or your blog readers may be interested in their last conclusion (page 35): Other modelling approaches such as Bayesian inference are available, and allow much greater flexibility in choice of model structure, error structure and link function. However, the […]

## Against Arianism

“I need some love like I’ve never needed love before” – Geri, Mel C, Mel B, Victoria, Emma (noted Arianists)  I spent most of today on a sequence of buses shuttling between cities in Ontario, so I’ve been thinking a lot about fourth century heresies.  That’s an obvious lie. But I think we all know […]

## The fallacy of the excluded middle — statistical philosophy edition

I happened to come across this post from 2012 and noticed a point I’d like to share again. I was discussing an article by David Cox and Deborah Mayo, in which Cox wrote: [Bayesians’] conceptual theories are trying to do two entirely different things. One is trying to extract information from the data, while the […]

## Three informal case studies: (1) Monte Carlo EM, (2) a new approach to C++ matrix autodiff with closures, (3) C++ serialization via parameter packs

Andrew suggested I cross-post these from the Stan forums to his blog, so here goes. Maximum marginal likelihood and posterior approximations with Monte Carlo expectation maximization: I unpack the goal of max marginal likelihood and approximate Bayes with MMAP and Laplace approximations. I then go through the basic EM algorithm (with a traditional analytic example […]

## “The most important aspect of a statistical analysis is not what you do with the data, it’s what data you use” (survey adjustment edition)

Dean Eckles pointed me to this recent report by Andrew Mercer, Arnold Lau, and Courtney Kennedy of the Pew Research Center, titled, “For Weighting Online Opt-In Samples, What Matters Most? The right variables make a big difference for accuracy. Complex statistical methods, not so much.” I like most of what they write, but I think […]

## When LOO and other cross-validation approaches are valid

Introduction Zacco asked on the Stan Discourse whether leave-one-out (LOO) cross-validation is valid for phylogenetic models. He also referred to Dan’s excellent blog post, which mentioned the iid assumption. Instead of iid it would be better to talk about the exchangeability assumption, but I (Aki) got a bit lost in my Discourse answer (so don’t bother to go […]
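For readers new to the topic, exact LOO is easy to state with a toy example. This sketch has nothing to do with phylogenetic models; it just assumes iid draws from a normal with known sigma, refits the only parameter (the mean) with each point held out, and scores the held-out point's predictive log density.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, size=200)  # assumed iid data, sigma = 1 known

def log_normal_pdf(x, mu, sigma=1.0):
    """Log density of Normal(mu, sigma) at x."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((x - mu) / sigma) ** 2

# Exact leave-one-out: refit (here, just the sample mean) without point i,
# then evaluate the held-out point's predictive log density.
n = len(y)
loo_lpd = np.empty(n)
for i in range(n):
    mu_minus_i = (y.sum() - y[i]) / (n - 1)
    loo_lpd[i] = log_normal_pdf(y[i], mu_minus_i)

elpd_loo = loo_lpd.sum()
# elpd_loo is lower than the within-sample log score, reflecting the
# optimism of evaluating a fit on the data used to produce it.
```

In practice one rarely refits n times; tools such as Stan's PSIS-LOO approximate this from a single posterior fit, and the question in the post is when that kind of leave-one-out scheme matches the prediction task at hand.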

## Continuous tempering through path sampling

Yuling prepared this poster summarizing our recent work on path sampling using a continuous joint distribution. The method is really cool and represents a real advance over what Xiao-Li and I were doing in our 1998 paper. It’s still gonna have problems in high or even moderate dimensions, and ultimately I think we’re gonna need […]

## Awesome MCMC animation site by Chi Feng! On Github!

Sean Talts and Bob Carpenter pointed us to this awesome MCMC animation site by Chi Feng. For instance, here’s NUTS on a banana-shaped density. This is indeed super-cool, and maybe there’s a way to connect these with Stan/ShinyStan/Bayesplot so as to automatically make movies of Stan model fits. This would be great, both to help […]
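The banana-shaped density is a standard MCMC test case, and the flavor of those animations is easy to reproduce numerically. This sketch is not Chi Feng's code and uses plain random-walk Metropolis rather than NUTS; the Rosenbrock-style target and its parameters are illustrative choices.

```python
import numpy as np

def log_banana(x, a=1.0, b=5.0):
    """Log density (up to a constant) of a banana-shaped,
    Rosenbrock-style target; a and b are illustrative."""
    return -(a - x[0]) ** 2 - b * (x[1] - x[0] ** 2) ** 2

def metropolis(logp, x0, n_iter=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose a Gaussian step, accept with
    probability min(1, p(proposal)/p(current))."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    chain = np.empty((n_iter, len(x)))
    lp = logp(x)
    accepted = 0
    for t in range(n_iter):
        prop = x + step * rng.normal(size=len(x))
        lp_prop = logp(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepted += 1
        chain[t] = x
    return chain, accepted / n_iter

chain, acc_rate = metropolis(log_banana, x0=[0.0, 0.0])
```

Plotting `chain` traces out the curved ridge, and comparing how slowly this random walk explores it versus NUTS in the animations is exactly what makes the site instructive.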

## Parsimonious principle vs integration over all uncertainties

tl;dr If you have bad models, bad priors, or bad inference, choose the simplest possible model. If you have good models, good priors, and good inference, use the most elaborate model for predictions. To make interpretation easier you may use a smaller model with predictive performance similar to that of the most elaborate model. Merijn Mestdagh emailed me […]

## “The idea of replication is central not just to scientific practice but also to formal statistics . . . Frequentist statistics relies on the reference set of repeated experiments, and Bayesian statistics relies on the prior distribution which represents the population of effects.”

Rolf Zwaan (who we last encountered here in “From zero to Ted talk in 18 simple steps”), Alexander Etz, Richard Lucas, and M. Brent Donnellan wrote an article, “Making replication mainstream,” which begins: Many philosophers of science and methodologists have argued that the ability to repeat studies and obtain similar results is an essential component […]

## Mister P wins again

Chad Kiewiet De Jonge, Gary Langer, and Sofi Sinozich write: This paper presents state-level estimates of the 2016 presidential election using data from the ABC News/Washington Post tracking poll and multilevel regression with poststratification (MRP). While previous implementations of MRP for election forecasting have relied on data from prior elections to establish poststratification targets for […]

## “Bayesian Meta-Analysis with Weakly Informative Prior Distributions”

Donny Williams sends along this paper, with Philippe Rast and Paul-Christian Bürkner, and writes: This paper is similar to the Chung et al. avoiding-boundary-estimates papers (here and here), but we use fully Bayesian methods, and specifically the half-Cauchy prior. We show it performs as well as a fully informed prior based […]