Archive of posts filed under the Bayesian Statistics category.

I already know who will be president in 2016 but I’m not telling

Nadia Hassan writes: One debate in political science right now concerns how the economy influences voters. Larry Bartels argues that Q14 and Q15 impact election outcomes the most. Doug Hibbs argues that all 4 years matter, with later growth being more important. Chris Wlezien claims that the first two years don’t influence elections but the […]

4 California faculty positions in Design-Based Statistical Inference in the Social Sciences

This is really cool. The announcement comes from Joe Cummins: The University of California at Riverside is hiring 4 open rank positions in Design-Based Statistical Inference in the Social Sciences. I [Cummins] think this is a really exciting opportunity for researchers doing all kinds of applied social science statistical work, especially work that cuts across […]

Stan Puzzle 2: Distance Matrix Parameters

This puzzle comes in three parts. There are some hints at the end. Part I: Constrained Parameter Definition Define a Stan program with a transformed matrix parameter d that is constrained to be a K by K distance matrix. Recall that a distance matrix must satisfy the definition of a metric for all i, j: […]
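Though the puzzle asks for a Stan program, the metric conditions a K-by-K distance matrix must satisfy can be sketched as a quick Python check (a hypothetical helper for readers, not part of the puzzle's intended solution):

```python
def is_distance_matrix(d):
    """Check the metric axioms for a square matrix d (a list of lists)."""
    K = len(d)
    for i in range(K):
        if d[i][i] != 0:                         # d(i, i) = 0
            return False
        for j in range(K):
            if d[i][j] < 0:                      # nonnegativity
                return False
            if d[i][j] != d[j][i]:               # symmetry
                return False
            for k in range(K):
                if d[i][j] > d[i][k] + d[k][j]:  # triangle inequality
                    return False
    return True
```

For example, `is_distance_matrix([[0, 1, 2], [1, 0, 1], [2, 1, 0]])` is true, while a matrix with an entry violating the triangle inequality fails the check. The interesting part of the puzzle is enforcing these constraints inside Stan's transformed parameters block, which this sketch does not address.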

Pareto smoothed importance sampling and infinite variance (2nd ed)

This post is by Aki. Last week Xi’an blogged about an arXiv paper by Chatterjee and Diaconis which considers the proper sample size in an importance sampling setting with infinite variance. I commented on Xi’an’s post, and the end result was my guest post on Xi’an’s ’Og. I made an additional figure below to summarise […]
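As background to the infinite-variance issue, plain importance sampling reweights draws from a proposal q by w = p/q and normalizes. A toy sketch with a deliberately well-behaved (finite-variance) weight ratio, my own example rather than Chatterjee and Diaconis's setting:

```python
import math
import random

random.seed(1)

def normal_pdf(x, mu):
    """Density of Normal(mu, 1) at x."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

# Estimate E[X] under a Normal(1, 1) target using draws from a Normal(0, 1) proposal.
draws = [random.gauss(0.0, 1.0) for _ in range(50000)]
weights = [normal_pdf(x, 1.0) / normal_pdf(x, 0.0) for x in draws]

# Self-normalized importance sampling estimate of the target mean (true value: 1.0).
estimate = sum(w * x for w, x in zip(weights, draws)) / sum(weights)
print(estimate)
```

When the weight ratio has infinite variance, the same estimator can be dominated by a handful of huge weights; that pathology is what Pareto smoothing of the largest weights is designed to tame.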

Inference from an intervention with many outcomes, not using “statistical significance”

Kate Casey writes: I have been reading your papers “Type S error rates for classical…” and “Why We (Usually) Don’t Have to Worry…” with great interest and would be grateful for your views on the appropriateness of a potentially related application. I have a non-hierarchical dataset of 28 individuals who participated in a randomized control […]

Bayesian Computing: Adventures on the Efficient Frontier

That’s the title of my forthcoming talk at the NIPS workshop at 9am on 12 Dec.

“Using prediction markets to estimate the reproducibility of scientific research”

A reporter sent me this new paper by Anna Dreber, Thomas Pfeiffer, Johan Almenberg, Siri Isaksson, Brad Wilson, Yiling Chen, Brian Nosek, and Magnus Johannesson, which begins: Concerns about a lack of reproducibility of statistically significant results have recently been raised in many fields, and it has been argued that this lack comes at substantial […]

You won’t believe these stunning transformations: How to parameterize hyperpriors in hierarchical models?

Isaac Armstrong writes: I was working through your textbook “Data Analysis Using Regression and Multilevel/Hierarchical Models” but wanted to learn more and started working through your “Bayesian Data Analysis” text. I’ve got a few questions about your rat tumor example that I’d like to ask. I’ve been trying to understand one of the hierarchical models […]

3 new priors you can’t do without, for coefficients and variance parameters in multilevel regression

Partha Lahiri writes, in reference to my 2006 paper: I am interested in finding out a good prior for the regression coefficients and variance components in a multi-level setting. For concreteness, let’s say we have a model like the following: Level 1: Y_ijk | theta_ij ~(ind) N( theta_ij, sigma^2) Level 2: theta_ij| mu_i ~(ind) N( […]

4 for 4.0 — The Latest JAGS

This post is by Bob Carpenter. I just saw over on Martyn Plummer’s JAGS News blog that JAGS 4.0 is out. Martyn provided a series of blog posts highlighting the new features: 1. Reproducibility: Examples will now be fully reproducible draw-for-draw and chain-for-chain with the same seed. (Of course, compiler, optimization level, platform, CPU, and […]

Hey—looky here! This business wants to hire a Stan expert for decision making.

Kevin Van Horn writes: I currently work in a business analytics group at Symantec, and we have several positions to fill. I’d like at least one of those positions to be filled by someone who understands Bayesian modeling and is comfortable using R (or Python) and Stan (or other MCMC tools). The team’s purpose is […]

Neuroscience research in Baltimore

Joshua Vogelstein sends along these ads for students, research associates, and postdocs in his lab at Johns Hopkins University:

2 new thoughts on Cauchy priors for logistic regression coefficients

Aki noticed this paper, On the Use of Cauchy Prior Distributions for Bayesian Logistic Regression, by Joyee Ghosh, Yingbo Li, and Robin Mitra, which begins: In logistic regression, separation occurs when a linear combination of the predictors can perfectly classify part or all of the observations in the sample, and as a result, finite maximum […]
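The separation phenomenon the paper starts from is easy to reproduce: when the data are perfectly classified, the logistic-regression likelihood has no finite maximum, so gradient ascent drives the coefficient off to infinity. A minimal sketch (the toy data, step size, and iteration count are mine, not the paper's):

```python
import math

# Toy 1-D data that a threshold at x = 0 classifies perfectly: complete separation.
xs = [-2.0, -1.0, 1.0, 2.0]
ys = [0, 0, 1, 1]

def log_lik_gradient(beta):
    """Gradient of the logistic log-likelihood for P(y=1) = inv_logit(beta * x)."""
    g = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-beta * x))
        g += (y - p) * x
    return g

beta = 0.0
for _ in range(2000):
    beta += 0.5 * log_lik_gradient(beta)

# Under separation the gradient stays positive, so beta never converges;
# it keeps growing (roughly like the log of the iteration count).
print(beta)
```

A proper prior on the coefficients, such as the Cauchy priors the paper studies, restores a finite posterior mode even when the maximum likelihood estimate does not exist.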

3 reasons why you can’t always use predictive performance to choose among models

A couple years ago Wei and I published a paper, Difficulty of selecting among multilevel models using predictive accuracy, in which we . . . well, we discussed the difficulty of selecting among multilevel models using predictive accuracy. The paper happened as follows. We’d been fitting hierarchical logistic regressions of poll data and I had […]

3 postdoc opportunities you can’t miss—here in our group at Columbia! Apply NOW, don’t miss out!

Hey, just once, the Buzzfeed-style hype is appropriate. We have 3 amazing postdoc opportunities here, and you need to apply NOW. Here’s the deal: we’re working on some amazing projects. You know about Stan and associated exciting projects in computational statistics. There’s the virtual database query, which is the way I like to describe our […]

Click here to get FREE tix to my webinar with Brad Efron this Wednesday!

The Royal Statistical Society (U.K.) has organized a discussion of a new paper, Frequentist accuracy of Bayesian estimates, by Brad Efron. The discussion will be an online event (a “webinar”) on 21 Oct 2015 (that’s right, “Back to the Future Day”) at 11am eastern time (4pm in the U.K.). Brad will present, I’ll ask […]

Explaining to Gilovich about the hot hand

X points me to this news article by George Johnson regarding the hot hand in basketball. Nothing new since the previous hot hand report (also, Johnson follows the usual newspaper convention of not citing the earlier article in the Wall Street Journal, instead simply linking back to the Miller and Sanjurjo article as if it […]

You’ll never guess what’s been happening with PyStan and PyMC—Click here to find out.

PLEASE NOTE: This is a guest post by Llewelyn Richards-Ward. When there are two packages appearing to do the same thing, let’s return to the Zen of Python, which suggests that: There should be one—and preferably only one—obvious way to do it. Why is this particular mantra important? I think because the majority of users […]

What do you learn from p=.05? This example from Carl Morris will blow your mind.

I keep pointing people to this article by Carl Morris so I thought I’d post it. The article is really hard to find because it has no title: it appeared in the Journal of the American Statistical Association as a discussion of a couple of other papers. All 3 scenarios have the same p-value. And, […]
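For readers who want to check the arithmetic behind "the same p-value": a two-sided p-value of about .05 corresponds to a z statistic of about 1.96. A sketch of that computation using the standard-normal tail (my own helper, nothing from Morris's article):

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2, so the two-sided tail
    # probability beyond |z| is erfc(|z| / sqrt(2)).
    return math.erfc(abs(z) / math.sqrt(2.0))

print(round(two_sided_p(1.96), 4))  # ≈ 0.05
```

Morris's point, of course, is that identical p-values can carry very different evidential weight depending on the rest of the scenario.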

Stan intro in Amherst, Mass.

Krzysztof Sakrejda writes: I’m doing a brief intro to Stan Thursday 4:30pm in Amherst at the University of Massachusetts. As the meetup blurb indicates I’m not going to attempt a full tour but I will try to touch on all the pieces required to make it easier to build on models from the manual and […]