Archive of posts filed under the Bayesian Statistics category.

What I got wrong (and right) about econometrics and unbiasedness

Yesterday I spoke at the Princeton economics department. The title of my talk was: “Unbiasedness”: You keep using that word. I do not think it means what you think it means. The talk went all right—people seemed ok with what I was saying—but I didn’t see a lot of audience involvement. It was a bit […]

“The general problem I have with noninformatively-derived Bayesian probabilities is that they tend to be too strong.”

We interrupt our usual programming of mockery of buffoons to discuss a bit of statistical theory . . . Continuing from yesterday’s quotation of my 2012 article in Epidemiology: Like many Bayesians, I have often represented classical confidence intervals as posterior probability intervals and interpreted one-sided p-values as the posterior probability of a positive effect. […]

Good, mediocre, and bad p-values

From my 2012 article in Epidemiology: In theory the p-value is a continuous measure of evidence, but in practice it is typically trichotomized approximately into strong evidence, weak evidence, and no evidence (these can also be labeled highly significant, marginally significant, and not statistically significant at conventional levels), with cutoffs roughly at p=0.01 and 0.10. […]
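The rough cutoffs named in the excerpt (p = 0.01 and 0.10) can be written out as a tiny illustration. This is just a toy Python sketch of the trichotomy being described, not code from the article:

```python
def classify_p_value(p):
    """Trichotomize a p-value using the rough cutoffs from the excerpt:
    strong evidence (p < 0.01), weak evidence (0.01 <= p < 0.10),
    and no evidence (p >= 0.10)."""
    if not 0.0 <= p <= 1.0:
        raise ValueError("p-value must be in [0, 1]")
    if p < 0.01:
        return "strong evidence (highly significant)"
    if p < 0.10:
        return "weak evidence (marginally significant)"
    return "no evidence (not statistically significant)"

print(classify_p_value(0.004))  # falls in the strong-evidence bin
print(classify_p_value(0.04))   # falls in the weak-evidence bin
print(classify_p_value(0.30))   # falls in the no-evidence bin
```

The point of the article, of course, is that treating the underlying continuous evidence this coarsely is exactly what gets people into trouble.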

Carl Morris: Man Out of Time [reflections on empirical Bayes]

I wrote the following for the occasion of his recent retirement party but I thought these thoughts might be of general interest: When Carl Morris came to our department in 1989, I and my fellow students were so excited. We all took his class. The funny thing is, though, the late 1980s might well have been […]

Instead of worrying about multiple hypothesis correction, just fit a hierarchical model.

Pejman Mohammadi writes: I’m concerned with a problem in multiple hypothesis correction and, despite having read your article [with Jennifer and Masanao] on not being concerned about it, I was hoping I could seek your advice. Specifically, I’m interested in multiple hypothesis testing problem in cases when the test is done with a discrete finite […]

New Book: Bayesian Data Analysis in Ecology Using Linear Models with R, BUGS, and Stan

Fränzi and Tobias’s book is now real: Fränzi Korner-Nievergelt, Tobias Roth, Stefanie von Felten, Jérôme Guélat, Bettina Almasi, and Pius Korner-Nievergelt (2015) Bayesian Data Analysis in Ecology Using Linear Models with R, BUGS, and Stan. Academic Press. This is based in part on the in-person tutorials that they and the other authors have been giving […]

Go to PredictWise for forecast probabilities of events in the news

I like it. Clear, transparent, no mumbo jumbo about their secret sauce. But . . . what’s with the hyper-precision: C’mon. “27.4%”? Who are you kidding?? (See here for explication of this point.)

Item-response and ideal point models

To continue from today’s class, here’s what we’ll be discussing next time: – Estimating the direction and the magnitude of the discrimination parameters. – How to tell when your data don’t fit the model. – When does ideal-point modeling make a difference? Comparing ideal-point estimates to simple averages of survey responses. P.S. Unlike the previous […]

Why do we communicate probability calculations so poorly, even when we know how to do it better?

Haynes Goddard writes: I thought to do some reading in psychology on why Bayesian probability seems so counterintuitive, and making it difficult for many to learn and apply. Indeed, that is the finding of considerable research in psychology. It turns out that it is counterintuitive because of the way it is presented, following no doubt […]

New research in tuberculosis mapping and control

Mapping and control. Or, as we would say, descriptive and causal inference. Jon Zelner informs us about two ongoing research projects: 1. TB Hotspot Mapping: Over the summer, I [Zelner] put together a really simple R package to do non-parametric disease mapping using the distance-based mapping approach developed by Caroline Jeffery and Al Ozonoff at […]

Comparison of Bayesian predictive methods for model selection

This post is by Aki. We mention the problem of bias induced by model selection in A survey of Bayesian predictive methods for model assessment, selection and comparison, in Understanding predictive information criteria for Bayesian models, and in BDA3 Chapter 7, but we haven’t had a good answer for how to avoid that problem (except by […]

But when you call me Bayesian, I know I’m not the only one

Textbooks on statistics emphasize care and precision, via concepts such as reliability and validity in measurement, random sampling and treatment assignment in data collection, and causal identification and bias in estimation. But how do researchers decide what to believe and what to trust when choosing which statistical methods to use? How do they decide the […]

Regression: What’s it all about? [Bayesian and otherwise]

Regression: What’s it all about? Regression plays three different roles in applied statistics: 1. A specification of the conditional expectation of y given x; 2. A generative model of the world; 3. A method for adjusting data to generalize from sample to population, or to perform causal inferences. We could also include prediction, but I […]
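Role 1 above, regression as a specification of E[y|x], is the easiest to show concretely. Here is a minimal simulated example in Python (my own illustration, not from the post), fitting a least-squares line to data generated from a known linear model:

```python
import numpy as np

# Simulate a simple "world": y = 1.5 + 2.0*x + noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 1.5 + 2.0 * x + rng.normal(0, 1, size=200)

# Regression as a specification of E[y | x]: estimate the
# conditional-expectation line by least squares.
X = np.column_stack([np.ones_like(x), x])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # close to the true (1.5, 2.0)
```

The same fitted line can then be read the other two ways: as a (crude) generative model of the world, or as an adjustment tool for generalizing from sample to population.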

The publication of one of my pet ideas: Simulation-efficient shortest probability intervals

In a paper to appear in Statistics and Computing, Ying Liu, Tian Zheng, and I write: Bayesian highest posterior density (HPD) intervals can be estimated directly from simulations via empirical shortest intervals. Unfortunately, these can be noisy (that is, have a high Monte Carlo error). We derive an optimal weighting strategy using bootstrap and quadratic […]
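For readers who haven’t seen the baseline method: the empirical shortest interval just slides a window over the sorted simulation draws and keeps the narrowest window containing the desired probability mass. A naive Python sketch of that noisy baseline (my illustration, not the paper’s optimally weighted estimator):

```python
import numpy as np

def empirical_shortest_interval(draws, prob=0.95):
    """Naive empirical shortest interval from posterior simulations:
    sort the draws, then among all contiguous windows holding a
    fraction `prob` of them, return the narrowest one. This is the
    high-Monte-Carlo-error estimator the paper improves on."""
    x = np.sort(np.asarray(draws))
    n = len(x)
    k = int(np.ceil(prob * n))          # draws per candidate window
    widths = x[k - 1:] - x[:n - k + 1]  # width of each window
    i = int(np.argmin(widths))
    return x[i], x[i + k - 1]

# For a symmetric unimodal posterior the shortest interval roughly
# matches the central interval; for skewed posteriors it is shorter.
rng = np.random.default_rng(42)
lo, hi = empirical_shortest_interval(rng.normal(size=4000), prob=0.95)
print(lo, hi)  # roughly (-2, 2) for standard normal draws
```

The paper’s contribution is a bootstrap-based optimal weighting of these empirical intervals to cut down the Monte Carlo noise.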

Adiabatic as I wanna be: Or, how is a chess rating like classical economics?

Chess ratings are all about change. Did your rating go up, did it go down, have you reached 2000, who’s hot, who’s not, and so on. If nobody’s abilities were changing, chess ratings would be boring, they’d be nothing but a noisy measure, and watching your rating change would be as exciting as watching a […]

Paul Meehl continues to be the boss

Lee Sechrest writes: Here is a remarkable paper, not well known, by Paul Meehl. My research group is about to undertake a fresh discussion of it, which we do about every five or ten years. The paper is now more than a quarter of a century old but it is, I think, dramatically pertinent to […]

Why I don’t use the terms “fixed” and “random” (again)

A couple months ago we discussed this question from Sean de Hoon: In many cross-national comparative studies, mixed effects models are being used in which a number of slopes are fixed and the slopes of one or two variables of interest are allowed to vary across countries. The aim is often then to explain the […]

Bayesian models, causal inference, and time-varying exposures

Mollie Wood writes: I am a doctoral student in clinical and population health research. My dissertation research is on prenatal medication exposure and neurodevelopmental outcomes in children, and I’ve encountered a difficult problem that I hope you might be able to advise me on. I am working on a problem in which my main exposure […]

Interactive demonstrations for linear and Gaussian process regressions

Here’s a cool interactive demo of linear regression where you can grab the data points, move them around, and see the fitted regression line changing. There are various such apps around, but this one is particularly clean: (I’d like to credit the creator but I can’t find any attribution at the link, except that it’s […]

My talk tomorrow (Thurs) at MIT political science: Recent challenges and developments in Bayesian modeling and computation (from a political and social science perspective)

It’s 1pm in room E53-482. I’ll talk about the usual stuff (and some of this too, I guess).