Archive of posts filed under the Multilevel Modeling category.

Combining forecasts: Evidence on the relative accuracy of the simple average and Bayesian model averaging for predicting social science problems

Andreas Graefe sends along this paper (with Helmut Kuchenhoff, Veronika Stierle, and Bernhard Riedl) and writes: We summarize prior evidence from the field of economic forecasting and find that the simple average was more accurate than Bayesian model averaging in three of four studies; on average, the error of BMA was 6% higher than the […]
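
For intuition only (this is not the paper's analysis, and all numbers below are invented), here is a small Python sketch of the comparison: three forecasts of the same series are combined by a simple average and by a crude likelihood-style weighting that concentrates on the best in-sample performer, then compared out of sample.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data (invented, not from the paper): one target series and three
    # forecasters whose errors are independent with different noise levels.
    n = 400
    y = rng.normal(size=n)
    forecasts = np.stack([y + rng.normal(scale=s, size=n) for s in (0.5, 0.7, 1.0)])

    train, test = slice(0, n // 2), slice(n // 2, n)

    # Simple average: equal weights, nothing to estimate.
    simple_avg = forecasts[:, test].mean(axis=0)

    # Crude stand-in for BMA-style weighting: weights proportional to
    # exp(-n/2 * in-sample MSE), which piles weight on the best past performer.
    mse_train = ((forecasts[:, train] - y[train]) ** 2).mean(axis=1)
    w = np.exp(-0.5 * (n // 2) * (mse_train - mse_train.min()))
    w /= w.sum()
    weighted = (w[:, None] * forecasts[:, test]).sum(axis=0)

    mae = lambda f: np.abs(f - y[test]).mean()
    print("out-of-sample MAE, simple average   :", round(mae(simple_avg), 3))
    print("out-of-sample MAE, BMA-style weights:", round(mae(weighted), 3))

In this toy setup the averaged forecast pools independent errors, while the concentrated weights ride a single forecaster; results will of course vary with the noise levels and weighting rule chosen.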

Spring forward, fall back, drop dead?

Antonio Rinaldi points me to a press release describing a recent paper by Amneet Sandhu, Milan Seth, and Hitinder Gurm, where I got the above graphs (sorry about the resolution, that’s the best I could do). Here’s the press release: Data from the largest study of its kind in the U.S. reveal a 25 percent […]

Regression and causality and variable ordering

Bill Harris wrote in with a question: David Hogg points out in one of his general articles on data modeling that regression assumptions require one to put the variable with the highest variance in the ‘y’ position and the variable you know best (lowest variance) in the ‘x’ position. As he points out, others speak […]
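
A minimal numeric illustration of why the choice of which variable goes on each axis matters (my sketch, not Hogg's notation): least-squares slopes in the two directions are not reciprocals of each other unless the correlation is perfect.

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical data: x measured precisely, y equal to x plus noise.
    x = rng.normal(size=1000)
    y = x + rng.normal(scale=1.0, size=1000)

    # Least squares in each direction gives different fitted lines:
    # the product of the two slopes equals r^2, not 1.
    slope_y_on_x = np.polyfit(x, y, 1)[0]   # roughly 1.0 here
    slope_x_on_y = np.polyfit(y, x, 1)[0]   # roughly 0.5 here, not 1/1.0

    r = np.corrcoef(x, y)[0, 1]
    print(slope_y_on_x, slope_x_on_y, slope_y_on_x * slope_x_on_y, r**2)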

Identifying pathways for managing multiple disturbances to limit plant invasions

Andrew Tanentzap, William Lee, Adrian Monks, Kate Ladley, Peter Johnson, Geoffrey Rogers, Joy Comrie, Dean Clarke, and Ella Hayman write: We tested a multivariate hypothesis about the causal mechanisms underlying plant invasions in an ephemeral wetland in South Island, New Zealand to inform management of this biodiverse but globally imperilled habitat. . . . We […]

Bayesian nonparametric weighted sampling inference

Yajuan Si, Natesh Pillai, and I write: It has historically been a challenge to perform Bayesian inference in a design-based survey context. The present paper develops a Bayesian model for sampling inference using inverse-probability weights. We use a hierarchical approach in which we model the distribution of the weights of the nonsampled units in the […]
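
As a much-simplified caricature of the idea (not the paper's model, which uses a Gaussian process prior and models the weights of the nonsampled units), the sketch below groups sampled units into cells defined by their unique inverse-probability weights, partially pools the cell means with a crude normal-normal shrinkage, and then averages the cells by their estimated population sizes. All values are invented.

    import numpy as np

    rng = np.random.default_rng(2)

    # Invented survey: units fall in "weight cells" (unique inverse-probability
    # weights); the outcome is related to the weight, so weighting matters.
    weights = np.repeat([1.0, 2.0, 5.0, 10.0], [200, 100, 40, 20])
    y = rng.normal(loc=np.log(weights), scale=1.0)

    cells, idx = np.unique(weights, return_inverse=True)
    n_j = np.bincount(idx)
    ybar_j = np.bincount(idx, weights=y) / n_j

    # Crude partial pooling of cell means: normal-normal shrinkage with a
    # roughly estimated between-cell variance (a stand-in for the paper's
    # hierarchical prior).
    sigma2 = y.var()
    tau2 = max(ybar_j.var() - sigma2 * (1.0 / n_j).mean(), 0.01)
    shrink = tau2 / (tau2 + sigma2 / n_j)
    theta_j = shrink * ybar_j + (1 - shrink) * ybar_j.mean()

    # Estimated population share of each cell: proportional to n_j * weight.
    N_j = n_j * cells
    pop_mean = np.sum(N_j * theta_j) / N_j.sum()
    print("classical weighted mean  :", np.average(y, weights=weights))
    print("partially pooled estimate:", pop_mean)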

Big Data needs Big Model

Gary Marcus and Ernest Davis wrote this useful news article on the promise and limitations of “big data.” And let me add this related point: Big data are typically not random samples, hence the need for “big model” to map from sample to population. Here’s an example (with Wei Wang, David Rothschild, and Sharad Goel):
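
One way to read "big model" here is model-based adjustment from sample to population, as in multilevel regression and poststratification. A bare-bones poststratification sketch with invented numbers (not the Xbox analysis): the sample over-represents one group, and weighting cell means by known population shares corrects the raw mean.

    import numpy as np

    # Hypothetical non-random sample: 90% group A, 10% group B,
    # while the population is 50/50 (all numbers invented).
    rng = np.random.default_rng(3)
    sample_group = np.array(["A"] * 9000 + ["B"] * 1000)
    y = np.where(sample_group == "A", 1.0, 3.0) + rng.normal(size=10000)

    pop_share = {"A": 0.5, "B": 0.5}

    raw_mean = y.mean()
    post_mean = sum(pop_share[g] * y[sample_group == g].mean() for g in pop_share)
    print("raw sample mean    :", round(raw_mean, 2))   # pulled toward group A
    print("poststratified mean:", round(post_mean, 2))  # closer to the population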

How much can we learn about individual-level causal claims from state-level correlations?

Hey, we all know the answer: “correlation does not imply causation”—but of course life is more complicated than that. As philosophers, economists, statisticians, and others have repeatedly noted, most of our information about the world is observational, not experimental, yet we manage to draw causal conclusions all the time. Sure, some of these conclusions are […]


Seth Roberts

I met Seth back in the early 1990s when we were both professors at the University of California. He sometimes came to the statistics department seminar and we got to talking about various things; in particular we shared an interest in statistical graphics. Much of my work in this direction eventually went toward the use […]

Bayesian Uncertainty Quantification for Differential Equations!

Mark Girolami points us to this paper and software (with Oksana Chkrebtii, David Campbell, and Ben Calderhead). They write: We develop a general methodology for the probabilistic integration of differential equations via model based updating of a joint prior measure on the space of functions and their temporal and spatial derivatives. This results in a […]
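
To make the flavor of "uncertainty in the solver itself" concrete, here is a crude Monte Carlo caricature (not the authors' method, which updates a joint prior measure over the solution and its derivatives): forward Euler on dy/dt = -y with a small Gaussian perturbation at each step, giving an ensemble of trajectories whose spread stands in for discretization uncertainty. The step size and noise scale are arbitrary choices for illustration.

    import numpy as np

    rng = np.random.default_rng(4)

    def noisy_euler(y0=1.0, t_end=5.0, h=0.1, scale=0.5):
        # Forward Euler for dy/dt = -y, perturbed at each step by Gaussian
        # noise shrinking with the step size (a rough proxy for local error).
        t = np.arange(0.0, t_end, h)
        y = np.empty_like(t)
        y[0] = y0
        for k in range(1, len(t)):
            drift = -y[k - 1]
            y[k] = y[k - 1] + h * drift + scale * h**1.5 * rng.normal()
        return t, y

    t, _ = noisy_euler()
    ensemble = np.stack([noisy_euler()[1] for _ in range(200)])
    print("ensemble mean, sd at final time:", ensemble[:, -1].mean(), ensemble[:, -1].std())
    print("exact solution exp(-t)         :", np.exp(-t[-1]))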

Crowdstorming a dataset

Raphael Silberzahn writes: Brian Nosek, Eric Luis Uhlmann, Dan Martin, and I just launched a project through the Open Science Center we think you’ll find interesting. The basic idea is to “Crowdstorm a Dataset”. Multiple independent analysts are recruited to test the same hypothesis on the same data set in whatever manner they see as […]