Archive of posts filed under the Multilevel Modeling category.

354 possible control groups; what to do?

Jonas Cederlöf writes: I’m a PhD student in economics at Stockholm University and a frequent reader of your blog. I have for a long time followed your quest in trying to bring attention to p-hacking and multiple comparison problems in research. I’m now myself faced with the aforementioned problem and want to at the very […]

N=1 experiments and multilevel models

N=1 experiments are the hot new thing. Here are some things to read: Design and Implementation of N-of-1 Trials: A User’s Guide, edited by Richard Kravitz and Naihua Duan for the Agency for Healthcare Research and Quality, U.S. Department of Health and Human Services (2014). Single-patient (n-of-1) trials: a pragmatic clinical decision methodology for patient-centered […]

Looking at all possible comparisons at once: It’s not “overfitting” if you put it in a multilevel model

Rémi Gau writes: The human brain mapping conference is on these days, and I heard via Twitter about this Overfitting toolbox for fMRI studies that helps explore the multiplicity of analytical pipelines in a more systematic fashion. It reminded me a bit of your multiverse analysis; thought you might like the idea. The link is to a […]
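A minimal sketch of the idea in the post's title (not the toolbox Rémi mentions), assuming the lme4 package and invented data: instead of running many separate comparisons, put all the group effects into one multilevel model and let partial pooling shrink the noisy estimates.

```r
# Sketch: one multilevel model in place of many separate comparisons.
# Group labels and effect sizes below are invented for illustration.
library(lme4)

set.seed(123)
J <- 50; n_per <- 20
true_effect <- rnorm(J, 0, 0.2)               # small true differences across groups
dat <- data.frame(
  group = rep(seq_len(J), each = n_per),
  y     = rnorm(J * n_per, rep(true_effect, each = n_per), 1)
)

fit <- lmer(y ~ 1 + (1 | group), data = dat)  # one model for all the comparisons
head(ranef(fit)$group)                        # partially pooled group estimates
```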

Stacking and multiverse

It’s a coincidence that there is another multiverse posting today. Recently Tim Disher asked a question in the Stan discussion forum, “Multiverse analysis – concatenating posteriors?” Tim refers to the paper “Increasing Transparency Through a Multiverse Analysis” by Sara Steegen, Francis Tuerlinckx, Andrew Gelman, and Wolf Vanpaemel. The abstract says: Empirical research inevitably includes constructing a […]
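For readers wondering what stacking looks like in practice, here is a minimal sketch assuming the rstanarm and loo packages and two toy models (not the multiverse analyses from the paper): compute stacking weights from leave-one-out predictive performance and use them to weight each model's predictive draws, rather than concatenating posteriors.

```r
# Sketch: stacking weights for combining models, on toy data (mtcars).
library(rstanarm)
library(loo)

fit1 <- stan_glm(mpg ~ wt,      data = mtcars, refresh = 0)
fit2 <- stan_glm(mpg ~ wt + hp, data = mtcars, refresh = 0)

loo1 <- loo(fit1); loo2 <- loo(fit2)
wts  <- loo_model_weights(list(loo1, loo2), method = "stacking")
print(wts)   # weight each model's predictive draws by these when averaging
```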

The multiverse in action!

In a recent paper, “Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking,” Jelte Wicherts, Coosje Veldkamp, Hilde Augusteijn, Marjan Bakker, Robbie van Aert, and Marcel van Assen write: The designing, collecting, analyzing, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use […]

How to get a sense of Type M and Type S errors in neonatology, where trials are often very small? Try fake-data simulation!

Tim Disher read my paper with John Carlin, “Beyond Power Calculations: Assessing Type S (Sign) and Type M (Magnitude) Errors,” and followed up with a question: I am a doctoral student conducting research within the field of neonatology, where trials are often very small, and I have long suspected that many intervention effects are potentially […]
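Here is a minimal fake-data simulation in that spirit (a sketch with an assumed effect size and trial size, not the calculations from the Gelman and Carlin paper): simulate many small two-arm trials, keep the “significant” ones, and look at how often the sign is wrong (Type S) and how inflated the magnitude is (Type M).

```r
# Sketch: Type S and Type M errors for a small two-arm trial, via simulation.
set.seed(2017)
true_effect <- 0.1     # assumed small true effect, in standard-deviation units
n_per_arm   <- 30      # assumed small, neonatology-style trial
n_sims      <- 10000

est <- se <- numeric(n_sims)
for (s in seq_len(n_sims)) {
  y0 <- rnorm(n_per_arm, 0, 1)
  y1 <- rnorm(n_per_arm, true_effect, 1)
  est[s] <- mean(y1) - mean(y0)
  se[s]  <- sqrt(var(y0) / n_per_arm + var(y1) / n_per_arm)
}
signif <- abs(est / se) > 1.96
mean(signif)                           # power of the design
mean(est[signif] < 0)                  # Type S: wrong-sign rate among significant results
mean(abs(est[signif])) / true_effect   # Type M: average exaggeration factor
```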

A Python program for multivariate missing-data imputation that works on large datasets!?

Alex Stenlake and Ranjit Lall write about a program they wrote for imputing missing data: Strategies for analyzing missing data have become increasingly sophisticated in recent years, most notably with the growing popularity of the best-practice technique of multiple imputation. However, existing algorithms for implementing multiple imputation suffer from limited computational efficiency, scalability, and capacity […]
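Their program itself is not shown here; as a generic illustration of the multiple-imputation workflow the excerpt refers to, here is a minimal sketch using the mice package in R: impute several completed datasets, analyze each, and pool the results with Rubin's rules.

```r
# Sketch: generic multiple imputation with mice (not Stenlake and Lall's program).
library(mice)

data(nhanes)                                     # small example dataset shipped with mice
imp  <- mice(nhanes, m = 5, printFlag = FALSE)   # 5 imputed datasets
fits <- with(imp, lm(chl ~ age + bmi))           # same analysis on each completed dataset
pool(fits)                                       # combine estimates with Rubin's rules
```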

“Handling Multiplicity in Neuroimaging through Bayesian Lenses with Hierarchical Modeling”

Donald Williams points us to this new paper by Gang Chen, Yaqiong Xiao, Paul Taylor, Tracy Riggins, Fengji Geng, Elizabeth Redcay, and Robert Cox: In neuroimaging, the multiplicity issue may sneak into data analysis through several channels . . . One widely recognized aspect of multiplicity, multiple testing, occurs when the investigator fits a separate […]

A debate about robust standard errors: Perspective from an outsider

A colleague pointed me to a debate among some political science methodologists about robust standard errors, and I told him that the topic didn’t really interest me because I haven’t found a use for robust standard errors in my own work. My colleague urged me to look at the debate more carefully, though, so I […]

The piranha problem in social psychology / behavioral economics: The “take a pill” model of science eats itself

[cat picture] A fundamental tenet of social psychology and behavioral economics, at least as it is presented in the news media and as taught and practiced in many business schools, is that small “nudges,” often the sorts of things that we might not think would affect us at all, can have big effects on behavior. Thus the […]

The Night Riders

Gilbert Chin writes: After reading this piece [“How one 19-year-old Illinois man Is distorting national polling averages,” by Nate Cohn] and this Nature news story [“Seeing deadly mutations in a new light,” by Erika Hayden], I wonder if you might consider blogging about how this appears to be the same issue in two different disciplines. […]

Orphan drugs and forking paths: I’d prefer a multilevel model but to be honest I’ve never fit such a model for this sort of problem

Amos Elberg writes: I’m writing to let you know about a drug trial you may find interesting from a statistical perspective. As you may know, the relatively recent “orphan drug” laws allow (basically) companies that can prove an off-patent drug treats an otherwise untreatable illness to obtain intellectual property protection for otherwise generic or dead […]

Poisoning the well with a within-person design? What’s the risk?

I was thinking more about our recommendation that psychology researchers routinely use within-person rather than between-person designs. The quick story is that a within-person design is more statistically efficient because, when you compare measurements within a person, you should get less variation than when you compare different groups. But researchers often use between-person designs out […]
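A small simulation (with made-up variance components, not numbers from the post) illustrates the efficiency argument: when person-to-person variation is large relative to measurement noise, the within-person comparison has a much smaller standard error than the between-person one.

```r
# Sketch: within-person vs. between-person designs under assumed variances.
set.seed(1)
n <- 50; sigma_person <- 1; sigma_meas <- 0.5; effect <- 0.3

sims <- replicate(5000, {
  alpha <- rnorm(n, 0, sigma_person)               # person-level intercepts
  # between-person design: different people in each condition
  y_ctrl  <- rnorm(n, 0, sigma_person) + rnorm(n, 0, sigma_meas)
  y_treat <- alpha + effect + rnorm(n, 0, sigma_meas)
  between <- mean(y_treat) - mean(y_ctrl)
  # within-person design: the same people measured in both conditions
  w_ctrl  <- alpha + rnorm(n, 0, sigma_meas)
  w_treat <- alpha + effect + rnorm(n, 0, sigma_meas)
  within  <- mean(w_treat - w_ctrl)
  c(between = between, within = within)
})
apply(sims, 1, sd)   # sd of the estimated effect: within-person is much smaller
```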

Wine + Stan + Climate change = ?

Pablo Almaraz writes: Recently, I published a paper in the journal Climate Research in which I used RStan to conduct the statistical analyses: Almaraz P (2015) Bordeaux wine quality and climate fluctuations during the last century: changing temperatures and changing industry. Clim Res 64:187-199.

Spatial models for demographic trends?

Jon Minton writes: You may be interested in a commentary piece I wrote earlier this year, which was published recently in the International Journal of Epidemiology, where I discuss your work on identifying an aggregation bias in one of the key figures in Case & Deaton’s (in)famous 2015 paper on rising morbidity and mortality in […]

Fitting multilevel models when predictors and group effects correlate

Ryan Bain writes: I came across your ‘Fitting Multilevel Models When Predictors and Group Effects Correlate’ paper that you co-authored with Dr. Bafumi and read it with great interest. I am a current postgraduate student at the University of Glasgow writing a dissertation examining explanations of Euroscepticism at the individual and country level since the […]
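For readers who haven't seen the paper, here is a minimal sketch of the fix it recommends, using lme4 and simulated data: add the group mean of the individual-level predictor as a group-level predictor, so that within-group and between-group slopes are estimated separately and the predictor is no longer correlated with the group intercepts.

```r
# Sketch: group-mean centering when a predictor correlates with group effects.
# All data below are simulated for illustration.
library(lme4)

set.seed(42)
J <- 30; n <- 40
country <- rep(seq_len(J), each = n)
u       <- rnorm(J, 0, 1)                    # country effects
x       <- rnorm(J * n, mean = u[country])   # predictor correlated with country effects
y       <- 0.5 * x + u[country] + rnorm(J * n)

x_bar <- ave(x, country)                     # country mean of the predictor
x_ctr <- x - x_bar                           # within-country centered predictor

fit_naive <- lmer(y ~ x + (1 | country))             # x correlated with the intercepts
fit_fixed <- lmer(y ~ x_ctr + x_bar + (1 | country)) # the recommended specification
fixef(fit_fixed)                             # separate within- and between-country slopes
```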

Noisy, heterogeneous data scoured from diverse sources make his meta-analyses stronger.

Kyle MacDonald writes: I wondered if you’d heard of Purvesh Khatri’s work in computational immunology, profiled in this Q&A with Esther Landhuis at Quanta yesterday. The elevator pitch is that he believes noisy, heterogeneous data scoured from diverse sources make his meta-analyses stronger. The thing that gave me the willies was this line: “We start with […]

What I missed on fixed effects (plural).

In my [Keith] previous post, which criticised a published paper, the first author commented that they wanted some time to respond, and I agreed. I also suggested that if the response came in after most readers had moved on, I would re-post their response as a new post pointing back to the previous one. So here we are. […]

Using Mister P to get population estimates from respondent-driven sampling

From one of our exams: A researcher at Columbia University’s School of Social Work wanted to estimate the prevalence of drug abuse problems among American Indians (Native Americans) living in New York City. From the Census, it was estimated that about 30,000 Indians live in the city, and the researcher had a budget to interview […]
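A minimal poststratification sketch with invented numbers (not the exam answer): in full MRP the cell-level estimates would come from a multilevel regression fit to the interview data; here they are simply plugged in, to show how the known census counts are used to weight the cells up to a population estimate.

```r
# Sketch: poststratification with invented census counts and cell estimates.
cells <- data.frame(
  age_group = c("18-34", "35-54", "55+"),
  N_pop     = c(12000, 11000, 7000),   # assumed counts, summing to roughly 30,000
  p_hat     = c(0.12, 0.08, 0.05)      # cell prevalences (in MRP, from a multilevel model)
)

# Poststratified estimate of population prevalence
with(cells, sum(p_hat * N_pop) / sum(N_pop))
```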

The Publicity Factory: How even serious research gets exaggerated by the process of scientific publication and reporting

The starting point is that we’ve seen a lot of talk about frivolous science, headline-bait such as the study that said that married women are more likely to vote for Mitt Romney when ovulating, or the study that said that girl-named hurricanes are more deadly than boy-named hurricanes, and at this point some of these […]