Archive of posts filed under the Causal Inference category.

“She also observed that results from smaller studies conducted by NGOs – often pilot studies – would often look promising. But when governments tried to implement scaled-up versions of those programs, their performance would drop considerably.”

Robert Wiblin writes: If we have a study on the impact of a social program in a particular place and time, how confident can we be that we’ll get a similar result if we study the same program again somewhere else? Dr Eva Vivalt . . . compiled a huge database of impact evaluations in […]

Matching (and discarding non-matches) to deal with lack of complete overlap, then regression to adjust for imbalance between treatment and control groups

John Spivack writes: I am contacting you on behalf of the biostatistics journal club at our institution, the Mount Sinai School of Medicine. We are working Ph.D. biostatisticians and would like the opinion of a true expert on several questions having to do with observational studies—questions that we have not found to be well addressed […]
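To make the two-step recipe in the title concrete, here is a minimal simulated sketch in Python. Everything in it is invented for illustration (a single covariate, one-nearest-neighbor matching with a caliper, then ordinary least squares on the matched sample); it is one simple way to instantiate the idea, not the specific analysis discussed in the post or the journal club's questions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated observational data with incomplete overlap:
# treated units tend to have higher x than controls.
n = 500
x = rng.normal(0, 1, n)
treat = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))  # confounded assignment
y = 0.5 * treat + 1.0 * x + rng.normal(0, 1, n)    # true effect = 0.5

# Step 1: nearest-neighbor matching with a caliper; discard treated
# units with no control within 0.2 of x (the "discarding non-matches").
caliper = 0.2
treated_idx = np.where(treat == 1)[0]
control_idx = np.where(treat == 0)[0]
keep = []
for i in treated_idx:
    d = np.abs(x[control_idx] - x[i])
    if d.min() < caliper:
        keep.extend([i, control_idx[np.argmin(d)]])
keep = np.unique(keep)

# Step 2: regression on the matched sample to adjust for whatever
# imbalance in x the matching leaves behind.
X = np.column_stack([np.ones(keep.size), treat[keep], x[keep]])
beta, *_ = np.linalg.lstsq(X, y[keep], rcond=None)
print(f"estimated treatment effect: {beta[1]:.2f} (truth: 0.5)")
```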

Debate about genetics and school performance

Jag Bhalla points us to this article, “Differences in exam performance between pupils attending selective and non-selective schools mirror the genetic differences between them,” by Emily Smith-Woolley, Jean-Baptiste Pingault, Saskia Selzam, Kaili Rimfeld, Eva Krapohl, Sophie von Stumm, Kathryn Asbury, Philip Dale, Toby Young, Rebecca Allen, Yulia Kovas, and Robert Plomin, along with this response […]

A potential big problem with placebo tests in econometrics: they’re subject to the “difference between significant and non-significant is not itself statistically significant” issue

In econometrics, or applied economics, a “placebo test” is not a comparison of a drug to a sugar pill. Rather, it’s a sort of conceptual placebo, in which you repeat your analysis using a different dataset, or a different part of your dataset, where no intervention occurred. For example, if you’re performing some analysis studying […]
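The statistical point is easy to see with made-up numbers: a "significant" main estimate paired with a "non-significant" placebo estimate can itself be statistically indistinguishable from the placebo. A quick illustration (all numbers hypothetical, assuming independent estimates):

```python
import numpy as np

# Hypothetical estimates: the main analysis, and a placebo run on
# data where no intervention occurred.
est_main, se_main = 0.25, 0.10        # z = 2.5  -> "significant"
est_placebo, se_placebo = 0.10, 0.10  # z = 1.0  -> "not significant"

# The comparison people implicitly make: one gets stars, one doesn't.
print(est_main / se_main, est_placebo / se_placebo)

# The comparison that answers the actual question: is the main
# estimate distinguishable from the placebo estimate?
diff = est_main - est_placebo
se_diff = np.sqrt(se_main**2 + se_placebo**2)  # assumes independence
print(f"difference: {diff:.2f}, se: {se_diff:.2f}, z: {diff / se_diff:.2f}")
# z is about 1.06: the "significant" and "non-significant" results
# are not themselves significantly different.
```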

What to do when your measured outcome doesn’t quite line up with what you’re interested in?

Matthew Poes writes: I’m writing a research memo discussing the importance of precisely aligning the outcome measures to the intervention activities. I’m making the point that an evaluation of the outcomes for a given intervention may net null results for many reasons, one of which could simply be that you are looking in the wrong […]

Don’t get fooled by observational correlations

Gabriel Power writes: Here’s something a little different: clever classrooms, according to which physical characteristics of classrooms cause greater learning. And the effects are large! Moving from the worst to the best design implies a gain of 67% of one year’s worth of learning! Aside from the dubiously large effect size, it looks like the […]

Discussion of effects of growth mindset: Let’s not demand unrealistic effect sizes.

Shreeharsh Kelkar writes: As a regular reader of your blog, I wanted to ask you if you had taken a look at the recent debate about growth mindset [see earlier discussions here and here] that happened on theconversation.com. Here’s the first salvo by Brooke Macnamara, and then the response by Carol Dweck herself. The debate […]

The gaps between 1, 2, and 3 are just too large.

Someone who wishes to remain anonymous points to a new study by David Yeager et al. on educational mindset interventions (link from Alex Tabarrok) and asks: On the blog we talk a lot about bad practice and what not to do. Might this be an example of how *to do* things? Or did they just […]

John Hattie’s “Visible Learning”: How much should we trust this influential review of education research?

Dan Kumprey, a math teacher at Lake Oswego High School, Oregon, writes: Have you considered taking a look at the book Visible Learning by John Hattie? It seems to be permeating and informing reform in our K-12 schools nationwide. Districts are spending a lot of money sending their staffs to conferences by Solution Tree to […]

When anyone claims 80% power, I’m skeptical.

A policy analyst writes: I saw you speak at ** on Bayesian methods. . . . I had been asked to consult on a large national evaluation of . . . [details removed to preserve anonymity] . . . and had suggested treading carefully around the use of Bayesian statistics in this study (basing it […]
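One reason for skepticism: a power claim is conditional on an assumed effect size, and assumed effect sizes are usually optimistic. A quick Monte Carlo sketch (all numbers hypothetical) shows how a design with a nominal 80% power collapses when the true effect is half of what was assumed:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulated_power(true_effect, n, sims=20_000):
    """Monte Carlo power of a two-sided z-test at alpha = 0.05
    for a two-group mean comparison with unit-sd outcomes."""
    se = np.sqrt(2 / n)  # se of the difference in means, sd = 1 per group
    est = true_effect + se * rng.normal(size=sims)
    return np.mean(np.abs(est / se) > 1.96)

n = 100  # per group
print(simulated_power(0.40, n))  # the assumed effect: about 80% power
print(simulated_power(0.20, n))  # half that effect: about 30% power
```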

Let’s be open about the evidence for the benefits of open science

A reader who wishes to remain anonymous writes: I would be curious to hear your thoughts on motivated reasoning among open science advocates. In particular, I’ve noticed that papers arguing for open practices have seriously bad/nonexistent causal identification strategies. Examples: Kidwell et al. 2017, Badges to Acknowledge Open Practices: A Simple, Low-Cost, Effective Method […]

China air pollution regression discontinuity update

Avery writes: There is a follow-up paper to “Evidence on the impact of sustained exposure to air pollution on life expectancy from China’s Huai River policy” [by Yuyu Chen, Avraham Ebenstein, Michael Greenstone, and Hongbin Li], which you have posted about a couple of times and used in lectures. It seems that there […]
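For readers who haven’t seen the earlier posts: the Huai River design compares outcomes just north and south of the river, where free winter coal heating was provided on the north side, using distance from the river as the running variable. Here is a minimal simulated sketch of that style of regression discontinuity analysis (invented data and coefficients, not the paper’s specification):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated RD data in the style of the Huai River design:
# running variable = degrees north of the river (negative = south),
# treatment = being north of the river.
n = 400
dist = rng.uniform(-5, 5, n)
north = (dist > 0).astype(float)
y = 78 - 0.3 * dist - 2.0 * north + rng.normal(0, 2, n)  # true jump: -2

# Local linear fit on each side of the cutoff, within a bandwidth.
h = 2.0
m = np.abs(dist) < h
X = np.column_stack([np.ones(m.sum()), north[m], dist[m], north[m] * dist[m]])
beta, *_ = np.linalg.lstsq(X, y[m], rcond=None)
print(f"estimated discontinuity at the river: {beta[1]:.2f} (truth: -2.0)")
```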

Data-based ways of getting a job

Bart Turczynski writes: I read the following blog with a lot of excitement. Then I reread it and paid attention to the graphs and models (which don’t seem to be actual models, but rather, well, lines.) The story makes sense, but the science part is questionable (or at least unclear.) Perhaps you’d like to have […]

He wants to know what to read and what software to learn, to increase his ability to think about quantitative methods in social science

A law student writes: I aspire to become a quantitatively equipped/focused legal academic. Despite majoring in economics at college, I feel insufficiently confident in my statistical literacy. Given your publicly available work on learning basic statistical programming, I thought I would reach out to you and ask for advice on understanding modeling and causal inference […]

About that claim in the NYT that the immigration issue helped Hillary Clinton? The numbers don’t seem to add up.

Today I noticed an op-ed by two political scientists, Howard Lavine and Wendy Rahn, entitled, “What if Trump’s Nativism Actually Hurts Him?”: Contrary to received wisdom, however, the immigration issue did not play to Mr. Trump’s advantage nearly as much as commonly believed. According to our analysis of national survey data from the American National […]

Trying to make some sense of it all, but I can see it makes no sense at all . . . stuck in the middle with you

“Mediation analysis” is this thing where you have a treatment and an outcome and you’re trying to model how the treatment works: how much does it directly affect the outcome, and how much is the effect “mediated” through intermediate variables. Fabrizia Mealli was discussing this with me the other day, and she pointed out that […]
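For reference, here is the standard linear-regression decomposition that mediation analysis builds on, in a simulated example where the assumptions hold by construction (which, with real data, is exactly what can’t be taken for granted). All numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data where the treatment works partly through a mediator:
# z (treatment) -> m (mediator) -> y, plus a direct z -> y path.
n = 2000
z = rng.binomial(1, 0.5, n).astype(float)
m = 0.8 * z + rng.normal(0, 1, n)            # a = 0.8
y = 0.3 * z + 0.5 * m + rng.normal(0, 1, n)  # direct = 0.3, b = 0.5

def ols(predictors, outcome):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(outcome)), *predictors])
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

a = ols([z], m)[1]               # treatment -> mediator
direct, b = ols([z, m], y)[1:3]  # treatment -> outcome, holding mediator
total = ols([z], y)[1]

print(f"total {total:.2f} = direct {direct:.2f} + indirect {a * b:.2f}")
# Under linearity and strong no-unmeasured-confounding assumptions,
# the total effect decomposes as direct + a*b; those assumptions are
# where the real difficulties live.
```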

About that quasi-retracted study on the Mediterranean diet . . .

Some people asked me what I thought about this story. A reporter wrote to me about it last week, asking if it looked like fraud. Here’s my reply: Based on the description, there does not seem to be the implication of fraud. The editor’s report mentioned “protocol deviations, including the enrollment of participants who were […]

Some experiments are just too noisy to tell us much of anything at all: Political science edition

Sointu Leikas pointed us to this published research article, “Exposure to inequality affects support for redistribution.” Leikas writes that “it seems to be a really apt example of ‘researcher degrees of freedom.’” Here’s the abstract of the paper: As the world’s population grows more urban, encounters between members of different socioeconomic groups occur with greater […]

How to reduce Type M errors in exploratory research?

Miao Yu writes: Recently, I found this piece [a news article by Janet Pelley, “Sulfur dioxide pollution tied to degraded sperm quality,” published in Chemical & Engineering News] and the original paper [“Inverse Association between Ambient Sulfur Dioxide Exposure and Semen Quality in Wuhan, China,” by Yuewei Liu, published in Environmental Science & Technology]. Air […]
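Type M (magnitude) error, in the Gelman and Carlin sense, is the factor by which a statistically significant estimate exaggerates the true effect on average. A small Monte Carlo sketch with made-up numbers shows why noisy exploratory studies are the worst case:

```python
import numpy as np

rng = np.random.default_rng(0)

def type_m(true_effect, se, sims=100_000):
    """Expected exaggeration ratio: mean |estimate| / |true effect|
    among estimates that reach significance at alpha = 0.05."""
    est = true_effect + se * rng.normal(size=sims)
    signif = np.abs(est / se) > 1.96
    return np.mean(np.abs(est[signif])) / abs(true_effect)

# A noisy study: true effect small relative to the standard error.
print(type_m(true_effect=0.1, se=0.3))   # roughly 7x exaggeration
# A well-powered study barely exaggerates at all.
print(type_m(true_effect=0.5, se=0.15))  # close to 1
```

The practical upshot, and one answer to the question in the post title, is that reducing the standard error (bigger samples, better measurement) shrinks the exaggeration ratio; selecting on significance in a noisy design guarantees overestimates.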

Does “status threat” explain the 2016 presidential vote?

Steve Morgan writes: The April 2018 article by Diana Mutz, “Status Threat, Not Economic Hardship, Explains the 2016 Presidential Vote,” was published in the Proceedings of the National Academy of Sciences and contradicts prior sociological research on the 2016 election. Mutz’s article received widespread media coverage because of the strength of its primary conclusion, declaimed […]