Archive of posts filed under the Causal Inference category.

Is Rigor Contagious? (my talk next Monday 4:15pm at Columbia)

Is Rigor Contagious? Much of the theory and practice of statistics and econometrics is characterized by a toxic mixture of rigor and sloppiness. Methods are justified based on seemingly pure principles that can’t survive reality. Examples of these principles include random sampling, unbiased estimation, hypothesis testing, Bayesian inference, and causal identification. Examples of uncomfortable reality […]

Cloak and dagger

Elan B. writes: I saw this JAMA Pediatrics article [by Julia Raifman, Ellen Moscoe, and S. Bryn Austin] getting a lot of press for claiming that LGBT suicide attempts went down 14% after gay marriage was legalized. The heart of the study is comparing suicide attempt rates (in last 12 months) before and after exposure — gay […]
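The study's design is a difference-in-differences comparison. As a minimal sketch with invented numbers (not the paper's data), the estimate is the pre-to-post change in the treated states minus the same change in the control states:

```python
# Difference-in-differences sketch with invented rates (not the paper's data).
# Hypothetical suicide-attempt rates (percent of respondents, past 12 months)
# before and after legalization, in states that did vs. did not legalize.
treated_before, treated_after = 8.6, 7.4
control_before, control_after = 8.5, 8.4

# DiD estimate: change in treated states minus change in control states.
did = round((treated_after - treated_before) - (control_after - control_before), 2)
print(did)  # -1.1 percentage points attributed to the policy
```

The usual caveat applies: this attribution is only as good as the parallel-trends assumption, which is exactly the sort of thing the post goes on to question.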

Looking for rigor in all the wrong places (my talk this Thursday in the Columbia economics department)

Looking for Rigor in All the Wrong Places What do the following ideas and practices have in common: unbiased estimation, statistical significance, insistence on random sampling, and avoidance of prior information? All have been embraced as ways of enforcing rigor but all have backfired and led to sloppy analyses and erroneous inferences. We […]

Vine regression?

Jeremy Neufeld writes: I’m an undergraduate student at the University of Maryland and I was recently referred to this paper (Vine Regression, by Roger Cooke, Harry Joe, and Bo Chang, along with an accompanying summary blog post by the main author) as potentially useful in policy analysis. With the big claims it makes, I am not […]

Storytelling as predictive model checking

I finally got around to reading Adam Begley’s biography of John Updike, and it was excellent. I’ll have more on that in a future post, but for now I just want to share the point, which I’d not known before, that almost all of Updike’s characters and even the descriptions and events in […]

When do protests affect policy?

Gur Huberman writes that he’s been wondering for many years about this question: One function of protests is to vent the protesters’ emotions. When do protests affect policy? In dictatorships there are clear examples of protests affecting reality, e.g., in Eastern Europe in 1989. It’s harder to find such clear examples in democracies. And […]

Quantifying uncertainty in identification assumptions—this is important!

Luis Guirola writes: I’m a poli sci student currently working on methods. I’ve seen you sometimes address questions in your blog, so here is one in case you wanted. I recently read some of Chuck Manski’s book “Identification for Prediction and Decision”. I take his main message to be “The only way to get identification […]

Come and work with us!

Stan is an open-source, state-of-the-art probabilistic programming language with a high-performance Bayesian inference engine written in C++. Stan has been successfully applied to modeling problems with hundreds of thousands of parameters in fields as diverse as econometrics, sports analytics, physics, pharmacometrics, recommender systems, political science, and many more. Research using Stan has been featured in […]

Stan is hiring! hiring! hiring! hiring!

[insert picture of adorable cat entwined with Stan logo] We’re hiring postdocs to do Bayesian inference. We’re hiring programmers for Stan. We’re hiring a project manager. How many people we hire depends on what gets funded. But we’re hiring a few people for sure. We want the best people who love to collaborate, who […]

No evidence of incumbency disadvantage?

Several years ago I learned that the incumbency advantage in India was negative! There, the politicians are so unpopular that when they run for reelection they’re actually at a disadvantage, on average, compared to fresh candidates. At least, that’s what I heard. But Andy Hall and Anthony Fowler just wrote a paper claiming that, no, […]

Problems with “incremental validity” or more generally in interpreting more than one regression coefficient at a time

Kevin Lewis points us to this interesting paper by Jacob Westfall and Tal Yarkoni entitled, “Statistically Controlling for Confounding Constructs Is Harder than You Think.” Westfall and Yarkoni write: A common goal of statistical analysis in the social sciences is to draw inferences about the relative contributions of different variables to some outcome variable. When […]
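Westfall and Yarkoni's point can be seen in a toy simulation (made-up model, not their data): if you "control for" a confounding construct through a noisy measurement of it, the confounding is only partly removed, and the predictor of interest retains a substantial coefficient even when it has no direct effect on the outcome.

```python
# Sketch of the measurement-error problem: controlling for a noisy measure
# of a confound does not fully remove the confounding. Invented model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

c = rng.normal(size=n)           # true confounding construct
c_obs = c + rng.normal(size=n)   # noisy measurement of the construct
x = c + rng.normal(size=n)       # predictor, driven entirely by the construct
y = c + rng.normal(size=n)       # outcome, driven entirely by the construct

# Regress y on x while "controlling" for the noisy measure c_obs.
X = np.column_stack([np.ones(n), x, c_obs])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(beta[1], 2))  # coefficient on x is about 0.33, not 0
```

Here x has no direct effect on y at all, yet the fitted coefficient on x sits near 1/3 (the theoretical value for these unit variances), because c_obs carries only part of the information in c.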

Problems with randomized controlled trials (or any bounded statistical analysis) and thinking more seriously about story time

In 2010, I wrote: As a statistician, I was trained to think of randomized experimentation as representing the gold standard of knowledge in the social sciences, and, despite having seen occasional arguments to the contrary, I still hold that view, expressed pithily by Box, Hunter, and Hunter (1978) that “To find out what happens when […]

Field Experiments and Their Critics

Seven years ago I was contacted by Dawn Teele, who was then a graduate student and is now a professor of political science, and asked for my comments on an edited book she was preparing on social science experiments and their critics. I responded as follows: This is a great idea for a project. My […]

About that claim in the Monkey Cage that North Korea had “moderate” electoral integrity . . .

Yesterday I wrote about problems with the Electoral Integrity Project, a set of expert surveys that are intended to “evaluate the state of the world’s elections” but have some problems, notably rating more than half of the U.S. states in 2016 as having lower integrity than Cuba (!) and North Korea (!!!) in 2014. I […]

Transformative treatments

Kieran Healy and Laurie Paul wrote a new article, “Transformative Treatments,” (see also here) which reminds me a bit of my article with Guido, “Why ask why? Forward causal inference and reverse causal questions.” Healy and Paul’s article begins: Contemporary social-scientific research seeks to identify specific causal mechanisms for outcomes of theoretical interest. Experiments that […]

Sorry, but no, you can’t learn causality by looking at the third moment of regression residuals

Under the subject line “Legit?”, Kevin Lewis pointed me to this press release, “New statistical approach will help researchers better determine cause-effect.” I responded, “No link to any of the research papers, so cannot evaluate.” In writing this post I thought I’d go further. The press release mentions 6 published articles, so I googled the […]

You’ll have to figure this one out for yourselves.

So. The other day the following email comes in, subject line “Grabbing headlines using poor statistical methods,” from Clifford Anderson-Bergman:

How can time series information be used to choose a control group?

This post is by Phil Price, not Andrew. Before I get to my question, you need some background. The amount of electricity that is provided by an electric utility at a given time is called the “electric load”, and the time series of electric load is called the “load shape.” Figure 1 (which is labeled […]

OK, sometimes the concept of “false positive” makes sense.

Paul Alper writes: I know by searching your blog that you hold the position, “I’m negative on the expression ‘false positives.’” Nevertheless, I came across this. In the medical/police/judicial world, false positive is a very serious issue: $2: the cost of a typical roadside drug test kit used by police departments. Namely, is that white powder […]
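The reason cheap field tests are such a serious issue is the base-rate arithmetic: with a low prevalence of actual drugs among tested samples, even a fairly accurate test produces many false positives. A sketch with invented numbers (not the test's actual error rates):

```python
# Base-rate sketch with invented numbers: how many positives are real?
prevalence = 0.10        # fraction of tested samples that really are drugs
sensitivity = 0.95       # P(test positive | drug)
false_pos_rate = 0.10    # P(test positive | not drug)

# Bayes' rule: P(drug | positive test).
p_positive = prevalence * sensitivity + (1 - prevalence) * false_pos_rate
p_drug_given_positive = prevalence * sensitivity / p_positive
print(round(p_drug_given_positive, 2))  # 0.51: nearly half the positives are false
```

Under these assumed numbers, a positive result is barely better than a coin flip, which is why the label "false positive" has real teeth in this setting.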

How effective (or counterproductive) is universal child care? Part 2

This is the second of a series of two posts. Yesterday we discussed the difficulties of learning from a small, noisy experiment, in the context of a longitudinal study conducted in Jamaica where researchers reported that an early-childhood intervention program caused a 42%, or 25%, gain in later earnings. I expressed skepticism. Today I want […]
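One source of the skepticism can be illustrated with a type M (magnitude) error simulation: if the true effect is modest and the study is noisy, then the estimates that happen to reach statistical significance will systematically exaggerate the effect. The numbers below are invented, not the Jamaica data:

```python
# Type M error sketch: conditioning on statistical significance exaggerates
# effect estimates when the study is noisy. Invented effect size and SE.
import random

random.seed(1)
true_effect = 5.0   # true gain in percent (hypothetical)
se = 10.0           # standard error of the estimate (hypothetical)

estimates = [random.gauss(true_effect, se) for _ in range(100_000)]
significant = [est for est in estimates if abs(est) > 1.96 * se]

avg_sig = sum(significant) / len(significant)
print(round(avg_sig, 1))  # far larger than the true effect of 5.0
```

In this setup the average statistically significant estimate is roughly four times the true effect, so a headline number like a 42% gain from a small noisy study deserves exactly this kind of scrutiny.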