
Causal inference conference at Columbia University on Sat 6 May: Varying Treatment Effects

Hey! We’re throwing a conference:

Varying Treatment Effects

The literature on causal inference focuses on estimating average effects, but the very notion of an “average effect” acknowledges variation. Relevant buzzwords are treatment interactions, situational effects, and personalized medicine. In this one-day conference we shall focus on varying effects in social science and policy research, with particular emphasis on Bayesian modeling and computation.

The focus will be on applied problems in social science.

The organizers are Jim Savage, Jennifer Hill, Beth Tipton, Rachael Meager, Andrew Gelman, Michael Sobel, and Jose Zubizarreta.

And here’s the schedule:

9:30 AM
1. Heterogeneity across studies in meta-analyses of impact evaluations.
– Michael Kremer, Harvard
– Greg Fischer, LSE
– Rachael Meager, MIT
– Beth Tipton, Columbia
10:45 – 11:00 coffee break

2. Heterogeneity across sites in multi-site trials.
– David Yeager, UT Austin
– Avi Feller, Berkeley
– Luke Miratrix, Harvard
– Ben Goodrich, Columbia
– Michael Weiss, MDRC

12:30-1:30 Lunch

3. Heterogeneity in experiments versus quasi-experiments.
– Vivian Wong, University of Virginia
– Michael Gechter, Penn State
– Peter Steiner, U Wisconsin
– Bryan Keller, Columbia

3:00 – 3:30 afternoon break

4. Heterogeneous effects at the structural/atomic level.
– Jennifer Hill, NYU
– Peter Rossi, UCLA
– Shoshana Vasserman, Harvard
– Jim Savage, Lendable Inc.
– Uri Shalit, NYU

Closing remarks: Andrew Gelman

Please register for the conference here. Admission is free but we would prefer if you register so we have a sense of how many people will show up.

We’re expecting lots of lively discussion.

P.S. Signup for outsiders seems to have filled up. Columbia University affiliates who are interested in attending should contact me directly.


  1. Brian Cade says:

    Seems like you ought to have something on quantile regression if you are going to discuss varying treatment effects and heterogeneity.

    • Andrew says:


      In Bayesian inference you’re estimating the entire distribution, so no need for special techniques for quantile regression. You can just fit the Bayesian model and then get inferences for quantiles if that’s what you want.

      • Ben Goodrich says:

        Those are not the quantiles you are looking for.

        • Andrew says:


          From Wikipedia, I see that “quantile regression aims at estimating either the conditional median or other quantiles of the response variable.” If you fit a Bayesian model, you can compute these quantiles by simulating predictive values. That’s what I was talking about in my comment above.
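[Editor's note: the recipe Andrew describes — fit the model, simulate predictive values, read off quantiles — can be sketched in a few lines. This is a hypothetical toy example using a crude normal approximation to the posterior as a stand-in for draws you would get from Stan; it is not code from anyone in the thread.]

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on x, with noise whose spread grows with x.
n = 2000
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.2 + 0.1 * x)

# Crude "posterior" for a normal linear model: a normal approximation
# centered at the least-squares fit (a stand-in for MCMC draws).
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma_hat = (y - X @ beta_hat).std()
cov = sigma_hat**2 * np.linalg.inv(X.T @ X)
draws = rng.multivariate_normal(beta_hat, cov, size=4000)

# Posterior predictive draws at a new x, then whatever quantiles you want.
mu = draws @ np.array([1.0, 8.0])
y_rep = mu + rng.normal(0.0, sigma_hat, size=mu.shape)
q50, q90 = np.quantile(y_rep, [0.5, 0.9])
```

Note that because this particular model assumes a constant residual scale, its predictive quantiles fan out by the same amount at every x — which is exactly the gap Ben Goodrich's reply points at.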

          • AnonAnon says:

            Sorry I had the same question. So we can achieve the same goal without the hassle of coding in something like the Asymmetric Laplace distribution?


          • Garnett says:

            My understanding is that the goal of quantile regression is to model the response variable quantiles as a function of covariates. The flexibility allows one to estimate how, say, the 75th %-ile of blood pressure changes with age, and how that relationship might be different if one is interested in the 90th %-ile of blood pressure. I have only done this with an asymmetric Laplace likelihood, but I haven’t done it very often.

            • Andrew says:


              Sure, but with a Bayesian model you’re simultaneously modeling all the quantiles. I guess I can see there being a niche for certain specialized methods applied just to quantiles, but I certainly don’t see this topic as so central that we “ought to have something” on it!

              • Ben Goodrich says:

                We ought to have (something equivalent in distribution to) the asymmetric Laplace likelihood conditional on the p-th quantile of interest in rstanarm so that people can draw from that posterior distribution, which is different from using a Gaussian likelihood, drawing from the posterior distribution, drawing from the predictive distribution, and then looking at quantiles.
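[Editor's note: for concreteness, the asymmetric Laplace density Ben mentions can be written down in a few lines. This is a generic sketch of the standard parameterization, not the rstanarm or brms implementation.]

```python
import numpy as np

def asym_laplace_logpdf(y, mu, sigma, tau):
    # Asymmetric Laplace log density: log(tau*(1-tau)/sigma) - rho_tau(r),
    # where rho_tau(r) = r * (tau - 1{r<0}) is the check loss. Maximizing
    # this likelihood in mu therefore minimizes the check loss, which is
    # what ties the asymmetric Laplace to the tau-th quantile.
    r = (y - mu) / sigma
    return np.log(tau * (1.0 - tau) / sigma) - r * (tau - (r < 0.0))
```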

              • Corey says:

                Ben, you can fit a 2D Dirichlet process mixture model and compute the posterior expectation of the conditional quantiles.

              • AnonAnon says:


                It looks like v1.4.0 of brms introduced quantile regression: “Fit quantile regression models via family asym_laplace (asymmetric Laplace distribution).” But that was done by Paul implementing an asymmetric Laplace distribution on his end and not in the underlying rstan code?

      • Brian Cade says:

        But quantile regression (QR) does not require making untenable distributional assumptions about the response distribution as would be used by most Bayesian modeling approaches. QR allows one to estimate the empirical conditional cumulative distribution function, where the conditioning is done on combinations (linear or more complex) of the predictor variables. Heterogeneity pops out both in the regression coefficients (rate of change in the conditional cdf) and in the predicted responses.
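[Editor's note: the classical estimator Brian describes minimizes the check (pinball) loss at each quantile level; the sketch below does this with an iteratively reweighted least squares approximation, a hypothetical illustration rather than how production QR software works. On heteroskedastic data the heterogeneity shows up directly in the coefficients, as he says: the fitted slope grows with the quantile level.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Heteroskedastic data: the upper conditional quantiles have a
# steeper slope in x than the conditional median does.
n = 5000
x = rng.uniform(0, 10, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.2 + 0.1 * x)
X = np.column_stack([np.ones(n), x])

def fit_quantile(X, y, tau, iters=60, eps=1e-3):
    # Iteratively reweighted least squares approximation to minimizing
    # the check loss sum_i rho_tau(y_i - x_i'beta): asymmetric L1
    # weights, smoothed near zero residuals by eps.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    for _ in range(iters):
        r = y - X @ beta
        w = np.where(r > 0, tau, 1.0 - tau) / np.maximum(np.abs(r), eps)
        beta = np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (w * y))
    return beta

b50 = fit_quantile(X, y, 0.5)   # conditional median line
b90 = fit_quantile(X, y, 0.9)   # conditional 90th-percentile line
```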

        • Andrew says:


          All methods require making untenable assumptions. So-called nonparametric methods work by pooling in some way, thus assuming constancy or additivity or some similar assumption. The method you describe could well be useful, but it makes assumptions, no doubt about that.

  2. Alex says:

    Are there any abstracts (or specific paper links) for the various talks?

  3. Adam says:

    This sounds incredibly fascinating but I can’t make it to the east coast that week. Any chance this will be recorded?

  4. LauraK says:

    wow… that filled fast.

  5. Kaiser says:

    It says “Sold out”. You need a larger room!
