Causal inference conference at Columbia University on Sat 6 May: Varying Treatment Effects

Hey! We’re throwing a conference:

Varying Treatment Effects

The literature on causal inference focuses on estimating average effects, but the very notion of an “average effect” acknowledges variation. Relevant buzzwords are treatment interactions, situational effects, and personalized medicine. In this one-day conference we shall focus on varying effects in social science and policy research, with particular emphasis on Bayesian modeling and computation.

The focus will be on applied problems in social science.

The organizers are Jim Savage, Jennifer Hill, Beth Tipton, Rachael Meager, Andrew Gelman, Michael Sobel, and Jose Zubizarreta.

And here’s the schedule:

9:30 AM
1. Heterogeneity across studies in meta-analyses of impact evaluations.
– Michael Kremer, Harvard
– Greg Fischer, LSE
– Rachael Meager, MIT
– Beth Tipton, Columbia
10:45 – 11:00 coffee break

11:00
2. Heterogeneity across sites in multi-site trials.
– David Yeager, UT Austin
– Avi Feller, Berkeley
– Luke Miratrix, Harvard
– Ben Goodrich, Columbia
– Michael Weiss, MDRC

12:30-1:30 Lunch

1:30
3. Heterogeneity in experiments versus quasi-experiments.
– Vivian Wong, University of Virginia
– Michael Gechter, Penn State
– Peter Steiner, U Wisconsin
– Bryan Keller, Columbia

3:00 – 3:30 afternoon break

3:30
4. Heterogeneous effects at the structural/atomic level.
– Jennifer Hill, NYU
– Peter Rossi, UCLA
– Shoshana Vasserman, Harvard
– Jim Savage, Lendable Inc.
– Uri Shalit, NYU

5pm
Closing remarks: Andrew Gelman

Please register for the conference here. Admission is free, but we would prefer that you register so we have a sense of how many people will show up.

We’re expecting lots of lively discussion.

P.S. Signup for outsiders seems to have filled up. Columbia University affiliates who are interested in attending should contact me directly.

20 thoughts on “Causal inference conference at Columbia University on Sat 6 May: Varying Treatment Effects”

    • Brian:

      In Bayesian inference you’re estimating the entire distribution, so no need for special techniques for quantile regression. You can just fit the Bayesian model and then get inferences for quantiles if that’s what you want.

        • Ben:

          From wikipedia, I see that “quantile regression aims at estimating either the conditional median or other quantiles of the response variable.” If you fit a Bayesian model, you can compute these quantiles by simulating predictive values. That’s what I was talking about in my comment above.
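The "simulate predictive values, then read off quantiles" approach can be sketched as follows. This is a toy illustration with made-up posterior draws for a normal regression (in practice the draws of `a`, `b`, and `sigma` would come from a fitted Bayesian model, e.g. via rstanarm); all the numbers here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "posterior": draws of intercept a, slope b, and noise scale sigma
# for a normal regression y ~ N(a + b*x, sigma). These draws are invented
# for illustration; normally they come from a fitted Bayesian model.
n_draws = 4000
a = rng.normal(1.0, 0.1, n_draws)
b = rng.normal(2.0, 0.05, n_draws)
sigma = np.abs(rng.normal(0.5, 0.02, n_draws))

x_new = 3.0  # covariate value at which we want conditional quantiles

# One posterior predictive draw of y per posterior draw
y_pred = rng.normal(a + b * x_new, sigma)

# Conditional quantiles fall out of the simulated predictive distribution
q50, q75, q90 = np.quantile(y_pred, [0.50, 0.75, 0.90])
print(q50, q75, q90)
```

Any other functional of the predictive distribution (tail probabilities, intervals) comes out of the same set of simulations.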

• My understanding is that the goal of quantile regression is to model the quantiles of the response variable as a function of covariates. The flexibility allows one to estimate how, say, the 75th percentile of blood pressure changes with age, and how that relationship might differ if one is interested in the 90th percentile of blood pressure. I have only done this with an asymmetric Laplace likelihood, but I haven’t done it very often.

        • Garnett:

          Sure, but with a Bayesian model you’re simultaneously modeling all the quantiles. I guess I can see there being a niche for certain specialized methods applied just to quantiles, but I certainly don’t see this topic as so central that we “ought to have something” on it!

        • We ought to have (something equivalent in distribution to) the asymmetric Laplace likelihood conditional on the p-th quantile of interest in rstanarm so that people can draw from that posterior distribution, which is different from using a Gaussian likelihood, drawing from the posterior distribution, drawing from the predictive distribution, and then looking at quantiles.
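To make the asymmetric Laplace connection concrete, here is a small sketch (not rstanarm code; just a NumPy illustration of the distribution's key property): its log-density kernel is the pinball/check loss, so maximizing it in the location parameter recovers the p-th quantile.

```python
import numpy as np

def asym_laplace_logpdf(y, mu, sigma, p):
    """Log-density of the asymmetric Laplace distribution used for
    quantile regression at level p (0 < p < 1). Its kernel is the
    pinball/check loss rho_p(u) = u * (p - 1[u < 0]), so maximizing
    this likelihood in mu estimates the p-th quantile of y."""
    u = (y - mu) / sigma
    rho = u * (p - (u < 0))  # check loss
    return np.log(p * (1 - p) / sigma) - rho

# Sanity check: the mu maximizing the summed log-density over a sample
# should be (close to) the empirical p-th quantile of that sample.
rng = np.random.default_rng(1)
y = rng.normal(0, 1, 2000)
grid = np.linspace(-3, 3, 601)
ll = [asym_laplace_logpdf(y, mu, 1.0, 0.75).sum() for mu in grid]
mu_hat = grid[int(np.argmax(ll))]
print(mu_hat, np.quantile(y, 0.75))
```

This is exactly why drawing from a posterior under this likelihood differs from fitting a Gaussian likelihood and reading quantiles off the predictive distribution: here the p-th quantile is the model's location parameter itself.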

        • Ben, you can fit a 2D Dirichlet process mixture model and compute the posterior expectation of the conditional quantiles.


        • Ben,

It looks like v1.4.0 of brms introduced quantile regression: “Fit quantile regression models via family asym_laplace (asymmetric Laplace distribution).” But was that done by Paul implementing an asymmetric Laplace distribution on his end, not in the underlying rstan code?

      • But quantile regression (QR) does not require making untenable distributional assumptions about the response distribution as would be used by most Bayesian modeling approaches. QR allows one to estimate the empirical conditional cumulative distribution function, where the conditioning is done on combinations (linear or more complex) of the predictor variables. Heterogeneity pops out both in the regression coefficients (rate of change in the conditional cdf) and in the predicted responses.
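Classical quantile regression of the kind described here can be sketched by minimizing the pinball (check) loss directly, with no likelihood assumed for the response. A minimal illustration on simulated heteroskedastic data (the data-generating numbers are assumptions for the sketch, not from the discussion above):

```python
import numpy as np
from scipy.optimize import minimize

def fit_quantile_regression(X, y, p):
    """Linear quantile regression at level p by minimizing the pinball
    (check) loss -- no distributional assumption on the response."""
    def pinball(beta):
        u = y - X @ beta
        return np.mean(u * (p - (u < 0)))
    beta0 = np.zeros(X.shape[1])
    return minimize(pinball, beta0, method="Nelder-Mead").x

# Simulated data with heteroskedastic noise, so different quantiles
# have different slopes -- the "heterogeneity pops out" point above.
rng = np.random.default_rng(2)
n = 1000
x = rng.uniform(0, 2, n)
y = 1.0 + 2.0 * x + (0.5 + 0.5 * x) * rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

beta_50 = fit_quantile_regression(X, y, 0.50)
beta_90 = fit_quantile_regression(X, y, 0.90)
print(beta_50, beta_90)  # the 90th-percentile slope should exceed the median slope
```

With this data-generating process the true median slope is 2 and the true 90th-percentile slope is about 2.64, so the two fitted slopes should separate clearly.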

        • Brian:

All methods require making untenable assumptions. So-called nonparametric methods work by pooling in some way, thus assuming constancy or additivity or some similar assumption. The method you describe could well be useful, but it makes assumptions, no doubt about that.

    • Kaiser:

      1. See P.S. above.

2. I guess we should’ve followed basic principles of econ and charged a nominal $10 admission fee so as to restrict attendance to people who’d be likely to show up.
