Trying to make some sense of it all, but I can see it makes no sense at all . . . stuck in the middle with you

“Mediation analysis” is this thing where you have a treatment and an outcome and you’re trying to model how the treatment works: how much does it directly affect the outcome, and how much is the effect “mediated” through intermediate variables? Fabrizia Mealli was discussing this with me the other day, and she pointed out that the “direct effect” is defined only relative to a model: the direct effect of a treatment can be thought of as a residual effect, after accounting for all the other pathways being considered. This is not the same as simply fitting a multiple regression on the outcome and looking at the coefficient of the treatment, controlling for all the intermediate variables: that won’t work at all. But there are methods such as path analysis or mediation analysis that can fit these models, under some assumptions.
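
To see why just controlling for the mediators doesn’t give the direct effect, here is a minimal simulation sketch (all numbers invented for illustration, not from any real study). An unobserved variable U confounds the mediator and the outcome; the treatment is randomized, so the total effect comes out fine, but conditioning on the mediator badly biases the “direct” coefficient:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    T = rng.binomial(1, 0.5, n)                      # randomized treatment
    U = rng.normal(size=n)                           # unobserved mediator-outcome confounder
    M = 1.0 * T + U + rng.normal(size=n)             # mediator
    Y = 0.5 * T + 1.0 * M + U + rng.normal(size=n)   # true direct effect of T is 0.5

    # Total effect: regress Y on T alone. T is randomized, so this recovers 0.5 + 1.0*1.0 = 1.5.
    X_total = np.column_stack([np.ones(n), T])
    print(np.linalg.lstsq(X_total, Y, rcond=None)[0][1])    # ~1.5

    # "Direct effect" by also controlling for M: conditioning on M makes T and the
    # unobserved U dependent, so the coefficient on T is badly biased (~0 here, not 0.5).
    X_direct = np.column_stack([np.ones(n), T, M])
    print(np.linalg.lstsq(X_direct, Y, rcond=None)[0][1])   # nowhere near 0.5

Methods that do aim to recover direct and indirect effects have to bring in assumptions (for example, no unmeasured mediator-outcome confounding) that this little simulation deliberately violates.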

In the real world, it’s my impression that almost all the mediation analyses that people actually fit in the social and medical sciences are misguided: lots of examples where the assumptions aren’t clear and where, in any case, coefficient estimates are hopelessly noisy and where confused people will over-interpret statistical significance (see here, for example).

So it’s natural to take what would seem to be a conservative position and forget about mediation analysis, taking the fallback position of intent-to-treat analysis or, more generally, just trying to estimate treatment effects without untangling causal pathways. And indeed that’s pretty much what I’ve done, as you can see in the causal inference chapters in our books. In specific applications I’ve worked with particular causal mechanisms, but I’ve not tried to use general techniques for mediation analysis.

But . . . more and more I’ve been coming to the conclusion that the standard causal inference paradigm is broken. I’m talking about the paradigm under which a researcher dreams up an idea for a treatment and then designs a study, collects data, and estimates “the treatment effect.” It ain’t working: nowadays, treatment effects are small and variable, not large and stable (the low-hanging fruit have already been plucked). Our little experiments don’t have enough data to allow us to estimate real-world treatment effects and their variation; at the same time, we’re not efficiently using data to come up with our treatments. On both grounds, I think the way forward has to involve intermediate outcomes and modeling/estimation of causal paths.

So how to do it? I don’t think traditional path analysis or other multivariate methods of the throw-all-the-data-in-the-blender-and-let-God-sort-em-out variety will do the job. Instead we need some structure and some prior information.

A good starting point might be the literature on causal inference for time series (longitudinal data, as they call it in biostatistics), as such models have built-in structure. Another is the compliance setting, where there are natural constraints that can be assumed on the causal relationships.
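
To make the compliance example concrete, here is a toy sketch (made-up numbers, nothing from a real study) of a randomized encouragement with one-sided noncompliance. The natural constraints are that people assigned to control cannot get the treatment, and that assignment affects the outcome only through take-up (the exclusion restriction); given those, the intent-to-treat effect can be scaled up into an effect for compliers:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 50_000

    Z = rng.binomial(1, 0.5, n)              # randomized assignment (encouragement)
    complier = rng.binomial(1, 0.6, n)       # latent complier status
    D = Z * complier                         # treatment received: controls can't take it up
    Y = 1.0 * D + rng.normal(size=n)         # exclusion restriction: Z matters only through D

    itt_y = Y[Z == 1].mean() - Y[Z == 0].mean()   # intent-to-treat effect on the outcome (~0.6)
    itt_d = D[Z == 1].mean() - D[Z == 0].mean()   # effect of assignment on take-up (~0.6)
    print(itt_y / itt_d)                          # complier average causal effect, ~1.0

Those constraints are doing real work, and they are assumptions rather than facts; but they are the kind of structure that makes a causal pathway estimable at all.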

Fabrizia Mealli pointed me to some recent papers:

Bayesian inference for causal mechanisms with application to a randomized study for postoperative pain control

Identification and Estimation of Causal Mechanisms in Clustered Encouragement Designs: Disentangling Bed Nets Using Bayesian Principal Stratification

A Bayesian Semiparametric Approach to Intermediate Variables in Causal Inference

Augmented designs to assess principal strata direct effects

19 thoughts on “Trying to make some sense of it all, but I can see it makes no sense at all . . . stuck in the middle with you”

  1. “Trying to make some sense of it all, but I can see it makes no sense at all . . . stuck in the middle with you”

    A critical post (“trying to make sense of it all, but I can see it makes no sense at all”) about mediation analysis (“stuck in the middle”) using the above title: +1

    Picking lyrics for the blog post title from a song I actually like on top of that: +2

    “Stuck in the middle with you” – Stealers Wheel

    https://www.youtube.com/watch?v=DohRa9lsx0Q

  2. I have considered this issue repeatedly as an editor. It is almost to the point where I will not accept any conclusion from mediation analysis, unless it is viewed as purely exploratory, etc.

    I tried to think through some of the issues in my own very naive way, and the result is here:
    http://judgmentmisguided.blogspot.com/2016/06/alternatives-to-mediation-in-data.html

    (I should note that some of my concern was inspired by a retraction of a paper that relied on a mediation analysis, several years ago:
    https://retractionwatch.com/category/by-journal/judgment-and-decision-making/ .)

  3. Strongly recommend Judea Pearl’s recent readable book. He gives a rigorous account of both direct and indirect effects. Also, we should always (to the extent possible) be thinking about mediators, because science progresses via identification of mechanisms. This is not to defend common mediation analysis. (Pearl doesn’t.) But we should draw more causal diagrams with mediators than we do.

    • I agree that Pearl had this figured out a while ago, but not enough people in social sciences and elsewhere listened. And it is harder to understand mediation in the potential outcomes framework, although you can write down equations about it.

      If the key problem is that the total treatment effect estimates are noisy, then the fact that the total causal effect can be written as the difference between a natural direct effect and a natural indirect effect suggests that we should be trying to estimate both natural direct effects and natural indirect effects and rely on Rao-Blackwellization to obtain a more precise estimate of the total causal effect.
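
      For reference (this is standard notation, not something in the comment itself), the decomposition being invoked can be written with nested potential outcomes, where Y_{t,m} is the outcome under treatment t with the mediator held at m, and M_t is the mediator’s value under treatment t:

      TE  = E[Y_{1,M_1}] - E[Y_{0,M_0}]
      NDE = E[Y_{1,M_0}] - E[Y_{0,M_0}]
      NIE = E[Y_{1,M_0}] - E[Y_{1,M_1}]   (natural indirect effect for the reverse transition)

      so that TE = NDE - NIE, which is the difference referred to above. Identifying NDE and NIE separately does require additional assumptions about the mediator (no unmeasured mediator-outcome confounding, among others), so any precision gain is not free.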

  4. Thanks for posting these papers. As a social psychologist, I find them especially helpful. As a field, I think we know that mediation analyses are generally poorly done, yet they seem to be a huge asset in publishing in JPSP – the field’s flagship publication. Some mediation techniques by Preacher and others have really caught on – here is a 2010 publication on multilevel mediation that’s already been cited over 1,100 times (http://psycnet.apa.org/buy/2010-18042-001) – but it would be good to know how these analyses can be conducted more rigorously.

  5. The term “direct effects” is a misnomer, as I’ve told anyone who will listen.

    Really, it just means “the portion of the treatment effect that isn’t mediated by the mediators you’ve modeled,” and that portion may well be mediated by some other, unmodeled mediator. So it’s only defined relative to the mediator(s) you’ve chosen to model, not in and of itself.

    I think that’s different, though, than saying it’s “defined only relative to a model,” since it can be defined without reference to a model, e.g. with potential outcomes (as in the sketch at the end of this comment).

    The need to pre-specify a given causal pathway before fitting any of these models–mediation analysis or principal stratification–is one of a few big problems with the study of causal mechanisms as it stands, IMO.

    On another note, I found Lindsay Page’s paper in JREE (here) and the ensuing discussion–including push-back from Tyler VanderWeele–a really good introduction to this debate.
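
    To make the “relative to the mediator(s) you’ve chosen” point concrete (a sketch in potential-outcomes notation, not from the comment itself), the controlled direct effect fixes whichever mediator(s) you chose at a value m:

    CDE(m) = E[Y_{1,m}] - E[Y_{0,m}]

    Changing which mediators you fix, or the value m you fix them at, changes the estimand, even though no parametric model appears anywhere in the definition.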
