So-called fixed and random effects

Someone writes:

I am hoping you can give me some advice about when to use fixed and random effects models. I am currently working on a paper that examines the effect of . . . by comparing states . . .

It got reviewed . . . by three economists, and all suggest that we run a fixed effects model. We ran a hierarchical model in the paper that allows the intercept and slope to vary before and after . . . My question is: which is correct? We have run it both ways, and really it makes no difference which model you run; the results are very similar. But for my own learning, I would really like to understand which to use under what circumstances. Is the fact that we use the whole population reason enough to just run a fixed effects model?

Perhaps you can suggest a good reference to this question of when to run a fixed vs. random effects model.

I’m not always sure what is meant by a “fixed effects model”; see my paper on Anova for discussion of the problems with this terminology:
http://www.stat.columbia.edu/~gelman/research/published/AOS259.pdf
Sometimes there is a concern about fitting multilevel models when there are correlations; see this paper for discussion of how to deal with this:
http://www.stat.columbia.edu/~gelman/research/unpublished/Bafumi_Gelman_Midwest06.pdf

The short answer to your question is that, no, the fact that you use the whole population should not determine the model you fit. In particular, there is no reason for you to use a model with group-level variance equal to infinity. The literature offers conflicting recommendations on this topic (see my Anova paper for references), but, as I discuss in that paper, a lot of these recommendations are less coherent than they might seem at first.

7 thoughts on “So-called fixed and random effects”

  1. A good clarification of the difference between the two modeling strategies is found in Cameron/Trivedi "Microeconometrics" p.716f (google books: http://bit.ly/f5ZoqK). See also Wooldridge "Econometric Analysis etc.". Basically, the fixed-effects approach makes no distributional assumption about the time-invariant error component, whereas the random-effects approach assumes a distribution. In general, this makes fixed-effects estimates more "robust" but less efficient than random-effects estimates.

  2. Apologies to Andrew for using the terms fixed effects and random effects. I think I'm using them the way the economists would. It's just a shorthand for convenience here.

    So the short answer is that fixed effects models are typically "better" for causal inference. The longer answer is that they are often not as helpful as the economists think they are.

    It's true that in theory fixed effects models can control for certain kinds of unobserved heterogeneity in the data. So, for instance, if you have states measured over time and you include state fixed effects, you can claim to be controlling for (in addition to whatever covariates you have included in your linear model) characteristics of the state (observed and unobserved) that don't change over the time period covered by the study.

    Random effects models, on the other hand, assume independence between these state-level terms and all the other predictors in the model (including the treatment variable). But it is exactly this association (between unobserved state characteristics and the treatment variable) that we are trying to correct for in the fixed effects model. So it's likely inappropriate to assume it away.

    Two caveats (or two that I'll list here — there are more). One is that fixed effects models can create terrible problems when we are using them in this "units over time" way because (to make a long story short) we can end up controlling for post-treatment variables which can create bias.

    The second is that the random effects model can be extended in various ways to account for this association in ways that might be helpful (see, for instance, so-called "correlated random effects models").

    There's a discussion of this in Gelman and Hill (in the multilevel causal chapter). Also Michael Sobel (Columbia Stats and Soc) has written a nice paper on this that hopefully will get published somewhere soon.
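    The contrast above can be seen in a small simulation (a hypothetical sketch, not from the original discussion; all names and numbers are made up): an unobserved, time-invariant state characteristic `alpha` is correlated with the treatment `x`, so pooled regression is biased, while the "fixed effects" within transformation (demeaning by state) recovers the true effect.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_years = 50, 10
    true_effect = 2.0

    # Unobserved, time-invariant state characteristic, correlated with treatment.
    alpha = rng.normal(0.0, 1.0, n_states)
    state = np.repeat(np.arange(n_states), n_years)
    x = 0.8 * alpha[state] + rng.normal(0.0, 1.0, n_states * n_years)
    y = true_effect * x + alpha[state] + rng.normal(0.0, 1.0, n_states * n_years)

    # Pooled OLS ignores alpha and is biased (here, upward).
    b_pooled = np.polyfit(x, y, 1)[0]

    # "Fixed effects" via the within transformation: demean x and y by state.
    def demean(v):
        return v - (np.bincount(state, v) / n_years)[state]

    xd, yd = demean(x), demean(y)
    b_fe = (xd @ yd) / (xd @ xd)
    print(b_pooled, b_fe)  # b_fe lands near 2.0; b_pooled does not
    ```

    Including a dummy variable for each state gives the same slope as the demeaning shortcut; the demeaned version just makes it clear why the time-invariant `alpha` drops out.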

  3. Jennifer, Klaus:

    I think that what you're calling "fixed" effects makes a distributional assumption also–it's just that the assumption is that the group-level variance is infinity. I prefer to estimate the group-level variance from data. If you're worried about correlation with individual-level predictors, you can include group-level averages of the individual-level predictors in your model.

    I also recommend that people consider varying-slope models. Treatment effects varying by group can be important.
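    The group-level-averages suggestion (sometimes called the Mundlak device) can be sketched as follows: a hypothetical simulation, not from the post, with the same made-up setup of group effects correlated with the predictor. Once the group mean of `x` enters the regression, the coefficient on `x` matches the within estimate even though the group effects are correlated with `x`.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_groups, n_per = 50, 10
    true_effect = 2.0

    alpha = rng.normal(0.0, 1.0, n_groups)  # group effects, correlated with x
    g = np.repeat(np.arange(n_groups), n_per)
    x = 0.8 * alpha[g] + rng.normal(0.0, 1.0, n_groups * n_per)
    y = true_effect * x + alpha[g] + rng.normal(0.0, 1.0, n_groups * n_per)

    # Group mean of x, added as a group-level predictor (the "Mundlak device").
    xbar = (np.bincount(g, x) / n_per)[g]

    ones = np.ones_like(x)
    b_naive = np.linalg.lstsq(np.column_stack([ones, x]), y, rcond=None)[0][1]
    b_mundlak = np.linalg.lstsq(np.column_stack([ones, x, xbar]), y, rcond=None)[0][1]
    print(b_naive, b_mundlak)  # b_mundlak lands near 2.0; b_naive is biased
    ```

    The same group-mean trick carries over to a multilevel fit with a varying intercept (and, per the advice above, a varying slope), where the group-level variance is estimated from data rather than fixed at infinity.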

  4. The terminology is unfortunate but we are probably stuck with it. At least in econometrics, "random effects" means uncorrelated with the covariates and "fixed" means correlated, hence the need for some transformation such as differencing, orthogonal deviations, etc.

  5. I would like to add that at least the three commentators seem to agree on one definition of fixed and random effects. You could counter that at least my version differs from the other two, but that is more a problem of me being too sloppy and imprecise than of having a different understanding. I agree with the other two definitions, and I guess all three commentators would agree on one single formal definition. But you are probably right that you can find other sources with a different formal definition of fixed and random effects.

  6. Klaus:

    It's not just that I'm "probably right" that I "can find other sources." I actually did find other sources, and I quoted their definitions. It's all in my 2005 discussion paper in the Annals of Statistics (see link in the blog above).

    I encountered this problem because I saw a lot of different recommendations–from very authoritative sources in statistics–about when to use "fixed" and "random" effects, with different meanings for the terms. It's fine that the commenters on this blog can settle on a common definition, but that doesn't resolve for me the problem of huge ambiguity in the literature.

    This doesn't have to represent a problem for you–after all, you can do your research using whatever definitions work for you. But, for me, as a writer of books and methods articles, it's more of a concern, and I'm more aware of these definitional issues, which I've thought a lot about when writing BDA and ARM.
