I have always been taught that the randomized experiment is the gold standard for causal inference, and I always thought this was a universal view. Not among all econometricians, apparently. In a recent paper in Sociological Methodology, James Heckman refers to “the myth that causality can only be determined by randomization, and that glorifies randomization as the ‘gold standard’ of causal inference.”
It’s an interesting article because he takes the opposite position from every statistician I’ve ever spoken with, Bayesian or non-Bayesian. Heckman is not particularly interested in randomized experiments and does not see them as any sort of baseline; instead, he very much likes structural models, which statisticians are typically wary of because of their strong and (from a statistical perspective) nearly untestable assumptions. I’m sure that some of this dispute reflects the different questions being asked in different fields.
Heckman’s article is a response to this article [link fixed; thanks, Alex] by Michael Sobel, who argues that Heckman’s methods are actually not so different from the methods commonly used in statistics. It’s all a bit baffling to me, because I had thought that economists were big fans of randomized experiments nowadays.
P.S. As noted by an anonymous commenter, some controversy arose from this issue of Sociological Methodology, but I won’t go into detail here, since that controversy is not very relevant to the scientific issues raised in these papers, which is what I wanted to post about.