Archive of posts filed under the Causal Inference category.

“Edlin’s rule” for routinely scaling down published estimates

A few months ago I reacted (see further discussion in comments here) to a recent study on early childhood intervention, in which researchers Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeersch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor estimated that a particular intervention on young children had raised the incomes of young adults […]
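The rule in the title refers to the idea of multiplying published effect estimates by a shrinkage factor before taking them at face value, since selection for statistical significance tends to exaggerate effects. A minimal sketch of that adjustment (the factor 0.5 and the example number are placeholders for illustration, not values from the study or the post):

```python
# A minimal sketch of "Edlin's rule": deflate a published point estimate
# by a shrinkage factor before relying on it. The factor 0.5 below is a
# placeholder; the right value depends on how noisy and how selected
# the published estimate is.

def edlin_adjust(published_estimate: float, shrinkage: float = 0.5) -> float:
    """Scale a published effect estimate toward zero."""
    return shrinkage * published_estimate

# Hypothetical example: a study reports a 40% income gain.
print(edlin_adjust(0.40))  # -> 0.2, i.e., plan around a 20% gain instead
```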

My talks in Bristol this Wed and London this Thurs

1. Causality and statistical learning (Wed 12 Feb 2014, 16:00, at the University of Bristol): Causal inference is central to the social and biomedical sciences. There are unresolved debates about the meaning of causality and the methods that should be used to measure it. As a statistician, I am trained to say that randomized experiments are […]

Keli Liu and Xiao-Li Meng on Simpson’s paradox

XL sent me this paper, “A Fruitful Resolution to Simpson’s Paradox via Multi-Resolution Inference.” I told Keli and Xiao-Li that I wasn’t sure I fully understood the paper—as usual, XL is subtle and sophisticated, and I only get about half of his jokes—but I sent along these thoughts: 1. I do not think counterfactuals or […]
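For readers who want a concrete picture of the paradox itself: the classic kidney-stone numbers below show a treatment that does better within every subgroup yet worse in the pooled table. The code is my illustration, not anything from Liu and Meng's paper:

```python
# Simpson's paradox: treatment A beats treatment B within every subgroup,
# yet loses in the pooled table. Numbers are the classic kidney-stone
# example, used here only as an illustration.

groups = {
    # group: (A successes, A total, B successes, B total)
    "small stones": (81, 87, 234, 270),
    "large stones": (192, 263, 55, 80),
}

for name, (a_s, a_n, b_s, b_n) in groups.items():
    print(f"{name}: A {a_s / a_n:.0%} vs B {b_s / b_n:.0%}")

# Pool the subgroups: the comparison flips.
a_s = sum(g[0] for g in groups.values())  # 273
a_n = sum(g[1] for g in groups.values())  # 350
b_s = sum(g[2] for g in groups.values())  # 289
b_n = sum(g[3] for g in groups.values())  # 350
print(f"pooled: A {a_s / a_n:.0%} vs B {b_s / b_n:.0%}")  # A 78% vs B 83%
```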

Into the thicket of variation: More on the political orientations of parents of sons and daughters, and a return to the tradeoff between internal and external validity in design and interpretation of research studies

We recently considered a pair of studies that came out a while ago involving children and political orientation: Andrew Oswald and Nattavudh Powdthavee found that, in Great Britain, parents of girls were more likely to support left-wing parties, compared to parents of boys. And, in the other direction, Dalton Conley and Emily Rauscher found with survey […]

Postdoc with Liz Stuart on propensity score methods when the covariates are measured with error

Liz Stuart sends this one along:
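For context on the topic (this sketch is mine, not code from Stuart's project): propensity scores are typically estimated with a logistic regression of treatment on covariates, and classical measurement error in those covariates attenuates the fitted coefficients, flattening the estimated scores. A small simulation, assuming scikit-learn is available:

```python
# Simulated illustration of why covariate measurement error matters for
# propensity scores: the logistic slope on the noisy covariate is
# attenuated, so the estimated scores are flattened relative to the truth.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                        # true covariate
x_noisy = x + rng.normal(scale=1.0, size=n)   # measured with error
t = rng.binomial(1, 1 / (1 + np.exp(-x)))     # treatment depends on true x

for label, z in [("true x ", x), ("noisy x", x_noisy)]:
    slope = LogisticRegression().fit(z.reshape(-1, 1), t).coef_[0, 0]
    print(f"{label}: fitted propensity slope = {slope:.2f}")
# Expect roughly 1.0 for the true covariate and about half that for the
# noisy one (classical attenuation), so weighting on the noisy-score
# model leaves imbalance on the true covariate.
```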

Judea Pearl overview on causal inference, and more general thoughts on the reexpression of existing methods by considering their implicit assumptions

This material should be familiar to many of you but could be helpful to newcomers. Pearl writes: ALL causal conclusions in nonexperimental settings must be based on untested, judgmental assumptions that investigators are prepared to defend on scientific grounds. . . . To understand what the world should be like for a given procedure to […]
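To make the point concrete (my toy example, not Pearl's): the backdoor adjustment P(y | do(x)) = sum_z P(y | x, z) P(z) yields a causal conclusion only under the untestable assumption that z blocks all backdoor paths from x to y. A minimal numeric sketch with invented probabilities:

```python
# Backdoor adjustment on a toy 3-variable table (invented numbers).
# The causal reading is valid ONLY under the untestable assumption
# that z blocks every backdoor path from x to y.

# p[z][x][y]: joint probabilities for binary z, x, y (sums to 1).
p = {
    0: {0: {0: 0.20, 1: 0.05}, 1: {0: 0.05, 1: 0.10}},
    1: {0: {0: 0.10, 1: 0.10}, 1: {0: 0.10, 1: 0.30}},
}

def p_z(z):
    return sum(p[z][x][y] for x in (0, 1) for y in (0, 1))

def p_y_given_xz(y, x, z):
    return p[z][x][y] / (p[z][x][0] + p[z][x][1])

# P(y=1 | do(x=1)) = sum_z P(y=1 | x=1, z) P(z)
do_x1 = sum(p_y_given_xz(1, 1, z) * p_z(z) for z in (0, 1))
print(f"P(y=1 | do(x=1)) = {do_x1:.3f}")
```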

San Fernando Valley cityscapes: An example of the benefits of fractal devastation?

I know we have some readers in the L.A. area and you might be interested in a comment on our recent post regarding the beneficial (in a Jane Jacobs sense) effects of selective devastation of micro-neighborhoods in a city. I gave the example of London after the fractal effects of bombing in WW2, and BMGM […]

2013

There’s lots of overlap but I put each paper into only one category. Also, I’ve included work that has been published in 2013 as well as work that has been completed this year and might appear in 2014 or later. So you can think of this list as representing roughly two years’ work. Political […]

My talk at Leuven, Sat 14 Dec

Can we use Bayesian methods to resolve the current crisis of unreplicable research? In recent years, psychology and medicine have been rocked by scandals of research fraud. At the same time, there is a growing awareness of serious flaws in the general practices of statistics for scientific research, to the extent that top journals routinely […]

What predicts whether a school district will participate in a large-scale evaluation?

Liz Stuart writes: I am writing to solicit ideas for how we might measure a particular type of political environment, relevant to school districts’ likelihood of participating in federal evaluations (funded by the US Department of Education) of education programs. This is part of a larger project investigating external validity and the generalizability of results […]