Statistical modeling, causal inference, and social science

Interesting discussion by Berk Ozler (which I found following links from Tyler Cowen) of a study by Erwin Bulte, Lei Pan, Joseph Hella, Gonne Beekman, and Salvatore di Falco that compares two agricultural experiments, one blinded and one unblinded. Bulte et al. find very different results in the two experiments and attribute the difference to expectation effects (when people know they are receiving a treatment, they behave differently); Ozler is skeptical and attributes the different outcomes to various practical differences in the implementation of the two experiments.
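
To make the expectation-effects story concrete, here's a minimal simulation sketch (mine, not anything from Bulte et al. or Ozler, with all numbers invented for illustration): the true agronomic effect is the same in both trials, but unblinded farmers who know they got the new seed put in extra effort, so the unblinded comparison overstates the effect.

```python
# Minimal sketch of how an expectation effect can inflate an unblinded
# estimate relative to a blinded one. All numbers are made up.
import numpy as np

rng = np.random.default_rng(42)
n = 500                    # plots per arm (hypothetical)
true_effect = 0.5          # true yield gain from the improved seed
expectation_effect = 0.4   # extra yield from added effort when farmers know

def simulate(blinded: bool) -> float:
    """Return the estimated treatment effect from one simulated trial."""
    control = rng.normal(10.0, 1.0, n)
    treated = rng.normal(10.0 + true_effect, 1.0, n)
    if not blinded:
        # Farmers who know they received the new seed tend their plots harder.
        treated = treated + expectation_effect
    return treated.mean() - control.mean()

print("blinded estimate:  ", round(simulate(blinded=True), 2))
print("unblinded estimate:", round(simulate(blinded=False), 2))
```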

I’m reminded somehow of the notorious sham experiment on the dead chickens, a story that was good for endless discussion in my Bayesian statistics class last semester. I think we can all agree that dead chickens won’t exhibit a placebo effect. Live farmers, though, that’s another story.

I don’t have any stake in this particular fight, but on a quick reading I’m sympathetic to Ozler’s argument that this is all well known and should be placed in the larger context of estimating complex treatment effects. In that context, and given the references Ozler gives at the end of his blog post, it does look like Bulte et al. are overselling the novelty of their claims.

Beyond all this, I think it’s an impressive aspect of the field of economics that economists are the ones having this discussion (in parallel with similar discussion by political scientists such as Don Green, Chris Blattman, and Macartan Humphreys). Agricultural experiments are a longstanding topic in statistics (the great mid-twentieth-century statisticians Fisher, Yates, Cochran, and Neyman all worked in this area), but somewhere along the way we statisticians have put more of our applied effort into other fields, including biology, medicine, environmental health, even political science! Meanwhile, in the field of agriculture, the economists seem to have picked up the slack. I’m sure the agricultural and development economists can still learn a lot about causal inference from statisticians such as Paul Rosenbaum, but my impression is that the statisticians who work in agriculture nowadays are more focused on technical issues such as spatial correlation than on behavior and causality.

What would be really great would be to get psychologists more involved in causal inference, especially in this sort of example in which there is so much speculation about motivation and decision making.

2 thoughts on “Statistical modeling, causal inference, and social science”

  1. You might be underestimating statistical work in agriculture: http://bit.ly/Io1gGV

    Also, double blind is not the same as single blind. In fact the terminology is not useful (see the CONSORT statement).

    Better to be specific about who is blind to what (e.g., in a single-blind trial, is it the farmers or the researchers who are blind to treatment status?).

    When everyone is aware of treatment status there are multiple possible effects, e.g., a farmer placebo effect, a John Henry effect (controls may compensate by putting in extra effort), an experimenter effect, an effect on data collectors, and so on.

    Dead chicken brains, like ESP, may react to a placebo. I can’t recall the exact reference, but I am told there is a study out there where a dead salmon showed responses in fMRI scans. You just have to look hard enough :-)

  2. Perhaps what is going on is that there is an observer effect rather than a placebo effect.
