From Banerjee and Duflo, “The Experimental Approach to Development Economics,” Annual Review of Economics (2009):
One issue with the explicit acknowledgment of randomization as a fair way to allocate the program is that implementers may find that the easiest way to present it to the community is to say that an expansion of the program is planned for the control areas in the future (especially when such is indeed the case, as in phased-in design).
I can’t quite figure out whether Banerjee and Duflo are saying that they would lie and tell people that an expansion is planned when it isn’t, or whether they’re deploring that other people do it.
I’m not bothered by a lot of the deception in experimental research (for example, I think the Milgram obedience experiment was just fine), but somehow the above deception bothers me. It just seems wrong to tell people that an expansion is planned if it’s not.
P.S. Overall the article is pretty good. My only real problem with it is that when discussing data analysis, they pretty much ignore the statistical literature and just look at econometrics. In the long run, that’s fine—any relevant developments in statistics should eventually make their way over to the econometrics literature. But for now I think it’s a drawback in that it encourages a focus on theory and testing rather than modeling and scientific understanding.
Here are the titles of some of the cited papers:
Bootstrap tests for distributional treatment effects in instrumental variables models
Nonparametric tests for treatment effect heterogeneity
Testing the correlated random coefficient model
Asymptotics for statistical decision rules
Most of the material in the paper, and most of the references, are applied rather than theoretical, so I’m not claiming that Banerjee and Duflo are ivory-tower theorists. Rather, I’m suggesting that their statistical methods might not be allowing them to get the most out of their data, and that they’re looking in the wrong place when researching better methods. The problem, I think, is that they (like many economists) think of statistical methods not as a tool for learning but as a tool for rigor. So they gravitate toward math-heavy methods based on testing, asymptotics, and abstract theories, rather than toward complex modeling. The result is a disconnect between statistical methods and applied goals.