Randomized evaluation is not without its critics, who say that there is little benefit in learning rigorously about one context because the lessons will not translate into other contexts. In effect, the critics ask, can we generalize about human behavior across countries? That is an empirical question, and the growing body of evidence is helping us answer it scientifically. Hundreds of randomized evaluations of anti-poverty programs are now being conducted all over the world. While each evaluation is carefully crafted to describe one part of the development puzzle, many pieces are starting to come together.
The passage above is from Glennerster and Kremer in the Boston Review. A little exercise in logic. Why is randomization considered good? Because only random samples of a population are identical ex ante, that is, before the treatment is given. If we then take all the RCTs implemented so far, and assume the randomization was successfully implemented, we could argue that all those random samples are representative of the world population. Ergo, there would be no reason to think their results cannot be generalised and applied to all of humankind. I know the randomistas are going to blame me; it was just for play!

In fact, the invisible (random) hand acts at all levels. Venturing into philosophy, the sense of life, or theology, we could say that all individuals in a random sample are potentially identical: they would respond identically to identical situations. There is an IF: if the random god did not intervene by defining different random settings that tend to infinity. Randomly, people are born in a war context, in an area subject to tsunamis, in a country with corrupt elites or with an extremely rich and powerful, democratically elected prime minister, in a Christian or an Islamic family, and so on.

Coming back to reality, the goal of a responsible generalisation from RCTs should be to undress the experiments of the effects of these different random states of the world. In a world characterised by one context only, individuals would behave randomly (boringly?) in the same way. I guess Chris Blattman's Impact Evaluation 2.0 and 3ie's systematic reviews are seeking to do exactly that.
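To make the logic concrete, here is a minimal simulation sketch (all numbers, contexts, and effect sizes are hypothetical, purely for illustration). It shows the two halves of the argument: within a single context, randomization makes treatment and control groups identical ex ante, so the local estimate is reliable; but if the true effect depends on the "random state of the world," the estimate from one context need not carry over to another.

```python
import random

random.seed(0)

def run_rct(true_effect, n=10_000):
    """Randomize n ex-ante identical individuals to treatment or control
    and return the difference in mean outcomes (the estimated effect)."""
    treated, control = [], []
    for _ in range(n):
        baseline = random.gauss(0, 1)  # same outcome distribution ex ante
        if random.random() < 0.5:
            treated.append(baseline + true_effect)
        else:
            control.append(baseline)
    return sum(treated) / len(treated) - sum(control) / len(control)

# Two hypothetical "states of the world" where the same programme
# has different true effects (e.g. a stable vs. a conflict setting).
effect_context_a = run_rct(true_effect=2.0)  # estimate lands near 2.0
effect_context_b = run_rct(true_effect=0.5)  # estimate lands near 0.5

print(effect_context_a, effect_context_b)
```

Each trial recovers its own context's effect well, yet naively exporting context A's number to context B would be badly wrong, which is the point of stripping out the "random states of the world" before generalising.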