Chinmaya Holla
2 min read · Feb 8, 2015


The Rabbit Hole of Too Much Evidence

I recently came across this fascinating argument by Gulzar Natarajan that we may be moving in the wrong direction with our RCT-ification of development problems. The central thesis is that institutions have many good reasons to rely on their accumulated memory and knowledge to design solutions, rather than employ a long-winded, expensive randomized controlled trial (RCT) for every issue. RCTs, he argues, are most useful in edge cases where the organization needs to “tie up the loose ends”.

My main concern with RCTs has been the transition to scale: what happens when a rigorously tested program, delivered with intensive support and monitoring, is rolled out to settings that come with their own capacity constraints? RCTs usually come with the caveat of limited external validity, in that results may not travel to contexts dissimilar to the one in which the program was evaluated. And to what extent can these expensive evaluations be justified by the cash-starved public institutions of developing countries? Some of those concerns are mirrored in this piece. Lant Pritchett puts forth an additional point about what he calls thin versus thick accountability: the difference between answering to easily identifiable, measurable objectives and tackling questions about complex institutional mechanisms.

To clarify, I’m a staunch supporter of RCTs and believe they’re the best thing to have happened for calling bullshit on most development activities. They provide a powerful anchor for any argument, and in many cases amplify weaker voices. However, I don’t think we’ve yet found the right way to position RCTs in the larger development conversation.

PS: Here’s an interesting case study of RCTs failing to predict the externalities of the programs they recommend.
