Originally posted on AidData's First Tranche here.
Thursday, May 9, 2013
The International Rescue Committee and researchers from Columbia University conducted an intensive assessment of Tuungane, a community-driven reconstruction (CDR) program in the Democratic Republic of Congo (DRC). Tuungane organizes elections of village committees and provides training in leadership, good governance, and social inclusion, with the goal of making local governments more accountable, efficient, transparent, and participatory. By nearly all measures, the program is massive:
- Targeted beneficiary population: 1,780,000 people.
- Budget for phase one: USD 46,309,000.
- Geographic distribution: thousands of kilometers.
Evaluators used an impressively designed, rigorous, and robust randomized evaluation to assess the impacts of the program. Of the 34 outcome measures evaluated, only two were found to be statistically significant in the expected direction (willingness of the population to complain and trust in others). Neither outcome is significant at the 99% confidence level. And, commendably, the evaluators pre-committed to an analysis plan and have stuck to it in their reporting.
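To put the two significant results in perspective, a quick back-of-the-envelope calculation is useful. If we model the 34 outcome tests as independent trials with a 5% false-positive rate (a simplifying assumption I'm making here; the actual outcomes are surely correlated, and the pre-analysis plan may handle multiplicity differently), then finding two "significant" results is roughly what chance alone would produce:

```python
from math import comb

# Under the null hypothesis of no program effect, each of the 34
# outcome tests has a 5% chance of a false positive (alpha = 0.05).
n_tests, alpha = 34, 0.05

# Expected number of "significant" results by chance alone.
expected_false_positives = n_tests * alpha  # 1.7

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n independent trials."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability of seeing at least two significant results purely by
# chance, treating the tests as independent (a strong assumption).
p_two_or_more = 1 - binom_pmf(0, n_tests, alpha) - binom_pmf(1, n_tests, alpha)

print(f"Expected false positives: {expected_false_positives:.1f}")  # 1.7
print(f"P(>= 2 by chance): {p_two_or_more:.2f}")  # about 0.51
```

In other words, under these assumptions we would expect about 1.7 spurious "significant" outcomes, and better-than-even odds of seeing at least two, which is one reason the overall picture reads as a null result.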
By most standards, these results would be pretty damaging to the community-driven development (CDD) agenda. Unsurprisingly, and rightly, they have led to calls for more randomized evaluations on the topic. This can be a good thing, as replication of RCTs is crucial.
The World Bank currently supports 400 CDD projects (a sister approach to CDR) in 94 countries, valued at almost $30 billion, so more evidence should arrive soon. But how do we separate the push for replication to identify the actual impact of CDD from efforts to simply confirm previous biases?