Originally posted on AidData's First Tranche here.
Thursday, May 9, 2013

The International Rescue Committee and researchers from Columbia University conducted an intensive assessment of Tuungane, a community-driven reconstruction (CDR) program in the Democratic Republic of Congo (DRC). Tuungane organizes elections of village committees and provides training in leadership, good governance, and social inclusion, with the goal of making local governance more accountable, efficient, transparent, and participatory. By nearly all measures, the program is massive:

- Targeted beneficiary population: 1,780,000 people.
- Budget for phase one: USD 46,309,000.
- Geographic distribution: thousands of kilometers.

The evaluators used an impressively designed, rigorous, and robust randomized intervention to assess the impacts of the program. Of the 34 outcome measures evaluated, only two were found to be statistically significant in the expected direction (willingness of the population to complain, and trust in others), and neither is significant at the 99% confidence level. Commendably, the evaluators pre-committed to an analysis plan and have stuck to it in their reporting.

By most standards, these results are pretty damaging to the community-driven development (CDD) agenda. Unsurprisingly, and correctly so, they have led to calls for more randomized evaluations on the topic. This is a good thing, as replication of RCTs is crucial. The World Bank currently supports 400 CDD (a sister approach to CDR) projects in 94 countries, valued at almost $30 billion, so more evidence should arrive soon. But how do we separate the push for replication that identifies the actual impact of CDD from efforts that merely confirm previous biases?
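To put those numbers in perspective, a quick back-of-the-envelope calculation is useful. This is my own sketch, not part of the evaluation; it assumes the tests are independent and uses the conventional 5% significance threshold, which the post does not specify:

```python
from math import comb

n_tests, alpha = 34, 0.05  # 34 outcomes; 5% threshold is an assumption

# Expected number of "significant" results if every true effect were zero
expected_false_positives = n_tests * alpha  # 1.7

# Probability of seeing 2 or more significant results purely by chance,
# modeling the count of false positives as Binomial(34, 0.05)
p_two_or_more = 1 - sum(
    comb(n_tests, k) * alpha**k * (1 - alpha) ** (n_tests - k)
    for k in range(2)
)

print(f"expected by chance: {expected_false_positives:.1f}")
print(f"P(at least 2 significant by chance): {p_two_or_more:.2f}")
```

Under these assumptions, roughly 1.7 of the 34 outcomes would come up "significant" even with no real effects, and two or more would do so about half the time, which is part of why the two marginal findings carry so little weight.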
Originally posted on AidData's First Tranche here.
Wednesday, July 3, 2013

Open data has generated a lot of buzz recently, prompting governments to make increasing amounts of data publicly accessible and catalyzing new partnerships with private sector and civil society actors around the use and reuse of this information. But how much of the open data movement is flash, and how much is substance? As an AidData Summer Fellow with the Center of Environmental and Agricultural Policy Research, Extension and Development (CEAPRED) in Nepal, I have witnessed firsthand that ensuring open aid reporting programs are responsive to the real needs of citizens is key to maximizing their impact. My recent participation in Open Nepal's Data Literacy Bootcamp underscores this point.

On June 3rd and 4th, I took part in Open Nepal Week's Data Literacy Bootcamp in Kathmandu, supported by the Open Aid Partnership. A coalition of organizations, including Freedom Forum, Young Innovations, NGO Federation of Nepal, Development Initiatives, and Development Gateway, trained over 80 Nepali journalists, developers, coders, and civil society representatives to find, extract, and analyze public data. Participants learned how to consolidate data and create visualizations using Geographic Information Systems (GIS) platforms and Java applications. A 48-hour competition let participants put their new knowledge to practical use by conceptualizing business plans for digital or web applications that leverage open data. The sponsors, Google, the World Bank Institute, and the African Media Initiative, envisioned the competition as a way of empowering citizens to use open data. Varsha Upraity, a Research Officer at CEAPRED, proposed an app that would report on whether users are meeting their daily nutritional requirements and recommend local markets, clinics, and support groups.
Engaging remote and severely disadvantaged communities proved to be an obstacle in planning Varsha's app: it would be primarily web- or SMS-based and thus inaccessible in many high-need areas. Inevitably, some users would be left out. This raised a critical question: how can we prevent that from happening?

Designing a data-driven application responsive to the needs of even the most disadvantaged communities is a challenge that is not unique to Varsha's experience. Transparent data reporting and visualizations offer a powerful way to inform the public about development progress. Yet there are broader questions for policymakers and open data advocates about how the Open Data movement defines its goals and what success should look like. The answers are critical to determining how much inclusivity and consideration of citizens factor into the creation of new apps.

Experiences such as the Nepal Data Literacy Bootcamp remind us that open data, and the applications it spawns, can help build citizens' capacity to track and evaluate information on their country's development. It is equally apparent that merely releasing data or developing new data-driven apps does not necessarily address issues of inclusivity, representation, and participation across society, and these issues remain largely unaddressed in spite of the rapid growth of the Open Data movement. I would submit that, as the Open Data movement evolves in Nepal and elsewhere, these issues should be front and center and should inform the design of solutions that leverage both data and human capabilities.

Madeline Clark is an IPD Graduate Research Affiliate and AidData Summer Fellow with the Center of Environmental and Agricultural Policy Research, Extension, and Development (CEAPRED) in Nepal.

Originally posted on AidData's First Tranche here.

Thursday, July 11, 2013

In a recent Brookings article, entitled "How Effective is the World Bank at Targeting Sub-National Poverty in Africa?
A Foray into the Murky World of Geocoded Data," Laurence Chandy, Natasha Ledlie, and Veronika Penciakova discuss the use of geocoded data to target aid at the sub-national level. Highlighting the World Bank's Mapping for Results and IFPRI's (International Food Policy Research Institute) Harvest Choice data collection initiatives, the article explores the allocative efficiency of aid with respect to poverty at the first-order administrative level (i.e., province, state, or governorate). AidData staff, students, and faculty also spend a lot of time collecting high-resolution subnational aid information to assess the targeting efficiency of aid, and along the way we have learned that it is critical to map aid from a variety of sources, rather than a single donor, to fully understand aid distribution in any given country.

Using a wealth of aid data from multiple donors in Malawi, we can build on the Brookings analysis by examining how donors jointly distribute aid within a single country. The following map shows the locations of aid activities for all bilateral and multilateral donors in Malawi down to the second administrative level (i.e., district). To make a preliminary assessment of the efficiency of aid targeting, we assume an ideal scenario in which total aid is distributed sub-nationally in proportion to the number of poor people residing in each district: a district with 5% of Malawi's poor should receive 5% of Malawi's aid receipts. We use poverty headcounts and population data from Malawi's Third Integrated Household Survey (IHS3) 2010-2011 to calculate these proportions and compare them to geocoded aid disbursements collected by CCAPS and AidData. Using this benchmark, we can identify how far the actual allocation of aid deviates from the "ideal". If the difference for a given district is zero, one might argue it is receiving resources appropriate to its portion of Malawi's overall poverty burden.
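The proportional-share benchmark just described can be sketched in a few lines. The district names, poverty shares, aid shares, and totals below are invented purely for illustration; the real analysis uses IHS3 headcounts and the CCAPS/AidData geocoded disbursements:

```python
# Hypothetical districts: (share of national poor, share of national aid)
districts = {
    "District A": (0.05, 0.08),
    "District B": (0.20, 0.12),
    "District C": (0.75, 0.80),
}

total_aid = 100_000_000  # total aid to the country (USD, invented)

for name, (poor_share, aid_share) in districts.items():
    ideal = poor_share * total_aid   # "fair share" under the benchmark
    actual = aid_share * total_aid   # what the district actually received
    deviation = actual - ideal       # >0: over its share; <0: under it
    print(f"{name}: deviation = {deviation:+,.0f} USD")
```

With these invented figures, District B's deviation of -8,000,000 USD would flag it as receiving far less than its poverty share, which is exactly the kind of gap the benchmark is meant to surface.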
A significant positive or negative difference might indicate that a district is getting more or less than its fair share. The actual results show a relatively well-targeted, yet imperfect, distribution of national aid resources. Poverty rates alone provide a poor proxy for targeting in Malawi because a large portion of the poor live in populous areas with lower poverty rates. This touches on Chandy's dilemma: should we target aid to areas with higher numbers of poor people, higher proportions of poor people, or both? Are we more concerned with reaching as many lives as possible or with reducing pockets of highly concentrated poverty?

While these geocoded Malawi data represent a tremendous boon for aid transparency and aid effectiveness research, there are limits to what we can learn from them. Donor project documents do not disaggregate total project funding among project activity locations, so, absent better reporting from donors, when project activities occurred in multiple districts we assumed an equal distribution of resources across all locations. This is an imperfect description of where resources actually hit the ground, and it does not convey differential aid impact per dollar spent across sectors and environmental contexts.

We agree with Chandy that targeting should not be equated with proximity to the poor. Such a metric does not take into account other relevant contextual data -- on disease rates, past performance of development projects, the quality of local governance, climate change vulnerability, etc. -- that should presumably also inform aid allocation decisions. Geocoded project information supports more nuanced analysis, but merely generating more data is insufficient. For donors to be held accountable, the development community must respond to the increasing availability of geocoded data by coalescing around a set of robust methodologies for assessing the quality of subnational aid targeting efforts.

This post was written by Michael G.
Findley, Josiah Marineau, Reid Porter, Jeanette Cunningham Rottas, and Kelly Steffen. Michael G. Findley is an Assistant Professor of Government at the University of Texas at Austin. Josiah Marineau, Reid Porter, Jeanette Cunningham Rottas, and Kelly Steffen are Research Fellows with UT-Austin's Innovations for Peace and Development (IPD) research team.