Lant Pritchett’s review: “Using ‘Random’ Right: New Insights from IDinsight Team”


Read Lant Pritchett’s review of IDinsight’s paper on Decision-Focused Evaluations, at the Center for Global Development blog.


Commissioned by the Hewlett Foundation and published in the 3ie Working Paper Series

Read the paper and watch the full presentation on why and how we need to change impact evaluation so that it reaches its full potential to inform development action and improve social outcomes.

In the paper, we argue that to fully realize the potential of impact evaluation to improve social outcomes, we must approach impact evaluation as we would any other development intervention and ask: how can we make it more cost-effective at improving social outcomes?

With this lens, we argue that to more effectively inform development action, impact evaluations must be adapted to serve as context-specific tools for decision-making that feed into local solution-finding systems. Towards this end, a new kind of impact evaluation has recently emerged, one that prioritises the implementer’s specific decision-making needs over potential contributions to a global body of knowledge. We call these ‘decision-focused evaluations’: evaluations that are driven by implementer demand, tailored to implementer needs and constraints, and embedded within implementer structures. By reframing the primary evaluation objective, decision-focused evaluations allow implementers to generate and use rigorous evidence more quickly, more affordably and more effectively than ever before. (Language adapted from 3ie.)

Here are our main arguments:

1. To date, the field of rigorous impact evaluation in international development has been dominated by “knowledge-focused evaluations”, with a number of notable successes. However, several changes to the status quo could significantly increase the extent to which impact evaluations improve social outcomes.

2. In the future, we need to distinguish between “decision-focused evaluations” (DFEs) and “knowledge-focused evaluations” (KFEs):

a. DFEs are evaluations that are fundamentally demand-driven and responsive to the timelines, budgets and operational constraints of implementers – DFEs are deliberately designed to directly inform the scale-up decisions of funders and implementers.
b. KFEs are evaluations that are designed with the primary goal of contributing to a “global body of evidence” or to “development theory”, usually through publication in an academic journal.
c. Both DFEs and KFEs are rigorous, counterfactual-based quantitative impact evaluations.

3. We need to greatly increase the number of “decision-focused evaluations”, as a tool to directly inform the decisions of a specific implementer in a specific geography at a specific time.

4. Use “knowledge-focused evaluations” where they add the most value – when the primary objective is to contribute to development theory or when external validity is likely to be high. Given the growing body of evidence on the low generalizability of impact evaluation results (e.g., Bold et al. (2013); Pritchett and Sandefur (2014); Vivalt (2015)), the higher cost and longer timelines of KFEs are best justified when the primary evaluation objective is to contribute to development theory. Conversely, KFEs are likely not appropriate when the primary objective is to guide a specific funding or implementation decision in a specific context.

5. Clear objectives, appropriate evaluation type – all parties involved in an evaluation (funder, evaluator and implementer) should explicitly agree from the outset on the primary objective of the evaluation, and then design it accordingly, including the extent to which it will have the characteristics of a KFE versus a DFE.

View Dr. Neil Buddy Shah’s full presentation at 3ie’s Evidence Week, the presentation slides, and read the paper.