An impact evaluation uses rigorous methods to determine the changes in outcomes that can be attributed to a specific intervention through cause-and-effect analysis. To do this, an impact evaluation must account for the counterfactual (what would have occurred without the intervention), typically through an experimental or quasi-experimental design that compares treatment and comparison groups.
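In notation, the quantity an impact evaluation tries to estimate can be sketched as follows; this is the standard potential-outcomes formulation and is included here only as an illustration:

\[ \text{Impact} \;=\; E[\,Y_1 \mid T = 1\,] \;-\; E[\,Y_0 \mid T = 1\,] \]

where \(Y_1\) is the outcome with the intervention, \(Y_0\) the outcome without it, and \(T = 1\) denotes the group that received the intervention. The second term is the counterfactual: it can never be observed for the treated group itself, which is why a credible control or comparison group is needed to stand in for it.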
Purpose of Impact Evaluation
Impact evaluations often serve an accountability purpose, determining whether and how well a program worked. They can also help answer program design questions by identifying which of several alternative approaches is most effective.
When to do an Impact Evaluation
The World Bank has developed the following guidelines for determining when an impact evaluation may be useful [1]:
- If it is an innovative intervention scheme, such as a pilot program.
- If the intervention is to be scaled up or replicated in a different setting.
- If the intervention is strategically relevant and will require substantial resources.
- If the intervention is untested.
- If the intervention results will influence key policy decisions.
Design of Impact Evaluations
There are many design options for impact evaluations depending on a variety of factors (see the USAID Technical Note and the OECD Principles for Impact Evaluation for helpful decision trees on design choice). However, designs generally fall into two categories, both of which rely on a counterfactual: experimental designs, in which a control group is created through random assignment, and quasi-experimental designs, in which a non-randomized comparison group is used.
Common Experimental Designs include the following:
- Randomized Controlled Trial (RCT)
- Randomized Offering of Intervention
- Randomized Promotion of Intervention
- Multiple Treatment Design
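To make the experimental logic concrete, the sketch below shows, with invented data, how an RCT-style impact estimate is computed: units are randomly assigned to treatment or control, and the impact is estimated as the difference in mean outcomes between the two groups. This is a minimal illustration of the general approach, not code drawn from any of the guides cited here.

```python
import random
import statistics

random.seed(0)

# Hypothetical post-intervention outcomes (e.g., test scores) for 10 units.
outcomes = [62, 71, 58, 75, 69, 64, 73, 60, 68, 66]

# Randomly assign half of the units to the treatment group, as in an RCT.
ids = list(range(len(outcomes)))
random.shuffle(ids)
treatment_ids = set(ids[:5])

treated = [outcomes[i] for i in ids if i in treatment_ids]
control = [outcomes[i] for i in ids if i not in treatment_ids]

# Because assignment was random, the control group approximates the
# counterfactual, so the simple difference in means estimates the impact.
impact_estimate = statistics.mean(treated) - statistics.mean(control)
print(f"Estimated impact: {impact_estimate:.1f}")
```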
Common Quasi-Experimental Designs include the following:
- Difference-in-differences
- Matched Comparisons
- Regression Discontinuity
- Interrupted Time Series
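As one illustration of a quasi-experimental estimator, the sketch below shows the basic difference-in-differences calculation: the change observed in a non-randomized comparison group stands in for the counterfactual trend and is subtracted from the change observed in the treatment group. The numbers are hypothetical and only demonstrate the arithmetic.

```python
# Hypothetical mean outcomes measured before and after the intervention.
treatment_before, treatment_after = 50.0, 65.0     # group receiving the intervention
comparison_before, comparison_after = 48.0, 55.0   # non-randomized comparison group

# Difference-in-differences: net the comparison group's change (the assumed
# counterfactual trend) out of the treatment group's change.
treatment_change = treatment_after - treatment_before       # 15.0
comparison_change = comparison_after - comparison_before    # 7.0
impact_estimate = treatment_change - comparison_change      # 8.0

print(f"Difference-in-differences impact estimate: {impact_estimate:.1f}")
```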
Impact evaluations are not appropriate for every intervention, as they require data and financial resources that may not be available. Evaluators usually work in conjunction with project planners and donors when proposing to conduct an impact evaluation.
See the following list of resources for a more detailed discussion of impact evaluation and the available designs.
List of Resources
InterAction, Impact Evaluation Guidance Notes
OECD Principles of Impact Evaluation
World Bank (2011) Impact Evaluation in Practice
References:
[1] World Bank (2011), Impact Evaluation in Practice. http://siteresources.worldbank.org/EXTHDOFFICE/Resources/5485726-1295455628620/Impact_Evaluation_in_Practice.pdf
About the Author
Kirsten Bording Collins is an experienced evaluation specialist providing consulting services in program evaluation, planning, and project management. She has over ten years of combined experience in the nonprofit, NGO, and public sectors, working both in the U.S. and internationally. Kirsten's areas of expertise include program evaluation, planning, project management, evaluation training and capacity-building, mixed methods, qualitative analysis, and survey design. Kirsten holds an MA in International Administration from the Korbel School of International Studies, University of Denver. Kirsten grew up in Copenhagen, Denmark and currently lives in Washington, DC.
Connect with Kirsten on LinkedIn.