Measurement and evaluation are integral parts of my work as an international development professional. In my experience, understanding "the why" and "the how" is key to making sense of a project's outputs, outcomes, and, when appropriate, impact. A mixed-methods approach that combines qualitative and quantitative data collection techniques gives international development professionals insight into what works, what does not, and why. Such an approach helps us spend limited funds more effectively, serving local project communities better while ensuring accountability to donor agencies.
Many international development projects where I have worked do not have Logical Frameworks (LogFrames) or Evaluation Statements of Work (SOWs) that connect a program's theory of change to its activities and to indicators for measuring progress. The theory of change matters because it pushes us to investigate the root causes of the societal problem we are trying to address. When we understand the causes of a problem, we can craft better interventions to address it. Theory also helps us articulate how our activities fit into the intervention, pinpointing how we will measure progress and outcomes. All of this is fleshed out in a well-developed LogFrame and Evaluation SOW.
I once came into a project midstream where the donor had required no LogFrame or Evaluation SOW beyond a couple of paragraphs in the proposal stating that "we will track progress using appropriate methods." The implementing agency was required to report only the number of people trained. There were no indicators and no valid way to measure progress toward any goal, apart from counting who attended trainings. As we soon found, tracking attendance was practically meaningless: some participants played with their phones during the trainings, while others read the newspaper. In retrospect, donor money was wasted (gasp!), in part because program implementers had no real goal to reach, and no way to measure whether they were reaching it.
This is unfortunately a very common scenario. Sometimes a donor does not require meaningful measurement and evaluation; sometimes we get so wrapped up in our activities that we do not think about how we are measuring progress or effectiveness. We do the same activities we have always done, assuming we are reaching a goal because, to borrow from the example above, we have trained so many people. If donors do not demand more, we may not demand more of ourselves, especially if we lack the internal knowledge and culture to implement and support measurement and evaluation processes.
To be more effective international development professionals, we need solid evaluation designs that incorporate both qualitative and quantitative data collection techniques, as appropriate to our evaluation framework. A formative evaluation gives project managers insight into program effectiveness midstream; as we monitor outputs, we learn whether a project is working. Perhaps more importantly, such an evaluation can pinpoint how to adjust activities during implementation, if necessary, to meet goals and objectives. Summative evaluations help us understand program outcomes and impact after the close of a project. This is valuable as we craft future projects that take into account what worked and what did not work previously.
When my international development students ask for career advice, I recommend training in a number of areas, including (but not limited to) problem analysis, evaluation design, cross-cultural communication, languages, and qualitative and quantitative methods. I also recommend gaining practical measurement and evaluation experience through volunteer work or coursework with hands-on assignments. When I teach monitoring, measurement, and evaluation courses online, I always require my students to carry out practical assignments that build both professional skills and their resumes.
About the Author
Dr. Beverly Peters has more than twenty years of experience teaching, conducting qualitative research, and managing community development, microcredit, infrastructure, and democratization projects in several countries in Africa. As a consultant, Dr. Peters worked on EU- and USAID-funded infrastructure, education, and microcredit projects in South Africa and Mozambique. She also conceptualized and developed the proposal for the Darfur Peace and Development Organization's women's crisis center, which provides physical and economic assistance to women survivors of violence in the IDP camps in Darfur. Dr. Peters holds a Ph.D. from the University of Pittsburgh.