Part of designing a project intervention is articulating a theory of change. We first identify a societal problem, flesh out its causes, and then craft an intervention that addresses that problem. The potential causes inform the assumptions we make about the societal problem and about how our intervention will address it. This process allows us to develop a theory of change that articulates how our intervention activities address the societal problem, anticipating certain outputs and outcomes. We measure those outputs and outcomes through indicators, and we capture all of this in a Logical Framework, or LogFrame, that helps us monitor our project and track its results.
What evidence do we need to support a claim that our intervention resulted in a particular outcome, or that it actually contributed to the resolution of a problem or caused a change? Can our evaluation design prove causation? What data do we need? What methods would we use to collect that data? What qualitative methods might help to show causal mechanisms?
The research process itself can give us insight into these issues. Let’s take a question that is probably unanswerable for us as we read this: How many golf balls are up in the air at this very moment in the US state of Florida?
One of our first steps is to define what we mean by a golf ball. Does our universe of balls include genuine, plastic, oversize, or toy golf balls? And how do we define “up in the air”? What if someone merely dropped a ball? Is it up in the air before it hits the ground?
Another step would be to think through the many scenarios that might cause a golf ball to be up in the air. I might assume that the majority of golf balls in the air are the result of people hitting balls while at play on courses, but this will not capture all cases. I need to consider people who might be throwing a golf ball up in the air, or balls that might be up in the air for other reasons altogether.
Satellite imagery might be the only tool we could use to measure with any certainty the number of balls in the air outdoors. This data is not going to include golf balls in the air indoors, however. Assuming we are only interested in golf balls in the air outdoors, what if our budget does not allow us to purchase satellite images? Can I answer the question with any degree of certainty using other methods? What data do I need? What indicators are going to give me insight into how many golf balls are up in the air right now in Florida?
It would be helpful to know something about the state of Florida. The state attracts many vacationers, given its warm weather and resorts. On any given day of the week, assuming the weather is favorable, people will likely be playing golf outside. Knowing something about the population playing golf might help me to capture data related to the number of balls in the air. The state probably has some sort of register that will tell us how many public and private golf courses are in operation, and the courses themselves might have data about how many people golfed on a particular day and time, and whether they golfed in pairs, threesomes, or foursomes. How accurate is this data likely to be? Is it going to be difficult to collect? Can I accurately estimate how many golf balls are up in the air based on this data?
Additional information might give me more insight. It might be helpful to know the size and capacity of the courses. I also need to consider the season, the weather, the actual day, and the time of day.
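To see how these pieces fit together, consider a minimal back-of-the-envelope sketch of such an estimate, written here in Python. Every figure in it, from the number of courses to the hang time of a shot, is an illustrative assumption rather than real data; the point is how quickly the estimate inherits the quality of its underlying indicators and assumptions.

```python
# Back-of-the-envelope (Fermi) estimate of golf balls in the air in
# Florida at one instant. Every number below is an illustrative
# assumption, not real data.

courses = 1_000          # assumed active golf courses in Florida
groups_per_course = 30   # assumed groups playing at a given moment
golfers_per_group = 3.5  # assumed average group size (pairs to foursomes)

shots_per_round = 90     # assumed strokes per golfer per round
round_hours = 4.5        # assumed hours to complete a round
hang_time_sec = 4.0      # assumed seconds a struck ball stays airborne

golfers_on_courses = courses * groups_per_course * golfers_per_group

# Fraction of any given instant that one golfer's ball is in the air:
shots_per_second = shots_per_round / (round_hours * 3600)
airborne_fraction = shots_per_second * hang_time_sec

balls_in_air = golfers_on_courses * airborne_fraction
print(f"Estimated golf balls in the air right now: {balls_in_air:,.0f}")
```

Adjusting for season, weather, day, and time of day would simply scale these inputs up or down, which is exactly why each assumption needs to be defined, defensible, and documented.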
It is probably not feasible for me to collect data from every golf course in the state. I am likely not going to be able to interview those managing and playing on every golf course to gather data either. How am I going to sample the golf courses? Can I rely on the numerical data that the golf courses in the sample provide? How am I going to train people to carry out interviews? How can I ensure that they are recording data consistently and accurately?
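To make the sampling question concrete, here is one possible approach sketched in Python: stratify a hypothetical state register of courses by region and draw a random sample proportional to each region’s share. The register, region labels, and sample size are all invented for illustration.

```python
import random

random.seed(42)  # fix the seed so the draw is reproducible

# Hypothetical sampling frame: (course_name, region) pairs drawn
# from an imagined state register of golf courses.
register = [(f"Course {i} ({region})", region)
            for region in ("North", "Central", "South")
            for i in range(1, 101)]

def stratified_sample(frame, n):
    """Draw a random sample proportional to each region's share of the frame."""
    strata = {}
    for course, region in frame:
        strata.setdefault(region, []).append(course)
    sample = []
    for region, courses in strata.items():
        k = round(n * len(courses) / len(frame))
        sample.extend(random.sample(courses, k))
    return sample

print(stratified_sample(register, 30))  # ten courses from each region here
```

A transparent, reproducible sampling strategy like this does not solve the problems of training interviewers or recording data consistently, but it does let others scrutinize how the sample was drawn.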
Even if I have answered all of these questions, I have left out an entire population that might be playing on mini courses, in their backyards, or in other places. I have also left out balls that are up in the air, but not on golf courses.
This golf ball exercise helps to illustrate the complexities of research, defining and operationalizing the indicators that we use for measurement, and, of course, causation and causal mechanisms. As evaluators, we are constantly asking ourselves what kind of evidence we need to support a claim that our project has made a change. We aim to find ways to measure that change using indicators that are well defined and link directly back to our activities, outputs, and outcomes. We need to make sure that we are asking the right questions of the right people, using the right methods.
E. Jane Davidson (2005) tells us that causation is the nuts and bolts of evaluation. As evaluators, we want to be in a position to show that causal chain—to show that our intervention has resulted in a particular outcome.
This is where research and evaluation differ. As an academic researcher, my standards of proof would require me to show causation: a relationship between two or more variables. I would likely use a quantitative or mixed-methods approach that relies on statistical analysis to show causation. My design might even allow me to show the degree to which one variable affected another.
We are oftentimes not in a position to carry out an impact evaluation using an experimental design. We may not have the time, budget, or data available to conduct such an evaluation, or it may simply not be appropriate given our project’s parameters.
This does not mean, however, that we cannot make any claims regarding our project’s outputs and outcomes. Even though I may not be able to show causation through an evaluation design, I can certainly shed light on the causal mechanisms at play. I might use a mixed- and multi-method approach that triangulates data from multiple sources, and I might use qualitative methods to help explain results and better understand project outputs. I might use available data from archives, agency records, and previous evaluations and reports, and I might collect my own data in the field using a solid sampling strategy.
Evaluators take their designs very seriously, ensuring that they coincide with a project’s LogFrame and theory of change and use appropriate indicators that measure what we intend them to measure. Evaluators should also use a solid evaluation statement of work or performance monitoring plan that outlines strategies to deal with incomplete, aggregated, or inaccurate data.
E. Jane Davidson (2005, pp. 71-81) gives us common-sense strategies that evaluators should consider when demonstrating causal mechanisms in their work:
- Ask observers who are impacted, or who witness the impact, about their thoughts relating to the project’s results;
- Compare the project content to the outcome, to gauge whether the intervention fits the outcome;
- Look for patterns that suggest one cause or another, while checking for alternative explanations;
- Check if the timing of the outcomes matches the timing of the intervention;
- Ensure that the dose relates to the response; that is, that the intervention activities would make sense given the response or change that you see;
- Make comparisons with a comparison group that was not part of the project; and
- Identify underlying causal mechanisms that would make your case more, or perhaps, less, convincing.
For others to believe our claims regarding causal mechanisms, we need to be transparent in how we monitor and evaluate a project, including the methods that we use. Our monitoring and evaluation efforts need to be well planned, organized, and implemented systematically. We need to ensure validity by using an appropriate design that accurately represents the causal mechanisms at play. And we need to attend to the reliability of our data, thinking carefully about how we collect and manage it.
References:
Bamberger, Michael, Jim Rugh, and Linda Mabry. RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints. Thousand Oaks, CA: SAGE, 2012.
Davidson, E. Jane. Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation. Thousand Oaks, CA: SAGE, 2005.
About the Author:
Dr. Beverly Peters has more than twenty years of experience teaching, conducting qualitative research, and managing community development, microcredit, infrastructure, and democratization projects in several countries in Africa. As a consultant, Dr. Peters worked on EU and USAID funded infrastructure, education, and microcredit projects in South Africa and Mozambique. She also conceptualized and developed the proposal for Darfur Peace and Development Organization’s women’s crisis center, a center that provides physical and economic assistance to women survivors of violence in the IDP camps in Darfur. Dr. Peters has a Ph.D. from the University of Pittsburgh.