Qualitative Methods in Monitoring and Evaluation: The Philosophy of Science and Qualitative Methods

Traditionally, funders and implementing agencies have used quantitative approaches to project monitoring and evaluation. They have relied on quantitative techniques to analyze data and to report outputs and outcomes in performance and outcome evaluations. Historically, funders and implementing agencies have placed a high degree of credence in the validity and reliability of quantitative data collection and analysis techniques.

It is only more recently that evaluators have begun to highlight the gaps in knowledge that quantitative evaluation techniques might leave. Bamberger, Rugh, and Mabry (2012, p. 296) tell us that combining quantitative and qualitative approaches gives evaluators “the breadth to explain trends, factors, and correlations; and depth to understand why trends occur, how factors operate, and what meanings may be attributed to the correlations.”

On a practical level, we might ask how we can ensure that each qualitative method we use is the right one for the data we want to collect. How can we ensure that we are using a particular method in the right way? We should continually ask ourselves these two questions as we use qualitative methods for monitoring and evaluation.

On a philosophical level, however, we ask different questions. Is qualitative data that is filtered through the senses of the researcher valid and reliable? This is a larger philosophical question, one that may take you back to your high school or college philosophy class. The debate concerns how we know what we know. Is there a Truth or knowledge out there about our project that exists independently of us, and that we can measure quantitatively? Or is truth or knowledge about our project filtered through us as evaluators, something we can measure qualitatively?

Bamberger, Rugh, and Mabry tell us that evaluators are at the forefront of mixed method approaches to research. We combine quantitative and qualitative approaches, seemingly not minding that we are combining different conceptualizations of Truth and truth. However, this does not mean that it is easy to integrate quantitative and qualitative methods, or to analyze quantitative and qualitative data together, in a coherent, valid, and reliable evaluation. Finding meaningful ways to integrate quantitative and qualitative approaches is a major challenge for evaluators, and one that we should consider carefully.

To address this, let’s take a step back and think about quantitative analysis, and about how we conceptualize causation and draw conclusions. One main tenet of the positivist approach is that our conclusions are valid and reliable if we make observations or carry out experiments that can be backed up with numerical data. The observations we make are independent of our own biases and interpretations. As such, multiple researchers making the same observations and carrying out the same experiments over time will draw the same conclusions.

An evaluator who leans towards a positivist approach might:

  • collect quantitative data and use statistical analyses to draw conclusions.
  • set up an experimental or a quasi-experimental design to help see the impact of a project.
  • set up a hypothesis and alternative hypotheses, with dependent and independent variables that can be measured quantitatively (a minimal sketch follows this list).
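
To make this concrete, here is a minimal sketch of that positivist logic in Python. The groups, outcome scores, and significance threshold are purely hypothetical and for illustration only; in practice, the design, sample, and statistical test would depend on the evaluation questions.

    # A minimal sketch of the quantitative, positivist logic described above:
    # compare a measurable outcome between project participants and a comparison
    # group, and test a hypothesis statistically. All data here are hypothetical.

    from scipy import stats

    # Hypothetical outcome scores for participants (treatment) and non-participants (comparison)
    treatment = [54, 61, 58, 65, 59, 62, 57, 63]
    comparison = [50, 52, 49, 55, 51, 48, 53, 50]

    # Independent-samples t-test: the null hypothesis is that mean outcomes do not differ
    result = stats.ttest_ind(treatment, comparison, equal_var=False)

    print(f"Treatment mean:  {sum(treatment) / len(treatment):.1f}")
    print(f"Comparison mean: {sum(comparison) / len(comparison):.1f}")
    print(f"t-statistic: {result.statistic:.2f}, p-value: {result.pvalue:.4f}")

    # A p-value below a chosen threshold (for example, 0.05) would lead the evaluator
    # to reject the null hypothesis and, given the design, attribute the difference
    # to the project.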

For a long time, a certain amount of prestige and validity was attached to supposedly objective quantitative methods that could explain and predict behavior. If we could prove something quantitatively, if we could craft a quantitative impact evaluation, that seemed to carry more weight with funding agencies and social scientists, because we might be in a position to show causation. I think this is one of the reasons that quantitative methods dominated monitoring activities and performance and impact evaluations for decades.

What does this mean for us as qualitative evaluators? I would argue that it is impossible for us to analyze project inputs in a way that predicts human behavior and guarantees results. Today, we see funding agencies and evaluators starting to embrace more qualitative and mixed method approaches. On the one hand, we realize that quantitative methods are not going to explain everything for us: they will not tell us how and why change happens around our project, and they will not give us an emic understanding. On the other hand, we accept that we may not be able to quantify all human behavior.

Just because qualitative data is important, does not mean that we will use every qualitative tool every time we collect data.

Just because we embrace the qualitative, does not mean that we have easy ways to integrate quantitative and qualitative data meaningfully.

It is not only a matter of learning different qualitative methods; it is also a matter of figuring out under what conditions each data collection method is useful. We must also find systematic ways of analyzing qualitative data, so that we can make it useful for our monitoring efforts and our performance and outcome evaluations, as the toy sketch below illustrates.
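
As a small illustration of what one systematic step in qualitative analysis can look like, the sketch below tallies how often a few predefined themes appear across interview excerpts. The excerpts, themes, and keywords are invented for illustration; real qualitative coding involves much more, such as a full codebook, analytic memos, and checks between coders.

    # A toy sketch of one systematic step in qualitative analysis: tallying how
    # often predefined themes (codes) appear across interview excerpts.
    # The excerpts and theme keywords below are hypothetical.

    from collections import Counter

    transcripts = [
        "The training helped me start a small business, but the loan came late.",
        "We felt ownership of the project because the committee consulted us.",
        "The loan was too small, and repayment started before the harvest.",
    ]

    # A simple codebook: theme -> keywords that signal the theme
    codebook = {
        "access to credit": ["loan", "credit", "repayment"],
        "livelihoods": ["business", "income", "harvest"],
        "participation": ["ownership", "consulted", "committee"],
    }

    theme_counts = Counter()
    for text in transcripts:
        lowered = text.lower()
        for theme, keywords in codebook.items():
            if any(word in lowered for word in keywords):
                theme_counts[theme] += 1  # count each excerpt at most once per theme

    for theme, count in theme_counts.most_common():
        print(f"{theme}: appears in {count} of {len(transcripts)} excerpts")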

These debates may seem philosophical, but they are important for us as we try to figure out how we generate knowledge, how we back up our claims, and how we draw conclusions regarding a project’s outputs, outcomes, and impact. This is a very important step as we develop our plans for project monitoring and evaluation.

References:
Michael Bamberger, Jim Rugh, and Linda Mabry, RealWorld Evaluation: Working Under Budget, Time, Data, and Political Constraints, Thousand Oaks: SAGE, 2012.
Michael Quinn Patton, Qualitative Research and Evaluation Methods: Integrating Theory and Practice, Thousand Oaks: SAGE, 2014.

About the Author:
Dr. Beverly Peters has more than twenty years of experience teaching, conducting qualitative research, and managing community development, microcredit, infrastructure, and democratization projects in several countries in Africa. As a consultant, Dr. Peters worked on EU and USAID funded infrastructure, education, and microcredit projects in South Africa and Mozambique. She also conceptualized and developed the proposal for Darfur Peace and Development Organization’s women’s crisis center, a center that provides physical and economic assistance to women survivors of violence in the IDP camps in Darfur. Dr. Peters has a Ph.D. from the University of Pittsburgh.

To learn more about American University’s online MS in Measurement & Evaluation or Graduate Certificate in Project Monitoring & Evaluation, request more information or call us toll free at 855-725-7614.