Dr. Beverly Peters has been active in community development for 25 years. Her research interests center on village-level political and economic development in Africa. She lived and worked in southern Africa for a decade, where she was involved in microcredit, microenterprise, governance, and public health projects, among others. Dr. Peters teaches Evaluation: Qualitative Methods in the online MS in Measurement & Evaluation.
As many evaluators will tell you, quantitative data collection techniques have dominated the evaluation field for decades. There are many reasons for this. Historically, researchers used quantitative data to understand societal problems, and as such offered solutions that could be measured quantitatively. We saw the use of quantitative data to understand and evaluate early interventions in public health and education. Part of this probably relates to arguments in the philosophy of science and the credence given to quantitative data collection and analysis methods that measure outcomes and impact and can be generalized. Collecting numbers is relatively easy, and it allows a donor or project implementer to track progress across many programs over space and time. From an operational perspective, collecting and reporting numbers seems much easier and more economical than using qualitative data collection techniques, which can be time consuming and expensive to implement. For this reason, it is sometimes difficult to convince a donor or project implementer that qualitative methods are needed: why they are important when they are more time consuming and expensive, and yet cannot be generalized to a larger scale.
I strayed from this norm early in my career in community development, embracing the use of qualitative, even ethnographic, methods in my work in Africa. I regularly use qualitative methods at every step of the project cycle to help plan, monitor, and evaluate projects. I firmly believe that evaluators should use qualitative methods to gain an emic perspective of the project or intervention. The emic perspective is the perspective that the local population has of its own reality. Understanding the emic helps evaluators appreciate the how and why around a project: how and why something is working or is not working. Qualitative methods have also helped me to understand quantitative data, giving insight into the planning, implementation, results, and even impact of a project.
Let’s take the example of a program that promotes handwashing before food preparation, aimed at limiting foodborne illnesses. On one level, quantitative data might tell me how many people received training in handwashing over the course of the program. On another level, surveys would help me to quantify handwashing habits before and after training. Those data might give me insight into the effectiveness of the program if we learn that, after training, people are more likely to wash their hands before preparing food. But what if the survey indicated that people were washing their hands before preparing food, yet people were still contracting foodborne illnesses? Participant observation might give me insight into what is happening, as it would allow me to observe whether people’s actions, like handwashing, had actually changed. I might find that people’s actions differ from their survey responses.
Qualitative methods can undoubtedly add depth and understanding to an evaluation. However, I have also found that it is genuinely difficult to use qualitative data collection techniques well and meaningfully. It is not just a matter of adding an interview or a focus group to an otherwise quantitative evaluation. This might provide perspective, but it is not a valid, reliable way to do qualitative research.
One of the challenges we have in qualitative research is operationalizing an approach that makes our data collection and analysis systematic. The most common way to approach data collection is through a Logical Framework, or LogFrame. LogFrames can be really helpful as we articulate a program theory of change that sheds light on the root causes of the societal problem we are trying to address. When we understand the causes of the problem, we can craft better interventions to address it. Our theory also helps us to articulate how our activities fit into that intervention, helping us to pinpoint how we measure progress and outcomes.
Oftentimes, as in the case of handwashing above, qualitative evaluation questions and data collection techniques are an important part of measuring progress and outcomes.
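To make this concrete, here is a minimal sketch, written in Python purely for illustration, of how a simplified LogFrame for the hypothetical handwashing program might be laid out, with quantitative and qualitative indicators attached to each level. The level names, statements, and indicators are my own assumptions for the example, not taken from any actual program document.

```python
# Illustrative only: a simplified LogFrame for a hypothetical handwashing program.
# Levels and indicators are assumptions, not drawn from a real program document.
logframe = {
    "goal": {
        "statement": "Reduced incidence of foodborne illness in the community",
        "indicators": [
            "Reported cases of foodborne illness (clinic records)",              # quantitative
            "Residents' own explanations of why illness persists (interviews)",  # qualitative
        ],
    },
    "outcome": {
        "statement": "Caregivers wash hands before preparing food",
        "indicators": [
            "% of survey respondents reporting handwashing before food preparation",       # quantitative
            "Observed handwashing practice during meal preparation (participant observation)",  # qualitative
        ],
    },
    "outputs": {
        "statement": "Caregivers trained in handwashing",
        "indicators": ["Number of people completing training"],  # quantitative
    },
    "activities": ["Develop training materials", "Hold community training sessions"],
}

# A simple check that every level above activities has at least one indicator.
for level, detail in logframe.items():
    if isinstance(detail, dict) and not detail.get("indicators"):
        print(f"Warning: no indicators defined for {level}")
```

The point is not the code itself but the structure: each level of the theory of change carries its own indicators, and qualitative indicators such as participant observation sit alongside the quantitative ones.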
However, we also need to decide which qualitative methods to use to answer our evaluation questions. Although interviews dominate qualitative data collection, there are many qualitative methods at our disposal. Each gives us different kinds of data and is appropriate in different topical or cultural settings. The challenge for a qualitative researcher or evaluator is to figure out which tool will collect the data needed to answer the evaluation question or give insight into an indicator; not every tool will work for every research or evaluation purpose.
You also need to decide how to make sense of, or analyze, your qualitative data. Evaluators usually engage in thematic coding, both descriptive and analytical, and I tend to use computer programs to help analyze large amounts of qualitative data.
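As a rough illustration of what descriptive coding looks like once excerpts are in software, here is a minimal Python sketch, not a stand-in for a dedicated qualitative data analysis package, that tallies how often codes appear across interview excerpts and retrieves every quote tagged with a given code. All excerpts and code labels below are hypothetical.

```python
from collections import Counter

# Hypothetical interview excerpts, each already assigned descriptive codes by the analyst.
# In practice the coding itself is done by a human reader, often inside dedicated
# qualitative analysis software; the data below are invented for illustration.
coded_excerpts = [
    {"text": "We wash hands when there is soap at the tap.",
     "codes": ["soap availability", "handwashing practice"]},
    {"text": "The training was helpful but the tap is far from the kitchen.",
     "codes": ["training", "water access"]},
    {"text": "Children forget to wash before helping with cooking.",
     "codes": ["handwashing practice", "children"]},
    {"text": "Soap is expensive at the end of the month.",
     "codes": ["soap availability", "cost"]},
]

# Tally how often each descriptive code appears across excerpts.
code_counts = Counter(code for excerpt in coded_excerpts for code in excerpt["codes"])
for code, count in code_counts.most_common():
    print(f"{code}: {count}")

# Retrieve every excerpt tagged with a given code, the kind of query an analyst
# runs when moving from descriptive to analytical coding.
soap_quotes = [e["text"] for e in coded_excerpts if "soap availability" in e["codes"]]
print(soap_quotes)
```

Dedicated packages do far more than this, but the underlying workflow of tagging text with codes and then querying those codes is the same.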
Through my twenty-five years of community development experience in Africa, I have found that evaluators need a unique set of skills to collect qualitative data. As researchers, we need to be able to think on our feet and change the direction of our research, sometimes as we are collecting data from a respondent. More difficult to master are the people skills needed to collect qualitative data. Communication and building rapport are important, no matter what qualitative data collection tool we use. We need to be able to empathize, make connections with and read people, facilitate conversation, and above all—listen and observe. We must be able to function and be comfortable in new settings, and we need to make sure that our respondents feel comfortable and confident talking to us.
My advice to those new to the measurement and evaluation field is: Practice, practice, practice! The above research and communication skills all take practice over time to master. In addition to this, evaluators must construct solid evaluation designs to guide their monitoring and evaluation activities. When qualitative data collection techniques are integrated into these designs meaningfully, they give us an emic perspective that helps us to better plan, manage, and evaluate projects.
About the Author
Dr. Beverly Peters has more than twenty years of experience teaching, conducting qualitative research, and managing community development, microcredit, infrastructure, and democratization projects in several countries in Africa. As a consultant, Dr. Peters worked on EU- and USAID-funded infrastructure, education, and microcredit projects in South Africa and Mozambique. She also conceptualized and developed the proposal for Darfur Peace and Development Organization’s women’s crisis center, a center that provides physical and economic assistance to women survivors of violence in the IDP camps in Darfur. Dr. Peters has a Ph.D. from the University of Pittsburgh.