Qualitative Methods in Monitoring and Evaluation: Thoughts Considering the Project Cycle

As we monitor and evaluate projects, we use many different kinds of qualitative methods, each of which gives us different kinds of data.  Depending on our evaluation statement of work or performance monitoring plan, we choose particular methods on particular occasions to elicit particular kinds of data.

As we craft our qualitative or mixed-methods evaluation designs, we should consider which qualitative methods we would use, and what kind of data those methods would give us.  Evaluators have a large toolkit of qualitative methods, and we deploy each of them under different circumstances to gather different kinds of data.  As Nightingale and Rossman (2010) explain, we need to decide what our unit of analysis will be; the number of sites we will use; how we will choose those sites; what data we need; and which method will give us that data.  We also need to consider Bamberger, Rugh, and Mabry's (2012) constraints of time, budget, data, and politics as we plan our qualitative research and evaluations. We should pay special attention to ethical considerations, as qualitative researchers tend to spend a lot of time with informants, gathering sensitive data in the process.

Let’s consider the use of several qualitative methods through the project cycle, from planning through implementation to project conclusion.

Planning
As we are planning our project, if we are lucky, a donor will give us money to carry out a needs assessment.  A quantitative needs assessment, perhaps drawing on existing data, might tell us literacy rates or hospitalization rates, for example.  This kind of data can be important for our project, depending on its scope, objectives, and activities. 

A qualitative needs assessment might give us a more disaggregated view of literacy or health issues, one that takes emic perspectives into account.  Observation might give us a picture of what is happening in the project setting.  Participant observation might give us more of an emic understanding of what is happening, especially if we are allowed into the backstage, where the observer effect is no longer as evident.  At this stage, key informant interviews might give us some possible project parameters, which can be of particular importance if there are gatekeepers in the community who could help or hinder a project and its activities.  Participatory tools like seasonal calendars might help us to understand the emic needs of the community, and the local events or micropolitics that might affect project implementation and beneficiary access. 

Understanding the needs of the community is an important process, and with emic data we can construct projects and activities and set indicators that are culturally appropriate. 

Another aspect here is baseline data collection.  We sometimes collect this as we are planning our project, and we sometimes collect it just before we start our activities.  Collecting baseline data may be important if we want to be able to show outcomes or conduct an impact assessment after the conclusion of our project.  If we want to show the impact of our project, or the changes in people’s attitudes, behaviors, or competencies, then we may need a baseline to compare to.  Depending on our project, we might use a census table or a structured interview schedule to collect baseline data during the planning phase of a project.

Implementation
We incorporate qualitative data into our monitoring efforts and formative evaluations so that we can improve project activities. We adapt and learn from our project’s implementation when we carry out formative evaluations. 

Qualitative methods that monitor progress are particularly important during the implementation phase of a project.  Using qualitative data to monitor projects gives us insight into a project’s activities as they are being implemented.  This can be more helpful to us than quantitative data, such as “number of people trained.”  Indeed, one of the most common uses of qualitative data is to help explain or add perspective to quantitative data.  We can use qualitative data to tweak or change the direction of our programming, especially if we are not hitting our intended objectives or making progress toward our indicators.

We use observation to see what is happening in our project, who is participating, and who is not participating.  We use participant observation and key informant interviews to understand what is happening in our project as it is being implemented.  Focus groups and participatory tools are also important for us so that we can get a wider perspective of project activities and outputs. 

Outcomes and Impact
Demonstrating a causal link between baseline and outcome data is something to consider in the design of an impact evaluation. Without baseline data, we might not be in a position to show our project’s impact, so we need to think about collecting it during the planning phase or at the start of the implementation phase if we want to show impact later on.

As in the earlier phases, observation and participant observation allow us to observe and understand change that has or has not taken place in society as a result of our program.  Key informant interviews and focus groups give us insight into that change, or the lack thereof.

Concluding Thoughts
While our evaluation designs need to be solid, we also need the knowledge to implement those designs within particular historical, cultural, and linguistic settings.  Our designs will only take us so far; we as evaluators need training and expertise to use qualitative methods in culturally appropriate ways.

References:
Michael Bamberger, Jim Rugh, and Linda Mabry, Real World Evaluation: Working Under Budget, Time, Data, and Political Constraints, Thousand Oaks: SAGE, 2012.
Demetra Smith Nightingale and Shelli Rossman, “Collecting Data in the Field,” in Joseph Wholey, Harry Hatry, and Kathryn Newcomer, eds., Handbook of Practical Program Evaluation, San Francisco: Wiley, 2010.

About the Author:
Dr. Beverly Peters has more than twenty years of experience teaching, conducting qualitative research, and managing community development, microcredit, infrastructure, and democratization projects in several countries in Africa. As a consultant, Dr. Peters worked on EU- and USAID-funded infrastructure, education, and microcredit projects in South Africa and Mozambique. She also conceptualized and developed the proposal for Darfur Peace and Development Organization’s women’s crisis center, a center that provides physical and economic assistance to women survivors of violence in the IDP camps in Darfur. Dr. Peters has a Ph.D. from the University of Pittsburgh.