Interviews tend to dominate qualitative data collection for monitoring and evaluation. Project planners and evaluators conduct many kinds of qualitative interviews for many purposes: to assess need, gauge opinions and perceptions regarding projects, or to gain insight into outcomes and impact. Qualitative interview data can be a strong component of a project’s planning, implementation, or evaluation, especially when interviewing is part of a larger evaluation design that considers issues of sampling, validity, and reliability.
Tracy (2013) tells us that qualitative interviews provide opportunities for mutual discovery, understanding, reflection, and explanation, as the researcher speaks with individual respondents. Through qualitative interviews, we gain emic viewpoints, that is, the respondents’ own insider perspectives. Interviews can also strengthen and explain data collected by other means, both qualitative and quantitative.
It's all about structure
The literature uses different terminology to name and describe interview types. Regardless of terminology, however, the overarching difference among interview types is the amount of structure in the interview questions and schedule.
In a truly structured interview, the wording, order, and potential answers of the questions are predetermined. This is more like an oral form of a written survey. Patton (2015) refers to such an interview as a closed, fixed response interview, and tells us that such an interview is useful when we want to compare and aggregate data in a short amount of time. Ultimately, we would probably construct such an interview schedule after we have already conducted participant observation or other interviews, as this would inform the questions and the potential answers. This type of interview is completely controlled by the evaluator.
On the other end of the spectrum is an unstructured interview, or what Patton (2015) refers to as an informal conversational interview. Tracy (2013) tells us that an unstructured interview is very flexible, and allows the respondent to express viewpoints without the constraints of scripted questions. As such, an unstructured interview is more like a conversation than a traditional interview. It usually consists of open-ended statements or questions, where the interviewer lets the respondent control the flow of topics and subjects. As evaluators, we might use unstructured interviews when we do not know enough about a topic to ask relevant questions about it, or when a topic is so controversial that formulating questions around it would be challenging. We might also use the information we get from unstructured interviews to formulate questions for later interviews, or we might use stories from unstructured interviews to illustrate data or help explain quantitative results. Control in this type of interview rests more with the respondent than with the evaluator.
In the middle of the spectrum would be a semi-structured interview, which Patton (2015) refers to as the interview guide approach. Such an interview would be guided by a schedule that likely includes a mix of potential questions and topics to be explored, but there is no predetermined wording or order of the questions. The interview allows for a great deal of flexibility: the evaluator usually rewords previously thought-out questions and devises new questions based on the experiences of the respondent. This kind of interview helps us to learn what the respondent thinks is most important, and to ask questions around that. In a qualitative evaluation, we might use such an interview when we want to gather opinions on a particular topic from a wide range of people, but we want to be in a position to tailor questions to the unique set of experiences a respondent has to offer. We might use such a format when we want to speak to project managers, staff, or project recipients, for example. We only conduct semi-structured interviews when we know enough about a topic to be able to devise meaningful questions about it. Control in this type of interview tends to be shared by the evaluator and the respondent.
Monitoring & Evaluation Considerations
As qualitative researchers and evaluators, we need to decide how interviews fit into our data collection needs; that is, we should be able to explain easily what kind of data we can get from an interview, and why it is that a particular kind of interview with a particular person or set of people is the best way to get it. We need to decide when to use interviews, and what kind of interview to use.
Michael Quinn Patton, Qualitative Research and Evaluation Methods, 4th ed., Thousand Oaks: SAGE, 2015.
Sarah Tracy, Qualitative Research Methods: Collecting Evidence, Crafting Analysis, Communicating Impact, Malden: Wiley, 2013.
About the Author:
Dr. Beverly Peters has more than twenty years of experience teaching, conducting qualitative research, and managing community development, microcredit, infrastructure, and democratization projects in several countries in Africa. As a consultant, Dr. Peters worked on EU and USAID funded infrastructure, education, and microcredit projects in South Africa and Mozambique. She also conceptualized and developed the proposal for Darfur Peace and Development Organization’s women’s crisis center, a center that provides physical and economic assistance to women survivors of violence in the IDP camps in Darfur. Dr. Peters has a Ph.D. from the University of Pittsburgh.
To learn more about American University’s online MS in Measurement & Evaluation or Graduate Certificate in Project Monitoring & Evaluation, request more information or call us toll free at 855-725-7614.