How evaluators analyze qualitative data largely depends on the design of their evaluations. Your analysis is meant to turn your data into findings, and your evaluation design guides both the parameters of the data you have collected and how you will analyze it.
It might sound simple, but I usually start qualitative data analysis by becoming very familiar with my data. I review the raw data, that is, my field notes, and any analytical or summary notes I compiled after engaging in qualitative research. Raw data and analytical pieces are both very important as we try to make sense of and interpret the data. These might also point us in the direction of additional data we need to collect.
Patton (2015) discusses several frameworks that evaluators might use to collect and analyze data. For example, a case study design would be an in-depth study of a particular case, where the evaluator collects comprehensive, systematic data. The case might be an individual, group, or organization that gives insight into project outputs. The evaluator might use ethnographic methods to find cultural patterns that give insight into project outcomes. Content analysis might help the evaluator to identify, organize, or categorize texts to understand new ideas or skills, or changes in attitudes or behavior.
As Patton (2015) notes, several other data analysis frameworks exist. As you develop your evaluation design, you should consider your evaluation questions and indicators and the most appropriate way to frame and analyze your data.
A common method that evaluators use to analyze qualitative data is triangulation: identifying themes in the data, coding them, and then comparing, or triangulating, the data from different data sources and different data collection methods. The goal is to collect data in a particular category until the point of saturation, and then code and compare that data. Ultimately an evaluator would compare data from multiple methods, collected from multiple sources, on multiple occasions over time: for instance, observation, participant observation, interviews, focus groups, and mapping (perhaps in addition to quantitative data in a mixed methods evaluation).
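For readers who organize their data digitally, the comparison step can be sketched in a few lines of code. This is purely illustrative: the themes, methods, and the two-method threshold below are invented for the example, and most evaluators would do this work in dedicated qualitative analysis software rather than by hand.

```python
# Illustrative sketch of triangulation: which themes recur across
# different data collection methods? All themes and methods below are
# invented for the example.
from collections import Counter

# Themes identified in data gathered through each method
themes_by_method = {
    "interviews": {"access_to_credit", "training_quality"},
    "focus_groups": {"access_to_credit", "childcare"},
    "observation": {"access_to_credit", "training_quality"},
}

# Count how many methods surfaced each theme
counts = Counter(
    theme for themes in themes_by_method.values() for theme in themes
)

# Here we (arbitrarily) treat a theme as triangulated if it appears
# in at least two methods; the real judgment is the evaluator's.
triangulated = {theme for theme, n in counts.items() if n >= 2}
print(sorted(triangulated))
```

A theme that appears only in one method (here, "childcare") is not discarded; it simply signals where more data collection or a closer look may be needed.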
Graham Gibbs’ (2008) work focuses on ways to code and analyze qualitative data so that we do not lose the richness of qualitative data during analysis. Coding is how an evaluator defines what is important about the data. Coding also helps evaluators to organize their thinking around data. It involves identifying major themes in the data, and assigning these themes codes so that you can easily categorize, organize, retrieve, and examine the data. You might ask what is going on, what people are doing, and what people are saying, for example.
Having carried out the research, we might approach our data with coding labels in mind, or we might stay open and let the data suggest the codes. Either way, coding helps us to create categories of data. Our codes may be descriptive, though analytical or theoretical codes are preferable. In all cases, we need to saturate the categories so that we have enough data to draw conclusions.
Gibbs (2008) suggests that researchers take a transcript of the data, become familiar with it, find categories, and then set codes or labels for those categories. Evaluators must create definitions for the codes so that they know what belongs in a particular category. The definitions need to be specific enough to support meaningful comparisons, but not so specific that very few cases fit your categories. Nor should a definition be so encompassing that you stretch your concept so far that comparison is no longer possible.
You are likely thinking about comparisons all the time as you review your data. You compare the meanings of words; you compare experiences; you ask under what circumstances something is likely; and you try to think of remote examples. Evaluators engage in what Gibbs (2008) calls constant comparison.
Michael Quinn Patton, Qualitative Research and Evaluation Methods, 4th ed., Thousand Oaks: SAGE, 2015.
Graham Gibbs, Analyzing Qualitative Data, Thousand Oaks: SAGE, 2008.
About the Author
Dr. Beverly Peters has more than twenty years of experience teaching, conducting qualitative research, and managing community development, microcredit, infrastructure, and democratization projects in several countries in Africa. As a consultant, Dr. Peters worked on EU and USAID funded infrastructure, education, and microcredit projects in South Africa and Mozambique. She also conceptualized and developed the proposal for Darfur Peace and Development Organization’s women’s crisis center, a center that provides physical and economic assistance to women survivors of violence in the IDP camps in Darfur. Dr. Peters has a Ph.D. from the University of Pittsburgh. Learn more about Dr. Peters.
To learn more about American University’s online MS in Measurement & Evaluation or Graduate Certificate in Project Monitoring & Evaluation, request more information or call us toll free at 855-725-7614.