Asking Program Evaluation Questions

Dr. Beverly Peters has recently written about qualitative interviewing and about conducting and using focus groups. To build on qualitative interviews and focus groups, we need to think about what sorts of questions we ask in program evaluation. Before we decide what types of data we need (qualitative or quantitative), we need to answer the bigger question of the project: What specific problem does the project or program address?

Evaluation questions, like research questions in academic research projects, guide the methods and tools used to collect data to understand the problem under investigation. Evaluation questions may seem intuitive, and it is tempting to develop them quickly in order to get to more detailed program planning. But without well-developed, relevant, and accurate evaluation questions, formulated with stakeholders connected to the problem, projects can circle a problem without addressing the most appropriate issues.

Methodologically, evaluation questions focus on varied assessment types. The following list is neither exhaustive nor mutually exclusive, but it provides a framework for thinking about what kind of evaluation you are doing and what (generic) evaluation questions you may need to ask.

1. Needs assessment, or identifying the surrounding social conditions and the need for a program. These questions identify and support the problem that the intervention hopes to address. If the problem is not identifiable by stakeholder communities, projects addressing it will not be successful. Needs assessment evaluation questions may focus on the significance of the problem, drawing on literature, previous projects, and baseline data gathered with potential stakeholders. Some sample questions are:

  1. What are the target population’s characteristics?
  2. What are their needs?
  3. What specific services are needed?
  4. How could those services be provided? Through what mechanisms or arrangements?

2. Program theory assessment, or identifying how the program intends to address the problem. Programs that are already running should have a theory of change, or how they think their intervention will lead to the stated outcome, objective, goal, or impact they hope to see. Program theory assessment evaluation questions should focus on this theory of change to see if there are gaps in logical connections or inaccurate assumptions. Some sample questions are:

  1. Who is the target population?
  2. What services do they need?
  3. How best should the project/program deliver those services (or activities)?
  4. How will the program work with the target population to sustain the project/program?
  5. What is the structure or organization of the project/program?
  6. What resources does the project/program need?

3. Process evaluation, or assessing how the program addresses the problem: what it does, what services it provides, and how it operates. Process evaluation questions focus on how a program is working and on program performance, and they involve extensive monitoring. Similarly, formative evaluation questions ask whether program activities occur according to plan and whether the project is achieving its goals while it is underway. Some sample questions are:

  1. What are the underlying assumptions of the project/program?
  2. Are objectives met? If so, how? If not, why not?
  3. Are activities conducted with the target population?
  4. Are there other populations the program should be working with?
  5. Is the target population adequately reached by and involved in activities?
  6. How does the target population interact with the program?
  7. What do they think of the services? Are they satisfied?
  8. How is the project functioning from administrative, organizational, and/or personnel perspectives?

4. Impact/outcome evaluation, or assessing whether and how the program achieves its outcomes or impact. These evaluation questions may also be used in summative evaluations, which focus on what happened after the program or project was completed: Were goals achieved? What can be learned? Some sample questions are:

  1. What are the outputs, outcomes, objectives, and goals of the project?
  2. Are outcomes, objectives, and goals achieved?
  3. Are the project/program services/activities beneficial to the target population?
  4. Do they have negative effects?
  5. Is the target population affected by the project/program equitably or according to the evaluation plan?
  6. Is the problem that the project/program intends to address alleviated?

5. Assessment of efficiency, or how cost-effective the program is. Sample questions are:

  1. Is the cost of the services or activities reasonable in relation to the benefits?
  2. Are there alternative approaches that could have the same outcomes with less cost?*

By asking such questions, M&E practitioners can identify what their project specifically should address. According to Owen and Rogers (1999), there are three levels of evaluation questions at this stage in project planning:

  1. Policy level – how does, or could, the evaluation impact relevant policy?
  2. Program level (regional, large scale, “Big P”) – how does, or could, the evaluation effect program changes?
  3. Project level (local, activity based, “little p”) – how does, or could, the evaluation effect project or local changes?**

The best questions are developed with the evaluation's stakeholders, including program staff, sponsors and funders, local and regional decision-makers within and outside the program, and community representatives, when the community in which the evaluation or project will be carried out has already been identified. These consultations may take the form of informal conversations, reviews of grant requirements and terms of reference (documentation review), or semi-structured individual and/or group interviews. The evaluator consults with all accessible stakeholders to develop the specific questions that the evaluation will seek to answer. According to Rossi, Freeman, and Lipsey (1999), evaluation questions must be:

  • Reasonable and appropriate, or realistic for the given project or program.
  • Answerable, or able to be addressed with some degree of certainty. If questions are too vague or too broad, or require data that are unavailable or unobservable, they are not answerable.
  • Based on program goals and objectives.

Once the larger question of the project has been developed, M&E practitioners begin to consider what data they need to answer it, using a Theory of Change and a LogFrame.

References
*Adapted from Rossi, P. H., Freeman, H. E., & Lipsey, M. W. (1999). Evaluation: A Systematic Approach, 6th Edition. London: Sage Publications. Chapters 2 and 3.
** Adapted from Owen, J. M., & Rogers, P. J. (1999). From Evaluation Questions to Evaluation Findings. In Program Evaluation (pp. 86–105). London: SAGE Publications Ltd.

Additional Resources
Morra Imas, L. G., & Rist, R. C. (2009). The Road to Results: Designing and conducting effective development evaluations. Washington, DC: The World Bank.
Stern, E., Stame, N., Mayne, J., Forss, K., Davies, R., & Befani, B. (2012). Broadening the Range of Designs and Methods for Impact Evaluations: Report of a study commissioned by the Department for International Development. Retrieved from http://www.dfid.gov.uk/Documents/publications1/design-method-impact-eval.pdf
Stufflebeam, D. L., Madaus, G. F., & Kellaghan, T. (2002). Evaluation models: Viewpoints on educational and human services evaluation. New York: Kluwer Academic Publishers.

About the Author
Ally Krupar is an Adjunct Instructor at American University’s School of Professional and Extended Studies where she teaches Qualitative Methods in Monitoring and Evaluation. She is also a Doctoral Candidate in Adult Education and Comparative International Education at Pennsylvania State University and a Visiting Researcher with RET, an international organization providing secondary and post-secondary education to displaced peoples worldwide. She holds a BA in Anthropology from Case Western Reserve University and an MA from the School of International Service at American University.

To learn more about American University’s online Graduate Certificate in Project Monitoring and Evaluation, request more information or call us toll free at 855-725-7614.