Bridging the Gap: Evaluation Theory and Practice

Evaluation is a relatively new field that has emerged from a diverse array of applied social sciences. Although it is practice-oriented, research on evaluation theory has proliferated to prescribe underlying frameworks for evidence-based practice. According to Shadish, Cook, and Leviton (1991), the fundamental purpose of evaluation theory is to specify feasible practices that evaluators can use to construct knowledge about the value of social programs. This account of evaluation theory comprises five main components: practice, use, knowledge, valuing, and social programming.

Many practitioners design evaluations around methodology. However, I argue that a more holistic approach starts with theory before methodology. Evaluators should first consider the purpose of the evaluation to determine its theoretical foundation, and then develop evaluation questions to inform methodology. Too often, evaluators focus on the technical details of an evaluation rather than its overall purpose. Focusing on theory at the outset of a project ensures that the process (i.e., stakeholder involvement, methodology, data collection, analysis, and reporting) is intentional, purposeful, and more useful for the client.

This article seeks to bridge the gap between evaluation theory and practice by outlining four theories and the contexts in which each is relevant. I also provide resources for implementing each approach.

Utilization-Focused Evaluation Theory

Michael Quinn Patton, PhD, developed Utilization-Focused Evaluation (UFE) on the premise that “evaluations should be judged by their utility and actual use” (Patton, 2013). This theoretical model should be applied when the end goal is instrumental use (i.e., discrete decision-making). UFE focuses on intended use by primary intended users. To engage primary intended users, the evaluator must identify the stakeholders who have the most direct, identifiable stake in the evaluation and its results: what Patton calls the “personal factor” (Patton, 2013). The evaluator involves intended users at every stage of the process. The ultimate purpose of UFE is programmatic improvement driven by a psychology of use: intended users are more likely to use an evaluation if they feel ownership of the process and its results. Use does not happen naturally; the evaluator must reinforce utility by engaging intended users at each stage of the evaluation. Patton prescribes a 17-step process for facilitating UFE from beginning to end.

Values Engaged Evaluation Theory

Jennifer Greene, PhD, developed Values Engaged Evaluation (VEE) as a democratic approach that is highly responsive to context and emphasizes stakeholder values. VEE seeks to provide contextualized understandings of social programs that hold particular promise for underserved and underrepresented populations (Greene, Boyce, & Ahn, 2011). It is considered a “democratic” approach because it encourages the evaluator to include all relevant stakeholder values. Greene offers three justifications for including stakeholder values: (1) pragmatic (i.e., it increases the chance of use), (2) emancipatory (i.e., it empowers stakeholders), and (3) deliberative (i.e., it considers all interests). With this approach, evaluation design and methodology evolve as the evaluator comes to understand the context, needs, and values underlying the program. VEE is concerned with answering broad, in-depth questions and is better suited to formative than summative evaluations. Read more about Greene’s stages of VEE.

Empowerment Evaluation Theory

David Fetterman, PhD, developed Empowerment Evaluation as an approach that fosters program improvement through empowerment and self-determination (Fetterman, 2012). Self-determination theory describes an individual’s agency to chart his or her own course in life and the ability to identify and express needs. Fetterman believes the evaluator’s role is to empower stakeholders to take ownership of the evaluation process as a vehicle for self-determination. The evaluator engages a diverse range of program stakeholders and acts as a “critical friend” or “coach” while guiding them through the evaluation process. Empowerment evaluation seeks to increase the probability of program success by giving stakeholders the tools and skills to self-evaluate and to mainstream evaluation within their organization. Fetterman outlines three main steps for conducting an empowerment evaluation: (1) develop and refine the program’s “mission,” (2) take stock of and prioritize the program’s activities, and (3) plan for the future. Read more on Fetterman’s theory of Empowerment Evaluation.

Theory-Driven Evaluation Theory

Huey Chen, PhD, is one of the main contributors to Theory-Driven Evaluation. His approach focuses on the theory of change and the causal mechanisms underlying a program. Chen recognizes that programs exist in an open system consisting of inputs, outputs, outcomes, and impacts. He suggests that evaluators start by working with stakeholders to understand the assumptions and intended logic behind the program. A logic model can then be used to illustrate the causal relationships between activities and outcomes. Chen offers many suggestions for constructing program theory models, such as the action model (i.e., a systematic plan for arranging staff, resources, and settings to deliver services) and the change model (i.e., a set of descriptive assumptions about the causal processes underlying the intervention and its outcomes). Evaluators should consider this approach when working with program implementers to produce information that is valuable for formative program improvement. Read more about Chen’s Theory-Driven approach.

Conclusion

The four theoretical approaches described here do not advocate a particular methodology; evaluators can use quantitative methods, qualitative methods, or a mix of both for data collection and analysis. Before considering methodology, however, evaluators should reflect on the theoretical frameworks that guide their practice. Although evaluation is an applied science, practitioners should be knowledgeable about theory to ensure that their designs are driven by intention and purpose rather than by methodological tools.

References

Chen, H. T. (2015). Practical program evaluation: Theory-Driven Evaluation and the integrated evaluation perspective. Thousand Oaks, CA: Sage.

Fetterman, D. M. (2012). Empowerment Evaluation: Learning to think like an evaluator. In M. C. Alkin (Ed.), Evaluation roots (2nd ed., pp. 304-322).

Greene, J. C., Boyce, A. S., & Ahn, J. (2011). Value-Engaged, Educative Evaluation guidebook. Urbana-Champaign: University of Illinois.

Patton, M. Q. (2013). Utilization-Focused Evaluation (U-FE) checklist. Western Michigan University Checklists.

Shadish, W. R., Jr., Cook, T. D., & Leviton, L. C. (1991). Good theory for social program evaluation. In Foundations of program evaluation: Theories of practice (pp. 36-67). Newbury Park, CA: Sage.

Nina Sabarre is an evaluation consultant and doctoral student in Evaluation and Applied Research at Claremont Graduate University. She has experience working on mixed-methods research and evaluation projects in over 25 countries across the Middle East, North Africa, and Central Asia. Her current work focuses on international development, public-private partnerships, and data visualization. She holds an M.A. in Political Science and B.A. in Political Science and Philosophy with a minor in Urban Affairs and Planning from Virginia Tech, as well as a graduate certificate in Program Monitoring and Evaluation from American University. To contact or collaborate, e-mail Nina at nsabarre.consultant@gmail.com.