Monitoring, Evaluation, and Learning in Philanthropy: Common Practices and Challenges

By Julia Klebanov

Philanthropies are increasingly turning their attention to monitoring and evaluation (M&E). Some more mature philanthropies have well-established M&E functions, and many newer philanthropies are now looking for ways to incorporate M&E into their work. My own experience in philanthropy M&E spans both: a 20-year-old foundation and, more recently, a newer philanthropic organization. Whether they are mature or young, funders are acknowledging the need to build M&E into their giving.

Funders have different reasons for engaging in M&E. Some philanthropies want to design evaluable programs that build in mechanisms for continuously tracking progress and testing assumptions in order to identify and implement necessary adaptations to programming. Others want to use M&E to assess the achievement of outcomes and demonstrate impact (sometimes as a form of accountability to funders).

With the increasing attention to equitable and participatory evaluation, many funders are now using evaluation as a tool to advance equity, and as a mechanism for better serving the communities in which they work. I have also observed many philanthropies placing a growing emphasis on learning, and using M&E to identify lessons learned that can inform future decision making. Many funders are now adding the word “learning” to their department names.

My experience working in monitoring, evaluation, and learning (MEL) has involved a breadth of responsibilities and activities, from beginning to end of the programmatic lifecycle. I work with staff to design theories of change, M&E systems, and learning agendas; facilitate ongoing monitoring and reflection; conduct or facilitate internal evaluations; manage evaluations conducted by external consultants; and facilitate learning opportunities and the process of applying learnings to current and future work. This spectrum of activities helps ensure that MEL is incorporated into all stages of work, which ideally helps staff adopt an evaluative thinking mindset.

During my time working in philanthropic MEL, I have encountered many challenges related to both the implementation of these practices and their methodological limitations. These challenges have been apparent in my specific work in science philanthropy, but I have learned from working with other funders that these tensions are common in philanthropy MEL more broadly.

  • A fundamental question is who evaluation is for. Is evaluation being done as an accountability mechanism for donors? Is it in service of program staff to facilitate their own learning? Is evaluation done for grantees and broader communities? The primary audiences for evaluation may determine how practices are implemented.
  • Many funders face a tension over the level at which MEL should be conducted. Should it be done at the grant level? At the strategy level? At the program level? How does MEL done at the grant level ladder up to the program level?
  • Ensuring value and utility is a continuous effort. To increase the chances that evaluation findings will be used by staff, and to keep MEL from being viewed as a mere compliance activity, I have tried to time evaluations so that they occur when they can add value to shaping a program or informing decision making. I also aim to engage various stakeholders in shaping learning objectives and evaluation questions so that they are responsive to the information needs of both implementers and decision makers.
  • Finally, the “contribution versus attribution” challenge is ongoing. I have found this particularly difficult given that the programs with which I have worked are operating in large, complex systems where establishing attribution is often not possible. Setting expectations among stakeholders is key.

While these tensions can be challenging, I have appreciated getting to discuss them with colleagues from other foundations who do similar work. In fact, one of the things I have most enjoyed about working in philanthropy MEL is the opportunity to engage with other evaluation professionals working in philanthropy. These communities of practice provide valuable forums for sharing lessons learned, identifying commonalities, and surfacing successful practices. As more philanthropies build out their MEL functions, I am excited to see how these communities of practice grow larger and stronger, and how collectively we can advance the practice of MEL in philanthropy.


Julia Klebanov is the Science Program Manager for Monitoring, Evaluation, and Learning at the Chan Zuckerberg Initiative. Julia has spent her career working in philanthropy, both as a grantmaker and M&E professional, with a specific focus on working with science funding programs. Prior to joining CZI, Julia held the position of Adaptive Management and Evaluation Officer for Science at the Gordon and Betty Moore Foundation. She enjoys being able to take a cross-disciplinary approach to her work, drawing on her background in biology, evaluation, and project management. 

Julia holds a BA in Biology with a minor in Chemistry from Oberlin College, a project management certification from Stanford University, and an MS in Measurement and Evaluation from American University.