The Challenges of Monitoring and Evaluation in the Workplace

Many organizations that implement community projects or programs are beginning to recognize the need for monitoring and evaluation. Although leadership often agrees on the need for a monitoring system or a way to evaluate the efficacy of a project, in practice, designing and implementing a new monitoring system has its challenges. Four such challenges are stakeholder buy-in, logical frameworks, technology shortfalls, and timelines.

Challenge #1: Stakeholder Buy-In

Stakeholders within any given organization or project may approach monitoring from very different perspectives. The goals of a monitoring system, as well as the desired outcomes or reports, may vary widely depending on whom you ask. This makes it extremely challenging to design a monitoring system that meets the needs of the organization while also providing meaningful information for every stakeholder. Communication and buy-in are key to the successful design and launch of a monitoring system, so bringing stakeholders into the early conversations about it is critical to long-term success.

Challenge #2: Logical Frameworks

Very often an organization has well-developed programming but no logical framework to measure progress. Some organizations may never have considered which indicators to use to measure outputs and outcomes. Depending on the size of the organization, there may be a large number of indicators that could be measured, either broadly or with specificity, which presents a challenge when both financial and personnel resources are limited. Prioritizing the project objectives and designing a monitoring process aligned with those objectives takes dedication and a great deal of communication to ensure that all stakeholders agree on priorities. In some situations, designing a logical framework, settling on the most important indicators to measure and the outcomes that are expected, can take several months. Determining thresholds that delineate compliance from quality also presents its own challenges when the organization has not specifically defined what quality means.
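
To make the distinction between compliance and quality thresholds concrete, here is a minimal sketch in Python of how a fragment of a logical framework might be represented. The indicator names and threshold values are invented for illustration only; every organization would define its own.

# A hypothetical logical-framework fragment: a few illustrative indicators,
# each with a compliance threshold (the minimum acceptable level) and a
# quality threshold (the level the organization aspires to).
INDICATORS = {
    "families_receiving_services": {"compliance": 0.80, "quality": 0.95},
    "staff_training_completion":   {"compliance": 0.70, "quality": 0.90},
    "site_visits_completed":       {"compliance": 0.75, "quality": 1.00},
}

def rate_indicator(name: str, measured: float) -> str:
    """Classify a measured value against the indicator's two thresholds."""
    thresholds = INDICATORS[name]
    if measured >= thresholds["quality"]:
        return "meets quality target"
    if measured >= thresholds["compliance"]:
        return "compliant, below quality target"
    return "below compliance"

if __name__ == "__main__":
    print(rate_indicator("staff_training_completion", 0.82))
    # prints: compliant, below quality target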

Oftentimes, organizations don’t have the research or baseline data to provide a rationale for quality indicators. Stakeholders and researchers may disagree on which measures should be used to determine quality. Without resources to dedicate to research, an organization may choose to bring in experts in the field to help determine indicators and the outcomes that could be expected. Some organizations may instead define those parameters for themselves, although that is rare.

Challenge #3: Technology

Technology plays a large role in data collection and analysis. Most organizations I have worked with lack the financial resources to invest in the technology needed for their monitoring or evaluation systems. In some cases, the system will need to be used by field staff who don’t have internet access at their project locations or who conduct site visits in rural areas without Wi-Fi or cell phone service. While mobile data-collection apps have become more common, organizations need to consider the connectivity required to use them and the training staff will need. In the long term, using technology to track and analyze data means a smaller margin of error; in the meantime, quality assurance systems can be implemented to ensure the validity of data collected and entered manually by field staff. That seems to be the most reasonable option for organizations that don’t have the funding to purchase or custom design their own monitoring IT system.
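
As one illustration of the kind of quality assurance mentioned above, here is a small Python sketch that checks manually entered field records before they are added to a dataset. The field names and rules are assumptions made up for this example, not any organization's actual schema.

# A hedged sketch of a quality-assurance pass over manually entered records.
# Field names, formats, and ranges are hypothetical.
from datetime import datetime

REQUIRED_FIELDS = {"site_id", "visit_date", "children_enrolled"}

def validate_record(record: dict) -> list:
    """Return a list of problems found in one manually entered record."""
    problems = []

    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")

    if "visit_date" in record:
        try:
            datetime.strptime(record["visit_date"], "%Y-%m-%d")
        except ValueError:
            problems.append("visit_date is not in YYYY-MM-DD format")

    enrolled = record.get("children_enrolled")
    if enrolled is not None and (not isinstance(enrolled, int) or enrolled < 0):
        problems.append("children_enrolled must be a non-negative integer")

    return problems

if __name__ == "__main__":
    sample = {"site_id": "A-12", "visit_date": "2023/05/04", "children_enrolled": 18}
    print(validate_record(sample))
    # prints: ['visit_date is not in YYYY-MM-DD format']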

Challenge #4: Timeline

For a large organization with multiple projects, it can take a few years to develop a monitoring system designed to evaluate that specific organization and its efforts. Evaluators need to understand the organization and its goals, get buy-in from stakeholders on the indicators and measures of success, and then develop and propose a monitoring system. As a best practice, pilots should be conducted to ensure the data collected is valid and relevant to the indicators. Ideally, organizations would also collect baseline information before a project is implemented. These design phases can take several months or even years before the system is fully developed. Racing against the clock to design and implement a monitoring system is risky, because steps tend to be cut from the process in order to save time.

About the Author

Sherry Fahmi is a Program Specialist in the Oversight and Accountability Division at the Office of Child Care with the Department of Health and Human Services. Ms. Fahmi previously worked on Head Start monitoring and prior to that was a preschool teacher. She enjoys traveling and trying new local food spots in D.C. She has participated in multiple overseas mission trips, which led to her interest in project monitoring and evaluation. She holds a graduate certificate in Project Monitoring and Evaluation from American University.

To learn more about American University’s online MS in Measurement & Evaluation or Graduate Certificate in Project Monitoring & Evaluation, request more information or call us toll free at 855-725-7614.