
There Is No “I” in Evaluate: Four Lessons from EnCompass’ Participatory Evaluation Partnership with USAID

Written by: Jonathan Jones

EnCompass is excited to announce its selection as a holder of the USAID EVAL ME II IDIQ. This umbrella contract for monitoring and evaluation services allows us to continue our participatory evaluation partnership with USAID, supporting evidence, engagement, and learning in programs around the world.

In the context of USAID’s Transformation and the country leadership embodied in the Journey to Self-Reliance, partnerships between USAID missions and an organization like EnCompass have incredible potential. This is because we have always believed, since our founding nearly 21 years ago, that our work only succeeds when it is led by those whose lives will change as a result. That is a guiding principle for all of our work, in evaluation and beyond. In the rest of this article, we share several lessons from EnCompass’ work with USAID missions, partners, and participants under the first EVAL ME IDIQ and other monitoring, evaluation, and learning (MEL) activities.

Lesson 1: By starting from strengths, we strengthen evaluation’s contribution to learning

In 2015, USAID awarded the first round of EVAL ME IDIQ contracts to EnCompass and 13 other small business partners. This global, 5-year program was designed to provide high-quality technical and advisory services for evaluation, capacity strengthening, and performance monitoring. Our teams are currently implementing a long-term monitoring contract for the Bureau for Latin America and the Caribbean, as well as providing intensive M&E support for the Office of Transition Initiatives, and they completed three participatory, utilization-focused evaluations under the first IDIQ. This work has leveraged EnCompass’ multisectoral expertise in evaluation, our experience supporting the use of evidence for decision making in complex situations, and our strengths-based mindset that draws on participation from the start.

Block quote: Appreciative Inquiry directs evaluation participants to study success … However, it does not mean an evaluation that is biased toward the positive.

EnCompass teams apply Appreciative Inquiry principles during design, data collection, interpretation of findings, and development of recommendations. This involves an explicit focus on identifying strengths and factors of success and on generating innovation for continuous improvement. Appreciative Inquiry directs evaluation participants to study success and provide their insights about a program or organization through that study. However, it does not mean an evaluation that is biased toward the positive. Indeed, we find that it enables frank discussions about challenges and disappointments more effectively than more traditional data collection approaches.

Lesson 2: By investing in participatory design, we build utilization into evaluation’s DNA

This “lesson” might not be big news, but it bears repeating as evaluators and evaluation commissioners discuss strategies for pivoting in response to COVID-19 and for conducting unbiased evaluations that retain the crucial participatory elements that generate useful results for improved programming.

One of our earlier EVAL ME task orders was an evaluation of two Global Development Lab programs focused on access to digital- and mobile-based programs and services: the Mobile Solutions Technical Assistance and Research program and the Global Broadband and Innovations Alliance. Evaluating two programs at once required full agreement on the scope and on the use of limited resources (people and funds). To reach that agreement, we developed an inception report and used a facilitative process that got our client, stakeholders from both programs, and our team on the same page about the desired outcomes of the evaluation and how its results would be used.

With limited resources, knowing what was and was not possible from the outset helped us define a scope and identify limitations for this study. Doing this up front using participatory processes is one reason the inception phase is so important. With clear parameters documented early on, data collection and analysis can be positioned to generate useful evidence and recommendations. In short, we have seen how communicating, communicating, and then communicating some more enables agreement on evaluation aims and sets the stage for a useful evaluation report—one our client truly applies in adapting programs.

The context for the MEASURE Evaluation IV midterm performance evaluation was also complex. With a global set of stakeholders, multiple country case studies to complete on a timeline tied to planning for the next procurement, and a tight budget, our approach to design was crucial in shaping an effective, useful evaluation that would provide actionable recommendations for program improvements. Even with constrained funds and a short time frame, we decided to invest in a participatory design meeting with USAID/Washington and the program’s implementing partner to understand the range of questions, the results we were looking for, the kinds of information that were available, and the multitude of factors that shape an evaluation timeline. That investment took a bite out of our budget, but it set us up for a more effective evaluation that provided good insights for programming adaptation.

Block quote: That investment took a bite out of our budget, but it set us up for a more effective evaluation that provided good insights for programming adaptation.

Similarly, our team conducting the midterm performance evaluation of USAID’s ASPIRE Activity in Malawi demonstrated the value of investing in participatory design. Because we worked closely with the USAID Mission and our Malawian team members from the start, we were ready to adapt when things changed. For example, when a teachers’ strike coincided with the intended start date for data collection in schools, we all adapted and agreed on new time frames that would work for everyone involved. Our understanding of our client and stakeholders for this evaluation also meant we were able to validate evaluation evidence and conclusions with the client, supporting their buy-in and ensuring their ownership. Perhaps most importantly, we had already talked with stakeholders about what evaluation products they needed to learn from the evaluation. Our concise, data-rich briefing package focused on big-picture implications and acted as a bridge to help our busy target audiences dig into the full evaluation report.

Lesson 3: Evaluation is often local, but good evidence is global

Good evaluators know that their task relies on the full contributions of program participants and other local stakeholders: the people who live with and work on the programs we’re observing and analyzing. Effective evaluation, for EnCompass, means identifying, integrating, and enhancing local strengths. Our teams for the ASPIRE and MEASURE evaluations did this consistently, relying on local data collectors and enumerators, for example, and integrating country stakeholders in validating the findings.

Other EnCompass-supported research and evaluation programs have taken similar approaches, such as the USAID ASSIST activity’s standard practice of pairing an evaluation lead from headquarters with an evaluation lead from a local partner. This pairing shared responsibility for quality, built mutual understanding of the contextual realities of implementation research, and strengthened local capacity in evaluation design and utilization-focused research.

Block quote: Our monitoring and evaluation work might directly support improvements in a single program, but also contributes to a global body of evidence.

Global research programs such as ASSIST and USAID’s Data and Evidence for Education Programs (DEEP) activity, along with country and regional platforms such as the USAID Monitoring, Evaluation, and Learning for Sustainability activity, are also contributing to a base of evidence that supports learning and adaptation at multiple levels: from local programming improvements to country-level USAID strategy, regional evidence synthesis, and global best practice. Our M&E work might directly support improvements in a single program, but it also contributes to a global body of high-quality evidence.

Our DEEP team’s recent synthesis of education data for Sub-Saharan African countries is an example of how important it is to invest in putting good and emerging evidence into the hands of USAID missions and program implementers, as is the Feed the Future AWE Program’s synthesis of gender integration in USAID’s agricultural research investments.

As we have begun supporting more country-level platforms, from gender integration in Lebanon to sustainable MEL in Peru and other countries, EnCompass is excited to contribute our organizational approaches for facilitation and learning to support systems-level improvements for global development.

Lesson 4: There is no “I” in evaluation

EnCompass’ global team lives and breathes participatory MEL. As a mature small business twice the size we were in 2015, we have that much more capacity, a larger team, and proven management strengths to support USAID missions and country partners as we all continue to move away from traditional models of development to more self-reliant approaches.

We are now, all of us, living the reality that has always been a guiding principle for EnCompass’ work—that our shared success relies on emphasizing the partnership that is inherent in the idea of being a USAID implementing partner. When we do that, we lend our expertise as one part of a greater whole in the journey to locally led, self-reliant development.

Multiple members of the EnCompass evaluation team contributed to the content of this article.

Jonathan Jones

Director, MEL

Jonathan Jones is EnCompass' Director of MEL. He has led many complex evaluations and has twelve years of direct fieldwork experience in over 20 developing countries. He is a thematic expert in monitoring and evaluation of democracy assistance programs but has led M&E efforts in several other sectors as well. He has also designed and delivered training on a range of M&E topics for international development funders and partners around the world. Dr. Jones has significant experience in participatory approaches to evaluation and is adept at ensuring that the evaluation process is useful for key audiences. He believes deeply in the foundational role that evaluation plays in ensuring the right evidence is on hand to inform strategic decisions. Dr. Jones taught graduate-level courses on M&E at the George Washington University and Georgetown, and he holds a PhD in Political Science from the University of Florida.
