Artificial Intelligence (AI) Policies and Responsible Use of AI in Evaluation, January 2025

Instructors: Linda Raftree and Zach Tilton

Organizations are increasingly being asked to use artificial intelligence (AI) to enhance their evaluation practices. To guide staff in leveraging these new technologies, organizations must develop AI policies that clarify what is and is not allowable and what ethical considerations must be addressed to ensure responsible use. In this course, participants will learn about the principles of AI-enabled evaluation and how to develop an AI policy to guide their work. They will review sample policies, consider what should and should not be included in a policy, and learn about best practices for launching an AI policy and overseeing compliance.

By the end of the course, participants will be well equipped to develop or enhance their organizations’ AI policies, helping ensure that those policies are ethical, effective, and aligned with organizational goals, and that they ultimately drive improvements in program outcomes and operational efficiency.

This program is delivered in two virtual, instructor-led modules. Read about each module below. Classes will take place online via Zoom. Each participant who completes both modules will receive a certificate of completion at the end of the course.

Please contact us with any questions at learningcenter@encompassworld.com.

Module 1: The Case for AI-enabled Evaluation and Responsible Use
January 13, 9 a.m.–12 p.m. EST

This module sets the stage for understanding the role AI can play in evaluation and the need for organizations to develop AI policies that encourage responsible use. The module starts with a foundational look at what AI-enabled evaluation entails. Participants will assess their current familiarity with the application of AI in evaluation and gain an overview of the state of the field. The module covers essential topics such as the ethical considerations of using AI in evaluation, principles of critical AI literacy, the competencies needed by AI-literate evaluators, and the elements of AI-enabled evaluation functions. It concludes with an interactive Q&A session to discuss specific organizational needs and AI’s potential applications in monitoring, evaluation, research, and learning (MERL).

Module 2: Getting Started with an AI Policy
January 14, 9 a.m.–12 p.m. EST

The session begins with an introduction to organizational AI-enabled evaluability, including how to use capacity assessments to identify organizational pain points and how to explore enablers of and barriers to AI adoption. Participants will learn the steps involved in developing an AI-enabled evaluation policy using an asset-based, principles-focused approach, and how to manage AI-enabled evaluation tasks with a comprehensive checklist. Sample policies and small-group work will help participants think through the design and development of AI policies that support the full evaluation life cycle, including inception, data collection, data analysis, and reporting. The instructors will touch on best practices for launching a policy and overseeing compliance, and conclude with a segment on the role of meta-evaluation in supporting quality assurance.

We hope to see you online soon! Please contact us with any questions at learningcenter@encompassworld.com.
