Session Submission Summary

Improving Effectiveness and Sustainability by Localizing Evidence: Models for Testing, Scaling, and Institutionalizing What Works

Tue, February 21, 6:30 to 8:00pm EST, Grand Hyatt Washington, Floor: Constitution Level (3B), Roosevelt

Group Submission Type: Formal Panel Session

Proposal

Globally, the body of evidence on effective educational practices continues to grow and evolve. As it grows, it offers an expanding menu of options for policymakers and program designers who aim to make education higher quality and more equitable. However, the effectiveness of an intervention, and how that effectiveness varies by participant characteristics, depends on many context-specific factors. Global evidence tells us that certain approaches are generally effective, but how can their design elements best be applied in a local context to maximize effectiveness and ensure that benefits accrue equitably? For example, we know that coaching is an effective method for changing teachers' behavior, and implementers can begin to develop interventions on that basis. The specifics of those interventions, though, may require testing in the particular context. Who are the coaches? What is the coaching method or approach? What is the appropriate amount of contact time? How will the program be monitored for quality and effectiveness in this particular context? How can it ensure that all students benefit? These questions can only be answered by testing in the specific context and with the end users (teachers, coaches, administrators) for whom the intervention is designed.

Iterative testing is needed to answer critical design and implementation questions – not only about effectiveness and equity, but also about scaling and how to integrate effective programs into existing systems in each context. How will the intervention or program fit within the existing system? Is it affordable to integrate into the system without losing key features? Which modalities, or combinations of modalities, are most cost-effective? Will the program generate impacts equitably as it scales?

Successfully navigating the journey from an idea to meeting scale-up and sustainability goals has been a challenge for many education interventions. More often than not, practitioners scale without the necessary evidence. Frequently, this is because the right evidence does not exist. Even when evidence or data does exist, however, it may fail to inform program design for a few key reasons. First, program teams, seeking to confirm their hypotheses, may look for evidence that supports their point of view and ignore evidence or data that refutes it. Second, program design and implementation teams can get caught in inertia, pushing ahead with program adaptation or development to adhere to timelines rather than pausing to fully digest data on program performance. Finally, evaluation is regularly decoupled from scale. Evaluations are often summative and backward looking: they occur after a project is over and budgets are exhausted, so there is no incentive to make the changes the research recommends, or even to learn more about how to improve the model. The result can be ineffective programming, wasted resources, or a lack of interest or trust in data.

Avoiding these pitfalls requires integrating evidence into the program design and scaling process from the start. New processes and models are needed to support the systematic generation and use of evidence not only to achieve the desired impact, but also to adapt interventions to the local context, scale them to test their effectiveness with a larger population, ensure equitable effectiveness at scale, and integrate programming into existing systems so that both the programming and its outcomes are sustained.

This session will showcase several models or pathways for testing interventions to promote effective evidence-based practice:

-First, Educate!, an NGO that tackles youth unemployment in Africa, will share its Program Design Process, which leverages evidence to inform design at four distinct stages of program development. As a program advances through the stages, the rigor and methodology of evaluation change to meet its learning needs. Programs must produce positive evaluation results to advance to the next stage of scale and to unlock the funding required to achieve it.
-Next, JSI will present its Pilot-Scale-Practice model, which was used to test and sustain interventions aimed at improving supply chain practices and access to medicines for treating common childhood illnesses. Using this model, implementers move interventions through three distinct stages to maximize impact: pilot, laying the foundation for scale and institutionalization; scale, transforming pilot successes into practice at scale using a strategy built on evidence from the pilot; and practice, integrating successful practices into organizational structures so that their benefits can be realized more broadly within the system.
-The USAID Center for Education will then share how its technical experts have adapted JSI’s Pilot-Scale-Practice model to define a process for evidence-based program design and implementation, which is now being used as the basis for internal professional development and technical assistance to USAID missions.
-Finally, USAID/Egypt will present a case study of the growth of Career Development Centers in Egypt's public universities. Since 2012, USAID has played a pivotal role in scaling these centers to more than eleven public institutions.

The discussion portion of the session will examine similarities and differences among the panelists' processes, how each model determines evaluation methodology, barriers to evidence use and scale, and lessons learned across the contexts represented on the panel.
