The field of global development has embraced the idea that programs require agile, adaptive approaches to monitoring, evaluation, and learning. But considerable debate still exists around which methods are most appropriate for adaptive learning. Researchers have a range of proven and novel tools to promote a culture of adaptation and learning. These tools include lean testing, rapid prototyping, formative research, and structured experimentation, all of which can be used to generate responsive feedback (RF) to improve social change programs. With such an extensive toolkit, how should one decide which methods to employ? In our experience, the level of rigor used should be responsive to the team's level of certainty about the program design being investigated: how certain, or confident, are we that a program design will produce its intended results? With less certainty, less rigor is needed; with more certainty, more rigor is needed. In this article, we present a framework for getting rigor right and illustrate its use in 3 case studies. For each example, we describe the feedback methods used and why, how the approach was implemented (including how we conducted cocreation and ensured buy-in), and the results of each engagement. We conclude with lessons learned from these examples and guidance on using the right kind of RF mechanism to improve social change programs.
Synowiec, C., Fletcher, E., Heinkel, L., & Salisbury, T. (2023). Getting rigor right: A framework for methodological choice in adaptive monitoring and evaluation. Global Health: Science and Practice, 11(Suppl 2). https://doi.org/10.9745/GHSP-D-22-00243