Adapting to Learn and Learning to Adapt: Practical Insights from International Development Projects

Author(s):  
Derick W. Brinkerhoff ◽  
Sarah Frazer ◽  
Lisa McGregor-Mirghani

Adaptive programming and management principles focused on learning, experimentation, and evidence-based decision making are gaining traction with donor agencies and implementing partners in international development. Adaptation calls for using learning to inform adjustments during project implementation. This requires information-gathering methods that promote reflection, learning, and adaptation, beyond reporting on pre-specified data. A focus on adaptation changes traditional thinking about the program cycle: it both erases the boundaries between design, implementation, and evaluation and reframes thinking to consider the complexity of development problems and nonlinear change pathways.

Supportive management structures and processes are crucial for fostering adaptive management. Implementers and donors are experimenting with how procurement, contracting, work planning, and reporting can be modified to foster adaptive programming. Well-designed monitoring, evaluation, and learning systems can go beyond meeting accountability and reporting requirements to produce data and learning for evidence-based decision making and adaptive management. Continued experimentation and learning are needed to integrate adaptive programming and management into the operational policies and practices of donor agencies, country partners, and implementers, and ongoing effort must be devoted to building the evidence base for the contributions of adaptive management to achieving international development results.

2021 ◽  
Author(s):  
Trina Rytwinski ◽  
Steven J Cooke ◽  
Jessica J Taylor ◽  
Dominique Roche ◽  
Paul A Smith ◽  
...  

Evidence-based decision-making often depends on some form of a synthesis of previous findings. There is growing recognition that systematic reviews, which incorporate a critical appraisal of evidence, are the gold standard synthesis method in applied environmental science. Yet, on a daily basis, environmental practitioners and decision-makers are forced to act even if the evidence base to guide them is insufficient. For example, it is not uncommon for a systematic review to conclude that an evidence base is large but of low reliability. There are also instances where the evidence base is sparse (e.g., one or two empirical studies on a particular taxon or intervention), and no additional evidence arises from a systematic review. In some cases, the systematic review highlights considerable variability in the outcomes of primary studies, which in turn generates ambiguity (e.g., outcomes may be context specific). When the environmental evidence base is ambiguous, biased, or lacking new information, practitioners must still make management decisions. Waiting for new, higher-validity research to be conducted is often unrealistic, as many decisions are urgent. Here, we identify the circumstances that can lead to ambiguity, bias, and the absence of additional evidence arising from systematic reviews, and provide practical guidance to resolve or handle these scenarios when encountered. Our perspective highlights that, in evidence synthesis, there may be a need to balance the spirit of evidence-based decision-making with the practical reality that management and conservation decisions and actions are often time-sensitive.


2020 ◽  
Author(s):  
Arielle Marks-Anglin ◽  
Yong Chen

Publication bias is a well-known threat to the validity of meta-analyses and, more broadly, the reproducibility of scientific findings. When policies and recommendations are predicated on an incomplete evidence base, it undermines the goals of evidence-based decision-making. Great strides have been made in the last fifty years to understand and address this problem, including calls for mandatory trial registration and the development of statistical methods to detect and correct for publication bias. We offer a historical account of seminal contributions by the evidence synthesis community, with an emphasis on the parallel development of graph-based and selection model approaches. We also draw attention to current innovations and opportunities for future methodological work.


2013 ◽  
Vol 16 (7) ◽  
pp. A386-A387
Author(s):  
K. Tolley ◽  
A. Miners ◽  
J. Brazier ◽  
L.M. Pericleous ◽  
T. Sharma ◽  
...  

2018 ◽  
Vol 43 (1) ◽  
pp. 65-77 ◽  
Author(s):  
Carina Van Rooyen ◽  
Ruth Stewart ◽  
Thea De Wet

Big international development donors such as the UK’s Department for International Development and USAID have recently started using systematic review as a methodology to assess the effectiveness of various development interventions, to help them decide what is the ‘best’ intervention to spend money on. Such an approach to evidence-based decision-making has long been practiced in the health sector in the US, UK, and elsewhere, but it is relatively new in the development field. In this article we use the case of a systematic review of the impact of microfinance on the poor in sub-Saharan Africa to show how systematic review as a methodology can be used to assess the impact of specific development interventions.


2014 ◽  
Vol 67 (5) ◽  
pp. 790-794 ◽  
Author(s):  
Iván Arribas ◽  
Irene Comeig ◽  
Amparo Urbano ◽  
José Vila

2020 ◽  
pp. 204138662098341
Author(s):  
Marvin Neumann ◽  
A. Susan M. Niessen ◽  
Rob R. Meijer

In personnel and educational selection, a substantial gap exists between research and practice: evidence-based assessment instruments and decision-making procedures are underutilized. We provide an overview of studies that investigated interventions to encourage the use of evidence-based assessment methods, or factors related to their use. The most promising studies were grounded in self-determination theory. Training and autonomy in the design of evidence-based assessment methods were positively related to their use, while negative stakeholder perceptions decreased practitioners’ intentions to use evidence-based assessment methods. Use of evidence-based decision-making procedures was positively related to access to such procedures, information on how to use them, and autonomy over the procedure, but negatively related to receiving outcome feedback. A review of the professional selection literature showed that the implementation of evidence-based assessment was hardly discussed. We conclude with an agenda for future research on encouraging evidence-based assessment practice.


2009 ◽  
Vol 12 ◽  
pp. S12-S17 ◽  
Author(s):  
Gordon G. Liu ◽  
Takashi Fukuda ◽  
Chien Earn Lee ◽  
Vivian Chen ◽  
Qiang Zheng ◽  
...  
