Conducting After-Action Reviews and Retrospects ◽ 2017 ◽ pp. 823-825 ◽ Olivier Serrat
2011 ◽ Ira Schurig, Steven Jarrett, Winfred Arthur, Ryan M. Glaze, Margaret Schurig

2011 ◽ Richard Topolski, Chris Green, Bruce Leibrecht, Nicole Rossi

Evaluation ◽ 2017 ◽ Vol 23 (3) ◽ pp. 294-311 ◽ Boru Douthwaite, John Mayne, Cynthia McDougall, Rodrigo Paz-Ybarnegaray

There is growing recognition that programs seeking to change people’s lives intervene in complex systems, which places a particular set of requirements on program monitoring and evaluation. Developing complexity-aware program monitoring and evaluation systems within existing organizations is difficult because such systems challenge traditional orthodoxy, and little has been written about the practical experience of doing so. This article describes the development of a complexity-aware evaluation approach in the CGIAR Research Program on Aquatic Agricultural Systems. We outline the design and methods used, including trend lines, panel data, after-action reviews, building and testing theories of change, outcome evidencing, and realist synthesis. We identify and describe a set of design principles for developing complexity-aware program monitoring and evaluation. Finally, we discuss important lessons and recommendations for other programs facing similar challenges, including developing evaluation designs that meet both learning and accountability requirements; making evaluation part of a program’s overall approach to achieving impact; and ensuring that evaluation cumulatively builds useful theory about how different types of program trigger change in different contexts.


Gonzalo J. Muñoz, Diego A. Cortéz, Constanza B. Álvarez, Juan A. Raggio, Antonia Concha, ...

Objective The present study examined the effectiveness of after-action reviews (AARs; also known as debriefing) in mitigating skill decay. Background Research on the long-term effectiveness of AARs is meager. To address this gap in the literature, we conducted an experimental study that also overcomes some research design issues that characterize the limited extant research. Method Eighty-four participants were randomly assigned to an AAR or non-AAR condition and trained to operate a PC-based fire emergency simulator. During the initial acquisition phase, individuals in the AAR condition were allowed to review their performance after each practice session, whereas individuals in the non-AAR condition completed a filler task. About 12 weeks later, participants returned to the lab to complete four additional practice sessions using a similar scenario (i.e., the retention and reacquisition phase). Results The performance of participants in the AAR condition degraded more after nonuse but also recovered faster than the performance of participants in the non-AAR condition, although these effects were fairly small and not statistically significant. Conclusion Consistent with the limited research on the long-term effectiveness of AARs, our findings failed to support their effectiveness as a decay-prevention intervention. Because the present study was conducted in a laboratory setting using a relatively small sample of undergraduate students, additional research is warranted. Application Based on the results of the present study, we suggest some additional strategies that trainers might consider to support long-term skill retention when using AARs.


2016 ◽ Vol 29 (5) ◽ pp. 408-427 ◽ Steven M. Jarrett, Ryan M. Glaze, Ira Schurig, Gonzalo J. Muñoz, Andrew M. Naber, ...

2020 ◽ Vol 16 (1) ◽ Flavia Riccardo, Francesco Bolici, Mario Fafangel, Verica Jovanovic, Maja Socan, ...

2017 ◽ Vol 96 ◽ pp. 84-92 ◽ John Crowe, Joseph A. Allen, Cliff W. Scott, Mackenzie Harms, Michael Yoerger

Nicholas John Milton

A model for evidence-based learning is presented, consisting of a series of steps: observation, insight, learning, action assignment, validation, and change. The model is applied at several scales within the oil sector. First, team learning uses After Action Reviews to learn within project teams; discussions take place within the team, and any changes are to team processes. A more complex form of learning is learning from one project to another, using facilitated lessons-identification meetings: lessons are collected, actions are assigned, and changes are made to organizational processes. Finally, in analyses of major incidents, an investigation team is tasked with collecting observations, insights, learnings, and even recommendations for action. The same workflow and the same development process can be seen for lessons at each scale, but the degree of rigor and the attention to governance vary.
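The six-step workflow above can be sketched as a simple state machine that tracks a single lesson from observation through to change. This is only an illustration of the sequence named in the abstract; the `Lesson` class, its `advance` method, and the example lesson text are hypothetical, not part of the source model.

```python
from dataclasses import dataclass, field

# Step names taken verbatim from the evidence-based learning model above.
STEPS = ["observation", "insight", "learning",
         "action assignment", "validation", "change"]

@dataclass
class Lesson:
    """A single lesson moving through the learning workflow (hypothetical helper)."""
    description: str
    completed: list = field(default_factory=list)

    def advance(self) -> str:
        # Move the lesson to the next uncompleted step and record it.
        next_step = STEPS[len(self.completed)]
        self.completed.append(next_step)
        return next_step

# Walk an example lesson through every step of the workflow.
lesson = Lesson("example lesson (hypothetical)")
while len(lesson.completed) < len(STEPS):
    lesson.advance()

print(lesson.completed[-1])  # the workflow ends at "change"
```

A lesson only counts as embedded once it reaches the final step, which matches the model's emphasis that learning must end in validated change rather than stop at insight.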

