Reliability of Simulation-based Assessment for Practicing Physicians: Performance is Context-Specific

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Elizabeth Sinz ◽  
Arna Banerjee ◽  
Randolph Steadman ◽  
Matthew S. Shotwell ◽  
Jason Slagle ◽  
...  

Abstract Introduction: Even physicians who routinely work in complex, dynamic practices may be unprepared to optimally manage challenging critical events. High-fidelity simulation can realistically mimic critical, clinically relevant events; however, the reliability and validity of simulation-based assessment scores for practicing physicians have not been established. Methods: Standardised complex simulation scenarios were developed and administered to board-certified, practicing anesthesiologists who volunteered to participate in an assessment study during formative maintenance of certification activities. A subset of the study population agreed to participate as the primary responder in a second scenario for this study. The physicians were assessed independently by trained raters on both teamwork/behavioural and technical performance measures. Generalisability and Decision study analyses were completed for the two scenarios with two raters. Results: The behavioural score was not more reliable than the technical score. With two raters, more than 20 scenarios would be required to achieve a reliability estimate of 0.7; increasing the number of raters for a given scenario would have little effect on reliability. Conclusions: The performance of practicing physicians on simulated critical events may be highly context-specific. Realistic simulation-based assessment for practicing physicians is resource-intensive and may be best suited for individualized formative feedback. More importantly, aggregate data from a population of participants may have an even greater impact if used to identify skill or knowledge gaps to be addressed by training programs and to inform continuing education improvements across the profession.
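As an illustration of the Decision-study logic behind the "more than 20 scenarios" estimate, the sketch below projects a generalizability coefficient for different numbers of scenarios and raters in a physician x scenario x rater design. The variance components are hypothetical placeholders, not estimates from this study; they are chosen only to echo the qualitative pattern reported (scenario-to-scenario variation dominating rater variation).

```python
# Illustrative Decision-study projection for a p x s x r (physician x scenario x rater) design.
# Variance components are hypothetical placeholders, NOT values reported in the study.
var_p   = 0.10   # true physician (person) variance
var_ps  = 0.50   # physician-by-scenario interaction (context specificity)
var_pr  = 0.02   # physician-by-rater interaction
var_err = 0.38   # residual / confounded three-way error

def g_coefficient(n_scenarios: int, n_raters: int) -> float:
    """Projected generalizability coefficient for relative decisions:
    rho^2 = var_p / (var_p + var_ps/ns + var_pr/nr + var_err/(ns * nr))."""
    error = (var_ps / n_scenarios
             + var_pr / n_raters
             + var_err / (n_scenarios * n_raters))
    return var_p / (var_p + error)

if __name__ == "__main__":
    # Adding scenarios raises projected reliability far faster than adding raters
    # when the physician-by-scenario component dominates.
    for ns in (2, 5, 10, 20, 30):
        print(f"{ns:>2} scenarios, 2 raters: rho^2 = {g_coefficient(ns, 2):.2f}")
    for nr in (2, 4, 8):
        print(f" 2 scenarios, {nr} raters: rho^2 = {g_coefficient(2, nr):.2f}")
```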


2020 ◽  
Author(s):  
Elizabeth Sinz ◽  
Arna Banerjee ◽  
Randolph Steadman ◽  
Matthew S Shotwell ◽  
Jason Slagle ◽  
...  

Abstract Introduction: Even physicians who routinely work in complex, dynamic practices may be unprepared to optimally manage challenging critical events. High-fidelity simulation can realistically mimic critical, clinically relevant events; however, the reliability and validity of simulation-based assessment scores for practicing physicians have not been established. Methods: Standardized complex simulation scenarios were developed and administered to board-certified, practicing anesthesiologists who volunteered to participate in an assessment study during formative maintenance of certification activities. A subset of the study population agreed to participate as the primary responder in a second scenario for this study. The physicians were assessed independently by trained raters on both teamwork/behavioral and technical performance measures. Generalizability and Decision study analyses were completed for the two scenarios with two raters. Results: The technical score was not more reliable than the behavioral score. With two raters, more than 20 scenarios would be required to achieve a reliability estimate of 0.7; increasing the number of raters would have little effect on reliability. Discussion: The performance of practicing physicians on simulated critical events may be highly context-specific. Realistic simulation-based assessment for practicing physicians is resource-intensive and may be best suited for individualized formative feedback. Moreover, aggregate data from a population of participants may yield an even greater impact if used to identify skill or knowledge gaps to be addressed by training programs and continuing education improvements across the profession.


2003 ◽  
Vol 99 (6) ◽  
pp. 1270-1280 ◽  
Author(s):  
John R. Boulet ◽  
David Murray ◽  
Joe Kras ◽  
Julie Woodhouse ◽  
John McAllister ◽  
...  

Background Medical students and residents are expected to be able to manage a variety of critical events after training, but many of these individuals have limited clinical experience in the diagnosis and treatment of these conditions. Life-sized mannequins that model critical events can be used to evaluate the skills required to manage and treat acute medical conditions. The purpose of this study was to develop and test simulation exercises and associated scoring methods that could be used to evaluate the acute care skills of final-year medical students and first-year residents. Methods The authors developed and tested 10 simulated acute care situations that clinical faculty at a major medical school expect graduating physicians to be able to recognize and treat at the conclusion of training. Forty medical students and residents participated in the evaluation of the exercises. Four faculty members scored the students/residents. Results The reliability of the simulation scores was moderate and was most strongly influenced by the choice and number of simulated encounters. The validity of the simulation scores was supported through comparisons of students'/residents' performances in relation to their clinical backgrounds and experience. Conclusion Acute care skills can be validly and reliably measured using simulation technology. However, multiple simulated encounters, covering a broad domain, are needed to effectively and accurately estimate student/resident abilities in acute care settings.
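One way to see why multiple encounters are needed is the Spearman–Brown prophecy formula, which projects how composite-score reliability grows as encounters are added. The single-encounter reliability below is an invented figure for illustration, not one reported by the authors.

```python
# Spearman-Brown prophecy: projected reliability when an assessment is lengthened
# from 1 simulated encounter to k encounters. The single-encounter reliability is
# hypothetical, not a value from the study.
def spearman_brown(single_encounter_reliability: float, k: int) -> float:
    r = single_encounter_reliability
    return (k * r) / (1 + (k - 1) * r)

if __name__ == "__main__":
    r1 = 0.15  # assumed reliability of a score from one simulated encounter
    for k in (1, 5, 10, 15, 20):
        print(f"{k:>2} encounters: projected reliability = {spearman_brown(r1, k):.2f}")
```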


2014 ◽  
Vol 120 (1) ◽  
pp. 129-141 ◽  
Author(s):  
Richard H. Blum ◽  
John R. Boulet ◽  
Jeffrey B. Cooper ◽  
Sharon L. Muret-Wagstaff

Abstract Background: Valid methods are needed to identify anesthesia resident performance gaps early in training. However, many assessment tools in medicine have not been properly validated. The authors designed and tested the use of a behaviorally anchored scale, as part of a multiscenario simulation-based assessment system, to identify high- and low-performing residents with regard to the domains of greatest concern to expert anesthesiology faculty. Methods: An expert faculty panel used a Delphi process to derive five key behavioral domains of interest: (1) synthesizes information to formulate a clear anesthetic plan; (2) implements a plan based on changing conditions; (3) demonstrates effective interpersonal and communication skills with patients and staff; (4) identifies ways to improve performance; and (5) recognizes own limits. Seven simulation scenarios spanning pre- to postoperative encounters were used to assess the performances of 22 first-year residents and 8 fellows from two institutions. Two of 10 trained faculty raters, blinded to trainee program and training level, scored each performance independently using a behaviorally anchored rating scale. Residents, fellows, facilitators, and raters completed surveys. Results: Evidence supporting the reliability and validity of the assessment scores was procured, including a high generalizability coefficient (ρ² = 0.81) and expected performance differences between first-year resident and fellow participants. A majority of trainees, facilitators, and raters judged the assessment to be useful, realistic, and representative of critical skills required for safe practice. Conclusion: The study provides initial evidence to support the validity of a simulation-based performance assessment system for identifying critical gaps in safe anesthesia resident performance early in training.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ji Hye Yu ◽  
Mi Jin Lee ◽  
Soon Sun Kim ◽  
Min Jae Yang ◽  
Hyo Jung Cho ◽  
...  

Abstract Background High-fidelity simulators are highly useful in assessing clinical competency; they enable reliable and valid evaluation. Recently, the importance of peer assessment has been highlighted in healthcare education, and studies in healthcare fields such as medicine, nursing, dentistry, and pharmacy have examined the value of peer assessment. This study aimed to analyze inter-rater reliability between peers and instructors and to examine differences in scores between peers and instructors in the assessment of high-fidelity-simulation-based clinical performance by medical students. Methods This study analyzed the results of two clinical performance assessments of 34 groups of fifth-year students at Ajou University School of Medicine in 2020. A modified Queen's Simulation Assessment Tool was used to measure four categories: primary assessment, diagnostic actions, therapeutic actions, and communication. To estimate inter-rater reliability, the intraclass correlation coefficient was calculated and the Bland–Altman method was used to analyze agreement between raters. Differences in assessment scores between peers and instructors were analyzed using the independent t-test. Results Overall inter-rater reliability of the clinical performance assessments was high. In addition, there were no significant differences between peer and instructor scores in the areas of primary assessment, diagnostic actions, therapeutic actions, and communication. Conclusions The results indicate that peer assessment can serve as a reliable assessment method, comparable to instructor assessment, when evaluating clinical competency using high-fidelity simulators. Efforts should be made to enable medical students to actively participate as fellow assessors in high-fidelity-simulation-based assessment of clinical performance in situations similar to real clinical settings.
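As a rough sketch of the two agreement analyses named here, the snippet below computes an intraclass correlation coefficient (using the pingouin package) and the Bland–Altman bias with 95% limits of agreement for one peer rater and one instructor rater. The scores are invented, and the ICC model is left to pingouin's standard output, since the abstract does not specify one.

```python
# Illustrative inter-rater agreement analysis: ICC plus Bland-Altman limits of agreement.
# Scores are invented example data, not results from the study.
import numpy as np
import pandas as pd
import pingouin as pg  # provides intraclass_corr

peer       = np.array([14, 16, 12, 18, 15, 13, 17, 16])  # peer ratings per student group
instructor = np.array([15, 15, 13, 18, 14, 14, 17, 15])  # instructor ratings per group

# Long-format table required by pingouin.intraclass_corr
df = pd.DataFrame({
    "group": list(range(len(peer))) * 2,
    "rater": ["peer"] * len(peer) + ["instructor"] * len(instructor),
    "score": np.concatenate([peer, instructor]),
})
icc = pg.intraclass_corr(data=df, targets="group", raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

# Bland-Altman agreement: mean difference (bias) and 95% limits of agreement
diff = peer - instructor
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"bias = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
```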


2018 ◽  
Vol 20 (1) ◽  
Author(s):  
Viola Janse van Vuuren ◽  
Eunice Seekoe ◽  
Daniel Ter Goon

Although nurse educators are aware of the advantages of simulation-based training, some still feel uncomfortable using the technology or lack the motivation to learn how to use it. The aging population of nurse educators adds frustration and anxiety, as they struggle with how to incorporate these tools, particularly in the light of faculty shortages. Nursing education programmes are increasingly adopting simulation in both undergraduate and graduate curricula. The aim of this study was to determine the perceptions of nurse educators regarding the use of high-fidelity simulation (HFS) in nursing education at a South African private nursing college. A national survey of nurse educators and clinical training specialists was distributed to 118 participants; however, only 79 completed the survey. The findings indicate that participants were at a similar level of technology readiness, but that technology readiness did not play a significant role in the use of HFS. These findings support the educators' need for training to adequately prepare them to use simulation equipment. Further research is needed to determine what other factors play a role in the use of HFS, and whether the benefits of HFS over other teaching strategies warrant the time and financial commitment. The findings of this study can be used as guidelines for other institutions to prepare their teaching staff in the use of HFS.


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Sally Byford ◽  
Sarah Janssens ◽  
Rachel Cook

Abstract Background Transvaginal ultrasound (TVUS) training opportunities are limited owing to the intimate nature of the examination; however, TVUS is an important component of early pregnancy assessment. Simulation can bridge this learning gap. Aim To describe and measure the effect of a transvaginal ultrasound simulation programme for obstetric registrars. Materials and methods The transvaginal ultrasound simulation training (TRUSST) curriculum consisted of supported practice using virtual-reality transvaginal simulators (ScanTrainer, Medaphor) and communication skills training to assist obstetric registrars in obtaining the competencies required to accurately and holistically care for women with early pregnancy complications. Trainee experience of live transvaginal scanning was evaluated with a questionnaire. Programme evaluation comprised pre-post self-reported confidence levels and objective pre-post training assessment using the Objective Structured Assessment of Ultrasound Skills (OSAUS) and modified Royal Australian and New Zealand College of Obstetricians and Gynaecologists (RANZCOG) assessment scores. Quantitative data were compared using paired t tests. Results Fifteen obstetric registrars completed the programme. The number of live transvaginal ultrasound examinations performed by trainees was low. Participants reported an increase in confidence in performing TVUS following training: mean pre-training score 1.6/5, mean post-training score 3/5. Objective assessments improved significantly on both OSAUS and RANZCOG scores following training: mean improvements of 7.6 points (95% CI 6.2–8.9, p < 0.05) and 32.5 points (95% CI 26.4–38.6, p < 0.05), respectively. Scores for a systematic approach and for documentation improved the most: 1.9 (95% CI 1.4–2.5, p < 0.05) and 2.1 (95% CI 1.5–2.7, p < 0.05), respectively. Conclusion The implementation of a simulation-based training curriculum resulted in improved confidence and ability in TVUS scanning, especially with regard to a systematic approach and documentation.
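The pre-post comparison reported here is a standard paired t test. A minimal sketch with scipy, using invented scores rather than the study's data, shows how such a mean improvement and its 95% confidence interval could be computed.

```python
# Minimal paired pre/post comparison (as in the OSAUS evaluation): mean improvement,
# 95% CI of the mean difference, and paired t test. Scores are invented example data.
import numpy as np
from scipy import stats

pre  = np.array([12, 10, 14, 11, 13, 9, 12, 10, 11, 13, 12, 10, 14, 11, 12], dtype=float)
post = np.array([19, 18, 21, 18, 20, 17, 20, 18, 19, 21, 19, 18, 22, 19, 20], dtype=float)

diff = post - pre
mean_improvement = diff.mean()
sem = stats.sem(diff)
ci = stats.t.interval(0.95, df=len(diff) - 1, loc=mean_improvement, scale=sem)
t_stat, p_value = stats.ttest_rel(post, pre)

print(f"mean improvement = {mean_improvement:.1f} points, 95% CI {ci[0]:.1f} to {ci[1]:.1f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```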


2018 ◽  
Vol 32 (6) ◽  
pp. 727-738
Author(s):  
Cindy Chamberland ◽  
Helen M. Hodgetts ◽  
Chelsea Kramer ◽  
Esther Breton ◽  
Gilles Chiniara ◽  
...  

2021 ◽  
Vol 113 (1) ◽  
pp. 101-110
Author(s):  
Juan I. Cobián ◽  
Federico Ferrero ◽  
Martín P. Alonso ◽  
Alberto M. Fontana

Background: Learning complex surgical tasks requires the coordination and integration of technical and non-technical skills, which have an impact on the performance of work teams. Objective: The aim of this study is to report the results of a simulation-based educational strategy for training in complex surgical skills, considering the participants' perceptions. Material and methods: In 2019, 10 healthcare professionals participated in a 20-hour course divided into 6 hours of online training and 14 hours of onsite training. The strategy integrated case-resolution activities, role-playing, practice with synthetic and virtual simulators, and high-fidelity simulation. At the end of the course, a questionnaire was administered to explore participants' perceptions of what they had learned and of their changes in attitude. Results: Fifty percent of the participants perceived that their skills and knowledge had improved by the end of the course compared with their perception at the beginning, while 80% rated the impact of the course on their professional activity as good or excellent. All the participants agreed on the need to improve non-technical skills. The experience was rated as positive or very positive by all participants, who were eager to repeat it. Conclusion: The participants' perceptions of this educational program demonstrate that the method is highly accepted. The awareness of non-technical skills raised during the reflection stage suggests changes in attitude and in self-perceived efficacy. We believe that simulation-based training offers the possibility of improving the overall performance of the surgical team; future studies should focus on this goal.


Author(s):  
Thomas E. Doyle ◽  
David Musson ◽  
Jon-Michael J Booth

The skill of visualization is fundamental to the teaching and learning of engineering design and graphics. Implicit in any skill is the ability to improve with training and practice. This study examines visualization performance across three teaching modalities of a Freshmen Design and Graphics course: 1) Traditional, 2) Project-based Dissection, and 3) Simulation-based Design. The first and second modalities focused assessment on part/assembly form, whereas the third modality transitioned the outcome expectations toward the understanding and function of mechanism design. A shift of focus from Traditional (form) to Simulation (function) was expected to positively affect visualization performance. Analogously, medical education and practice also require visualization, and high-fidelity simulation has provided numerous positive outcomes for the practice of medicine. Comparison of a random population of 375 students from each year indicated a decline in average visualization scores. Further analysis revealed that the highest-scoring 100 and 250 exam populations showed improvement in average scores with consistent variance. This paper will examine simulation-based learning in medicine and engineering, present our findings on the comparison between teaching modalities, and discuss the reasons for the unexpected bifurcation of results.

