Development of a novel global rating scale for objective structured assessment of technical skills in an emergency medical simulation training

2021 · Vol 21 (1)
Author(s):  
Andreas Zoller ◽  
Tobias Hölle ◽  
Martin Wepler ◽  
Peter Radermacher ◽  
Benedikt L. Nussbaum

Abstract
Background: Medical simulation training leads to an improvement in patient care by increasing technical and non-technical skills, procedural confidence and medical knowledge. For structured simulation-based training, objective assessment tools are needed to evaluate performance during simulation and the learning progress. In surgical education, objective structured assessment of technical skills (OSATS) is widely used and validated. However, in emergency medicine and anesthesia there is a lack of validated assessment tools for technical skills. Thus, the aim of the present study was to develop and validate a novel Global Rating Scale (GRS) for emergency medical simulation training.
Methods: Following the development of the GRS, 12 teams with different levels of experience in emergency medicine (4th-year medical students, paramedics, emergency physicians) took part in a pre-hospital emergency medicine simulation scenario and were assessed by four independent raters. Subsequently, interrater reliability and construct validity of the GRS were analyzed. Moreover, the results of the GRS were cross-checked against a task-specific checklist. Data are presented as median (minimum; maximum).
Results: The GRS consists of ten items, each scored on a 5-point Likert scale, yielding a maximum of 50 points. The median score achieved by novice teams was 22.75 points (17; 30), while expert teams scored 39.00 points (32; 47). The GRS overall scores significantly discriminated between student-guided teams and expert teams of emergency physicians (p = 0.005). Interrater reliability for the GRS was high, with a Kendall's coefficient of concordance W ranging from 0.64 to 0.90 in 9 of 10 items and 0.88 for the overall score.
Conclusion: The GRS represents a promising novel tool to objectively assess technical skills in simulation training, with high construct validity and interrater reliability in this pilot study.
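The interrater-reliability figures above are Kendall's coefficient of concordance W across the four raters. As a minimal illustration (not the authors' code), the following Python sketch computes W from a hypothetical matrix of rater-by-team scores using the standard rank-based formula; all numbers are invented placeholders.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(scores: np.ndarray) -> float:
    """Kendall's coefficient of concordance W.

    scores: (m raters x n subjects) matrix of ratings.
    Ties get average ranks; no tie correction is applied here.
    """
    m, n = scores.shape
    ranks = np.apply_along_axis(rankdata, 1, scores)    # rank each rater's row
    rank_sums = ranks.sum(axis=0)                        # column rank sums R_j
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()      # deviation of rank sums
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical example: 4 raters scoring 6 teams on one GRS item (1-5 Likert).
demo = np.array([
    [2, 3, 3, 4, 5, 4],
    [1, 3, 2, 4, 5, 5],
    [2, 2, 3, 5, 4, 4],
    [1, 3, 3, 4, 5, 4],
], dtype=float)
print(f"Kendall's W = {kendalls_w(demo):.2f}")  # 0 = no agreement, 1 = perfect agreement
```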

Author(s):  
M Stavrakas ◽  
G Menexes ◽  
S Triaridis ◽  
P Bamidis ◽  
J Constantinidis ◽  
...  

Abstract
Objective: This study developed an assessment tool, based on objective structured assessment of technical skills principles, to be used for the evaluation of surgical skills in cortical mastoidectomy. The objective structured assessment of technical skill is a well-established tool for the evaluation of surgical ability. The study also aimed to identify the best material and printing method for making a three-dimensional printed temporal bone model.
Methods: Twenty-four otolaryngologists in training were asked to perform a cortical mastoidectomy on a three-dimensional printed temporal bone (selective laser sintering resin). They were scored according to the objective structured assessment of technical skill in temporal bone dissection tool developed in this study and an already validated global rating scale.
Results: Two external assessors scored the candidates, and the objective structured assessment of technical skill in temporal bone dissection tool demonstrated the main aspects of validity and reliability and can be used in training and performance evaluation of technical skills in mastoid surgery.
Conclusion: Apart from validating the new tool for temporal bone dissection training, the study showed that evolving three-dimensional printing technology is of high value in simulation training, with several advantages over traditional teaching methods.
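The abstract mentions two external assessors scoring each candidate. A common way to quantify agreement between paired ordinal ratings of this kind is a weighted Cohen's kappa; the sketch below is purely illustrative (the paper does not state which agreement statistic was used) and runs on invented item scores.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical item scores (1-5) given by two assessors to the same 10 candidates.
assessor_a = [3, 4, 2, 5, 4, 3, 4, 2, 5, 3]
assessor_b = [3, 4, 3, 5, 4, 2, 4, 2, 4, 3]

# Quadratic weights penalise large disagreements more than near-misses,
# which suits ordinal OSATS-style scales.
kappa = cohen_kappa_score(assessor_a, assessor_b, weights="quadratic")
print(f"Weighted kappa = {kappa:.2f}")
```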


2020 · Vol 33 (11) · pp. 742
Author(s):  
Joana Fernandes Ribeiro ◽  
Manuel Rosete ◽  
Andreia Teixeira ◽  
Hugo Conceição ◽  
Lèlita Santos

Introduction: Technical skills training is fundamental for clinical practice, although it is poorly emphasised in undergraduate medical curricula. In these circumstances, Peer Assisted Learning has emerged as a valid alternative to overcome this insufficiency. The purpose of this study is to evaluate the impact on students of a Peer Assisted Learning program in basic surgical skills, regarding technical competences and knowledge improvement.
Material and Methods: A total of 104 randomly selected third-year medical students participated in a workshop delivered by fifth-year students. Of that total, 34 students were assessed before and after the workshop using the Objective Structured Assessment of Technical Skills instrument, which consists of a global rating scale and a procedure-specific checklist. Sixth-year students (control group) were also assessed on their performance without participating in the workshop. Before-workshop versus after-workshop Objective Structured Assessment of Technical Skills results were compared using Wilcoxon and McNemar tests. After-workshop versus control-group results were compared using the Mann-Whitney, chi-squared and Fisher's exact tests.
Results: For the global rating scale, students obtained an after-workshop score (29.5) that was significantly higher than the before-workshop score (15.5; p-value < 0.001), but no significant differences were found between the after-workshop and control-group scores (p-value = 0.167). For the procedure-specific checklist, third-year students showed a substantial positive evolution in all parameters and obtained higher rates of correct achievements than the control group.
Discussion: The final outcomes demonstrated a significant qualitative and quantitative improvement in knowledge and technical skills, which is in accordance with the literature.
Conclusion: This Peer Assisted Learning program showed promising results concerning the improvement of surgical skills in medical students, with little contribution required from faculty staff, and can be extended to a much broader number of students.
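For readers unfamiliar with the tests named in the Methods, the following Python sketch shows how such paired (before/after) and independent (after vs. control) comparisons are typically run with scipy and statsmodels; all scores and checklist counts are invented placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, fisher_exact, chi2_contingency
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical paired global-rating-scale scores for the same students.
before = np.array([15, 14, 17, 16, 15, 18, 13, 16, 15, 17])
after  = np.array([28, 30, 27, 31, 29, 30, 26, 32, 28, 30])
print("Wilcoxon (before vs after):", wilcoxon(before, after))

# Hypothetical paired yes/no checklist item: rows = before, columns = after.
#                  after-correct  after-incorrect
paired = np.array([[4,             1],    # before-correct
                   [20,            9]])   # before-incorrect
print("McNemar:", mcnemar(paired, exact=True))

# Hypothetical after-workshop vs control-group comparison (independent groups).
control = np.array([29, 31, 28, 30, 27, 32, 29, 30])
print("Mann-Whitney U:", mannwhitneyu(after, control))

# 2x2 checklist-item contingency table: correct/incorrect counts by group.
counts = np.array([[30, 4],    # after-workshop
                   [ 7, 1]])   # control
print("Chi-squared:", chi2_contingency(counts))
print("Fisher exact:", fisher_exact(counts))
```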


2017 · Vol 158 (1) · pp. 54-61
Author(s):  
Érika Mercier ◽  
Ségolène Chagnon-Monarque ◽  
François Lavigne ◽  
Tareck Ayad

Objectives: The primary goal was to index the validated methods used to assess surgical competency in otorhinolaryngology–head and neck surgery (ORL-HNS) residents. Secondary goals included assessing the reliability and validity of these tools and documenting the specific ORL-HNS procedures involved.
Data Sources: MEDBASE, OVID, Medline, CINAHL, and EBM, as well as the printed references available through the Université de Montréal library.
Review Methods: The PRISMA method was used to review digital and printed databases. Publications were reviewed by 2 independent reviewers, and selected articles were fully analyzed to classify evaluation methods and categorize them by the ORL-HNS procedure and subspecialty involved. Reliability and validity were assessed and scored for each assessment tool.
Results: Through the review of 30 studies, 5 evaluation methods were described and validated to assess the surgical competency of ORL-HNS residents. The evaluation method most often described was the combined Global Rating Scale and Task-Specific Checklist tool. Reliability and validity for this tool were overall high; however, considerable data were unavailable. Eleven distinct surgical procedures were studied, encompassing many subspecialties of ORL-HNS: facial plastics, general ear-nose-throat, laryngology, otology, pediatrics, and rhinology.
Conclusions: Although assessment tools have been developed for an array of surgical procedures involving most ORL-HNS subspecialties, the use of combined checklists has been repeatedly validated in the literature and shown to be easily applicable in practice. It has been applied to many ORL-HNS procedures but not, to date, in oncologic surgery.
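Screening by two independent reviewers, as described in the Review Methods, is often accompanied by an agreement statistic such as Cohen's kappa on include/exclude decisions. The sketch below is a generic illustration of that step only (the review does not report kappa), using invented screening decisions.

```python
import numpy as np

# Hypothetical include/exclude decisions by two independent reviewers
# on 20 screened records (1 = include, 0 = exclude).
rev1 = np.array([1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0])
rev2 = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0])

p_observed = np.mean(rev1 == rev2)                    # raw agreement
p1, p2 = rev1.mean(), rev2.mean()                     # each reviewer's include rate
p_expected = p1 * p2 + (1 - p1) * (1 - p2)            # agreement expected by chance
kappa = (p_observed - p_expected) / (1 - p_expected)  # Cohen's kappa
print(f"agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```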


2021
Author(s):  
Jeremie Traoré ◽  
Frédéric Balen ◽  
Thomas Geeraerts ◽  
Sandrine Charpentier ◽  
Xavier Dubucs ◽  
...  

Abstract
Background: During simulation training, the confederate is a member of the pedagogical team whose role is to facilitate the interaction between participants and the environment, and who is thought to increase realism and immersion. The confederate's influence on participants' performance in full-scale simulation remains, however, unknown. The purpose of this study was to explore the effect of the presence of a confederate on participants' performance during full-scale simulation of medical crisis situations.
Methods: This was a prospective, randomized study comparing 2 parallel groups. Participants were emergency medicine residents engaging in a simulation session, with or without confederates. Participants were then evaluated on their Crisis Resource Management (CRM) performance. The overall performance score on the Ottawa Global Rating Scale was assessed as the primary outcome and the 5 non-technical CRM skills as secondary outcomes.
Results: A total of 63 simulation sessions, including 63 residents, were included in the statistical analysis (n = 32 in the Control group and n = 31 in the Confederate group). The mean overall performance score was 3.9 ± 0.8 in the Control group and 4.0 ± 1.1 in the Confederate group (95% confidence interval of the difference [-0.6; 0.4], p = 0.60). No significant differences between the two groups were observed for any of the CRM items (leadership, situational awareness, communication, problem solving, resource utilization).
Conclusion: In this randomized controlled study, the presence of confederates during full-scale simulated practice of medical crisis situations did not appear to improve the CRM skills performance of emergency medicine residents.
Trial registration: This study did not need to be registered on Clintrial, as it does not report a health care intervention on human participants.
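The Results report group means together with a 95% confidence interval for their difference. A minimal way to reproduce that kind of summary is a Welch-type interval, sketched below on invented scores (not the study's data, and not necessarily the authors' exact method).

```python
import numpy as np
from scipy import stats

# Hypothetical Ottawa GRS overall scores for two independent groups.
control     = np.array([3.5, 4.0, 3.0, 4.5, 4.0, 3.5, 4.0, 4.5, 3.0, 4.0])
confederate = np.array([4.0, 4.5, 3.0, 5.0, 4.5, 3.5, 4.0, 5.0, 2.5, 4.0])

diff = confederate.mean() - control.mean()
n1, n2 = len(confederate), len(control)
v1, v2 = confederate.var(ddof=1), control.var(ddof=1)
se = np.sqrt(v1 / n1 + v2 / n2)

# Welch-Satterthwaite degrees of freedom for unequal variances.
df = (v1 / n1 + v2 / n2) ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
t_crit = stats.t.ppf(0.975, df)
p = stats.ttest_ind(confederate, control, equal_var=False).pvalue

print(f"mean difference = {diff:.2f}, "
      f"95% CI [{diff - t_crit * se:.2f}; {diff + t_crit * se:.2f}], p = {p:.2f}")
```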


2015 · Vol 9 (1-2) · pp. 32
Author(s):  
Laura Nguyen ◽  
Kim Tardioli ◽  
Matthew Roberts ◽  
James Watterson

Introduction: As residency training requirements increasingly emphasize a competency-based approach, novel tools to directly evaluate Canadian Medical Education Directives for Specialists (CanMEDS) competencies must be developed. Incorporating simulation allows residents to demonstrate knowledge and skills in a safe, standardized environment. We describe a novel hybrid simulation station for use in a urology resident in-training Objective Structured Clinical Exam (OSCE) to assess multiple CanMEDS competencies.
Methods: An OSCE station was developed to assess the Communicator, Health Advocate, Manager, and Medical Expert (including technical skills) CanMEDS roles. Residents interviewed a standardized patient, interacted with a nurse, performed flexible cystoscopy, and attempted stent removal using a novel bladder/stent model. Communication was assessed using the Calgary-Cambridge Observational Guide, knowledge was assessed using a checklist, and technical skills were assessed using a previously validated global rating scale. Video debriefing allowed residents to review their performance. Face and discriminative validity were assessed, and feasibility was determined through qualitative post-examination interviews and cost analysis.
Results: All 9 residents (postgraduate years [PGY] 3, 4, 5) completed the OSCE within 15 minutes. Communicator and knowledge scores were similar across all PGYs. Technical skills scores were higher in PGY-5 than in PGY-3/4 residents (mean score 79% vs. 73%). Residents and exam personnel felt the OSCE station allowed for realistic demonstration of competencies. Equipment cost was $218 for the exam station.
Conclusions: We developed and implemented a hybrid simulation-based OSCE station to assess multiple CanMEDS roles. This approach was feasible and cost-effective; it also provides a framework for future development of similar OSCE stations to assess resident competencies across multiple domains.
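To make the multi-domain scoring concrete, here is a small, hypothetical Python data structure showing how one might record each resident's station results (communication guide, knowledge checklist, technical GRS) and summarize them by PGY level; the field names and numbers are illustrative only, not the study's instrument.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class StationResult:
    resident_id: str
    pgy: int               # postgraduate year
    communication: float   # Calgary-Cambridge-style score, as a percentage
    knowledge: float       # checklist items completed, as a percentage
    technical: float       # global rating scale score, as a percentage

results = [
    StationResult("R1", 3, 78, 80, 70),
    StationResult("R2", 4, 82, 75, 74),
    StationResult("R3", 5, 80, 79, 81),
]

# Summarize technical-skill scores for senior vs. junior residents.
senior = [r.technical for r in results if r.pgy == 5]
junior = [r.technical for r in results if r.pgy < 5]
print(f"PGY-5 mean technical score: {mean(senior):.0f}%")
print(f"PGY-3/4 mean technical score: {mean(junior):.0f}%")
```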


2017 · Vol 32 (1) · pp. 526-535
Author(s):  
May Liu ◽  
Shreya Purohit ◽  
Joshua Mazanetz ◽  
Whitney Allen ◽  
Usha S. Kreaden ◽  
...  

2010 · Vol 1 · pp. 37-41
Author(s):  
Sarah E. Peyre ◽  
Heather MacDonald ◽  
Laila Al-Marayati ◽  
Claire Templeman ◽  
Laila I. Muderspach

2014 · Vol 2014 · pp. 1-5
Author(s):  
Mari H. Roberts ◽  
Elizabeth K. Boucher ◽  
Michael Lim ◽  
Antony R. Wilkes ◽  
Iljaz Hodzovic

Background. In this pilot study, we evaluated tip collisions against three commonly used measures of fibreoptic scope handling skills.
Methods. Seventy-seven anaesthetists were recruited to perform a standardized task on an Oxford Box and a modified AirSim manikin. Collision count was correlated against time to scope placement, a global rating scale score, and up-to-date fibreoptic experience.
Results. Strong and moderate correlations were found between collision count and task completion time for the Oxford Box (ρ = 0.730, P < 0.0001) and AirSim manikin (ρ = 0.405, P < 0.0001), respectively. Moderate correlations were found between collision count and global rating scale score for the Oxford Box (ρ = −0.545, P < 0.0001) and AirSim manikin (ρ = −0.500, P < 0.0001). Mild and moderate correlations were found between collision count and fibreoptic experience for the Oxford Box (ρ = −0.240, P = 0.041) and AirSim manikin (ρ = −0.423, P < 0.0001), respectively.
Conclusions. These findings suggest that collision count may be used as a measure of fibreoptic scope handling skills in simulation training. Using this outcome in addition to other measures of performance may improve the accuracy and precision of fibreoptic scope placement.
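The ρ values above are rank correlations. As a minimal illustration (with invented data, not the study's measurements), Spearman's ρ between collision count and task completion time can be computed as follows.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-participant data: number of tip collisions and time (s)
# to place the fibreoptic scope on a bench model.
collisions = np.array([2, 5, 1, 8, 4, 0, 6, 3, 7, 2])
time_s     = np.array([35, 62, 30, 90, 55, 28, 70, 45, 85, 40])

rho, p = spearmanr(collisions, time_s)
print(f"Spearman rho = {rho:.3f}, P = {p:.4f}")  # positive: more collisions, slower placement
```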


Surgery Today · 2012 · Vol 43 (3) · pp. 271-275
Author(s):  
Hiroaki Niitsu ◽  
Naoki Hirabayashi ◽  
Masanori Yoshimitsu ◽  
Takeshi Mimura ◽  
Junya Taomoto ◽  
...  
