A universal global rating scale for the evaluation of technical skills in the operating room

2007 ◽  
Vol 193 (5) ◽  
pp. 551-555 ◽  
Author(s):  
Jeffrey D. Doyle ◽  
Eric M. Webber ◽  
Ravi S. Sidhu


Surgery Today ◽ 
2012 ◽  
Vol 43 (3) ◽  
pp. 271-275 ◽  
Author(s):  
Hiroaki Niitsu ◽  
Naoki Hirabayashi ◽  
Masanori Yoshimitsu ◽  
Takeshi Mimura ◽  
Junya Taomoto ◽  
...  

Author(s):  
M Stavrakas ◽  
G Menexes ◽  
S Triaridis ◽  
P Bamidis ◽  
J Constantinidis ◽  
...  

Abstract Objective This study developed an assessment tool, based on the principles of the objective structured assessment of technical skills, to be used for the evaluation of surgical skills in cortical mastoidectomy. The objective structured assessment of technical skill is a well-established tool for the evaluation of surgical ability. This study also aimed to identify the best material and printing method for making a three-dimensional printed temporal bone model. Methods Twenty-four otolaryngologists in training were asked to perform a cortical mastoidectomy on a three-dimensional printed temporal bone (selective laser sintering resin). They were scored according to the objective structured assessment of technical skill in temporal bone dissection tool developed in this study and an already validated global rating scale. Results Two external assessors scored the candidates, and it was concluded that the objective structured assessment of technical skill in temporal bone dissection tool demonstrated key aspects of validity and reliability and can be used in training and performance evaluation of technical skills in mastoid surgery. Conclusion Apart from validating the new tool for temporal bone dissection training, the study showed that evolving three-dimensional printing technology is of high value in simulation training, with several advantages over traditional teaching methods.
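For readers wanting to reproduce this kind of reliability check, the sketch below shows how interrater agreement between two assessors could be estimated as an intraclass correlation coefficient. It is a minimal illustration on simulated scores, assuming the pingouin library; none of the numbers, column names, or results come from the study.

```python
# Hypothetical sketch: interrater reliability between two external
# assessors scoring 24 trainees on an OSATS-style dissection tool.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
true_skill = rng.normal(30, 5, 24)              # latent skill of 24 trainees
scores = pd.DataFrame({
    "candidate": np.tile(np.arange(24), 2),     # each trainee rated twice
    "rater": np.repeat(["A", "B"], 24),
    "score": np.concatenate([true_skill + rng.normal(0, 2, 24),
                             true_skill + rng.normal(0, 2, 24)]),
})

# Intraclass correlation coefficients (pingouin reports all six variants).
icc = pg.intraclass_corr(data=scores, targets="candidate",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]].round(3))
```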


2020 ◽  
Vol 33 (11) ◽  
pp. 742 ◽ 
Author(s):  
Joana Fernandes Ribeiro ◽  
Manuel Rosete ◽  
Andreia Teixeira ◽  
Hugo Conceição ◽  
Lèlita Santos

Introduction: Technical skills training is fundamental for clinical practice, although it is poorly emphasised in undergraduate medical curricula. In these circumstances, the Peer Assisted Learning methodology has emerged as a valid alternative to overcome this insufficiency. The purpose of this study is to evaluate the impact on students of a Peer Assisted Learning program in basic surgical skills, regarding technical competences and knowledge improvement. Material and Methods: A total of 104 randomly selected third-year medical students participated in a workshop delivered by fifth-year students. Of that total, 34 students were assessed before and after the workshop using the Objective Structured Assessment of Technical Skills instrument, which consists of a global rating scale and a procedure-specific checklist. Sixth-year students (control group) were also assessed on their performance without participating in the workshop. Before-workshop versus after-workshop Objective Structured Assessment of Technical Skills results were compared using the Wilcoxon and McNemar tests. After-workshop versus control group results were compared using the Mann-Whitney, chi-squared and Fisher's exact tests. Results: For the global rating scale, students obtained an after-workshop score (29.5) that was significantly higher than the before-workshop score (15.5; p-value < 0.001), but no significant differences were found between the after-workshop and control group scores (p-value = 0.167). For the procedure-specific checklist, third-year students showed a substantial positive evolution in all parameters and obtained higher rates of correct achievement than the control group. Discussion: The final outcomes demonstrated a significant qualitative and quantitative improvement in knowledge and technical skills, which is in accordance with the literature. Conclusion: This Peer Assisted Learning program revealed promising results concerning the improvement of surgical skills in medical students, with little faculty staff contribution and the potential for extension to a much broader number of students.
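As an illustration of the statistical comparisons described above, the sketch below runs the same family of tests on made-up data with scipy.stats: a Wilcoxon signed-rank test for the paired before/after scores, a Mann-Whitney U test for after-workshop versus control, and Fisher's exact test for a single checklist item. All values are hypothetical and not taken from the paper.

```python
# Hypothetical sketch of the paired and unpaired comparisons
# described in the abstract, using scipy.stats.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
before = rng.integers(10, 22, 34)           # GRS scores before the workshop
after = before + rng.integers(8, 18, 34)    # GRS scores after the workshop
control = rng.integers(24, 35, 30)          # sixth-year control group

# Paired before/after comparison (Wilcoxon signed-rank test).
print(stats.wilcoxon(before, after))

# Unpaired after-workshop vs. control comparison (Mann-Whitney U test).
print(stats.mannwhitneyu(after, control))

# One checklist item as a 2x2 table (correct / incorrect per group),
# compared with Fisher's exact test.
table = [[30, 4],     # after-workshop: correct, incorrect
         [24, 6]]     # control group: correct, incorrect
print(stats.fisher_exact(table))
```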


2015 ◽  
Vol 9 (1-2) ◽  
pp. 32 ◽  
Author(s):  
Laura Nguyen ◽  
Kim Tardioli ◽  
Matthew Roberts ◽  
James Watterson

Introduction: As residency training requirements increasingly emphasize a competency-based approach, novel tools to directly evaluate Canadian Medical Education Directives for Specialists (CanMEDS) competencies must be developed. Incorporating simulation allows residents to demonstrate knowledge and skills in a safe, standardized environment. We describe a novel hybrid simulation station for use in a urology resident in-training Objective Structured Clinical Exam (OSCE) to assess multiple CanMEDS competencies. Methods: An OSCE station was developed to assess the Communicator, Health Advocate, Manager, and Medical Expert (including technical skills) CanMEDS roles. Residents interviewed a standardized patient, interacted with a nurse, performed flexible cystoscopy, and attempted stent removal using a novel bladder/stent model. Communication was assessed using the Calgary-Cambridge Observational Guide, knowledge was assessed using a checklist, and technical skills were assessed using a previously validated global rating scale. Video debriefing allowed residents to review their performance. Face and discriminative validity were assessed, and feasibility was determined through qualitative post-examination interviews and cost analysis. Results: All 9 residents (postgraduate years [PGY] 3, 4, 5) completed the OSCE in 15 minutes. Communicator and knowledge scores were similar across all PGYs. Technical skills scores were higher in PGY-5 than in PGY-3/4 residents (mean score 79% vs. 73%). Residents and exam personnel felt the OSCE station allowed for realistic demonstration of competencies. Equipment cost was $218 for the exam station. Conclusions: We developed and implemented a hybrid simulation-based OSCE station to assess multiple CanMEDS roles. This approach was feasible and cost-effective; it also provided a framework for future development of similar OSCE stations to assess resident competencies across multiple domains.


2017 ◽  
Vol 32 (1) ◽  
pp. 526-535 ◽  
Author(s):  
May Liu ◽  
Shreya Purohit ◽  
Joshua Mazanetz ◽  
Whitney Allen ◽  
Usha S. Kreaden ◽  
...  

2010 ◽  
Vol 1 ◽  
pp. 37-41 ◽  
Author(s):  
Sarah E. Peyre ◽  
Heather MacDonald ◽  
Laila Al-Marayati ◽  
Claire Templeman ◽  
Laila I. Muderspach

2016 ◽  
Vol 2016 ◽  
pp. 1-13 ◽  
Author(s):  
Richard R. McNeer ◽  
Roman Dudaryk ◽  
Nicholas B. Nedeff ◽  
Christopher L. Bennett

Introduction. Medical simulators are used for assessing clinical skills and increasingly for testing hypotheses. We developed and tested an approach for assessing performance in anesthesia residents using screen-based simulation that ensures expert raters remain blinded to subject identity and experimental condition. Methods. Twenty anesthesia residents managed emergencies in an operating room simulator by logging actions through a custom graphical user interface. Two expert raters rated performance based on these entries using custom Global Rating Scale (GRS) and Crisis Management Checklist (CMC) instruments. Interrater reliability was measured by calculating intraclass correlation coefficients (ICC), and internal consistency of the instruments was assessed with Cronbach's alpha. Agreement between GRS and CMC was measured using Spearman rank correlation (SRC). Results. Interrater agreement (GRS: ICC = 0.825, CMC: ICC = 0.878) and internal consistency (GRS: alpha = 0.838, CMC: alpha = 0.886) were good for both instruments. Subscale analysis indicated that several instrument items can be discarded. GRS and CMC scores were highly correlated (SRC = 0.948). Conclusions. In this pilot study, we demonstrated that screen-based simulation can allow blinded assessment of performance. GRS and CMC instruments demonstrated good rater agreement and internal consistency. We plan to further test construct validity of our instruments by measuring performance in our simulator as a function of training level.
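A minimal sketch, on simulated data, of the three statistics the abstract reports: intraclass correlation for interrater reliability, Cronbach's alpha for internal consistency, and Spearman rank correlation for agreement between instruments. It assumes the pingouin and scipy libraries; the data, column names, and results are hypothetical.

```python
# Hypothetical sketch: ICC, Cronbach's alpha, and Spearman correlation.
import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

rng = np.random.default_rng(2)
n = 20                                         # twenty residents
rater1 = rng.normal(35, 6, n)                  # GRS totals from rater 1
rater2 = rater1 + rng.normal(0, 2, n)          # GRS totals from rater 2

# Interrater reliability (intraclass correlation coefficient).
long = pd.DataFrame({
    "subject": np.tile(np.arange(n), 2),
    "rater": np.repeat(["R1", "R2"], n),
    "score": np.concatenate([rater1, rater2]),
})
print(pg.intraclass_corr(data=long, targets="subject",
                         raters="rater", ratings="score"))

# Internal consistency of one instrument (Cronbach's alpha);
# rows = subjects, columns = instrument items driven by a shared latent score.
latent = rng.normal(3, 1, n)
items = pd.DataFrame({f"item{i}": latent + rng.normal(0, 0.5, n)
                      for i in range(8)})
print(pg.cronbach_alpha(data=items))

# Agreement between GRS and CMC totals (Spearman rank correlation).
cmc = rater1 + rng.normal(0, 3, n)
print(stats.spearmanr(rater1, cmc))
```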


2008 ◽  
Vol 109 (6) ◽  
pp. 1007-1013 ◽  
Author(s):  
Deven B. Chandra ◽  
Georges L. Savoldelli ◽  
Hwan S. Joo ◽  
Israel D. Weiss ◽  
Viren N. Naik

Background Previous studies have indicated that fiberoptic orotracheal intubation (FOI) skills can be learned outside the operating room. The purpose of this study was to determine which of two educational interventions allows learners to gain greater capacity for performing the procedure. Methods Respiratory therapists were randomly assigned to a low-fidelity or high-fidelity training model group. The low-fidelity group was guided by experts on a nonanatomic model designed to refine fiberoptic manipulation skills. The high-fidelity group practiced their skills on a computerized virtual reality bronchoscopy simulator. After training, subjects performed two consecutive FOIs on healthy, anesthetized patients with predicted "easy" intubations. Each subject's FOI was evaluated by blinded examiners using a validated global rating scale and checklist. Success and time were also measured. Results Data were analyzed using a two-way mixed-design analysis of variance. There was no significant difference between the low-fidelity (n = 14) and high-fidelity (n = 14) model groups on the global rating scale, checklist, time, or success at achieving tracheal intubation (all P = not significant). Second attempts in both groups were significantly better than first attempts (P < 0.001), and there was no interaction between "fidelity of training model" and "first versus second attempt" scores. Conclusions There was no added benefit from training on a costly virtual reality model with respect to transfer of FOI skills to intraoperative patient care. Second attempts in both groups were significantly better than first attempts. Low-fidelity models for FOI training outside the operating room are an alternative for programs with budgetary constraints.
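The analysis named in the abstract, a two-way mixed-design ANOVA, crosses a between-subjects factor (training-model fidelity) with a within-subjects factor (first versus second attempt). The sketch below shows one way to set it up on simulated data using pingouin's mixed_anova; the scores and effect sizes are invented for illustration.

```python
# Hypothetical sketch: two-way mixed-design ANOVA with a between factor
# (fidelity group) and a within factor (attempt), as in the abstract.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(3)
n_per_group = 14
rows = []
for group in ("low_fidelity", "high_fidelity"):
    for subj in range(n_per_group):
        base = rng.normal(25, 4)               # subject's baseline GRS score
        sid = f"{group}_{subj}"
        rows.append({"subject": sid, "group": group,
                     "attempt": "first", "grs": base})
        rows.append({"subject": sid, "group": group,
                     "attempt": "second", "grs": base + rng.normal(4, 1)})
df = pd.DataFrame(rows)

# Main effects of group and attempt, plus their interaction.
print(pg.mixed_anova(data=df, dv="grs", within="attempt",
                     subject="subject", between="group"))
```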


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas Zoller ◽  
Tobias Hölle ◽  
Martin Wepler ◽  
Peter Radermacher ◽  
Benedikt L. Nussbaum

Abstract Background Medical simulation training leads to an improvement in patient care by increasing technical and non-technical skills, procedural confidence, and medical knowledge. Structured simulation-based training requires objective assessment tools to evaluate performance during simulation and the learning progress. In surgical education, the objective structured assessment of technical skills (OSATS) is widely used and validated. However, in emergency medicine and anesthesia there is a lack of validated assessment tools for technical skills. Thus, the aim of the present study was to develop and validate a novel Global Rating Scale (GRS) for emergency medical simulation training. Methods Following the development of the GRS, 12 teams with different levels of experience in emergency medicine (4th-year medical students, paramedics, emergency physicians) took part in a pre-hospital emergency medicine simulation scenario and were assessed by four independent raters. Subsequently, interrater reliability and construct validity of the GRS were analyzed. Moreover, the results of the GRS were cross-checked against a task-specific checklist. Data are presented as median (minimum; maximum). Results The GRS consists of ten items, each scored on a 5-point Likert scale, yielding a maximum of 50 points. The median score achieved by novice teams was 22.75 points (17; 30), while experts scored 39.00 points (32; 47). The GRS overall scores significantly discriminated between student-guided teams and expert teams of emergency physicians (p = 0.005). Interrater reliability for the GRS was high, with a Kendall's coefficient of concordance W ranging from 0.64 to 0.90 in 9 of 10 items and 0.88 in the overall score. Conclusion The GRS represents a promising novel tool for objectively assessing technical skills in simulation training, showing high construct validity and interrater reliability in this pilot study.
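Kendall's coefficient of concordance W, used above for interrater reliability, measures how consistently multiple raters rank the same set of teams (1 = perfect agreement, 0 = none). Below is a minimal sketch on invented scores using the simple formula W = 12S / (m^2 (n^3 - n)), uncorrected for ties; nothing in it comes from the study's data.

```python
# Hypothetical sketch: Kendall's W for four raters scoring the same
# twelve teams on one 5-point GRS item.
import numpy as np
from scipy import stats

def kendalls_w(ratings):
    """ratings: (n_raters, n_subjects) array of scores.
    Simple form of Kendall's W, without a tie correction."""
    m, n = ratings.shape
    # Rank each rater's scores across subjects (ties get average ranks).
    ranks = np.apply_along_axis(stats.rankdata, 1, ratings)
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m ** 2 * (n ** 3 - n))

rng = np.random.default_rng(4)
true_quality = rng.integers(1, 6, 12)               # 12 teams, 5-point item
raters = np.clip(true_quality + rng.integers(-1, 2, (4, 12)), 1, 5)
print(kendalls_w(raters))                           # closer to 1 = stronger concordance
```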


2019 ◽  
Author(s):  
Jacek Chmielewski ◽  
Włodzimierz Łuczyński ◽  
Jakub Dobroch ◽  
Grzegorz Cebula ◽  
Tomasz Bielecki ◽  
...  

Abstract Background High-fidelity medical simulations allow for teaching medical skills in safe and realistic conditions. Pediatric emergency department teams work under extreme stress, which affects high-level cognitive functions, specifically attention and memory, and increases the already high stakes for young doctors. Lapses in attention increase the risk of serious consequences such as medical errors, failure to recognize life-threatening signs and symptoms, and other essential patient safety issues. Mindfulness, as a process of intentionally paying attention to each moment with curiosity, openness, and acceptance of each experience without judgment, can potentially contribute to improving the performance of medical teams in pediatric emergency conditions. The aim of the study was to determine whether the actions of medical students in the course of pediatric high-fidelity simulations are related to their mindfulness. Participants and methods A total of 166 standardized simulations were conducted among medical students in three simulation centers of medical universities, assessing: perceived stress, technical skills (checklists), non-technical skills (Ottawa Crisis Resource Management Global Rating Scale), and mindfulness, using the Five Facet Mindfulness Questionnaire. Results The perception of stress among students was lower and more motivating if they were more mindful. Mindfulness of students correlated positively with avoiding fixation error, but negatively with listening to and managing the team. The lowest scores among non-technical skills were noted in the area of situational awareness (fixation error). In subsequent simulations of the same team, students' non-technical skills improved, although no change was noted in their technical skills. Conclusions The results of our research indicate a relationship between the mindfulness of medical students and their non-technical skills and perception of stress in pediatric emergency simulations. Further research is needed to show whether mindfulness training leads to any changes in this field. Trial registration: ClinicalTrials.gov (NCT03761355).
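The correlations reported above are rank-based associations between questionnaire scores and rating-scale items. The sketch below illustrates one such computation with scipy's Spearman correlation on fabricated data; the variable names and effect size are assumptions, not study results.

```python
# Hypothetical sketch: correlating a mindfulness questionnaire total
# with a non-technical-skills rating item.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 166
ffmq_total = rng.normal(130, 15, n)                  # simulated FFMQ totals
# Simulated Ottawa GRS 'situational awareness' item, loosely tied to FFMQ.
situational_awareness = 0.02 * ffmq_total + rng.normal(3, 0.8, n)

rho, p = stats.spearmanr(ffmq_total, situational_awareness)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```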

