Assessment of Robotic Console Skills (ARCS): construct validity of a novel global rating scale for technical skills in robotically assisted surgery

2017 ◽  
Vol 32 (1) ◽  
pp. 526-535 ◽  
Author(s):  
May Liu ◽  
Shreya Purohit ◽  
Joshua Mazanetz ◽  
Whitney Allen ◽  
Usha S. Kreaden ◽  
...  


Author(s):  
M Stavrakas ◽  
G Menexes ◽  
S Triaridis ◽  
P Bamidis ◽  
J Constantinidis ◽  
...  

Abstract Objective This study developed an assessment tool based on the objective structured assessment of technical skills principles, to be used for evaluating surgical skills in cortical mastoidectomy. The objective structured assessment of technical skill is a well-established tool for evaluating surgical ability. This study also aimed to identify the best material and printing method for making a three-dimensional printed temporal bone model. Methods Twenty-four otolaryngologists in training were asked to perform a cortical mastoidectomy on a three-dimensional printed temporal bone (selective laser sintering resin). They were scored according to the objective structured assessment of technical skill in temporal bone dissection tool developed in this study and an already validated global rating scale. Results Two external assessors scored the candidates, and the objective structured assessment of technical skill in temporal bone dissection tool demonstrated key aspects of validity and reliability, supporting its use in training and performance evaluation of technical skills in mastoid surgery. Conclusion Apart from validating the new tool for temporal bone dissection training, the study showed that evolving three-dimensional printing technologies are of high value in simulation training, with several advantages over traditional teaching methods.


2020 ◽  
Vol 33 (11) ◽  
pp. 742
Author(s):  
Joana Fernandes Ribeiro ◽  
Manuel Rosete ◽  
Andreia Teixeira ◽  
Hugo Conceição ◽  
Lèlita Santos

Introduction: Technical skills training is fundamental for clinical practice, although it is poorly emphasised in undergraduate medical curricula. In these circumstances, the Peer Assisted Learning methodology has emerged as a valid alternative to overcome this insufficiency. The purpose of this study is to evaluate the impact on students of a Peer Assisted Learning program in basic surgical skills, regarding technical competences and knowledge improvement. Material and Methods: A total of 104 randomly selected third-year medical students participated in a workshop delivered by fifth-year students. From that total, 34 students were assessed before and after the workshop using the Objective Structured Assessment of Technical Skills instrument, which consists of a global rating scale and a procedure-specific checklist. Sixth-year students (control group) were also assessed on their performance without participating in the workshop. Before-workshop versus after-workshop Objective Structured Assessment of Technical Skills results were compared using the Wilcoxon and McNemar tests. After-workshop versus control group results were compared using the Mann-Whitney, chi-squared and Fisher's exact tests. Results: For the global rating scale, students obtained an after-workshop score (29.5) that was significantly higher than the before-workshop score (15.5; p-value < 0.001), but no significant differences were found between after-workshop and control group scores (p-value = 0.167).
For the procedure-specific checklist, third-year students had a substantial positive evolution in all parameters and obtained higher rates of correct achievements compared to the control group. Discussion: The final outcomes demonstrated a significant qualitative and quantitative improvement in knowledge and technical skills, which is in accordance with other literature. Conclusion: This Peer Assisted Learning program revealed promising results concerning the improvement of surgical skills in medical students, with little faculty staff contribution and the possibility of extension to a much broader number of students.
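The paired before/after comparison described above can be sketched with SciPy's Wilcoxon signed-rank test. The scores below are hypothetical, purely to illustrate the paired-comparison mechanics, not study data:

```python
# Illustrative sketch (hypothetical data): comparing paired before/after
# global rating scale scores with the Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
before = rng.integers(10, 22, size=34)           # hypothetical pre-workshop GRS scores
after = before + rng.integers(8, 18, size=34)    # hypothetical post-workshop improvement

stat, p_value = wilcoxon(before, after)          # paired, two-sided by default
print(f"median before = {np.median(before)}, after = {np.median(after)}, p = {p_value:.3g}")
```

With every student improving, the test returns a very small p-value, mirroring the p < 0.001 reported in the abstract.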


2016 ◽  
Vol 57 (1-2) ◽  
pp. 1-9 ◽  
Author(s):  
Felix Nickel ◽  
Jonathan D. Hendrie ◽  
Christian Stock ◽  
Mohamed Salama ◽  
Anas A. Preukschas ◽  
...  

Purpose: The validated Objective Structured Assessment of Technical Skills (OSATS) score is used for evaluating laparoscopic surgical performance. It consists of two subscores, a Global Rating Scale (GRS) and a Specific Technical Skills (STS) scale. The OSATS has accepted construct validity for direct observation ratings by experts to discriminate between trainees' levels of experience. Expert time is scarce, and endoscopic video recordings would facilitate assessment with the OSATS. We aimed to compare video OSATS with direct OSATS. Methods: We included 79 participants with different levels of experience [58 medical students, 15 junior residents (novices), and 6 experts]. Performance of a cadaveric porcine laparoscopic cholecystectomy (LC) was evaluated with OSATS by blinded expert raters, first by direct observation and then from an endoscopic video recording. Operative time was recorded. Results: Direct OSATS and video OSATS ratings correlated significantly (ρ = 0.33, p = 0.005). Significant construct validity was found for direct OSATS in distinguishing between students or novices and experts. Students and novices did not differ in direct OSATS or video OSATS. Mean operative times varied for students (73.4 ± 9.0 min), novices (65.2 ± 22.3 min), and experts (46.8 ± 19.9 min). Internal consistency was high between the GRS and STS subscores for both direct and video OSATS, with Cronbach's α of 0.76 and 0.86, respectively. Video OSATS and operative time in combination were a better predictor of direct OSATS than either single parameter. Conclusion: Direct OSATS rating was better than endoscopic video rating for differentiating between students or novices and experts for LC and should remain the standard approach for the discrimination of experience levels. However, in the absence of experts for direct rating, video OSATS supplemented with operative time should be used instead of single parameters for predicting direct OSATS scores.
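The internal-consistency figures reported above (Cronbach's α between the GRS and STS subscores) follow from the standard definition of the coefficient. A minimal sketch, with hypothetical ratings rather than study data:

```python
# Illustrative sketch (hypothetical data): Cronbach's alpha for internal
# consistency between two subscores, from its standard definition.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, rows = subjects, columns = items/subscores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of the total score
    return k / (k - 1) * (1 - item_vars / total_var)

grs = np.array([20, 25, 30, 18, 27, 33, 22, 29], dtype=float)  # hypothetical GRS scores
sts = np.array([19, 24, 31, 17, 28, 35, 21, 30], dtype=float)  # hypothetical STS scores
alpha = cronbach_alpha(np.column_stack([grs, sts]))
print(f"Cronbach's alpha = {alpha:.2f}")
```

Because the two hypothetical subscores track each other closely, the resulting α is high, as in the abstract's 0.76 and 0.86.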


2015 ◽  
Vol 9 (1-2) ◽  
pp. 32 ◽  
Author(s):  
Laura Nguyen ◽  
Kim Tardioli ◽  
Matthew Roberts ◽  
James Watterson

Introduction: As residency training requirements increasingly emphasize a competency-based approach, novel tools to directly evaluate Canadian Medical Education Directives for Specialists (CanMEDS) competencies must be developed. Incorporating simulation allows residents to demonstrate knowledge and skills in a safe, standardized environment. We describe a novel hybrid simulation station for use in a urology resident in-training Objective Structured Clinical Exam (OSCE) to assess multiple CanMEDS competencies. Methods: An OSCE station was developed to assess the Communicator, Health Advocate, Manager, and Medical Expert (including technical skills) CanMEDS roles. Residents interviewed a standardized patient, interacted with a nurse, performed flexible cystoscopy, and attempted stent removal using a novel bladder/stent model. Communication was assessed using the Calgary-Cambridge Observational Guide, knowledge was assessed using a checklist, and technical skills were assessed using a previously validated global rating scale. Video debriefing allowed residents to review their performance. Face and discriminative validity were assessed, and feasibility was determined through qualitative post-examination interviews and cost analysis. Results: All 9 residents (postgraduate years [PGY] 3, 4, 5) completed the OSCE in 15 minutes. Communicator and knowledge scores were similar across all PGYs. Technical skills scores were higher in PGY-5 than in PGY-3/4 residents (mean score 79% vs. 73%). Residents and exam personnel felt the OSCE station allowed for a realistic demonstration of competencies. Equipment cost was $218 for the exam station. Conclusions: We developed and implemented a hybrid simulation-based OSCE station to assess multiple CanMEDS roles. This approach was feasible and cost-effective; it also provides a framework for future development of similar OSCE stations to assess resident competencies across multiple domains.


2007 ◽  
Vol 193 (5) ◽  
pp. 551-555 ◽  
Author(s):  
Jeffrey D. Doyle ◽  
Eric M. Webber ◽  
Ravi S. Sidhu

2010 ◽  
Vol 1 ◽  
pp. 37-41 ◽  
Author(s):  
Sarah E. Peyre ◽  
Heather MacDonald ◽  
Laila Al-Marayati ◽  
Claire Templeman ◽  
Laila I. Muderspach

Surgery Today ◽  
2012 ◽  
Vol 43 (3) ◽  
pp. 271-275 ◽  
Author(s):  
Hiroaki Niitsu ◽  
Naoki Hirabayashi ◽  
Masanori Yoshimitsu ◽  
Takeshi Mimura ◽  
Junya Taomoto ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Andreas Zoller ◽  
Tobias Hölle ◽  
Martin Wepler ◽  
Peter Radermacher ◽  
Benedikt L. Nussbaum

Abstract Background Medical simulation training leads to an improvement in patient care by increasing technical and non-technical skills, procedural confidence and medical knowledge. For structured simulation-based training, objective assessment tools are needed to evaluate performance during simulation and the learning progress. In surgical education, the Objective Structured Assessment of Technical Skills (OSATS) is widely used and validated. However, in emergency medicine and anesthesia there is a lack of validated assessment tools for technical skills. Thus, the aim of the present study was to develop and validate a novel Global Rating Scale (GRS) for emergency medical simulation training. Methods Following the development of the GRS, 12 teams with different levels of experience in emergency medicine (4th-year medical students, paramedics, emergency physicians) took part in a pre-hospital emergency medicine simulation scenario and were assessed by four independent raters. Subsequently, interrater reliability and construct validity of the GRS were analyzed. Moreover, the results of the GRS were cross-checked against a task-specific checklist. Data are presented as median (minimum; maximum). Results The GRS consists of ten items, each scored on a 5-point Likert scale, yielding a maximum of 50 points. The median score achieved by novice teams was 22.75 points (17; 30), while experts scored 39.00 points (32; 47). The GRS overall scores significantly discriminated between student-guided teams and expert teams of emergency physicians (p = 0.005). Interrater reliability for the GRS was high, with a Kendall's coefficient of concordance W ranging from 0.64 to 0.90 in 9 of 10 items and 0.88 for the overall score. Conclusion The GRS represents a promising novel tool for objectively assessing technical skills in simulation training, with high construct validity and interrater reliability in this pilot study.
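Kendall's coefficient of concordance W, the interrater reliability measure used above, is computed from the raters' rank sums. A minimal sketch with hypothetical ratings (4 raters scoring 6 teams, echoing the study design but not its data):

```python
# Illustrative sketch (hypothetical data): Kendall's W for agreement among
# several raters, computed from rank sums (no tie correction; no ties here).
import numpy as np
from scipy.stats import rankdata

def kendalls_w(ratings: np.ndarray) -> float:
    """ratings: 2-D array, rows = raters, columns = subjects."""
    m, n = ratings.shape
    ranks = np.apply_along_axis(rankdata, 1, ratings)  # rank within each rater
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()    # deviation of rank sums
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical GRS totals from 4 raters scoring 6 teams
ratings = np.array([
    [22, 35, 40, 18, 30, 45],
    [24, 33, 42, 20, 28, 44],
    [21, 36, 39, 17, 31, 46],
    [23, 34, 41, 19, 43, 29],
], dtype=float)
print(f"Kendall's W = {kendalls_w(ratings):.2f}")
```

W ranges from 0 (no agreement) to 1 (all raters rank the subjects identically); the values of 0.64 to 0.90 reported above indicate strong concordance.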


2019 ◽  
Author(s):  
Jacek Chmielewski ◽  
Włodzimierz Łuczyński ◽  
Jakub Dobroch ◽  
Grzegorz Cebula ◽  
Tomasz Bielecki ◽  
...  

Abstract Background High-fidelity medical simulations allow for teaching medical skills in safe and realistic conditions. Pediatric teams in emergency departments work under extreme stress, which affects high-level cognitive functions, specifically attention and memory, and increases the already high stakes for young doctors. Lapses in attention increase the risk of serious consequences such as medical errors, failure to recognize life-threatening signs and symptoms, and other essential patient safety issues. Mindfulness, as a process of intentionally paying attention to each moment with curiosity, openness and acceptance of each experience without judgment, can potentially contribute to improving the performance of medical teams in pediatric emergency conditions. The aim of the study was to determine whether the actions of medical students in the course of pediatric high-fidelity simulations are related to their mindfulness. Participants and methods A total of 166 standardized simulations were conducted among medical students in three simulation centers of medical universities, assessing: stress sensation, technical skills (checklists), non-technical skills (Ottawa Crisis Resource Management Global Rating Scale) and mindfulness (Five Facet Mindfulness Questionnaire). Results The perception of stress among students was lower and more motivating if they were more mindful. Mindfulness of students correlated positively with avoiding fixation error, but negatively with listening to and managing the team. The lowest scores among non-technical skills were noted in the area of situational awareness (fixation error). In subsequent simulations of the same team, students' non-technical skills improved, although no change was noted in their technical skills. Conclusions The results of our research indicate a relationship between the mindfulness of medical students and their non-technical skills and perception of stress in pediatric emergency simulations.
Further research is needed to show whether mindfulness training leads to any changes in this field. Trial registration: ClinicalTrials.gov, NCT03761355.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Maxime Fieux ◽  
Antoine Gavoille ◽  
Fabien Subtil ◽  
Sophie Bartier ◽  
Stéphane Tringali

Abstract Background The ongoing COVID-19 pandemic has disrupted the surgical training of residents. There is a real concern that trainees will not be able to meet their training requirements. Low-fidelity surgical simulation appears to be an alternative for surgical training. The educational benefits of repeating ossiculoplasty simulations under a microscope have never been evaluated. With this study we aimed to evaluate the differences in performance scores and on a global rating scale before and after training on an ossiculoplasty simulator. Methods In this quasi-experimental, prospective, single-centre, before-after study with blinded rater evaluation, residents performed five microscopic ossiculoplasty tasks with a difficulty gradient (sliding beads onto rods, the insertion of a partial prosthesis, the insertion of a total prosthesis, and the insertion of a stapedotomy piston under microscopic or endoscopic surgery) before and after training on the same simulator. Performance scores were defined for each task, and total performance scores (score/min) were calculated. All data were collected prospectively. Results Six out of seven intermediate residents and eight out of nine novices strongly agreed that the simulator was an effective training device and should be included in the ENT residency program. The mean effect of training was a significant increase in the total performance score (+0.52 points/min [95% CI, 0.40–0.64], p < 0.001), with no significant difference between novice and intermediate residents. Conclusions This preliminary study shows that techniques for middle-ear surgery can be acquired using a simulator, avoiding any risk to patients, even under lockdown measures.
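The mean training effect with its 95% confidence interval, as reported above, corresponds to a paired mean-difference analysis. A minimal sketch under hypothetical data (the scores below are illustrative, not the study's):

```python
# Illustrative sketch (hypothetical data): mean before-after training effect
# on a total performance score (points/min) with a 95% confidence interval
# from the t distribution of the paired differences.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.uniform(0.8, 1.6, size=16)         # hypothetical pre-training scores (points/min)
post = pre + rng.normal(0.5, 0.1, size=16)   # hypothetical post-training scores

diff = post - pre
mean_effect = diff.mean()
sem = stats.sem(diff)                        # standard error of the mean difference
ci_low, ci_high = stats.t.interval(0.95, df=len(diff) - 1, loc=mean_effect, scale=sem)
print(f"mean effect = +{mean_effect:.2f} points/min, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

A CI that excludes zero, as in the abstract's [0.40–0.64], indicates a significant training effect.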

