Development and validity evidence of an objective structured assessment of technical skills score for minimally invasive linear-stapled, hand-sewn intestinal anastomoses: the A-OSATS score

Author(s):  
Mona W. Schmidt ◽  
Caelan M. Haney ◽  
Karl-Friedrich Kowalewski ◽  
Vasile V. Bintintan ◽  
Mohammed Abu Hilal ◽  
...  

Abstract
Introduction: The aim of this study was to develop a reliable objective structured assessment of technical skills (OSATS) score for linear-stapled intestinal anastomoses with hand-sewn closure of the enterotomy (A-OSATS).
Materials and methods: The Delphi methodology was used to create a traditional and a weighted A-OSATS score, the latter highlighting the steps that are more important for patient outcomes according to an international expert consensus. Minimally invasive novices, intermediates, and experts were asked to perform a minimally invasive linear-stapled intestinal anastomosis with hand-sewn closure of the enterotomy in a live animal model, either laparoscopically or robot-assisted. Video recordings were scored by two blinded raters to assess intrarater and interrater reliability and the ability to discriminate between novices (n = 8), intermediates (n = 24), and experts (n = 8).
Results: The Delphi process included 18 international experts and was successfully completed after 4 rounds. A total of 4 relevant main steps and 15 substeps were identified, and a definition of each substep was provided. A maximum of 75 points could be reached in the unweighted A-OSATS score and 170 points in the weighted A-OSATS score. A total of 41 anastomoses were evaluated. Excellent intrarater (r = 0.807–0.988, p < 0.001) and interrater (intraclass correlation coefficient = 0.923–0.924, p < 0.001) reliability was demonstrated. Both versions of the A-OSATS correlated well with the general OSATS and discriminated between novices, intermediates, and experts as defined by their OSATS global rating scale.
Conclusion: With the weighted and unweighted A-OSATS scores, we propose a new, reliable standard for assessing the creation of minimally invasive linear-stapled, hand-sewn anastomoses based on an international expert consensus. This study provides validity evidence in live animal models. Future research should focus on assessing whether the weighted A-OSATS predicts patient outcomes better than the unweighted A-OSATS, and on providing further validity evidence for using the score on different anastomotic techniques in humans.
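The difference between the unweighted (75-point) and weighted (170-point) totals comes down to how substep ratings are aggregated. A minimal sketch of that aggregation follows; the substep names and weights below are invented for illustration, as the actual A-OSATS items and Delphi-derived weights are not listed in the abstract.

```python
# Sketch of unweighted vs. weighted OSATS aggregation.
# Substep names and weights are hypothetical, not the published A-OSATS items.

def osats_totals(ratings, weights):
    """Return (unweighted, weighted) totals for one performance.

    ratings: dict mapping substep name -> rating (e.g. on a 1-5 scale)
    weights: dict mapping substep name -> integer importance weight
    """
    unweighted = sum(ratings.values())
    weighted = sum(ratings[step] * weights[step] for step in ratings)
    return unweighted, weighted

# Example with three invented substeps:
ratings = {"bowel_alignment": 4, "stapler_placement": 5, "enterotomy_closure": 3}
weights = {"bowel_alignment": 1, "stapler_placement": 2, "enterotomy_closure": 3}
print(osats_totals(ratings, weights))  # (12, 23)
```

Under this scheme the weighted total magnifies differences on the substeps the expert consensus deemed most important for patient outcomes, which is the rationale for testing its predictive value separately.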

2017 ◽  
Vol 45 (4) ◽  
pp. 469-475 ◽  
Author(s):  
T. Jirativanont ◽  
K. Raksamani ◽  
N. Aroonpruksakul ◽  
P. Apidechakul ◽  
S. Suraseranivongse

We sought to evaluate the validity of two non-technical skills evaluation instruments, the Anaesthetists' Non-Technical Skills (ANTS) behavioural marker system and the Ottawa Global Rating Scale (GRS), in order to apply them to anaesthesia training. Validity evidence was described in terms of content, response process, internal structure, relations with other variables, and consequences. Simulated crisis management sessions were conducted during which two trained raters evaluated the performance of postgraduate first-, second- and third-year (PGY-1, PGY-2 and PGY-3) anaesthesia residents. The study included 70 participants: 24 PGY-1, 24 PGY-2 and 22 PGY-3 residents. Both instruments differentiated the non-technical skills of PGY-1 from PGY-3 residents (P <0.05). Inter-rater agreement was measured using the intraclass correlation coefficient. For the ANTS instrument, the intraclass correlation coefficients for task management, team-working, situation awareness and decision-making were 0.79, 0.34, 0.81 and 0.70, respectively. For the Ottawa GRS, the intraclass correlation coefficients for overall performance, leadership, problem-solving, situation awareness, resource utilisation and communication skills were 0.86, 0.83, 0.84, 0.87, 0.80 and 0.86, respectively. Cronbach's alpha for internal consistency was 0.93 for the ANTS instrument and 0.96 for the Ottawa GRS. There was a high correlation between the ANTS and Ottawa GRS. The raters reported that the Ottawa GRS was easier to use than the ANTS. We found sufficient evidence of validity for both the ANTS instrument and the Ottawa GRS in the evaluation of non-technical skills in a simulated anaesthesia setting, but the Ottawa GRS was more practical and had higher reliability.
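The internal-consistency figures above (alpha = 0.93 for ANTS, 0.96 for the Ottawa GRS) are Cronbach's alpha values. A minimal pure-Python sketch of the standard formula follows; the rating matrix is invented for illustration, and population (not sample) variance is used throughout.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum(item variances) / variance(totals)).
# The score matrix below is invented example data, not the study's ratings.

def cronbach_alpha(items):
    """items: list of per-item score lists, one inner list per rated item,
    each inner list holding that item's scores across all performances."""
    k = len(items)              # number of items
    n = len(items[0])           # number of performances rated

    def var(xs):                # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Three items rated across four performances (invented numbers):
scores = [[4, 5, 3, 4], [4, 4, 3, 5], [5, 5, 2, 4]]
print(round(cronbach_alpha(scores), 3))  # 0.818
```

Values approaching 1.0, as reported for both instruments, indicate that the items move together and plausibly measure a single underlying construct.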


Author(s):  
M Stavrakas ◽  
G Menexes ◽  
S Triaridis ◽  
P Bamidis ◽  
J Constantinidis ◽  
...  

Abstract
Objective: This study developed an assessment tool based on the objective structured assessment of technical skills principles, to be used for the evaluation of surgical skills in cortical mastoidectomy. The objective structured assessment of technical skill is a well-established tool for evaluating surgical ability. This study also aimed to identify the best material and printing method for a three-dimensional printed temporal bone model.
Methods: Twenty-four otolaryngologists in training were asked to perform a cortical mastoidectomy on a three-dimensional printed temporal bone (selective laser sintering resin). They were scored according to the objective structured assessment of technical skill in temporal bone dissection tool developed in this study and an already validated global rating scale.
Results: Two external assessors scored the candidates, and it was concluded that the objective structured assessment of technical skill in temporal bone dissection tool demonstrated key aspects of validity and reliability and can be used in training and performance evaluation of technical skills in mastoid surgery.
Conclusion: Apart from validating the new tool for temporal bone dissection training, the study showed that evolving three-dimensional printing technologies are of high value in simulation training, with several advantages over traditional teaching methods.


2020 ◽  
Vol 33 (11) ◽  
pp. 742
Author(s):  
Joana Fernandes Ribeiro ◽  
Manuel Rosete ◽  
Andreia Teixeira ◽  
Hugo Conceição ◽  
Lèlita Santos

Introduction: Technical skills training is fundamental for clinical practice, although it is poorly emphasised in undergraduate medical curricula. In these circumstances, the Peer Assisted Learning methodology has emerged as a valid alternative to overcome this insufficiency. The purpose of this study was to evaluate the impact on students of a Peer Assisted Learning program in basic surgical skills, regarding technical competences and knowledge improvement.
Material and Methods: A total of 104 randomly selected third-year medical students participated in a workshop delivered by fifth-year students. Of that total, 34 students were assessed before and after the workshop using the Objective Structured Assessment of Technical Skills instrument, which consists of a global rating scale and a procedure-specific checklist. Sixth-year students (control group) were also assessed on their performance without participating in the workshop. Before-workshop versus after-workshop Objective Structured Assessment of Technical Skills results were compared using the Wilcoxon and McNemar tests. After-workshop versus control-group results were compared using the Mann-Whitney, chi-squared and Fisher's exact tests.
Results: For the global rating scale, students obtained an after-workshop score (29.5) that was significantly higher than the before-workshop score (15.5; p-value < 0.001), but no significant differences were found between the after-workshop and control-group scores (p-value = 0.167). For the procedure-specific checklist, third-year students showed a substantial positive evolution in all parameters and obtained higher rates of correct achievements compared with the control group.
Discussion: The final outcomes demonstrated a significant qualitative and quantitative improvement of knowledge and technical skills, which is in accordance with the literature.
Conclusion: This Peer Assisted Learning program revealed promising results concerning the improvement of surgical skills in medical students, with little faculty staff contribution and extension to a much broader number of students.
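The paired before/after checklist comparisons above rely on McNemar's test, which uses only the discordant pairs (students whose pass/fail status changed on an item). A minimal sketch of the uncorrected statistic follows; the counts are invented for illustration.

```python
# McNemar's chi-squared statistic for paired binary outcomes.
# b and c are the discordant-pair counts; the values used below are invented.

def mcnemar_chi2(b, c):
    """b: passed before but failed after; c: failed before but passed after.
    Returns the (uncorrected) McNemar chi-squared statistic (b - c)^2 / (b + c)."""
    return (b - c) ** 2 / (b + c)

# Hypothetical checklist item: 2 students regressed, 18 improved.
print(mcnemar_chi2(2, 18))  # 12.8
```

A statistic this far above the 3.84 critical value (chi-squared, 1 df, alpha = 0.05) would indicate a significant before/after shift on that item, which is the pattern the study reports across checklist parameters.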


2016 ◽  
Vol 57 (1-2) ◽  
pp. 1-9 ◽  
Author(s):  
Felix Nickel ◽  
Jonathan D. Hendrie ◽  
Christian Stock ◽  
Mohamed Salama ◽  
Anas A. Preukschas ◽  
...  

Purpose: The validated Objective Structured Assessment of Technical Skills (OSATS) score is used for evaluating laparoscopic surgical performance. It consists of two subscores, a Global Rating Scale (GRS) and a Specific Technical Skills (STS) scale. The OSATS has accepted construct validity for direct observation ratings by experts to discriminate between trainees' levels of experience. Expert time is scarce, and endoscopic video recordings would facilitate assessment with the OSATS. We aimed to compare video OSATS with direct OSATS.
Methods: We included 79 participants with different levels of experience [58 medical students, 15 junior residents (novices), and 6 experts]. Performance of a cadaveric porcine laparoscopic cholecystectomy (LC) was evaluated with the OSATS by blinded expert raters, first by direct observation and then from an endoscopic video recording. Operative time was recorded.
Results: Direct OSATS and video OSATS ratings correlated significantly (ρ = 0.33, p = 0.005). Significant construct validity was found for direct OSATS in distinguishing between students or novices and experts. Students and novices did not differ in direct OSATS or video OSATS. Mean operative times varied for students (73.4 ± 9.0 min), novices (65.2 ± 22.3 min), and experts (46.8 ± 19.9 min). Internal consistency was high between the GRS and STS subscores for both direct and video OSATS, with Cronbach's α of 0.76 and 0.86, respectively. Video OSATS and operative time in combination were a better predictor of direct OSATS than either parameter alone.
Conclusion: Direct OSATS rating was better than endoscopic video rating for differentiating between students or novices and experts for LC and should remain the standard approach for the discrimination of experience levels. However, in the absence of experts for direct rating, video OSATS supplemented with operative time should be used instead of single parameters for predicting direct OSATS scores.
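The ρ = 0.33 agreement between direct and video ratings is a Spearman rank correlation: each rater's scores are converted to ranks and the Pearson correlation of the ranks is taken. A self-contained sketch follows; the score pairs are invented for illustration, and ties are handled by average ranks.

```python
# Spearman rank correlation, computed as Pearson correlation of (average) ranks.
# The direct/video score pairs below are invented example data.

def spearman_rho(xs, ys):
    def ranks(vs):
        order = sorted(range(len(vs)), key=lambda i: vs[i])
        r = [0.0] * len(vs)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and vs[order[j + 1]] == vs[order[i]]:
                j += 1                      # extend over a run of tied values
            avg = (i + j) / 2 + 1           # average rank for the tied run
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

direct = [20, 25, 22, 30, 28]   # hypothetical direct OSATS totals
video = [18, 24, 25, 27, 26]    # hypothetical video OSATS totals
print(round(spearman_rho(direct, video), 2))  # 0.9
```

A rank-based statistic is a natural choice here because OSATS totals are ordinal sums of Likert-type items, so monotone agreement matters more than linear agreement.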


2015 ◽  
Vol 9 (1-2) ◽  
pp. 32 ◽  
Author(s):  
Laura Nguyen ◽  
Kim Tardioli ◽  
Matthew Roberts ◽  
James Watterson

Introduction: As residency training requirements increasingly emphasize a competency-based approach, novel tools to directly evaluate Canadian Medical Education Directives for Specialists (CanMEDS) competencies must be developed. Incorporating simulation allows residents to demonstrate knowledge and skills in a safe, standardized environment. We describe a novel hybrid simulation station for use in a urology resident in-training Objective Structured Clinical Exam (OSCE) to assess multiple CanMEDS competencies.
Methods: An OSCE station was developed to assess the Communicator, Health Advocate, Manager, and Medical Expert (including technical skills) CanMEDS roles. Residents interviewed a standardized patient, interacted with a nurse, performed flexible cystoscopy, and attempted stent removal using a novel bladder/stent model. Communication was assessed using the Calgary-Cambridge Observational Guide, knowledge was assessed using a checklist, and technical skills were assessed using a previously validated global rating scale. Video debriefing allowed residents to review their performance. Face and discriminative validity were assessed, and feasibility was determined through qualitative post-examination interviews and cost analysis.
Results: All 9 residents (postgraduate years [PGY] 3, 4, 5) completed the OSCE in 15 minutes. Communicator and knowledge scores were similar across all PGYs. Technical skills scores were higher in PGY-5 than in PGY-3/4 residents (mean score 79% vs. 73%). Residents and exam personnel felt the OSCE station allowed for realistic demonstration of competencies. Equipment cost was $218 for the exam station.
Conclusions: We developed and implemented a hybrid simulation-based OSCE station to assess multiple CanMEDS roles. This approach was feasible and cost-effective, and it provided a framework for the future development of similar OSCE stations to assess resident competencies across multiple domains.


2017 ◽  
Vol 32 (1) ◽  
pp. 526-535 ◽  
Author(s):  
May Liu ◽  
Shreya Purohit ◽  
Joshua Mazanetz ◽  
Whitney Allen ◽  
Usha S. Kreaden ◽  
...  

2010 ◽  
Vol 1 ◽  
pp. 37-41 ◽  
Author(s):  
Sarah E. Peyre ◽  
Heather MacDonald ◽  
Laila Al-Marayati ◽  
Claire Templeman ◽  
Laila I. Muderspach
