An Electronic Competency-Based Evaluation Tool for Assessing Humanitarian Competencies in a Simulated Exercise

2017 ◽  
Vol 32 (3) ◽  
pp. 253-260 ◽  
Author(s):  
Andrea B. Evans ◽  
Jennifer M. Hulme ◽  
Peter Nugus ◽  
Hilarie H. Cranmer ◽  
Melanie Coutu ◽  
...  

Abstract
Methods: The evaluation tool was first derived from the former Consortium of British Humanitarian Agencies' (CBHA; United Kingdom), now the "Start Network's," Core Humanitarian Competency Framework and formatted in an electronic data capture tool that allowed for offline evaluation. During a 3-day humanitarian simulation event, participants in teams of eight to 10 were evaluated individually at multiple injects by trained evaluators. Participants were assessed on five competencies and a global rating scale, and evaluated both themselves and their team members using the same tool at the end of the simulation exercise (SimEx).
Results: All 63 participants were evaluated, and a total of 1,008 individual evaluations were completed; 90 (9.0%) evaluations were missing. All 63 participants also evaluated themselves and each of their teammates using the same tool. Self-evaluation scores were significantly lower than peer evaluations, which in turn were significantly lower than evaluators' assessments. Participants with a medical degree, and those with humanitarian work experience of one month or more, scored significantly higher on all competencies assessed by evaluators than other participants. Participants with prior humanitarian experience scored higher on the competencies for operating safely and working effectively as a team member.
Conclusion: This study presents a novel electronic evaluation tool for assessing individual performance in five of six globally recognized humanitarian competency domains during a 3-day humanitarian SimEx. The tool provides a standardized approach to assessing humanitarian competencies that cannot be evaluated through knowledge-based testing in a classroom setting. Combined with testing of knowledge-based competencies, it offers a comprehensive, competency-based assessment that objectively measures performance against the competencies listed in the Framework.
There is an opportunity to advance the use of this tool in future humanitarian training exercises and potentially in real time in the field, which could improve the efficiency and effectiveness of humanitarian operations.
Evans AB, Hulme JM, Nugus P, Cranmer HH, Coutu M, Johnson K. An electronic competency-based evaluation tool for assessing humanitarian competencies in a simulated exercise. Prehosp Disaster Med. 2017;32(3):253–260.

2017 ◽  
Vol 45 (9) ◽  
pp. 2125-2130 ◽  
Author(s):  
Lisa Phillips ◽  
Jeffrey J.H. Cheung ◽  
Daniel B. Whelan ◽  
Michael Lucas Murnaghan ◽  
Jas Chahal ◽  
...  

Background: Arthroscopic hip labral repair is a technically challenging and demanding surgical technique with a steep learning curve. Arthroscopic simulation allows trainees to develop these skills in a safe environment. Purpose: The purpose of this study was to evaluate the use of a combination of assessment ratings for the performance of arthroscopic hip labral repair on a dry model. Study Design: Cross-sectional study; Level of evidence, 3. Methods: A total of 47 participants, including orthopaedic surgery residents (n = 37), sports medicine fellows (n = 5), and staff surgeons (n = 5), performed arthroscopic hip labral repair on a dry model. Prior arthroscopic experience was noted. Participants were evaluated by 2 orthopaedic surgeons using a task-specific checklist, the Arthroscopic Surgical Skill Evaluation Tool (ASSET), task completion time, and a final global rating scale. All procedures were video-recorded and scored by an orthopaedic fellow blinded to each participant's level of training. Results: The internal consistency/reliability (Cronbach alpha) of the total ASSET score for the procedure was high (intraclass correlation coefficient > 0.9). One-way analysis of variance of the total ASSET score demonstrated a difference between participants based on level of training (F(3,43) = 27.8, P < .001). A good correlation was seen between the ASSET score and previous exposure to arthroscopic procedures (r = 0.52–0.73, P < .001). The interrater reliability for the ASSET score was excellent (>0.9). Conclusion: The results of this study demonstrate that using dry models to assess trainee performance of arthroscopic hip labral repair is both valid and reliable. Further research will be required to demonstrate a correlation with performance on cadaveric specimens or in the operating room.
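The two statistics reported above, Cronbach's alpha for internal consistency and a one-way ANOVA across training levels, can be sketched in Python. The score array, group sizes, and values below are illustrative assumptions generated at random, not the study's data.

```python
# Minimal sketch of the reliability analyses described in the abstract,
# using simulated rater scores (NOT the study's actual data).
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
# Hypothetical ASSET item scores: 47 participants, 8 checklist items,
# built as a shared "ability" component plus per-item noise.
ability = rng.normal(5, 1, size=(47, 1))
scores = ability + rng.normal(0, 0.3, size=(47, 8))
alpha = cronbach_alpha(scores)

# One-way ANOVA of total scores across three hypothetical training-level
# groups, analogous to the F(3,43) comparison in the abstract.
totals = scores.sum(axis=1)
g1, g2, g3 = totals[:20], totals[20:40], totals[40:]
f_stat, p_val = stats.f_oneway(g1, g2, g3)
```

Because the simulated items share a strong common component, alpha comes out high; with real rating data the same functions apply unchanged.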


2015 ◽  
Vol 9 (1-2) ◽  
pp. 32 ◽  
Author(s):  
Laura Nguyen ◽  
Kim Tardioli ◽  
Matthew Roberts ◽  
James Watterson

Introduction: As residency training requirements increasingly emphasize a competency-based approach, novel tools to directly evaluate Canadian Medical Education Directives for Specialists (CanMEDS) competencies must be developed. Incorporating simulation allows residents to demonstrate knowledge and skills in a safe, standardized environment. We describe a novel hybrid simulation station for use in a urology resident in-training Objective Structured Clinical Exam (OSCE) to assess multiple CanMEDS competencies.
Methods: An OSCE station was developed to assess the Communicator, Health Advocate, Manager, and Medical Expert (including technical skills) CanMEDS roles. Residents interviewed a standardized patient, interacted with a nurse, performed flexible cystoscopy, and attempted stent removal using a novel bladder/stent model. Communication was assessed using the Calgary-Cambridge Observational Guide, knowledge was assessed using a checklist, and technical skills were assessed using a previously validated global rating scale. Video debriefing allowed residents to review their performance. Face and discriminative validity were assessed, and feasibility was determined through qualitative post-examination interviews and cost analysis.
Results: All 9 residents (postgraduate years [PGY] 3, 4, 5) completed the OSCE in 15 minutes. Communicator and knowledge scores were similar across all PGYs. Technical skills scores were higher in PGY-5 than in PGY-3/4 residents (mean score 79% vs. 73%). Residents and exam personnel felt the OSCE station allowed for realistic demonstration of competencies. Equipment cost was $218 for the exam station.
Conclusions: We developed and implemented a hybrid simulation-based OSCE station to assess multiple CanMEDS roles. This approach was feasible and cost-effective; it also provides a framework for future development of similar OSCE stations to assess resident competencies across multiple domains.


CJEM ◽  
2008 ◽  
Vol 10 (01) ◽  
pp. 44-50 ◽  
Author(s):  
Glen Bandiera ◽  
David Lendrum

ABSTRACT
Objective: We sought to determine if a novel competency-based daily encounter card (DEC), designed to minimize leniency bias and maximize independent competency assessments, could address the limitations of existing feedback mechanisms when applied to an emergency medicine rotation.
Methods: Learners in 2 tertiary academic emergency departments (EDs) presented a DEC to their teachers after each shift. DECs included dichotomous categorical rating scales (i.e., "needs attention" or "area of strength") for each of the 7 CanMEDS roles or competencies and an overall global rating scale. Teachers were instructed to choose which of the 7 competencies they wished to evaluate on each shift. Results were analyzed using both staff and resident as the units of analysis.
Results: Fifty-four learners submitted a total of 801 DECs that were completed by 43 different teachers over 28 months. Teachers' patterns of selecting CanMEDS competencies to assess did not differ between the 2 sites. Teachers selected an average of 3 roles per DEC (range 0–7). Only 1.3% of ratings were "needs further attention." The frequency with which each competency was selected ranged from 25% (Health Advocate) to 85% (Medical Expert).
Conclusion: Teachers chose to direct feedback toward a breadth of competencies. They provided feedback on all 7 CanMEDS roles in the ED, yet demonstrated a marked leniency bias.


2018 ◽  
Vol 13 (4) ◽  
pp. e10-e16
Author(s):  
Patrice Chrétien Raymer ◽  
Jean-Paul Makhzoum ◽  
Robert Gagnon ◽  
Arielle Levy ◽  
Jean-Pascal Costa

Introduction: High-fidelity simulation is an efficient and holistic teaching method, but assessing simulation performance remains a challenge. We aimed to develop a CanMEDS competency-based global rating scale for internal medicine trainees during simulated acute care scenarios.
Methods: Our scale was developed using a formal Delphi process. Validity was tested using six videotaped scenarios of two residents managing unstable atrial fibrillation, rated by 6 experts. Psychometric properties were determined using a generalizability study (G-study) and a satisfaction questionnaire.
Results: Most evaluators rated the usability of our scale favorably and attested that the tool fully covered the CanMEDS competencies. The scale showed low to intermediate generalization validity.
Conclusions: This study demonstrated some validity arguments for our scale. The best assessed aspect of performance was communication. Further studies are planned to gather additional validity arguments for our scale and to compare assessment of teamwork and communication in scenarios with multiple versus single residents.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Maxime Fieux ◽  
Antoine Gavoille ◽  
Fabien Subtil ◽  
Sophie Bartier ◽  
Stéphane Tringali

Abstract
Background: The ongoing COVID-19 pandemic has disrupted the surgical training of residents, and there is a real concern that trainees will not be able to meet their training requirements. Low-fidelity surgical simulation appears to be an alternative for surgical training. The educational benefit of repeated ossiculoplasty simulation under a microscope has never been evaluated. With this study we aimed to evaluate differences in performance scores and on a global rating scale before and after training on an ossiculoplasty simulator.
Methods: In this quasi-experimental, prospective, single-centre, before-after study with blinded rater evaluation, residents performed five microscopic ossiculoplasty tasks with a difficulty gradient (sliding beads onto rods, inserting a partial prosthesis, inserting a total prosthesis, and inserting a stapedotomy piston under microscopic or endoscopic surgery) before and after training on the same simulator. Performance scores were defined for each task, and total performance scores (score/min) were calculated. All data were collected prospectively.
Results: Six out of seven intermediate residents and 8/9 novices strongly agreed that the simulator was an effective training device and should be included in the ENT residency program. The mean effect of training was a significant increase in the total performance score (+0.52 points/min [95% CI, 0.40–0.64], p < 0.001), with no significant difference between novice and intermediate residents.
Conclusions: This preliminary study shows that techniques for middle-ear surgery can be acquired using a simulator, avoiding any risk to patients, even under lockdown measures.


Author(s):  
M Stavrakas ◽  
G Menexes ◽  
S Triaridis ◽  
P Bamidis ◽  
J Constantinidis ◽  
...  

Abstract
Objective: This study developed an assessment tool, based on objective structured assessment of technical skills principles, to be used for the evaluation of surgical skills in cortical mastoidectomy. The objective structured assessment of technical skill is a well-established tool for evaluating surgical ability. The study also aimed to identify the best material and printing method for making a three-dimensional printed temporal bone model.
Methods: Twenty-four otolaryngologists in training were asked to perform a cortical mastoidectomy on a three-dimensional printed temporal bone (selective laser sintering resin). They were scored according to the objective structured assessment of technical skill in temporal bone dissection tool developed in this study and an already validated global rating scale.
Results: Two external assessors scored the candidates, and the objective structured assessment of technical skill in temporal bone dissection tool demonstrated key aspects of validity and reliability, supporting its use in training and in the evaluation of technical skills in mastoid surgery.
Conclusion: Apart from validating the new tool for temporal bone dissection training, the study showed that evolving three-dimensional printing technologies are of high value in simulation training, with several advantages over traditional teaching methods.


2012 ◽  
Vol 4 (1) ◽  
pp. 16-21 ◽  
Author(s):  
Antonia C. Hoyle ◽  
Christopher Whelton ◽  
Rowena Umaar ◽  
Lennard Funk
