P057: Performance of a national simulation-based resuscitation OSCE for emergency medicine trainees

CJEM ◽  
2016 ◽  
Vol 18 (S1) ◽  
pp. S97-S98 ◽  
Author(s):  
C. Hagel ◽  
A.K. Hall ◽  
D. Klinger ◽  
G. McNeil ◽  
D. Dagnone

Introduction: The use of high-fidelity simulation is emerging as an effective method for competency-based assessment in postgraduate medical education. We have previously reported the development of the Queen’s Simulation Assessment Tool (QSAT) for use in simulation-based Objective Structured Clinical Examinations (OSCEs) for Emergency Medicine (EM) trainees. We aimed to demonstrate the feasibility, and present an argument for the validity, of a simulation-based OSCE utilizing the QSAT with EM residents from multiple Canadian training sites. Methods: EM postgraduate trainees (PGY2-5) from 9 Canadian EM training programs participated in an 8-station simulation-based resuscitation OSCE at Queen’s University in Kingston, ON. Each station was scored by a single trained rater drawn from a group of 9 expert Canadian EM physicians. Raters utilized a station-specific QSAT and provided an Entrustment Score. A post-examination questionnaire was administered to the trainees to quantify perceived realism, comfort, and educational impact. Statistical analyses included analysis of variance to measure discriminatory capability and a generalizability study to examine the sources of variability in the scores. Results: EM postgraduate trainees (N=36) participated in the study. Discriminatory validity was strong, with senior trainees (PGY4-5) outperforming junior trainees (PGY2-3) in 6 of 8 scenarios and in aggregated QSAT and Entrustment Scores across all 8 stations (p<0.01). The generalizability study found that the largest sources of random variability were the trainee-by-station interaction and the error term, with a G coefficient of 0.84. Resident trainees reported reasonable comfort being assessed in the simulation environment (3.6/5), rated the scenarios as highly realistic (4.1/5), and found the OSCE valuable to their learning (4.8/5). Conclusion: Overall, this study demonstrates that a large-scale simulation-based EM resuscitation OSCE is feasible, and an argument has been presented for the validity of such an examination. The incorporation of simulation or a simulation-based OSCE in the national certification process in EM may help to satisfy the increased demand for competency-based assessment required by the Royal College of Physicians & Surgeons of Canada’s Competency by Design transition.
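
As a rough illustration of the generalizability analysis this abstract describes, the sketch below estimates a relative G coefficient for a trainee-by-station design with one rating per cell. The scores, variance structure, and sample sizes are simulated placeholders, not study data.

```python
# Minimal sketch: relative G coefficient for a trainee x station (p x s) design
# with a single rating per cell, estimated from two-way ANOVA mean squares.
# All numbers are simulated; nothing here reproduces the study's data.
import numpy as np

rng = np.random.default_rng(0)
n_trainees, n_stations = 36, 8
ability = rng.normal(70, 8, size=(n_trainees, 1))                    # hypothetical trainee effect
scores = ability + rng.normal(0, 6, size=(n_trainees, n_stations))   # hypothetical QSAT totals

grand = scores.mean()
trainee_means = scores.mean(axis=1)
station_means = scores.mean(axis=0)

# Mean squares for the two-way, one-observation-per-cell layout
ms_trainee = n_stations * np.sum((trainee_means - grand) ** 2) / (n_trainees - 1)
resid = scores - trainee_means[:, None] - station_means[None, :] + grand
ms_resid = np.sum(resid ** 2) / ((n_trainees - 1) * (n_stations - 1))

# Variance components (negative estimates truncated at zero)
var_trainee = max((ms_trainee - ms_resid) / n_stations, 0.0)
var_interaction_error = ms_resid  # trainee-by-station interaction confounded with error

# Relative G coefficient for a decision based on the mean over n_stations stations
g_coef = var_trainee / (var_trainee + var_interaction_error / n_stations)
print(f"Relative G coefficient: {g_coef:.2f}")
```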

2016 ◽  
Vol 7 (1) ◽  
pp. e57-e67 ◽  
Author(s):  
J. Damon Dagnone ◽  
Andrew K. Hall ◽  
Stefanie Sebok-Syer ◽  
Don Klinger ◽  
Karen Woolfrey ◽  
...  

Background: The use of high-fidelity simulation is emerging as a desirable method for competency-based assessment in postgraduate medical education. We aimed to demonstrate the feasibility and validity of a multi-centre simulation-based Objective Structured Clinical Examination (OSCE) of resuscitation competence with Canadian Emergency Medicine (EM) trainees. Methods: EM postgraduate trainees (n=98) from five Canadian academic centres participated in a high-fidelity, 3-station simulation-based OSCE. Expert panels of three emergency physicians evaluated trainee performances at each centre using the Queen’s Simulation Assessment Tool (QSAT). Intraclass correlation coefficients were used to measure inter-rater reliability, and analysis of variance was used to measure the discriminatory validity of each scenario. A fully crossed generalizability study was also conducted for each examination centre. Results: Inter-rater reliability in four of the five centres was strong, with a median absolute intraclass correlation coefficient (ICC) across centres and scenarios of 0.89 [0.65-0.97]. Discriminatory validity was also strong (p < 0.001 for scenarios 1 and 3; p < 0.05 for scenario 2). Generalizability studies found significant variations at two of the study centres. Conclusions: This study demonstrates the successful pilot administration of a multi-centre, 3-station simulation-based OSCE for the assessment of resuscitation competence in postgraduate Emergency Medicine trainees.
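
For readers who want a concrete sense of the inter-rater reliability analysis mentioned above, here is a minimal sketch of a single-rater, absolute-agreement intraclass correlation (ICC(A,1)) computed from a targets-by-raters matrix. The ratings are invented, and the study's own ICC variant may differ in detail.

```python
# Sketch only: ICC for absolute agreement, single rater, two-way random effects.
# The ratings matrix is simulated and purely illustrative.
import numpy as np

def icc_a1(ratings: np.ndarray) -> float:
    """ICC(A,1) from an n_targets x k_raters matrix of scores."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)      # per-performance means
    col_means = ratings.mean(axis=0)      # per-rater means
    ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
    ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = ratings - row_means[:, None] - col_means[None, :] + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(1)
true_quality = rng.normal(60, 10, size=(20, 1))          # hypothetical performance quality
panel = true_quality + rng.normal(0, 4, size=(20, 3))    # three raters per performance
print(f"ICC(A,1): {icc_a1(panel):.2f}")
```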


CJEM ◽  
2020 ◽  
Vol 22 (S1) ◽  
pp. S48-S48
Author(s):  
T. Wawrykow ◽  
T. McColl ◽  
A. Velji ◽  
M. Chan

Introduction: The oral case presentation is recognized as a core educational and patient care activity but has not been well studied in the emergency setting. The objectives of this study were: 1) to develop a competency-based assessment tool to formally evaluate the emergency medicine oral case presentation (EM-OCP) competency of medical students and ‘transition to discipline’ residents, and 2) to develop, implement, and evaluate a curriculum to enhance oral case presentation (OCP) communication skills in the emergency medicine (EM) setting. Methods: Using data from a literature review, a Canadian Association of Emergency Physicians national survey, and local focus groups, the authors designed an OCP framework, a blended learning curriculum, and an EM-OCP assessment tool. Ninety-six clerkship students were randomly assigned to either the control arm (the standard clerkship curriculum) or the intervention arm (the blended learning curriculum). At the beginning of their emergency medicine rotation, learners completed a pre-test using a standardized patient (SP) case to assess their baseline OCP skills. The intervention group then completed the EM-OCP curriculum. All students completed post-tests with a different SP at the end of the six-week EM rotation. Audio recordings of the pre- and post-tests were scored with the assessment tool by two blinded evaluators. Results: On Kruskal-Wallis testing, all students demonstrated improvement in EM-OCP skills between pre-test and post-test; however, those who received the blended learning curriculum showed significantly greater improvement in synthesis of information (p = 0.044), management (p = 0.006), and overall entrustment decision score (p < 0.001). Conclusion: Implementation of a novel EM-OCP curriculum resulted in more effective communication and higher entrustment scores. This curriculum could improve OCP performance not only in emergency medicine settings but also across specialties where medical students and residents must manage critical patients.
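
The group comparison reported here is a nonparametric test of score changes; the snippet below shows the general shape of such a Kruskal-Wallis comparison using fabricated pre-to-post change scores for a control and an intervention group (group sizes and effect sizes are assumptions, not study data).

```python
# Illustrative Kruskal-Wallis comparison of score improvement between two groups.
# Change scores are simulated; they do not reflect the study's results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
control_change = rng.normal(0.4, 0.6, size=48)        # hypothetical pre-to-post change, control
intervention_change = rng.normal(0.9, 0.6, size=48)   # hypothetical change, intervention

h_stat, p_value = stats.kruskal(control_change, intervention_change)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```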


CJEM ◽  
2016 ◽  
Vol 18 (S1) ◽  
pp. S62-S62 ◽  
Author(s):  
L.B. Chartier ◽  
S. Vaillancourt ◽  
M. McGowan ◽  
K. Dainty ◽  
A.H. Cheng

Introduction: The Canadian Medical Education Directives for Specialists (CanMEDS) framework defines the competencies that postgraduate medical education programs must cover for resident physicians. The 2015 iteration of the CanMEDS framework emphasizes Quality Improvement and Patient Safety (QIPS), given their role in the provision of high-value and cost-effective care. However, the opinion of Emergency Medicine (EM) program directors (PDs) regarding the need for QIPS curricula is unknown, as is the current level of knowledge of EM residents in QIPS principles. We therefore sought to determine the need for a QIPS curriculum for EM residents in a Canadian Royal College EM program. Methods: We developed a national multi-modal needs assessment. This included a survey of all Royal College EM residency PDs across Canada, as well as an evaluative assessment of the baseline QIPS knowledge of 30 EM residents at the University of Toronto (UT). The resident evaluation was done using the validated Revised QI Knowledge Application Tool (QIKAT-R), which evaluates an individual’s ability to identify a systematic quality problem from short clinical scenarios and to propose change initiatives for improvement. Results: Eight of the 13 (62%) PDs responded to the survey, unanimously agreeing that QIPS should be a formal part of residency training. However, challenges identified included the lack of qualified and available faculty to develop and teach QIPS material. All 30 (100%) residents, spanning three cohorts, completed the QIKAT-R. The median overall score was 11 of 27 points (IQR 9-14), demonstrating poor baseline QIPS knowledge amongst residents. Conclusion: QIPS is felt to be a necessary part of residency training, but the lack of available and qualified faculty makes developing and implementing such a curriculum challenging. Residents at UT consistently performed poorly on a validated QIPS assessment tool, confirming the need for a formal QIPS curriculum. We are now developing a longitudinal, evidence-based QIPS curriculum that trains both residents and faculty to contribute to QI projects at the institution level.
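
For context on the QIKAT-R result, the reported summary is simply a median with an interquartile range over per-resident totals; a toy calculation with made-up scores looks like this.

```python
# Toy example: summarising QIKAT-R totals (scored out of 27) as median and IQR.
# The 30 scores are random placeholders, not the residents' actual results.
import numpy as np

rng = np.random.default_rng(3)
qikat_scores = rng.integers(6, 18, size=30)            # hypothetical per-resident totals

median = np.median(qikat_scores)
q1, q3 = np.percentile(qikat_scores, [25, 75])
print(f"Median {median:.0f}/27 (IQR {q1:.0f}-{q3:.0f})")
```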


Author(s):  
Rachel Han ◽  
Julia Keith ◽  
Elzbieta Slodkowska ◽  
Sharon Nofech-Mozes ◽  
Bojana Djordjevic ◽  
...  

Context.— Competency-based medical education relies on frequent formative in-service assessments to ascertain trainee progression. Currently at our institution, trainees receive a summative end-of-rotation In-Training Evaluation Report based on feedback collected from staff pathologists. There is no method of simulating report sign-out. Objective.— To develop a formative in-service assessment tool that is able to simulate report sign-out and provide case-by-case feedback to trainees, and to compare time- versus competency-based assessment models. Design.— Twenty-one pathology trainees were assessed over 20 months. Hot Seat Diagnosis by trainees and trainee assessment by pathologists were recorded in the Laboratory Information System. In the first iteration, trainees were assessed using a time-based assessment scale on their ability to diagnose, report, use ancillary testing, comment on clinical implications, provide intraoperative consultation, and/or gross cases. The second iteration used a competency-based assessment scale. Trainees and pathologists completed surveys on the effectiveness of the In-Training Evaluation Report versus the Hot Seat Diagnosis tool. Results.— Scores from both iterations correlated significantly with other assessment tools, including the Resident In-Service Examination (r = 0.93, P = .04 and r = 0.87, P = .03, respectively). The competency-based model was better able than the time-based model to demonstrate improvement over time and to stratify junior versus senior trainees. Trainees and pathologists rated Hot Seat Diagnosis as significantly more objective, detailed, and timely than the In-Training Evaluation Report, and effective at simulating report sign-out. Conclusions.— Hot Seat Diagnosis is an effective tool for the formative in-service assessment of pathology trainees and simulation of report sign-out, with the competency-based model outperforming the time-based model.
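
The correlations reported between Hot Seat Diagnosis scores and the Resident In-Service Examination are ordinary Pearson correlations with associated P values; a hedged sketch with invented paired scores is shown below.

```python
# Sketch: Pearson correlation between two assessment tools' scores.
# Paired scores are simulated stand-ins for Hot Seat Diagnosis vs in-service exam results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
hot_seat = rng.normal(3.5, 0.5, size=21)              # hypothetical tool scores, 21 trainees
rise = 20 * hot_seat + rng.normal(0, 5, size=21)      # hypothetical in-service exam scores

r, p = stats.pearsonr(hot_seat, rise)
print(f"r = {r:.2f}, P = {p:.3f}")
```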


2019 ◽  
Vol 6 (6) ◽  
pp. 339-343
Author(s):  
Melinda Fleming ◽  
Michael McMullen ◽  
Theresa Beesley ◽  
Rylan Egan ◽  
Sean Field

Introduction: Simulation training in anaesthesiology bridges the gap between theory and practice by allowing trainees to engage in high-stakes clinical training without jeopardising patient safety. However, implementing simulation-based assessments within an academic programme is highly resource intensive, and the optimal number of scenarios and faculty required for accurate competency-based assessment remains to be determined. Using a generalisability study methodology, we examined the structure of simulation-based assessment with regard to the minimum number of scenarios and faculty assessors required for optimal competency-based assessment. Methods: Seventeen anaesthesiology residents each performed four simulations, which were assessed by two expert raters. Generalisability analysis (G-analysis) was used to estimate the extent of variance attributable to (1) the scenarios, (2) the assessors and (3) the participants. The D-coefficient and the G-coefficient were used to determine accuracy targets and to predict the impact of adjusting the number of scenarios or faculty assessors. Results: We showed that multivariate G-analysis can be used to estimate the number of simulations and raters required to optimise assessment. In this study, the optimal balance was obtained when four scenarios were assessed by two simulation experts. Conclusion: Simulation-based assessment is becoming an increasingly important tool for assessing the competency of medical residents in conjunction with other assessment methods. G-analysis can be used to assist in planning for optimal resource use and cost-efficacy.
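
To make the D-study idea above concrete, the sketch below projects a relative G coefficient for different numbers of scenarios and raters from an assumed set of variance components in a fully crossed trainee x scenario x rater design; the component values are illustrative, not the study's estimates.

```python
# Illustrative D-study projection for a crossed trainee x scenario x rater design.
# Variance components below are assumed for demonstration only.
var = {"p": 1.00, "ps": 0.60, "pr": 0.10, "psr_e": 0.50}

def g_coefficient(n_scenarios: int, n_raters: int) -> float:
    """Relative G coefficient when averaging over n_scenarios and n_raters."""
    rel_error = (var["ps"] / n_scenarios
                 + var["pr"] / n_raters
                 + var["psr_e"] / (n_scenarios * n_raters))
    return var["p"] / (var["p"] + rel_error)

for n_s in (2, 4, 6, 8):
    for n_r in (1, 2, 3):
        print(f"{n_s} scenarios, {n_r} raters: G = {g_coefficient(n_s, n_r):.2f}")
```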


2019 ◽  
Vol 11 (2) ◽  
pp. 168-176
Author(s):  
Zia Bismilla ◽  
Tehnaz Boyle ◽  
Karen Mangold ◽  
Wendy Van Ittersum ◽  
Marjorie Lee White ◽  
...  

Background: The Accreditation Council for Graduate Medical Education (ACGME) Milestone projects required each specialty to identify essential skills and develop means of assessment, with supporting validity evidence, for trainees. Several specialties rate trainees on a milestone subcompetency related to working in interprofessional teams. A tool that assesses trainee competence in any role on an interprofessional team, in a variety of scenarios, would be valuable and suitable for simulation-based assessment. Objective: We developed a tool for simulation settings that assesses interprofessional teamwork in trainees. Methods: In 2015, existing tools that assess teamwork or interprofessionalism using direct observation were systematically reviewed for appropriateness, generalizability, adaptability, ease of use, and resources required. Items from these tools were included in a Delphi process with multidisciplinary pediatrics experts, conducted iteratively from June 2016 to January 2017, to develop an assessment tool. Results: Thirty-one unique tools were identified. A 2-stage review narrowed this list to 5 tools, and 81 items were extracted. Twenty-two pediatrics experts participated in 4 rounds of Delphi surveys, with response rates ranging from 82% to 100%. Sixteen items reached consensus for inclusion in the final tool. A global 4-point rating scale from novice to proficient was developed. Conclusions: A novel tool to assess interprofessional teamwork for individual trainees in a simulated setting was developed using a systematic review and Delphi methodology. This is the first step in establishing the validity evidence necessary to use this tool for competency-based assessment.
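
One common way to operationalise Delphi consensus, sketched below with made-up ratings, is to retain items that a pre-set share of panellists rate highly; the 5-point scale, the 4-or-5 cut, and the 80% threshold here are assumptions for illustration and are not taken from the study.

```python
# Illustrative Delphi consensus screen: keep items that >= 80% of experts rate 4 or 5.
# The ratings matrix is random; the thresholds are assumed, not the authors' rules.
import numpy as np

rng = np.random.default_rng(5)
n_experts, n_items = 22, 81
ratings = rng.integers(1, 6, size=(n_experts, n_items))   # hypothetical round ratings (1-5)

agreement = (ratings >= 4).mean(axis=0)                    # share of experts rating item 4-5
retained = np.flatnonzero(agreement >= 0.80)               # item indices reaching consensus
print(f"{retained.size} of {n_items} items reach the 80% consensus threshold")
```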

