Development of a Simulation-Based Interprofessional Teamwork Assessment Tool

Author(s):  
Zia Bismilla ◽  
Tehnaz Boyle ◽  
Karen Mangold ◽  
Wendy Van Ittersum ◽  
Marjorie Lee White ◽  
...  

2021 ◽  
Vol 8 ◽  
pp. 238212052110424
Author(s):  
Brittany J Daulton ◽  
Laura Romito ◽  
Zach Weber ◽  
Jennifer Burba ◽  
Rami A Ahmed

Very few instruments exist to assess individual performance in simulation-based interprofessional education (IPE). The purpose of this study was to apply the Simulation-Based Interprofessional Teamwork Assessment Tool (SITAT) to the individualized assessment of medicine, pharmacy, and nursing students (N = 94) in a team-based IPE simulation, to explore potential differences between disciplines, and to calculate reliability estimates for the tool. An analysis of variance provided evidence of no statistically significant difference among professions in overall competency (F(2, 91) = 0.756, P = .472). Competency ratings for nursing (M = 3.06, SD = 0.45), medicine (M = 3.19, SD = 0.42), and pharmacy (M = 3.08, SD = 0.49) students were comparable across professions. Cronbach's alpha provided a reliability estimate of the tool, with evidence of high internal consistency (α = .92). The interrater reliability of the SITAT was also investigated: there was moderate absolute agreement across the 3 faculty raters using the 2-way mixed-model design and "average" unit (kappa = 0.536, P < .001, 95% CI [0.34, 0.68]). The novel SITAT demonstrates internal consistency and interrater reliability when used to evaluate individual performance during IPE simulation. The SITAT provides value in the education and evaluation of individual students engaged in an IPE curriculum.
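The two reliability statistics reported above (internal consistency via Cronbach's alpha and a one-way ANOVA across professions) can be sketched in outline. A minimal illustration with synthetic data; the group sizes, the 16-item count, and the 4-point scale are assumptions for illustration, since the abstract does not include item-level data:

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

rng = np.random.default_rng(0)
# Hypothetical data: 94 students x 16 rating items on a 1-4 scale,
# modeled as a shared "true score" per student plus item-level noise.
true_score = rng.normal(3.1, 0.4, size=(94, 1))
ratings = np.clip(true_score + rng.normal(0, 0.3, size=(94, 16)), 1, 4)
alpha = cronbach_alpha(ratings)

# One-way ANOVA of mean competency across three hypothetical profession groups
nursing, medicine, pharmacy = np.split(ratings.mean(axis=1), [31, 62])
f_stat, p_value = stats.f_oneway(nursing, medicine, pharmacy)
print(f"alpha = {alpha:.2f}, F = {f_stat:.3f}, P = {p_value:.3f}")
```

Because the synthetic items share a common true score, the sketch reproduces the qualitative pattern of the abstract (high alpha, no forced group difference), not its exact numbers.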



2019 ◽  
Vol 11 (2) ◽  
pp. 168-176
Author(s):  
Zia Bismilla ◽  
Tehnaz Boyle ◽  
Karen Mangold ◽  
Wendy Van Ittersum ◽  
Marjorie Lee White ◽  
...  

ABSTRACT Background  The Accreditation Council for Graduate Medical Education (ACGME) Milestone projects required each specialty to identify essential skills and develop means of assessment with supporting validity evidence for trainees. Several specialties rate trainees on a milestone subcompetency related to working in interprofessional teams. A tool to assess trainee competence in any role on an interprofessional team in a variety of scenarios would be valuable and suitable for simulation-based assessment. Objective  We developed a tool for simulation settings that assesses interprofessional teamwork in trainees. Methods  In 2015, existing tools that assess teamwork or interprofessionalism using direct observation were systematically reviewed for appropriateness, generalizability, adaptability, ease of use, and resources required. Items from these tools were included in a Delphi method with multidisciplinary pediatrics experts using an iterative process from June 2016 to January 2017 to develop an assessment tool. Results  Thirty-one unique tools were identified. A 2-stage review narrowed this list to 5 tools, and 81 items were extracted. Twenty-two pediatrics experts participated in 4 rounds of Delphi surveys, with response rates ranging from 82% to 100%. Sixteen items reached consensus for inclusion in the final tool. A global 4-point rating scale from novice to proficient was developed. Conclusions  A novel tool to assess interprofessional teamwork for individual trainees in a simulated setting was developed using a systematic review and Delphi methodology. This is the first step to establish the validity evidence necessary to use this tool for competency-based assessment.
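The Delphi consensus step described above can be sketched numerically. A minimal illustration assuming a simple endorsement rule (an item is retained when at least 80% of the 22 experts rate it 4 or 5 on a 5-point scale); the threshold, scale, and ratings are assumptions, since the abstract does not report the actual consensus rule:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical Delphi round: 22 experts rate 81 candidate items on a
# 5-point scale (1 = exclude, 5 = essential).
ratings = rng.integers(1, 6, size=(22, 81))          # experts x items
ratings[:, :16] = rng.integers(4, 6, size=(22, 16))  # make 16 items widely endorsed

# Consensus rule (illustrative): share of experts rating the item 4-5.
endorsement = (ratings >= 4).mean(axis=0)
retained = np.flatnonzero(endorsement >= 0.80)
print(f"{retained.size} of {ratings.shape[1]} items reach consensus")
```

In a real Delphi process this filter is applied iteratively over several rounds, with items reworded or dropped between rounds, as in the four survey rounds described above.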



2018 ◽  
Vol 13 (1) ◽  
pp. 99-106 ◽  
Author(s):  
Maria Cecilie Havemann ◽  
Torur Dalsgaard ◽  
Jette Led Sørensen ◽  
Kristin Røssaak ◽  
Steffen Brisling ◽  
...  


2020 ◽  
Vol 134 (5) ◽  
pp. 415-418 ◽  
Author(s):  
R Bannon ◽  
K E Stewart ◽  
M Bannister

Abstract Objectives This study aimed to assess the published literature on non-technical skills in otolaryngology surgery, examine the applicability of any research to others' practice, and explore how the published literature can identify areas for further development and guide future research. Methods A systematic review was conducted using the following key words: ‘otolaryngology’, ‘otorhinolaryngology’, ‘ENT’, ‘ENT surgery’, ‘ear, nose and throat surgery’, ‘head and neck surgery’, ‘thyroid surgery’, ‘parathyroid surgery’, ‘otology’, ‘rhinology’, ‘laryngology’, ‘skull base surgery’, ‘airway surgery’, ‘non-technical skills’, ‘non technical skills for surgeons’, ‘NOTSS’, ‘behavioural markers’ and ‘behavioural assessment tool’. Results Three publications were included in the review – 1 randomised controlled trial and 2 cohort studies – involving 78 participants. All were simulation-based studies involving otolaryngology surgeons in training. Conclusion Little research has been undertaken on non-technical skills in otolaryngology. Training surgeons’ non-technical skill levels are similar across every tested aspect. The research already performed can guide further studies, particularly amongst non-training otolaryngology surgeons and in both emergency and elective non-simulated environments.



2016 ◽  
Vol 2 (3) ◽  
pp. 61-67 ◽  
Author(s):  
Jane Runnacles ◽  
Libby Thomas ◽  
James Korndorffer ◽  
Sonal Arora ◽  
Nick Sevdalis

Introduction Debriefing is essential to maximise the simulation-based learning experience, but until recently, there was little guidance on an effective paediatric debriefing. A debriefing assessment tool, Objective Structured Assessment of Debriefing (OSAD), has been developed to measure the quality of feedback in paediatric simulation debriefings. This study gathers and evaluates the validity evidence of OSAD with reference to the contemporary hypothesis-driven approach to validity. Methods Expert input on the paediatric OSAD tool from 10 paediatric simulation facilitators provided validity evidence based on content and feasibility (phase 1). Evidence for internal structure validity was sought by examining reliability of scores from video ratings of 35 postsimulation debriefings; and evidence for validity based on relationship to other variables was sought by comparing results with trainee ratings of the same debriefings (phase 2). Results Simulation experts’ scores were significantly positive regarding the content of OSAD and its instructions. OSAD's feasibility was demonstrated with positive comments regarding clarity and application. Inter-rater reliability was demonstrated with intraclass correlations above 0.45 for 6 of the 7 dimensions of OSAD. The internal consistency of OSAD (Cronbach α) was 0.78. Pearson correlation of trainee total score with OSAD total score was 0.82 (p<0.001), demonstrating validity evidence based on relationships to other variables. Conclusion The paediatric OSAD tool provides a structured approach to debriefing, which is evidence-based, has multiple sources of validity evidence and is relevant to end-users. OSAD may be used to improve the quality of debriefing after paediatric simulations.
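The inter-rater reliability figures above (intraclass correlations per OSAD dimension) are computed from a targets-by-raters score matrix. A sketch of ICC(2,1) under the Shrout and Fleiss conventions (two-way random effects, absolute agreement, single rater), using synthetic scores for 35 debriefings; the rater count and score distributions are assumptions for illustration:

```python
import numpy as np

def icc_2_1(Y: np.ndarray) -> float:
    """ICC(2,1) for an (n_targets, k_raters) score matrix:
    two-way random effects, absolute agreement, single rater."""
    n, k = Y.shape
    grand = Y.mean()
    row = Y.mean(axis=1)                                      # per-target means
    col = Y.mean(axis=0)                                      # per-rater means
    ms_r = k * ((row - grand) ** 2).sum() / (n - 1)           # between targets
    ms_c = n * ((col - grand) ** 2).sum() / (k - 1)           # between raters
    sse = ((Y - row[:, None] - col[None, :] + grand) ** 2).sum()
    ms_e = sse / ((n - 1) * (k - 1))                          # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(7)
# Hypothetical data: 35 debriefings, each scored by 2 raters on one dimension.
true = rng.normal(3.5, 0.8, size=(35, 1))
scores = true + rng.normal(0, 0.5, size=(35, 2))
icc = icc_2_1(scores)
print(f"ICC(2,1) = {icc:.2f}")
```

The choice of ICC form matters: agreement forms (as here) penalize systematic rater offsets, while consistency forms do not, so the form should be reported alongside the coefficient.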



2015 ◽  
Vol 40 (3) ◽  
pp. 290-291 ◽  
Author(s):  
Karthikeyan Kallidaikurichi Srinivasan ◽  
George Shorten


2021 ◽  
Author(s):  
Mindy Ju ◽  
Naike Bochatay ◽  
Kathryn Robertson ◽  
James Frank ◽  
Bridget O’Brien ◽  
...  

Abstract Background: Despite the widespread adoption of interprofessional simulation-based education (IPSE) in healthcare as a means to optimize interprofessional teamwork, data suggest that IPSE may not achieve these intended goals due to a gap between the ideals and the realities of implementation. Methods: We conducted a qualitative case study that used the framework method to understand what and how core principles from guidelines for interprofessional education (IPE) and simulation-based education (SBE) were implemented in existing in situ IPSE programs. We observed simulation sessions and interviewed facilitators and directors at seven programs. Results: We found considerable variability in how IPSE programs apply and implement core principles derived from IPE and SBE guidelines, with some principles applied by most programs (e.g., “active learning”, “psychological safety”, “feedback during debriefing”) and others rarely applied (e.g., “interprofessional competency-based assessment”, “repeated and distributed practice”). Through interviews we identified that buy-in, resources, lack of outcome measures, and power discrepancies influenced the extent to which principles were applied. Conclusion: To achieve IPSE’s intended goals of optimizing interprofessional teamwork, programs should shift from designing for the ideal of IPSE to designing for the realities of its implementation.



2019 ◽  
Author(s):  
Claudia Behrens ◽  
Diana H. Dolmans ◽  
Gerard J. Gormley ◽  
Erik Driessen

Abstract Background Simulation-based learning (SBL) is increasingly used to equip students for clinical practice. Simulations that mirror the complex realities of clinical practice have the potential to induce a range of emotions, without a clear understanding of their impact on learning and the learner. Students’ emotional states have important effects on their learning process that can be either positive or negative, and are often difficult to predict. We aimed to determine: (1) to what extent achievement emotions are experienced by medical students during a complex simulation-based learning activity, i.e., a ward round simulation (WRS); (2) what their performance scores are and to what extent these scores correlate with emotions; and (3) how these emotions are perceived to impact learning. Methods A mixed-methods approach was used in this study. Using an Achievement Emotion Questionnaire, we explored undergraduate medical students’ emotions as they participated in a complex ward round simulation. Their performance was rated using an observational ward round assessment tool and correlated with emotion scores. Six focus groups were conducted to provide a deeper understanding of their emotional and learning experiences. Results Students experienced a range of emotions during the simulation: they felt proud, enjoyed the simulation, and performed well. Students felt proud because they could show in the complex simulation what they had learned so far. Students reported moderate levels of anxiety and low levels of frustration and shame. We found non-significant correlations between achievement emotions and performance during the ward round simulation. Conclusions Placing undergraduate students in highly complex simulations that they can handle raises positive academic achievement emotions, which seem to support students’ learning and motivation.



2021 ◽  
Vol 8 (3) ◽  
pp. 193-208
Author(s):  
Agezegn Asegid ◽  
Nega Assefa

Abstract Objective To summarize and produce aggregated evidence on the effect of simulation-based teaching on skill performance in the nursing profession. Simulation is an active learning strategy that uses various resources to approximate a real situation, enabling learners to improve their skills and knowledge in a controlled environment. Methods A systematic literature search of original research articles was carried out through the Google Scholar, Medline, and Cumulative Index to Nursing and Allied Health Literature (CINAHL) databases. Studies conducted on simulation-based teaching and skill performance among nursing students or clinical nursing staff from 2010 to 2019, and published in English, were included. Methodological quality was assessed with the Joanna Briggs Institute checklist, and risk of bias was assessed with the Cochrane risk-of-bias tool and the risk-of-bias assessment tool for non-randomized studies (ROBINS-I). Results Initially, 638 titles were obtained from the 3 sources, and 24 original studies with 2209 participants were included in the final analysis. Of these, 14 (58.3%) used a single-group pre-post design, 7 (29.1%) used a high-fidelity simulator (HFS), and 7 (29.1%) used a virtual simulator (VS). Twenty (83.3%) studies reported improved skill performance following simulation-based teaching. Simulation-based teaching improved skill performance across group types (single or double), study regions, and among high-fidelity (HF), low-fidelity (LF), and standardized patient (SP) users, but the effect for virtual and medium-fidelity simulators was not statistically significant. Overall, simulation-based teaching improved the skill performance score in the experimental groups (d = 1.01, 95% confidence interval [CI] [0.69–1.33], Z = 6.18, P < 0.01, I² = 93.9%). Significant heterogeneity and publication bias were observed in the pooled analysis.
Conclusions Simulation did improve skill performance among the intervention groups, but the conclusion is uncertain owing to the significant heterogeneity. The wide variation among the original studies necessitates well-defined skill assessment methods and a standardized simulation set-up for proper assessment of effects.
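A pooled effect with a confidence interval and a heterogeneity estimate, as reported above, is the typical output of a random-effects meta-analysis. A minimal DerSimonian-Laird sketch with hypothetical per-study effect sizes (not the 24 studies from this review):

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects pooled effect (DerSimonian-Laird) from per-study
    standardized mean differences d and their within-study variances v."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v
    d_fixed = (w * d).sum() / w.sum()
    q = (w * (d - d_fixed) ** 2).sum()                 # Cochran's Q
    df = d.size - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = 1.0 / (v + tau2)                          # re-weight with tau^2
    pooled = (w_star * d).sum() / w_star.sum()
    se = np.sqrt(1.0 / w_star.sum())
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0   # I^2 heterogeneity
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Hypothetical per-study Cohen's d values and variances
d = [1.2, 0.4, 1.6, 0.8, 1.1]
v = [0.05, 0.04, 0.10, 0.06, 0.05]
pooled, ci, i2 = dersimonian_laird(d, v)
print(f"d = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], I^2 = {i2:.1f}%")
```

When I² is as high as the 93.9% reported above, the random-effects CI widens to reflect between-study variance, which is why the review's conclusion is hedged despite a large pooled effect.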



2020 ◽  
Vol 08 (06) ◽  
pp. E783-E791
Author(s):  
Andreas Slot Vilmann ◽  
Christian Lachenmeier ◽  
Morten Bo Søndergaard Svendsen ◽  
Bo Søndergaard ◽  
Yoon Soo Park ◽  
...  

Abstract Background and study aims Patient safety during colonoscopy depends highly on endoscopist competence. Endoscopic societies have called for objective, regular assessment of endoscopists, but existing assessment tools are time-consuming and prone to bias. We aimed to develop, and gather validity evidence for, a computerized assessment tool delivering automatic and unbiased assessment of colonoscopy based on 3-dimensional coordinates from the colonoscope. Methods Twenty-four participants were recruited and divided into two groups by experience: 12 experienced and 12 novices. Participants performed twice on a physical phantom model with a standardized alpha loop in the sigmoid colon. Data were gathered directly from the Olympus ScopeGuide system, which provides XYZ coordinates along the length of the colonoscope. Five motor-skill measures were developed from these data: Travel Length, Tip Progression, Chase Efficiency, Shaft Movement Without Tip Progression, and Looping. Results The experienced endoscopists had lower Travel Length (P < 0.001), Tip Progression (P < 0.001), Chase Efficiency (P = 0.001), and Looping (P = 0.006), and higher Shaft Movement Without Tip Progression (P < 0.001), in reaching the cecum compared with the novices. A composite score was developed from the five measures to create a combined score of progression, the 3D-Colonoscopy-Progression-Score (3D-CoPS). The 3D-CoPS revealed a significant difference between groups (experienced: 0.495 (SD 0.303); novices: –0.454 (SD 0.707); P < 0.001). Conclusion This study presents a novel, real-time computerized assessment tool for colonoscopy, and strong evidence of validity was gathered in a simulation-based setting. The system shows promise for automatic, unbiased, and continuous assessment of colonoscopy performance.
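A composite like the 3D-CoPS can be assembled by standardizing each motor-skill measure and orienting it so that higher values mean better progression. A sketch under assumed sign conventions and equal weighting; the abstract does not give the paper's exact formula, and all values below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
# Hypothetical per-procedure measures (rows = 8 procedures). Columns:
# travel length, tip progression, chase efficiency, looping,
# shaft movement without tip progression.
measures = rng.normal(loc=[50.0, 30.0, 20.0, 4.0, 10.0],
                      scale=[10.0, 6.0, 4.0, 1.0, 2.0],
                      size=(8, 5))

# Orientation assumed from the group comparison above: experienced scored
# lower on the first four measures and higher on the last, so flip signs
# to make higher = better before averaging.
signs = np.array([-1, -1, -1, -1, 1])

# z-score each measure across procedures, then average into one score.
z = (measures - measures.mean(axis=0)) / measures.std(axis=0, ddof=1)
composite = (signs * z).mean(axis=1)
print(np.round(composite, 2))
```

Because each column is z-scored across the sample, the composite is centered at zero, which matches the abstract's pattern of a positive mean for experienced endoscopists and a negative mean for novices.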


