Validity Evidence for the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT)

2016 ◽  
Vol 13 (1) ◽  
pp. 60-68 ◽  
Author(s):  
Gerben E. Breimer ◽  
Faizal A. Haji ◽  
Giuseppe Cinalli ◽  
Eelco W. Hoving ◽  
James M. Drake

Abstract BACKGROUND: Growing demand for transparent and standardized methods of evaluating surgical competence prompted the construction of the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). OBJECTIVE: To provide validity evidence for the NEVAT by reporting on the tool's internal structure and its relationship with surgical expertise during simulation-based training. METHODS: The NEVAT was used to assess the performance of trainees and faculty at an international neuroendoscopy workshop. All participants performed an endoscopic third ventriculostomy (ETV) on a synthetic simulator. Participants were scored simultaneously by 2 raters using the NEVAT procedural checklist and global rating scale (GRS). Evidence of internal structure was collected by calculating the interrater reliability and internal consistency of raters' scores. Evidence of relationships with other variables was collected by comparing the ETV performance of experts, experienced trainees, and novices using Jonckheere's test (evidence of construct validity). RESULTS: Thirteen experts, 11 experienced trainees, and 10 novices participated. Interrater reliability, measured by the intraclass correlation coefficient, was 0.82 for the checklist and 0.94 for the GRS. Internal consistency (Cronbach's α) was 0.74 for the checklist and 0.97 for the GRS. Median scores (interquartile range) on the checklist for novices, experienced trainees, and experts were 0.69 (0.58-0.86), 0.85 (0.63-0.89), and 0.85 (0.81-0.91), respectively; the corresponding median GRS scores were 3.1 (2.5-3.8), 3.7 (2.2-4.3), and 4.6 (4.4-4.9). Jonckheere's test showed that median checklist and GRS scores increased with performer expertise (P = .04 and P = .002, respectively). CONCLUSION: This study provides validity evidence for the NEVAT, supporting its use as a standardized method of evaluating neuroendoscopic competence during simulation-based training.
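
For readers unfamiliar with the trend test reported above, the sketch below shows how a Jonckheere-Terpstra statistic can be computed as a sum of pairwise Mann-Whitney counts across the ordered groups (novices < experienced trainees < experts). It is a minimal illustration with made-up scores, not the authors' analysis code, and the normal approximation shown omits tie corrections.

import numpy as np
from scipy.stats import norm

def jonckheere_terpstra(groups):
    """Jonckheere-Terpstra test for an increasing trend across ordered groups.

    `groups` is a list of 1-D score arrays ordered by hypothesized expertise.
    Returns the JT statistic, the z value, and a one-sided p value
    (normal approximation, no tie correction).
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            lo, hi = groups[i], groups[j]
            # Mann-Whitney-style count: pairs in which the higher-ranked group scores higher
            jt += np.sum(hi[None, :] > lo[:, None]) + 0.5 * np.sum(hi[None, :] == lo[:, None])
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean = (N ** 2 - np.sum(n ** 2)) / 4.0
    var = (N ** 2 * (2 * N + 3) - np.sum(n ** 2 * (2 * n + 3))) / 72.0
    z = (jt - mean) / np.sqrt(var)
    return jt, z, norm.sf(z)

# Hypothetical GRS scores per group (illustrative values only)
novices = [3.1, 2.5, 3.8, 2.9, 3.3]
trainees = [3.7, 2.2, 4.3, 3.9, 3.6]
experts = [4.6, 4.4, 4.9, 4.5, 4.7]
print(jonckheere_terpstra([novices, trainees, experts]))

A larger z (and smaller one-sided p) indicates that scores rise with expertise, which is the ordered alternative the NEVAT study tested.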

Author(s):  
M Stavrakas ◽  
G Menexes ◽  
S Triaridis ◽  
P Bamidis ◽  
J Constantinidis ◽  
...  

Abstract Objective This study developed an assessment tool, based on the objective structured assessment of technical skills principles, for evaluating surgical skills in cortical mastoidectomy. The objective structured assessment of technical skill is a well-established tool for evaluating surgical ability. The study also aimed to identify the best material and printing method for a three-dimensional printed temporal bone model. Methods Twenty-four otolaryngologists in training were asked to perform a cortical mastoidectomy on a three-dimensional printed temporal bone (selective laser sintering resin). They were scored according to the objective structured assessment of technical skill in temporal bone dissection tool developed in this study and an already validated global rating scale. Results Two external assessors scored the candidates, and the objective structured assessment of technical skill in temporal bone dissection tool demonstrated key aspects of validity and reliability, supporting its use in training and in evaluating technical skills in mastoid surgery. Conclusion Apart from validating the new tool for temporal bone dissection training, the study showed that evolving three-dimensional printing technology is of high value in simulation training, with several advantages over traditional teaching methods.


2021 ◽  
Vol 8 ◽  
pp. 238212052110424
Author(s):  
Brittany J Daulton ◽  
Laura Romito ◽  
Zach Weber ◽  
Jennifer Burba ◽  
Rami A Ahmed

Few instruments exist to assess individual performance in simulation-based interprofessional education (IPE). The purpose of this study was to apply the Simulation-Based Interprofessional Teamwork Assessment Tool (SITAT) to the individualized assessment of medicine, pharmacy, and nursing students (N = 94) in a team-based IPE simulation, to explore potential differences between disciplines, and to calculate reliability estimates for the tool. An analysis of variance provided evidence that there was no statistically significant difference among professions in overall competency (F(2, 91) = 0.756, P = .472). Competency ratings for nursing (M = 3.06, SD = 0.45), medicine (M = 3.19, SD = 0.42), and pharmacy (M = 3.08, SD = 0.49) students were comparable across professions. Cronbach's alpha provided a reliability estimate for the tool, with evidence of high internal consistency (α = .92). The interrater reliability of the SITAT was also investigated: there was moderate absolute agreement across the 3 faculty raters using the 2-way mixed model design and “average” unit (kappa = 0.536, P < .001, 95% CI [0.34, 0.68]). The novel SITAT demonstrates internal consistency and interrater reliability when used to evaluate individual performance during IPE simulation. The SITAT provides value in the education and evaluation of individual students engaged in an IPE curriculum.
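
Both reliability quantities reported above can be reproduced from a subjects-by-raters (or subjects-by-items) score matrix. The sketch below is a generic illustration with made-up numbers, assuming complete data and the McGraw and Wong ICC(A,k) form (two-way model, absolute agreement, average of k raters); it is not the study's analysis code.

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def icc_agreement_average(ratings):
    """ICC for absolute agreement, average of k raters, from an
    (n_subjects x k_raters) matrix with no missing values."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)   # per-subject means
    col_means = x.mean(axis=0)   # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # between-subjects mean square
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # between-raters mean square
    mse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (msc - mse) / n)

# Hypothetical overall ratings: 5 students x 3 faculty raters (illustrative values only)
ratings = np.array([[3.0, 3.2, 2.8],
                    [3.5, 3.4, 3.6],
                    [2.6, 2.9, 2.7],
                    [3.8, 3.6, 3.9],
                    [3.1, 3.0, 3.3]])
print(icc_agreement_average(ratings))
print(cronbach_alpha(ratings))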


2019 ◽  
Vol 11 (2) ◽  
pp. 168-176
Author(s):  
Zia Bismilla ◽  
Tehnaz Boyle ◽  
Karen Mangold ◽  
Wendy Van Ittersum ◽  
Marjorie Lee White ◽  
...  

ABSTRACT Background The Accreditation Council for Graduate Medical Education (ACGME) Milestone projects required each specialty to identify essential skills and to develop means of assessment, with supporting validity evidence, for trainees. Several specialties rate trainees on a milestone subcompetency related to working in interprofessional teams. A tool that assesses trainee competence in any role on an interprofessional team across a variety of scenarios would be valuable and well suited to simulation-based assessment. Objective We developed a tool for simulation settings that assesses interprofessional teamwork in trainees. Methods In 2015, existing tools that assess teamwork or interprofessionalism using direct observation were systematically reviewed for appropriateness, generalizability, adaptability, ease of use, and resources required. Items from these tools were entered into an iterative Delphi process with multidisciplinary pediatrics experts from June 2016 to January 2017 to develop the assessment tool. Results Thirty-one unique tools were identified. A 2-stage review narrowed this list to 5 tools, and 81 items were extracted. Twenty-two pediatrics experts participated in 4 rounds of Delphi surveys, with response rates ranging from 82% to 100%. Sixteen items reached consensus for inclusion in the final tool. A global 4-point rating scale from novice to proficient was developed. Conclusions A novel tool to assess interprofessional teamwork in individual trainees in a simulated setting was developed using a systematic review and Delphi methodology. This is the first step toward establishing the validity evidence necessary to use this tool for competency-based assessment.
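
As a rough illustration of how such Delphi rounds are typically scored, the sketch below flags candidate items on which an expert panel reaches a pre-set level of endorsement. The 1-5 importance scale, the >= 4 "endorse" cutoff, and the 80% consensus threshold are assumptions for the example, not the criteria used in this study.

import numpy as np

# Hypothetical Delphi round: 22 panelists rating 81 candidate items on a 1-5 scale
# (ratings are simulated and clustered toward the high end for the example)
rng = np.random.default_rng(0)
ratings = rng.integers(3, 6, size=(22, 81))

endorsement = (ratings >= 4).mean(axis=0)        # fraction of the panel endorsing each item
retained = np.flatnonzero(endorsement >= 0.80)   # items reaching the assumed consensus threshold
print(f"{retained.size} items reach consensus this round:", retained)

Items falling short of the threshold are typically revised or dropped and re-presented in the next round, which is how iterative Delphi surveys converge on a final item set.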


2010 ◽  
Vol 112 (4) ◽  
pp. 985-992 ◽  
Author(s):  
Heinz R. Bruppacher ◽  
Syed K. Alam ◽  
Vicki R. LeBlanc ◽  
David Latter ◽  
Viren N. Naik ◽  
...  

Background Simulation-based training is useful in improving physicians' skills. However, no randomized controlled trials have demonstrated the effects of simulation teaching on real-life patient care. This study aimed to determine whether simulation-based training or an interactive seminar resulted in better patient care during weaning from cardiopulmonary bypass (CPB), a high-stakes clinical setting. Methods This study was conducted as a prospective, single-blinded, randomized controlled trial. After institutional review board approval, 20 anesthesiology trainees, postgraduate year 4 or higher and inexperienced in CPB weaning, and 60 patients scheduled for elective coronary artery bypass grafting were recruited. Each trainee received a teaching syllabus for CPB weaning 1 week before attempting to wean a patient from CPB (pretest). One week later, each trainee received a 2-h session of either high-fidelity simulation-based training or an interactive seminar. Each trainee then weaned patients from CPB within 2 weeks (posttest) and 5 weeks (retention test) of the intervention. Clinical performance was measured using the validated Anesthesiologists' Nontechnical Skills Global Rating Scale and a checklist of expected clinical actions. Results Pretest Global Rating Scale and checklist performances were similar. The simulation group scored significantly higher than the seminar group at both posttest (Global Rating Scale [mean ± standard error]: 14.3 ± 0.41 vs. 11.8 ± 0.41, P < 0.001; checklist: 89.9 ± 3.0% vs. 75.4 ± 3.0%, P = 0.003) and retention test (Global Rating Scale: 14.1 ± 0.41 vs. 11.7 ± 0.41, P < 0.001; checklist: 93.2 ± 2.4% vs. 77.0 ± 2.4%, P < 0.001). Conclusion The skills required to wean a patient from CPB can be acquired through simulation-based training. Compared with traditional interactive seminars, simulation-based training leads to improved patient care by senior anesthesiology trainees.


2021 ◽  
Author(s):  
Michele Toussaint

Simulation-based practices are widely utilized in medical education and are known to be a safe and effective way to train and assess learners, improve provider confidence and competency, and improve patient safety. Competency-based initiatives are increasingly used to assess learner proficiency in health professions education. The recent publication of competencies expected of new graduate physician assistants, together with updated accreditation requirements that include assessment of learner competencies in non-knowledge-based domains, led to the creation of this simulation-based summative assessment of learner competency in communication and patient care skills for physician assistant students. The purpose of this quantitative study was to determine whether the simulation assessment had appropriate construct validity and rater consistency, and whether learner performance on the simulation exam correlated with performance in required Supervised Clinical Training Experiences for measures of communication and patient care skills. While raters for the simulation assessment showed minimal variability, measures of internal consistency did not reach suitable thresholds for patient care skills. The communication skills assessment reached the minimum suitable threshold for internal consistency with minor revisions. No correlation was found between exam performance in communication or patient care skills and clinical practice ratings. Several factors may explain these results, including the use of checklists rather than global rating scales in the simulation exam, faculty raters with broad and diverse clinical backgrounds, observation-related factors on the part of the student, and the highly complex, multidimensional nature of provider-patient interactions.


2007 ◽  
Vol 106 (5) ◽  
pp. 907-915 ◽  
Author(s):  
Pamela J. Morgan ◽  
Richard Pittini ◽  
Glenn Regehr ◽  
Carol Marrs ◽  
Michèle F. Haley

Background The National Confidential Enquiry into Maternal Deaths identified "lack of communication and teamwork" as a leading cause of substandard obstetric care. The authors used high-fidelity simulation to present obstetric scenarios for team assessment. Methods Obstetric nurses, physicians, and resident physicians were repeatedly assigned to teams of five or six, with each team managing one of four scenarios. Each person participated in two or three scenarios with differently constructed teams. Participants and nine external raters rated the teams' performances using a Human Factors Rating Scale (HFRS) and a Global Rating Scale (GRS). Interrater reliability was determined using intraclass correlations and Cronbach's alpha. Analyses of variance were used to determine the reliability of the two measures and the effects of both scenario and rater profession (R.N. vs. M.D.) on scores. Pearson product-moment correlations were used to compare external with self-generated assessments. Results The average of nine external rater scores showed good reliability for both the HFRS and the GRS; however, the intraclass correlation coefficient for a single rater was low. There was some effect of rater profession on self-generated HFRS scores but not on GRS scores. An analysis of profession-specific subscores on the HFRS revealed no interaction between the profession of the rater and the profession being rated. There was low correlation between externally generated and self-generated team assessments. Conclusions This study does not support the use of the HFRS for the assessment of obstetric teams. The GRS shows promise as a summative, but not a formative, assessment tool. A domain-specific behavioral marking system for obstetric teams is needed.
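
The gap noted above between good average-rater reliability and a low single-rater intraclass correlation follows directly from the Spearman-Brown relation; the sketch below shows the conversion in both directions. The ICC values used are illustrative, not taken from the study.

def average_rater_reliability(icc_single, k):
    """Reliability of the mean of k raters implied by a single-rater ICC (Spearman-Brown)."""
    return k * icc_single / (1 + (k - 1) * icc_single)

def single_rater_reliability(icc_average, k):
    """Single-rater ICC implied by the reliability of a k-rater average (inverse Spearman-Brown)."""
    return icc_average / (k - (k - 1) * icc_average)

# e.g. an average-of-nine-raters reliability of 0.85 implies a modest single-rater ICC
print(single_rater_reliability(0.85, k=9))   # ~0.39
print(average_rater_reliability(0.39, k=9))  # ~0.85, back again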


2019 ◽  
Vol 07 (05) ◽  
pp. E678-E684 ◽  
Author(s):  
Michael Scaffidi ◽  
Catharine Walsh ◽  
Rishad Khan ◽  
Colleen Parker ◽  
Ahmed Al-Mazroui ◽  
...  

Abstract Background and study aims Novice endoscopists are inaccurate in the self-assessment of their procedures. One means of improving self-assessment accuracy is video-based feedback. We aimed to determine the comparative effectiveness of three video-based interventions on novice endoscopists' self-assessment accuracy of endoscopic competence. Materials and methods Novice endoscopists (< 20 previous procedures performed) were recruited. Participants completed a simulated esophagogastroduodenoscopy (EGD) on a virtual reality simulator. They were then randomized to one of three groups: self-video review (SVR), which involved watching a recorded video of their own performance; benchmark video review (BVR), which involved watching a video of a simulated EGD completed by an expert; and self- and benchmark video review (SBVR), which involved watching both videos. Participants then completed two additional simulated EGD cases. Self-assessments were conducted immediately after the first procedure, after the video intervention, and after the two additional procedures. External assessments were conducted from video recordings by two experienced endoscopists who were blinded to participant identity and group assignment. External and self-assessments were completed using the global rating scale component of the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT GRS). Results Fifty-one participants completed the study. The BVR group had significantly improved short-term self-assessment accuracy compared with the SBVR group (P = .005). The SBVR group demonstrated significantly improved self-assessment accuracy over time (P = .016). There were no significant effects of group or time for the SVR group. Conclusions Video-based interventions, particularly the combined use of self- and benchmark video review, can improve the accuracy of novices' self-assessment of endoscopic competence.


CJEM ◽  
2016 ◽  
Vol 18 (6) ◽  
pp. 405-413 ◽  
Author(s):  
Robert McGraw ◽  
Tim Chaplin ◽  
Conor McKaigney ◽  
Louise Rang ◽  
Melanie Jaeger ◽  
...  

Abstract Objective To develop a simulation-based curriculum for residents to learn ultrasound-guided (USG) central venous catheter (CVC) insertion, and to study the volume and type of practice that leads to technical proficiency. Methods Ten postgraduate year two residents from the Departments of Emergency Medicine and Anesthesiology completed four training sessions of two hours each, at two-week intervals, in which they engaged in a structured program of deliberate practice of the fundamental skills of USG CVC insertion on a simulator. Progress during training was monitored using regular hand motion analysis (HMA), and performance benchmarks were determined by HMA of local experts. Blinded assessment of video recordings was done at the end of training to assess technical competence using a global rating scale (GRS). Results None of the residents met any of the expert benchmarks at baseline. Over the course of training, the residents' HMA metrics revealed steady and significant improvement in technical proficiency. By the end of the fourth session, six of 10 residents had faster procedure times than the mean expert benchmark, and nine of 10 residents had more efficient left- and right-hand motions than the mean expert benchmarks. Nine residents achieved mean GRS scores rating them competent to perform independently. Conclusion We successfully developed a simulation-based curriculum for residents learning the skills of USG CVC insertion. Our results suggest that engaging residents in three to four distributed sessions of deliberate practice of the fundamental skills of USG CVC insertion leads to steady and marked improvement in technical proficiency, with individuals approaching or exceeding expert-level benchmarks.


Author(s):  
B Santyr ◽  
M Abbass ◽  
A Chalil ◽  
D Krivosheya ◽  
LM Denning ◽  
...  

Background: Microsurgical techniques remain a cornerstone of neurosurgical training. Despite this, neurosurgical microvascular case volumes are decreasing as endovascular and minimally invasive options expand. As such, educators are looking towards simulation to supplement operative exposure. We review a single institution's experience with a comprehensive, longitudinal microsurgical simulation training program and evaluate its effectiveness. Methods: Consecutive postgraduate year 2 (PGY-2) neurosurgery residents completed a one-year curriculum spanning 17 training sessions divided into 5 modules of increasing fidelity. Both perfused duck wing and live rat femoral vessel training modules were used. Trainee performance was video recorded and blindly graded using the Objective Structured Assessment of Technical Skills Global Rating Scale. Results: Eighteen participants completed 107 microvascular anastomoses during the study. There was significant improvement in six measurable skills during the curriculum. Mean overall score was significantly higher on the fifth attempt compared to the first attempt for all three live anastomotic modules (p < 0.001). Each module had a different improvement profile across the skills assessed. The greatest improvement was observed during artery-to-artery anastomosis. Conclusions: This high-fidelity microsurgical simulation curriculum demonstrated a significant improvement in the six microneurosurgical skills assessed, supporting its use as an effective teaching model. Transferability to the operative environment is actively being investigated.

