Examining validity evidence for a simulation-based assessment tool for basic robotic surgical skills

2018 ◽  
Vol 13 (1) ◽  
pp. 99-106 ◽  
Author(s):  
Maria Cecilie Havemann ◽  
Torur Dalsgaard ◽  
Jette Led Sørensen ◽  
Kristin Røssaak ◽  
Steffen Brisling ◽  
...  
2016 ◽  
Vol 2 (3) ◽  
pp. 61-67 ◽  
Author(s):  
Jane Runnacles ◽  
Libby Thomas ◽  
James Korndorffer ◽  
Sonal Arora ◽  
Nick Sevdalis

Introduction: Debriefing is essential to maximise the simulation-based learning experience, but until recently there was little guidance on effective paediatric debriefing. A debriefing assessment tool, the Objective Structured Assessment of Debriefing (OSAD), has been developed to measure the quality of feedback in paediatric simulation debriefings. This study gathers and evaluates the validity evidence of OSAD with reference to the contemporary hypothesis-driven approach to validity. Methods: Expert input on the paediatric OSAD tool from 10 paediatric simulation facilitators provided validity evidence based on content and feasibility (phase 1). Evidence for internal structure validity was sought by examining reliability of scores from video ratings of 35 post-simulation debriefings, and evidence for validity based on relationships to other variables was sought by comparing results with trainee ratings of the same debriefings (phase 2). Results: Simulation experts' scores were significantly positive regarding the content of OSAD and its instructions. OSAD's feasibility was demonstrated with positive comments regarding clarity and application. Inter-rater reliability was demonstrated, with intraclass correlations above 0.45 for 6 of the 7 dimensions of OSAD. The internal consistency of OSAD (Cronbach's α) was 0.78. The Pearson correlation of trainee total score with OSAD total score was 0.82 (p < 0.001), demonstrating validity evidence based on relationships to other variables. Conclusion: The paediatric OSAD tool provides a structured approach to debriefing that is evidence-based, has multiple sources of validity evidence, and is relevant to end-users. OSAD may be used to improve the quality of debriefing after paediatric simulations.
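For readers unfamiliar with the reliability statistics reported above, here is a minimal sketch in Python of how Cronbach's alpha across the seven OSAD dimensions and the Pearson correlation between trainee and OSAD total scores could be computed. All data and variable names are synthetic placeholders for illustration, not the study's data or code.

```python
# Illustrative sketch (not the study's code): Cronbach's alpha across the
# seven OSAD dimensions and the Pearson correlation between trainee and
# OSAD total scores, using synthetic ratings.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical matrix: 35 debriefings (rows) x 7 OSAD dimensions (columns),
# each scored on a 1-5 scale.
osad_scores = rng.integers(1, 6, size=(35, 7)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

osad_total = osad_scores.sum(axis=1)
# Hypothetical trainee ratings of the same 35 debriefings.
trainee_total = osad_total + rng.normal(0, 2, size=35)

print("Cronbach's alpha:", round(cronbach_alpha(osad_scores), 2))
r, p = pearsonr(trainee_total, osad_total)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```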


2019 ◽  
Vol 11 (2) ◽  
pp. 168-176
Author(s):  
Zia Bismilla ◽  
Tehnaz Boyle ◽  
Karen Mangold ◽  
Wendy Van Ittersum ◽  
Marjorie Lee White ◽  
...  

Abstract. Background: The Accreditation Council for Graduate Medical Education (ACGME) Milestone projects required each specialty to identify essential skills and develop means of assessment, with supporting validity evidence, for trainees. Several specialties rate trainees on a milestone subcompetency related to working in interprofessional teams. A tool that assesses trainee competence in any role on an interprofessional team, across a variety of scenarios, would be valuable and well suited to simulation-based assessment. Objective: We developed a tool for simulation settings that assesses interprofessional teamwork in trainees. Methods: In 2015, existing tools that assess teamwork or interprofessionalism using direct observation were systematically reviewed for appropriateness, generalizability, adaptability, ease of use, and resources required. Items from these tools were entered into a Delphi process with multidisciplinary pediatrics experts, run iteratively from June 2016 to January 2017, to develop an assessment tool. Results: Thirty-one unique tools were identified. A 2-stage review narrowed this list to 5 tools, and 81 items were extracted. Twenty-two pediatrics experts participated in 4 rounds of Delphi surveys, with response rates ranging from 82% to 100%. Sixteen items reached consensus for inclusion in the final tool. A global 4-point rating scale from novice to proficient was developed. Conclusions: A novel tool to assess the interprofessional teamwork of individual trainees in a simulated setting was developed using a systematic review and Delphi methodology. This is the first step in establishing the validity evidence necessary to use this tool for competency-based assessment.
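As a rough illustration of how a single Delphi round might be tallied, the sketch below applies a hypothetical consensus rule (at least 80% of experts rating an item 3 or 4 on the 4-point scale). The abstract does not state the study's actual consensus criterion, so both the rule and the data here are assumptions.

```python
# Minimal sketch of tallying one Delphi round. The consensus rule below
# (>= 80% of experts rating an item 3 or 4 on the 4-point scale) is an
# assumption for illustration; the study's actual criteria may differ.
import numpy as np

rng = np.random.default_rng(1)

n_experts, n_items = 22, 81
# Hypothetical ratings: each expert scores each candidate item from 1-4.
ratings = rng.integers(1, 5, size=(n_experts, n_items))

agreement = (ratings >= 3).mean(axis=0)        # share of experts rating 3 or 4
retained = np.flatnonzero(agreement >= 0.80)   # items carried to the next round

print(f"{retained.size} of {n_items} items meet the assumed consensus rule")
```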


2016 ◽  
Vol 13 (1) ◽  
pp. 60-68 ◽  
Author(s):  
Gerben E. Breimer ◽  
Faizal A. Haji ◽  
Giuseppe Cinalli ◽  
Eelco W. Hoving ◽  
James M. Drake

Abstract BACKGROUND: Growing demand for transparent and standardized methods for evaluating surgical competence prompted the construction of the Neuro-Endoscopic Ventriculostomy Assessment Tool (NEVAT). OBJECTIVE: To provide validity evidence of the NEVAT by reporting on the tool's internal structure and its relationship with surgical expertise during simulation-based training. METHODS: The NEVAT was used to assess performance of trainees and faculty at an international neuroendoscopy workshop. All participants performed an endoscopic third ventriculostomy (ETV) on a synthetic simulator. Participants were simultaneously scored by 2 raters using the NEVAT procedural checklist and global rating scale (GRS). Evidence of internal structure was collected by calculating interrater reliability and internal consistency of raters' scores. Evidence of relationships with other variables was collected by comparing the ETV performance of experts, experienced trainees, and novices using Jonckheere's test (evidence of construct validity). RESULTS: Thirteen experts, 11 experienced trainees, and 10 novices participated. The interrater reliability by the intraclass correlation coefficient for the checklist and GRS was 0.82 and 0.94, respectively. Internal consistency (Cronbach's α) for the checklist and the GRS was 0.74 and 0.97, respectively. Median scores with interquartile range on the checklist and GRS for novices, experienced trainees, and experts were 0.69 (0.58-0.86), 0.85 (0.63-0.89), and 0.85 (0.81-0.91) and 3.1 (2.5-3.8), 3.7 (2.2-4.3) and 4.6 (4.4-4.9), respectively. Jonckheere's test showed that the median checklist and GRS score increased with performer expertise (P = .04 and .002, respectively). CONCLUSION: This study provides validity evidence for the NEVAT to support its use as a standardized method of evaluating neuroendoscopic competence during simulation-based training.
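The Jonckheere-Terpstra test used above generalises the Mann-Whitney U test to groups with a hypothesised ordering (novices, experienced trainees, experts). A minimal sketch, using the standard normal approximation without a tie correction and entirely synthetic scores, might look like this:

```python
# Hedged sketch of a Jonckheere-Terpstra trend test (normal approximation,
# no tie correction), of the kind used to test whether scores increase with
# expertise. The scores below are synthetic, not the study's data.
import numpy as np
from scipy.stats import norm

def jonckheere_terpstra(groups):
    """Groups must be listed in the hypothesised increasing order."""
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            xi = np.asarray(groups[i])[:, None]
            yj = np.asarray(groups[j])[None, :]
            jt += (xi < yj).sum() + 0.5 * (xi == yj).sum()
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean = (N**2 - (n**2).sum()) / 4.0
    var = (N**2 * (2 * N + 3) - (n**2 * (2 * n + 3)).sum()) / 72.0
    z = (jt - mean) / np.sqrt(var)
    return jt, z, norm.sf(z)  # one-sided p-value for an increasing trend

# Hypothetical GRS scores ordered novices -> experienced trainees -> experts.
novices = [2.5, 3.1, 2.8, 3.6, 3.0, 2.9, 3.3, 2.6, 3.8, 3.2]
trainees = [2.2, 3.7, 4.1, 3.5, 4.3, 3.0, 3.9, 3.6, 2.8, 4.0, 3.4]
experts = [4.6, 4.4, 4.9, 4.5, 4.8, 4.2, 4.7, 4.6, 4.3, 4.9, 4.4, 4.8, 4.5]

jt, z, p = jonckheere_terpstra([novices, trainees, experts])
print(f"JT = {jt:.1f}, z = {z:.2f}, one-sided p = {p:.4f}")
```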


2018 ◽  
Vol 26 (7) ◽  
pp. e156-e157
Author(s):  
Jan Duedal Rölfing ◽  
Søren Kold ◽  
Donald D. Anderson ◽  
Matthew Douglas Putnam ◽  
Julie Adams ◽  
...  

Author(s):  
G Shingler ◽  
J Ansell ◽  
S Goddard ◽  
N Warren ◽  
J Torkington

The evidence for using surgical simulators in training and assessment is growing rapidly. A systematic review has demonstrated the validity of different simulators for a range of procedures. Research suggests that skills developed on simulators can be transferred to the operating theatre. The increased interest in simulation comes as a result of the need to streamline surgical training. This is reflected by the numerous simulation-based courses that have become an essential part of modern surgical training.


2020 ◽  
Vol 34 (09) ◽  
pp. 13525-13528
Author(s):  
Judy Goldsmith ◽  
Emanuelle Burton ◽  
David M. Dueber ◽  
Beth Goldstein ◽  
Shannon Sampson ◽  
...  

As is evidenced by the associated AI, Ethics and Society conference, we now take as given the need for ethics education in the AI and general CS curricula. The anticipated surge in AI ethics education will force the field to reckon with delineating and then evaluating learner outcomes to determine what is working and improve what is not. We argue for a more descriptive than normative focus of this ethics education, and propose the development of assessments that can measure descriptive ethical thinking about AI. Such an assessment tool for measuring ethical reasoning capacity in CS contexts must be designed to produce reliable scores for which there is established validity evidence concerning their interpretation and use.


2021 ◽  
Vol 36 (1) ◽  
Author(s):  
Btissam El Hassar ◽  
Cheryl Poth ◽  
Rebecca Gokiert ◽  
Okan Bulut

Organizations are required to evaluate their programs for both learning and accountability purposes, which has increased the need to build their internal evaluation capacity. A remaining challenge is access to tools that lead to valid evidence supporting internal capacity development. The authors share practical insights from the development and use of the Evaluation Capacity Needs Assessment tool and framework, and implications for using its data to make concrete decisions within Canadian contexts. The article refers to validity evidence generated from factor analyses and structural equation modelling, and describes how the framework can be applied to identify individual and organizational evaluation capacity strengths and gaps, concluding with practice considerations and future directions for this work.
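The factor-analytic evidence is only referred to, not reproduced, in the abstract. As a generic illustration of an exploratory factor analysis on survey items, a sketch using scikit-learn might look like the following; the item counts, respondent numbers, and two-factor structure are all assumptions, not the authors' model.

```python
# Generic illustration only: exploratory factor analysis on hypothetical
# survey-item responses. This is not the authors' model; the factor count
# and data are assumed for demonstration.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(2)

# Hypothetical responses: 200 respondents x 12 Likert-type items (1-5).
responses = rng.integers(1, 6, size=(200, 12)).astype(float)

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)

# Loadings of each item on the two assumed factors
# (e.g. individual vs. organizational evaluation capacity).
print(np.round(fa.components_.T, 2))
```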


2020 ◽  
Vol 134 (5) ◽  
pp. 415-418 ◽  
Author(s):  
R Bannon ◽  
K E Stewart ◽  
M Bannister

Abstract. Objectives: This study aimed to assess the published literature on non-technical skills in otolaryngology surgery, to examine the applicability of any research to others' practice, and to explore how the published literature can identify areas for further development and guide future research. Methods: A systematic review was conducted using the following key words: 'otolaryngology', 'otorhinolaryngology', 'ENT', 'ENT surgery', 'ear, nose and throat surgery', 'head and neck surgery', 'thyroid surgery', 'parathyroid surgery', 'otology', 'rhinology', 'laryngology', 'skull base surgery', 'airway surgery', 'non-technical skills', 'non technical skills for surgeons', 'NOTSS', 'behavioural markers' and 'behavioural assessment tool'. Results: Three publications were included in the review (1 randomised controlled trial and 2 cohort studies), involving 78 participants. All were simulation-based studies involving otolaryngology surgeons in training. Conclusion: Little research has been undertaken on non-technical skills in otolaryngology. Training surgeons' non-technical skill levels are similar across every tested aspect. The research already performed can guide further studies, particularly amongst non-training otolaryngology surgeons and in both emergency and elective non-simulated environments.


CJEM ◽  
2019 ◽  
Vol 21 (S1) ◽  
pp. S23
Author(s):  
N. Kester-Greene ◽  
A. Hall ◽  
C. Walsh

Introduction: There is increasing evidence to support the integration of simulation into medical training; however, no national emergency medicine (EM) simulation curriculum currently exists. Using Delphi methodology, we aimed to identify and establish content validity evidence for EM curricular content best suited for simulation-based training to inform national postgraduate EM training. Methods: A national panel of experts in EM simulation-related education iteratively rated potential curricular topics, on a 4-point scale, to determine those best suited for simulation-based training. After each round, responses were analyzed and topics scoring <2/4 were removed. Remaining topics were resent to the panel for further ratings until consensus was achieved, defined as Cronbach α ≥ 0.95. At conclusion of the Delphi process, topics that were rated ≥3.5/4 were considered core curricular topics, while those rated 3.0-3.5 were considered extended curricular topics. Results: Forty-four experts from 13 Canadian centres participated. Two hundred and eighty potential curricular topics, in 29 domains, were generated from a systematic review of the literature, analysis of relevant educational documents and a survey of Delphi panelists. Three rounds of Delphi surveys were completed before consensus was achieved, with response rates ranging from 93-100%. Twenty-eight topics, in 8 domains, reached consensus as core curricular topics. An additional 35 topics, in 14 domains, reached consensus as extended curricular topics. Conclusion: Delphi methodology allowed for achievement of expert consensus and content validation of EM curricular content best suited for simulation-based training. These results provide a foundation for improved integration of simulation into postgraduate EM training and can be used to inform a national simulation curriculum to supplement clinical training and optimize learning.
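The round logic stated above (drop topics rated below 2/4, stop when Cronbach's α reaches 0.95, then split surviving topics into core and extended bands) can be sketched as follows. Treating panelists as the "items" when computing α is an assumption, since the abstract does not specify the exact computation, and all ratings below are synthetic.

```python
# Sketch of the round logic described in the abstract, on synthetic ratings:
# drop topics with a mean rating below 2/4, check the consensus criterion
# (Cronbach's alpha >= 0.95), then label topics rated >= 3.5 as core and
# 3.0-3.5 as extended. Computing alpha with panelists as items is an
# assumption for illustration.
import numpy as np

rng = np.random.default_rng(3)

n_panelists, n_topics = 44, 280
# Hypothetical ratings on the 4-point scale (rows: topics, columns: panelists).
ratings = rng.integers(1, 5, size=(n_topics, n_panelists)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

means = ratings.mean(axis=1)
kept = ratings[means >= 2.0]                # topics surviving this round
consensus = cronbach_alpha(kept) >= 0.95    # stop criterion from the abstract

core = (kept.mean(axis=1) >= 3.5).sum()
extended = ((kept.mean(axis=1) >= 3.0) & (kept.mean(axis=1) < 3.5)).sum()
print(f"consensus reached: {consensus}; core topics: {core}; extended: {extended}")
```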

