Establishing Validation Methods: Measuring Progress (Measuring Teaching Effectiveness) – Global Rating Scales

Author(s):  
Moises Cohen ◽  
H. Kaya Akan ◽  
Lucio Ernlund ◽  
Oğuz Poyanlı
1987 ◽  
Vol 21 (6) ◽  
pp. 477-481 ◽  
Author(s):  
A. KEYNAN ◽  
M. FRIEDMAN ◽  
J. BENBASSAT

CJEM ◽  
2019 ◽  
Vol 21 (S1) ◽  
pp. S76
Author(s):  
R. Dunfield ◽  
J. Riley ◽  
C. Vaillancourt ◽  
J. Fraser ◽  
J. Woodland ◽  
...  

Introduction: Improving public access and training for epinephrine auto-injectors (EAIs) can reduce time to initial treatment in anaphylaxis. Effective use of EAIs by the public requires bystanders to respond in a timely and proficient manner. We wished to examine optimal methods for assessing effective training and skill retention for public use of EAIs, including the use of microskills lists. Methods: In this prospective, stratified randomized study, 154 participants at 15 sites receiving installation of public EAIs were randomized to one of three experimental education interventions: A) didactic poster (POS) teaching; B) poster with video teaching (VID); or C) poster, video, and simulation training (SIM). Participants were tested in a standardized simulated anaphylaxis scenario at 0 months (immediately following training) and again at 3-month follow-up. Participants' responses were video-recorded and assessed by two blinded raters using microskills checklists. The microskills lists were derived from the best available evidence and interprofessional process mapping using a skills trainer. Interobserver reliability, expressed as kappa values, was assessed for each item in a 14-step microskills checklist composed of 3-point and 5-point Likert-scale questions about EpiPen use. Results: Overall, there was poor agreement between the two raters. Being composed or panicked had the highest level of agreement (K = 0.7, substantial agreement), although this did not reach statistical significance (p = 0.06). Calling for EMS support had the second highest level of agreement (K = 0.6, moderate agreement, p = 0.01). The remaining items showed very low to moderate agreement, with kappa values ranging from -103 to 0.48. Conclusion: Although microskills checklists have been shown to identify areas where learners and interprofessional teams require deliberate practice, these results support previously published evidence that the use of microskills checklists to assess skills has poor reproducibility. Performance in this study will be further assessed using global rating scales, which have shown higher levels of agreement in other studies.
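As a point of reference for how item-level interobserver agreement on a checklist like this is usually quantified, the short sketch below computes Cohen's kappa for two raters scoring a single dichotomous item. The ratings are invented placeholders, not data from the study.

    # Minimal sketch: Cohen's kappa for two raters scoring one dichotomous checklist item.
    # The ratings below are invented for illustration and are not study data.
    from collections import Counter

    rater_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # hypothetical pass/fail scores
    rater_b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
    n = len(rater_a)

    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement

    # Chance agreement from each rater's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(rater_a) | set(rater_b))

    kappa = (observed - expected) / (1 - expected)
    print(f"observed = {observed:.2f}, chance = {expected:.2f}, kappa = {kappa:.2f}")

For the ordinal 3-point and 5-point items, a weighted kappa that credits near-agreement would typically be preferred over the unweighted form sketched here.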


2020 ◽  
Vol 12 ◽  
pp. 35-42
Author(s):  
Donna Mendez ◽  
Katrin Takenaka ◽  
Marylou Cardenas-Turanzas ◽  
Guillermo Suarez

2019 ◽  
Vol 11 (4) ◽  
pp. 422-429
Author(s):  
Jason A. Lord ◽  
Danny J. Zuege ◽  
Maria Palacios Mackay ◽  
Amanda Roze des Ordons ◽  
Jocelyn Lockyer

ABSTRACT Background Determining procedural competence requires psychometrically sound assessment tools. A variety of instruments are available to determine procedural performance for central venous catheter (CVC) insertion, but it is not clear which ones should be used in the context of competency-based medical education. Objective We compared several commonly used instruments to determine which should be preferentially used to assess competence in CVC insertion. Methods Junior residents completing their first intensive care unit rotation between July 31, 2006, and March 9, 2007, were video-recorded performing CVC insertion on task trainer mannequins. Between June 1, 2016, and September 30, 2016, 3 experienced raters judged procedural competence on the historical video recordings of resident performance using 4 separate tools, including an itemized checklist, Objective Structured Assessment of Technical Skills (OSATS), a critical error assessment tool, and the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE). Generalizability theory (G-theory) was used to compare the performance characteristics among the tools. A decision study predicted the optimal testing environment using the tools. Results At the time of the original recording, 127 residents rotated through intensive care units at the University of Calgary, Alberta, Canada. Seventy-seven of them (61%) met inclusion criteria, and 55 of those residents (71%) agreed to participate. Results from the generalizability study (G-study) demonstrated that scores from O-SCORE and OSATS were the most dependable. Dependability could be maintained for O-SCORE and OSATS with 2 raters. Conclusions Our results suggest that global rating scales, such as the OSATS or the O-SCORE tools, should be preferentially utilized for assessment of competence in CVC insertion.
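For readers unfamiliar with how a decision study projects dependability, the sketch below shows the usual calculation for a fully crossed persons-by-raters design: the dependability (Phi) coefficient is recomputed as the number of raters changes. The variance components are illustrative placeholders, not the estimates from this study.

    # Minimal D-study sketch for a fully crossed persons x raters design.
    # Variance components are illustrative placeholders, not this study's estimates.
    var_person = 0.60     # true-score variance (object of measurement)
    var_rater = 0.05      # rater main effect (leniency/severity)
    var_residual = 0.35   # person-by-rater interaction confounded with error

    def phi(n_raters: int) -> float:
        """Dependability (Phi) coefficient for absolute decisions with n_raters."""
        absolute_error = (var_rater + var_residual) / n_raters
        return var_person / (var_person + absolute_error)

    for n in (1, 2, 3, 4):
        print(f"{n} rater(s): Phi = {phi(n):.2f}")

With these illustrative components, projected dependability rises from 0.60 with a single rater to 0.75 with two, which mirrors the kind of projection behind a conclusion that two raters are sufficient.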


CJEM ◽  
2018 ◽  
Vol 20 (S1) ◽  
pp. S20-S20
Author(s):  
C. Patocka ◽  
A. Cheng ◽  
M. Sibbald ◽  
J. Duff ◽  
A. Lai ◽  
...  

Introduction: Survival from cardiac arrest has been linked to the quality of resuscitation care. Unfortunately, healthcare providers frequently underperform in these critical scenarios, with a well-documented deterioration in skills weeks to months following advanced life support courses. Improving initial training and preventing decay in knowledge and skills are priorities in resuscitation education. The spacing effect has repeatedly been shown to have an impact on learning and retention. Despite its potential advantages, it has seldom been applied to organized educational training or complex motor skill learning, where it has the potential to make a significant impact. The purpose of this study was to determine whether a resuscitation course taught in a spaced format, compared with the usual massed instruction, results in improved retention of procedural skills. Methods: EMS providers (paramedics and emergency medical technicians [EMTs]) were block randomized to receive a Pediatric Advanced Life Support (PALS) course in either a spaced format (four 210-minute weekly sessions) or a massed format (two sequential 7-hour days). Blinded observers used expert-developed 4-point global rating scales to assess video recordings of each learner performing various resuscitation skills before, after, and 3 months following course completion. Primary outcomes were performance on infant bag-valve-mask ventilation (BVMV), intraosseous (IO) insertion, infant intubation, and infant and adult chest compressions. Results: Forty-eight of 50 participants completed the study protocol (26 spaced and 22 massed). There was no significant difference between the two groups on testing before or immediately after the course. Three months following course completion, participants in the spaced cohort scored higher overall for BVMV (2.2 ± 0.13 versus 1.8 ± 0.14, p = 0.012), without statistically significant differences in scores for IO insertion (3.0 ± 0.13 versus 2.7 ± 0.13, p = 0.052), intubation (2.7 ± 0.13 versus 2.5 ± 0.14, p = 0.249), infant compressions (2.5 ± 0.28 versus 2.5 ± 0.31, p = 0.831), or adult compressions (2.3 ± 0.24 versus 2.2 ± 0.26, p = 0.728). Conclusion: Procedural skills taught in a spaced format result in learning at least as good as that from the traditional massed format; more complex skills taught in a spaced format may also be retained better over the long term, given the clear difference in BVMV scores and the trend toward a difference in IO insertion.
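The abstract does not state which statistical test produced the reported p-values; purely as a generic illustration of comparing 3-month global rating scores between the two formats, the sketch below runs a Welch two-sample t-test on invented per-participant scores.

    # Minimal sketch: comparing 3-month global rating scale scores between formats.
    # Scores are invented placeholders; the abstract reports neither per-participant data
    # nor the specific test used.
    from scipy import stats

    spaced = [2.5, 2.0, 2.3, 2.4, 2.1, 2.6, 2.2, 2.5]   # hypothetical 4-point GRS scores
    massed = [1.9, 2.0, 1.7, 2.1, 1.8, 2.0, 1.6, 1.9]

    t_stat, p_value = stats.ttest_ind(spaced, massed, equal_var=False)  # Welch's t-test
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")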


2014 ◽  
Vol 40 (5) ◽  
pp. 629 ◽  
Author(s):  
Daniel Leff ◽  
George Petrou ◽  
Stella Mavroveli ◽  
Daniel Cocker ◽  
Monika Bersihand ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Louise Inkeri Hennings ◽  
Jette Led Sørensen ◽  
Jane Hybscmann ◽  
Jeanett Strandbygaard

Abstract Background Standardised assessment is key to structured surgical training. Currently, there is no consensus on which surgical assessment tool to use in live gynaecologic surgery. The purpose of this review is to identify assessment tools measuring technical skills in gynaecologic surgery and to evaluate the measurement characteristics of each tool. Methods We used scoping review methodology and searched PubMed, Medline, Embase, and Cochrane. Inclusion criteria were studies that analysed assessment tools in live gynaecologic surgery. Kane's validity argument was applied to evaluate the assessment tools in the included studies. Results Eight of the 544 studies identified fulfilled the inclusion criteria. The assessment tools were categorised as global rating scales, combined global and procedure rating scales, procedure-specific rating scales, or a non-procedure-specific error assessment tool. Conclusion This scoping review presents the different tools currently available for observational assessment of technical skills in intraoperative gynaecologic surgery and can serve as a guide for surgical educators who want to apply a scale or a specific tool in surgical assessment.


2020 ◽  
Vol 21 (3) ◽  
pp. 299-313
Author(s):  
Belinda Goodenough ◽  
Jacqueline Watts ◽  
Sarah Bartlett ◽  

Abstract Objectives: To satisfy requirements for continuing professional education, workforce demand for access to large-scale continuous professional education and micro-credential-style online courses is increasing. This study examined the Knowledge Translation (KT) outcomes of a short (2 h) online course about support at night for people living with dementia (Bedtime to Breakfast), delivered at a national scale by Dementia Training Australia (DTA). Methods: A sample of the first cohort of course completers was re-contacted after 3 months to complete a KT follow-up feedback survey (n = 161). In addition to potential practice impacts in three domains (Conceptual, Instrumental, Persuasive), respondents rated the level of Perceived Improvement in Quality of Care (PIQOC) using a positively packed global rating scale. Results: Overall, 93.8% of respondents agreed that the course had made a difference to the support they had provided for people with dementia since completing the course. In addition to anticipated Conceptual impacts (e.g., change in knowledge), a range of Instrumental and Persuasive impacts were also reported, including development of workplace guidelines and knowledge transfer to other staff. Tally counts for discrete KT outcomes were high (median 7/10) and explained 23% of the variance in PIQOC ratings. Conclusions: Online short courses delivered at a national scale can support a range of translation-to-practice impacts, within the constraints of retrospective insight into personal practice change. Topics around self-assessed knowledge-to-practice and the value of positively packed rating scales for increasing variance in respondent feedback are discussed.
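As a small illustration of what "explained 23% of the variance" means in this context, the sketch below fits a simple least-squares line of a global rating on a tally of discrete outcomes and reports R-squared; the data are invented placeholders, not values from the study.

    # Minimal sketch: variance explained (R^2) from a simple linear regression of a
    # PIQOC-style global rating on a tally of discrete KT outcomes.
    # Data are invented placeholders, not study values.
    import numpy as np

    tally = np.array([3, 5, 7, 8, 6, 9, 4, 7, 10, 6])   # hypothetical outcome counts (of 10)
    rating = np.array([5, 6, 7, 8, 6, 9, 5, 7, 9, 7])   # hypothetical global ratings

    slope, intercept = np.polyfit(tally, rating, 1)      # least-squares fit
    predicted = slope * tally + intercept
    ss_res = ((rating - predicted) ** 2).sum()
    ss_tot = ((rating - rating.mean()) ** 2).sum()
    print(f"R^2 = {1 - ss_res / ss_tot:.2f}")            # proportion of variance explained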

