Impact of panelists’ experience on script concordance test scores of medical students

2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Olivier Peyrony ◽  
Alice Hutin ◽  
Jennifer Truchot ◽  
Raphaël Borie ◽  
David Calvet ◽  
...  

Abstract
Background: The evaluation process of French medical students will evolve in the next few years in order to improve assessment validity. Script concordance testing (SCT) offers the possibility to assess medical knowledge alongside clinical reasoning under conditions of uncertainty. In this study, we aimed to compare the SCT scores of a large cohort of undergraduate medical students according to the experience level of the reference panel.
Methods: In 2019, the authors developed a 30-item SCT and sent it to experts with varying levels of experience. Data analysis included score comparisons with paired Wilcoxon rank sum tests and concordance analysis with Bland & Altman plots.
Results: A panel of 75 experts was divided into three groups: 31 residents, 21 non-experienced physicians (NEP) and 23 experienced physicians (EP). Within each group, random samples of N = 20, 15 and 10 were selected. A total of 985 students from nine different medical schools took the SCT examination. Whatever the panel size (N = 20, 15 or 10), students’ SCT scores were lower with the NEP panel than with the resident panel (median score 67.1 vs 69.1, p < 0.0001 if N = 20; 67.2 vs 70.1, p < 0.0001 if N = 15; and 67.7 vs 68.4, p < 0.0001 if N = 10), and lower with the EP panel than with the NEP panel (65.4 vs 67.1, p < 0.0001 if N = 20; 66.0 vs 67.2, p < 0.0001 if N = 15; and 62.5 vs 67.7, p < 0.0001 if N = 10). Bland & Altman plots showed good concordance between students’ SCT scores, whatever the experience level of the expert panel.
Conclusions: Although students’ SCT scores differed statistically according to the expert panel, these differences were small. These results open the possibility of including less-experienced experts in panels for the evaluation of medical students.
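For readers unfamiliar with how panel answers translate into student scores, the following is a minimal sketch of the standard SCT aggregate scoring method (credit for each response is proportional to the number of panelists who chose it, relative to the modal response). This illustrates the general technique only; the authors' exact implementation is not described in the abstract, and all item names and data below are invented.

```python
from collections import Counter

def item_credits(panel_answers):
    """Credit per Likert response = (# experts choosing it) / (# choosing the modal response)."""
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {resp: n / modal for resp, n in counts.items()}

def score_student(student_answers, panel_by_item):
    """Mean per-item credit across all items, scaled to 0-100."""
    total = 0.0
    for item, answer in student_answers.items():
        credits = item_credits(panel_by_item[item])
        total += credits.get(answer, 0.0)  # a response no expert chose earns 0
    return 100 * total / len(student_answers)

# Hypothetical 2-item test on a 5-point scale (-2..+2), panel of 10 experts per item:
panel = {
    "item1": [-1, -1, -1, 0, 0, 0, 0, 1, 1, 2],
    "item2": [0, 0, 1, 1, 1, 1, 1, 2, 2, -1],
}
print(score_student({"item1": 0, "item2": 1}, panel))   # 100.0 (both modal answers)
print(score_student({"item1": -1, "item2": 2}, panel))  # partial credit on both items
```

A consequence visible in this sketch is why panel composition matters: the credit map is derived entirely from the panel's answer distribution, so panels of differing experience can yield different scores for the same student responses.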

2011 ◽  
Vol 33 (6) ◽  
pp. 472-477 ◽  
Author(s):  
Aloysius J. Humbert ◽  
Mary T. Johnson ◽  
Edward Miech ◽  
Fred Friedberg ◽  
Janice A. Grackin ◽  
...  

2013 ◽  
Vol 13 (1) ◽  
Author(s):  
Sylvain Mathieu ◽  
Marion Couderc ◽  
Baptiste Glace ◽  
Anne Tournadre ◽  
Sandrine Malochet-Guinamand ◽  
...  

Author(s):  
Stuart Lubarsky ◽  
Colin Chalk ◽  
Driss Kazitani ◽  
Robert Gagnon ◽  
Bernard Charlin

Background: Clinical judgment, the ability to make appropriate decisions in uncertain situations, is central to neurological practice, but objective measures of clinical judgment in neurology trainees are lacking. The Script Concordance Test (SCT), based on script theory from cognitive psychology, uses authentic clinical scenarios to compare a trainee’s judgment skills with those of experts. The SCT has been validated in several medical disciplines, but has not been investigated in neurology.
Methods: We developed an Internet-based neurology SCT (NSCT) comprising 24 clinical scenarios with three to four questions each. The scenarios were designed to reflect the uncertainty of real-life clinical encounters in adult neurology. The questions explored aspects of the scenario in which several responses might be acceptable; trainees were asked to judge which response they considered to be best. Forty-one PGY1-PGY5 neurology residents and eight medical students from three North American neurology programs (McGill, Calgary, and Mayo Clinic) completed the NSCT. The responses of trainees to each question were compared with the aggregate responses of an expert panel of 16 attending neurologists.
Results: The NSCT demonstrated good reliability (Cronbach alpha = 0.79). Neurology residents scored higher than medical students and lower than attending neurologists, supporting the test’s construct validity. Furthermore, NSCT scores discriminated between senior (PGY3-5) and junior (PGY1-2) residents.
Conclusions: Our NSCT is a practical and reliable instrument, and our findings support its construct validity for assessing judgment in neurology trainees. The NSCT has potentially widespread applications as an evaluation tool, both in neurology training and for licensing examinations.
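The reliability figure reported above (Cronbach alpha = 0.79) comes from the standard internal-consistency formula: alpha = k/(k-1) × (1 − Σ item variances / total-score variance). A minimal pure-Python sketch, assuming a score matrix with one row per examinee and one column per item (the function name and example data are illustrative, not the authors' code):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: one row per examinee, one column per test item."""
    k = len(scores[0])                                    # number of items
    item_vars = sum(pvariance(col) for col in zip(*scores))
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Three examinees, two items whose scores rise and fall together:
print(cronbach_alpha([[2, 2], [1, 1], [0, 0]]))  # 1.0 for perfectly consistent items
```

Values near 1 indicate that the items measure a common construct; the 0.79 reported for the 24-scenario NSCT is generally regarded as adequate for a test of this kind.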


Author(s):  
Jordan D. Tayce ◽  
Ashley B. Saunders

The development of clinical reasoning skills is a high priority during clinical service, but an unpredictable case load and limited time for formal instruction make it challenging for faculty to foster and assess students’ individual clinical reasoning skills. We developed an assessment-for-learning activity that helps students build their clinical reasoning skills, based on a modified version of the script concordance test (SCT). To modify the standard SCT, we simplified it by limiting students to a 3-point Likert scale instead of a 5-point scale, and we added a free-text box for students to justify their answers. Students completed the modified SCT during clinical rounds to prompt a group discussion with the instructor. Student feedback was positive, and the instructor gained valuable insight into the students’ thought processes. A modified SCT can be adopted as part of a multimodal approach to teaching on the clinic floor. The purpose of this article is to describe our modifications to the standard SCT and our findings from implementing it in a clinical rounds setting as a method of formative assessment for learning and developing clinical reasoning skills.
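The modified item format described above can be sketched as a simple data structure: a 3-point scale replacing the usual 5-point one, plus a free-text justification field that feeds the rounds discussion. All field names and the clinical scenario below are illustrative assumptions, not the authors' published instrument.

```python
from dataclasses import dataclass

# 3-point scale replacing the standard 5-point SCT scale (labels are illustrative)
SCALE = {-1: "less likely", 0: "neither more nor less likely", 1: "more likely"}

@dataclass
class ModifiedSCTItem:
    vignette: str            # clinical scenario with inherent uncertainty
    hypothesis: str          # "If you were thinking of X..."
    new_finding: str         # "...and then you learn Y..."
    response: int = 0        # -1, 0, or +1 on the 3-point scale
    justification: str = ""  # free text that prompts the group discussion

item = ModifiedSCTItem(
    vignette="Dog presenting with a new systolic murmur and exercise intolerance",
    hypothesis="You suspect degenerative mitral valve disease",
    new_finding="Thoracic radiographs show left atrial enlargement",
)
item.response = 1
item.justification = "Left atrial enlargement supports chronic mitral valve disease"
print(SCALE[item.response])  # prints "more likely"
```

The justification field is what distinguishes this from a standard SCT item: rather than scoring against an expert panel, the instructor reads the free-text reasoning to assess how each student reached their answer.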


2006 ◽  
Vol 18 (1) ◽  
pp. 22-27 ◽  
Author(s):  
Robert Gagnon ◽  
Bernard Charlin ◽  
Louise Roy ◽  
Monique St-Martin ◽  
Evelyne Sauve ◽  
...  
