script concordance test
Recently Published Documents

TOTAL DOCUMENTS: 100 (FIVE YEARS: 14)
H-INDEX: 20 (FIVE YEARS: 1)

Author(s):  
Jennie Brentnall ◽  
Debbie Thackray ◽  
Belinda Judd

(1) Background: Clinical reasoning is essential to the effective practice of autonomous health professionals and is, therefore, a core capability for students to develop. This review aimed to systematically identify the tools available to health professional educators for evaluating students’ attainment of clinical reasoning capabilities in clinical placement and simulation settings. (2) Methods: A systematic review of seven databases was undertaken. Peer-reviewed, English-language publications reporting studies that developed or tested relevant tools were included. Searches included multiple terms related to clinical reasoning and health disciplines. Data regarding each tool’s conceptual basis and evaluated constructs were systematically extracted and analysed. (3) Results: Most of the 61 included papers evaluated students in the medical and nursing disciplines, and over half reported on the Script Concordance Test or the Lasater Clinical Judgement Rubric. A number of conceptual frameworks were referenced, though many papers did not reference any framework. (4) Conclusions: Overall, the key outcomes highlighted an emphasis on diagnostic reasoning, as opposed to management reasoning. Tools were predominantly aligned with individual health disciplines, with limited cross-referencing within the field. Future research into clinical reasoning evaluation tools should build on and refer to existing approaches and consider contributions across professional disciplinary divides.


2021 ◽  
Vol 8 ◽  
Author(s):  
Osamu Nomura ◽  
Taichi Itoh ◽  
Takaaki Mori ◽  
Takateru Ihara ◽  
Satoshi Tsuji ◽  
...  

Introduction: Clinical reasoning is a crucial skill in the practice of pediatric emergency medicine and a vital element of the various competencies achieved during the clinical training of resident doctors. Pediatric emergency physicians are often required to stabilize patients and make correct diagnoses with limited clinical information, time and resources. The Pediatric Emergency Medicine Script Concordance Test (PEM-SCT) has been developed specifically for assessing physicians' reasoning skills in the context of the uncertainties of pediatric emergency practice. In this study, we developed the Japanese version of the PEM-SCT (Jpem-SCT) and confirmed its validity by collecting relevant evidence.

Methods: The Jpem-SCT was developed by translating the PEM-SCT into Japanese using the Translation, Review, Adjudication, Pretest, Documentation team translation model, which follows cross-cultural survey guidelines to ensure proper translation and cross-cultural and linguistic equivalence between the English and Japanese versions of the survey. First, 15 experienced pediatricians participated in the pre-test session, serving as a reference panel for modifying the test descriptions, incorporating Japanese context, and establishing the basis for the scoring process. Then, a 1-h test containing 60 questions was administered to 75 trainees from three academic institutions. Following data collection, we calculated the item-total correlations of the scores to select the best items for the final version of the Jpem-SCT. The reliability of the finalized Jpem-SCT was calculated using Cronbach's α coefficient to ensure the generalizability of the evidence. We also conducted a multiple regression analysis of the test scores to collect evidence on the validity of extrapolation.

Results: The final version of the test, based on the item-total correlation analysis, contained 45 questions. The participants' specialties were as follows: transitional interns 12.0%, pediatric residents 56.0%, emergency medicine residents 25.3%, and PEM fellows 6.7%. The mean score on the final version of the Jpem-SCT was 68.6 (SD 9.8). The reliability of the optimized test (Cronbach's α) was 0.70. Multiple regression analysis showed that being a transitional intern was a negative predictor of test scores, indicating that clinical experience relates to performance on the Jpem-SCT.

Conclusion: This pediatric emergency medicine Script Concordance Test was reliable and valid for assessing the development of clinical reasoning by trainee doctors during residency training.
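The item-selection and reliability steps described above (item-total correlations to retain the best 45 of 60 items, then Cronbach's α on the retained set) can be sketched as follows. The data here are simulated for illustration, not the study's; the reported α of 0.70 is a property of the real data only.

```python
import numpy as np

def item_total_correlations(scores):
    """Correlate each item with the total of the remaining items."""
    scores = np.asarray(scores, dtype=float)
    corrs = []
    for j in range(scores.shape[1]):
        rest = scores.sum(axis=1) - scores[:, j]  # total excluding item j
        corrs.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return np.array(corrs)

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Simulated data: 75 trainees, 60 items sharing a common "ability" factor
rng = np.random.default_rng(0)
ability = rng.normal(size=75)
items = ability[:, None] + rng.normal(scale=1.5, size=(75, 60))

r = item_total_correlations(items)
keep = items[:, r >= np.sort(r)[15]]   # drop the 15 weakest items, retain 45
alpha = cronbach_alpha(keep)
```

The selection rule here (a fixed cut of the weakest items by item-total correlation) is one plausible reading of the paper's optimization step; the authors do not report their exact retention threshold.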


Author(s):  
Jordan D. Tayce ◽  
Ashley B. Saunders

The development of clinical reasoning skills is a high priority during clinical service, but an unpredictable caseload and limited time for formal instruction make it challenging for faculty to foster and assess students’ individual clinical reasoning skills. We developed an assessment-for-learning activity that helps students build their clinical reasoning skills, based on a modified version of the script concordance test (SCT). To modify the standard SCT, we simplified it by limiting students to a 3-point Likert scale instead of a 5-point scale and added a free-text box for students to justify their answers. Students completed the modified SCT during clinical rounds to prompt a group discussion with the instructor. Student feedback was positive, and the instructor gained valuable insight into the students’ thought processes. A modified SCT can be adopted as part of a multimodal approach to teaching on the clinic floor. The purpose of this article is to describe our modifications to the standard SCT and our findings from implementing it in a clinical rounds setting as a method of formative assessment for learning and developing clinical reasoning skills.


2021 ◽  
Author(s):  
Enjy Abouzeid ◽  
Moataz Sallam

Abstract Introduction: Although clinical competence is multi-dimensional and should be acquired by every medical student, most students learn clinical reasoning skills informally during clinical rotations. Accordingly, a prospective quasi-experimental study was conducted to evaluate the merging of the Script Concordance Test (SCT) and Team-Based Learning (TBL) as a teaching/learning approach in a clinical setting for medical students. Methodology: The study ran in three phases. Phase 1 (preparatory phase) involved preparing the students and the SCT. Phase 2 (implementation phase) included application of the individual and team SCT (iSCT and tSCT, respectively). Phase 3 (evaluation phase) compared score results and obtained students’ feedback. Results: Significant differences existed when comparing individual students’ responses or student teams’ responses with experts’ scores. However, the SCT/TBL approach improved the clinical reasoning skills of the students in some vignettes and helped the lower achievers through the tSCT. The students found the approach appropriate for teaching or formatively assessing clinical reasoning. It helped them to discuss, correct their mistakes, and improve their problem-solving and reasoning skills. Conclusion: Team-based learning improved students’ responses to the script concordance test, especially among lower achievers. The SCT/TBL approach can be used to teach clinical reasoning to undergraduate students.


Author(s):  
Ana María De Santiago Nocito ◽  
Alberto García Lledó

The script concordance test is both a learning tool and an instrument for assessing the competencies inherent to clinical reasoning and decision-making. Acquiring clinical experience involves building knowledge networks that become incorporated into routine clinical work. These networks, known as scripts, are organized to achieve goals related to diagnosis, the design of investigation strategies, and treatment options. This article shows how this type of question is constructed and how it is scored.
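The scoring system the article describes is the SCT's standard aggregate scoring rule: a respondent's answer to an item earns partial credit proportional to how many panel members chose that same answer, normalized so the modal panel answer is worth full credit. A minimal sketch, with an invented ten-member panel:

```python
from collections import Counter

def sct_item_score(student_answer, panel_answers):
    """Aggregate SCT scoring: credit = (panelists choosing this answer) / (modal count)."""
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return counts.get(student_answer, 0) / modal

# Ten panelists rate one item on a 5-point Likert scale (-2 .. +2)
panel = [+1, +1, +1, +1, 0, 0, +2, +1, 0, -1]

sct_item_score(+1, panel)  # modal answer (5 of 10 panelists): full credit, 1.0
sct_item_score(0, panel)   # 3 panelists chose it: partial credit, 3/5 = 0.6
sct_item_score(-2, panel)  # no panelist chose it: 0.0
```

A respondent's test score is then the sum of item credits, usually rescaled to 100. This normalization is why panel composition matters: the same student answer can earn different credit under different panels, which is exactly the sensitivity the Peyrony et al. study below examines.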


2020 ◽  
Author(s):  
Nada Gawad ◽  
Timothy J. Wood ◽  
Lindsay Cowley ◽  
Isabelle Raiche

2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Olivier Peyrony ◽  
Alice Hutin ◽  
Jennifer Truchot ◽  
Raphaël Borie ◽  
David Calvet ◽  
...  

Abstract

Background: The evaluation process of French medical students will evolve in the next few years in order to improve assessment validity. Script concordance testing (SCT) offers the possibility of assessing medical knowledge alongside clinical reasoning under conditions of uncertainty. In this study, we aimed to compare the SCT scores of a large cohort of undergraduate medical students according to the experience level of the reference panel.

Methods: In 2019, the authors developed a 30-item SCT and sent it to experts with varying levels of experience. Data analysis included score comparisons with paired Wilcoxon rank sum tests and concordance analysis with Bland & Altman plots.

Results: A panel of 75 experts was divided into three groups: 31 residents, 21 non-experienced physicians (NEP) and 23 experienced physicians (EP). From each group, random samples of N = 20, 15 and 10 were selected. A total of 985 students from nine medical schools took the SCT examination. Whatever the size of the panel (N = 20, 15 or 10), students’ SCT scores were lower with the NEP group than with the resident panel (median score 67.1 vs 69.1, p < 0.0001 if N = 20; 67.2 vs 70.1, p < 0.0001 if N = 15; and 67.7 vs 68.4, p < 0.0001 if N = 10) and lower with the EP group than with the NEP group (65.4 vs 67.1, p < 0.0001 if N = 20; 66.0 vs 67.2, p < 0.0001 if N = 15; and 62.5 vs 67.7, p < 0.0001 if N = 10). Bland & Altman plots showed good concordance between students’ SCT scores, whatever the experience level of the expert panel.

Conclusions: Even though students’ SCT scores differed statistically according to the expert panel, these differences were rather weak. These results open the possibility of including less-experienced experts in panels for the evaluation of medical students.
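The two analyses described in the Methods (a paired comparison of the same students' scores under two reference panels, and Bland & Altman limits of agreement) can be sketched as follows. The scores below are simulated for illustration, not the study data; `scipy.stats.wilcoxon` is a paired signed-rank test, used here as a stand-in for the paper's paired Wilcoxon comparison.

```python
import numpy as np
from scipy.stats import wilcoxon

# Simulated paired scores: 985 students, each scored against two panels
rng = np.random.default_rng(1)
scores_residents = rng.normal(69, 8, size=985)               # resident panel
scores_nep = scores_residents - rng.normal(2, 1, size=985)   # NEP panel, slightly lower

# Paired comparison of the two scorings of the same students
stat, p = wilcoxon(scores_residents, scores_nep)

# Bland & Altman: bias (mean difference) and 95% limits of agreement
diff = scores_residents - scores_nep
bias = diff.mean()
sd = diff.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)
```

This mirrors the paper's finding pattern: a paired test can be highly significant with nearly a thousand students even when the bias between panels is small, which is why the authors lean on the Bland & Altman concordance rather than the p-values alone.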


2020 ◽  
Vol 21 (4) ◽  
Author(s):  
Eric Steinberg ◽  
Ethan Cowan ◽  
Michelle Lin ◽  
Anthony Sielicki ◽  
Steven Warrington
