Assessing the acceptability of script concordance testing: a nationwide study in otolaryngology

2021
Vol 64 (3)
pp. E317-E323
Author(s):
Andrée-Anne Leclerc
Lily H.P. Nguyen
Bernard Charlin
Stuart Lubarsky
Tareck Ayad

Background: Script concordance testing (SCT) is an objective method to evaluate clinical reasoning that assesses the ability to interpret medical information under conditions of uncertainty. Many studies have supported its validity as a tool to assess higher levels of learning, but little is known about its acceptability to major stakeholders. The aim of this study was to determine the acceptability of SCT to residents in otolaryngology – head and neck surgery (OTL-HNS) and a reference group of experts. Methods: In 2013 and 2016, a set of SCT questions, as well as a post-test exit survey, was included in the National In-Training Examination (NITE) for OTL-HNS. This examination is administered to all OTL-HNS residents across Canada who are in the second to fifth year of residency. The same SCT questions and survey were then sent to a group of OTL-HNS surgeons from 4 Canadian universities. Results: For 64.4% of faculty and residents, the study was their first exposure to SCT. Overall, residents found it difficult to adapt to this form of testing, thought that the clinical scenarios were not clear and believed that SCT was not useful for assessing clinical reasoning. In contrast, the vast majority of experts felt that the test questions reflected real-life clinical situations and would recommend SCT as an evaluation method in OTL-HNS. Conclusion: Views about the acceptability of SCT as an assessment tool for clinical reasoning differed between OTL-HNS residents and experts. Education about SCT and increased exposure to this testing method are necessary to improve residents’ perceptions of SCT.

2013
Vol 8 (1)
pp. 76
Author(s):  
Mathew Stone

A Review of: Gardois, P., Calabrese, R., Colombi, N., Lingua, C., Longo, F., Villanacci, M., Miniero, R., & Piga, A. (2011). Effectiveness of bibliographic searches performed by paediatric residents and interns assisted by librarian. A randomised controlled trial. Health Information and Libraries Journal, 28(4), 273-284. doi: 10.1111/j.1471-1842.2011.00957.x Objective – To establish whether the assistance of an experienced biomedical librarian improves the bibliographic database searches performed by medical residents and interns. Design – Randomized controlled trial. Setting – The pediatrics department of a large Italian teaching hospital. Subjects – 18 pediatric residents and interns. Methods – 23 residents and interns from the department were invited to participate in this study, of whom 18 agreed. Subjects were then randomized into two groups and asked to spend between 30 and 90 minutes searching bibliographic databases for evidence to answer a real-life clinical question randomly allocated to them. Each member of the intervention group was assisted throughout the search session by an experienced biomedical librarian; the control group received no assistance. The outcome of each search was measured using an assessment tool adapted for this study from the Fresno test of competence in evidence-based medicine. This adapted tool rated the “global success” of the search and included criteria such as appropriate question formulation, number of PICO terms translated into search terms, use of Boolean logic, use of subject headings, use of filters, use of limits, and the percentage of citations retrieved that matched a gold-standard set of citations found in a prior search by two librarians (who were not involved in assisting the subjects) together with an expert clinician. Main Results – The intervention group scored a median of 73.6 points out of a possible 100, compared with 50.4 in the control group. The difference of 23.2 points in favour of the librarian-assisted group was statistically significant (p = 0.013), with a 95% confidence interval of 4.8 to 33.2. Conclusion – This study presents credible evidence that assistance from an experienced biomedical librarian improves the quality of bibliographic database searches performed by residents and interns working on real-life clinical scenarios.
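The review's key outcome criterion is the percentage of retrieved citations that match a gold-standard set compiled by two librarians and an expert clinician. As a rough illustration of that criterion (not the authors' actual scoring procedure), the Python sketch below computes the overlap for one search; the citation identifiers and counts are hypothetical.

    # Sketch of the "percentage of retrieved citations matching the gold standard"
    # criterion described in the review. Identifiers and values are hypothetical.
    def gold_standard_recall(retrieved: set, gold_standard: set) -> float:
        """Return the fraction of gold-standard citations found by the search."""
        if not gold_standard:
            raise ValueError("gold-standard set must not be empty")
        return len(retrieved & gold_standard) / len(gold_standard)

    # Hypothetical example: a resident's search retrieved four citations,
    # two of which appear in the five-item gold-standard set.
    retrieved = {"PMID:111", "PMID:222", "PMID:333", "PMID:444"}
    gold = {"PMID:111", "PMID:222", "PMID:555", "PMID:666", "PMID:777"}
    print(f"Gold-standard recall: {gold_standard_recall(retrieved, gold):.0%}")  # 40%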


2011
pp. 24-36
Author(s):  
Kimiz Dalkir

This chapter focuses on a method, social network analysis (SNA), that can be used to assess the quantity and quality of connection, communication and collaboration mediated by social tools in an organization. An organization in the Canadian public sector is used as a real-life case study to illustrate how SNA can be used in a pre-test/post-test evaluation design to conduct comparative assessments before, during and after the implementation of organizational change in work processes. The same evaluation method can be used to assess the impact of introducing new social media such as wikis, expertise locator systems, blogs, Twitter and so on. In other words, while traditional pre-test/post-test designs can be readily applied to social media, the social media tools themselves can be added to the assessment toolkit. Social network analysis in particular is a good candidate for analyzing the connections between people and content, as well as between people and other people.
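The chapter does not prescribe particular network metrics. Purely as a hedged illustration of how a pre-test/post-test SNA comparison might be operationalized, the sketch below builds communication networks from hypothetical pre- and post-implementation interaction data with networkx and compares a few whole-network measures.

    # Illustrative sketch only: one possible pre-test/post-test SNA comparison.
    # The metrics chosen and the edge lists are assumptions, not the chapter's procedure.
    import networkx as nx

    def summarize(edges):
        """Build a communication network and report a few whole-network metrics."""
        g = nx.Graph(edges)
        return {
            "density": nx.density(g),
            "components": nx.number_connected_components(g),
            "mean_degree_centrality": sum(nx.degree_centrality(g).values()) / g.number_of_nodes(),
        }

    # Hypothetical interaction data: who communicated with whom before and after
    # new social tools (e.g., a wiki) were introduced.
    pre_edges = [("A", "B"), ("B", "C"), ("D", "E")]
    post_edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("A", "E"), ("B", "D")]

    print("pre: ", summarize(pre_edges))
    print("post:", summarize(post_edges))

A rise in density and a drop in the number of disconnected components after the rollout would be one signal that the new tools increased connection across the organization.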


Author(s):  
Stuart Lubarsky
Colin Chalk
Driss Kazitani
Robert Gagnon
Bernard Charlin

Background: Clinical judgment, the ability to make appropriate decisions in uncertain situations, is central to neurological practice, but objective measures of clinical judgment in neurology trainees are lacking. The Script Concordance Test (SCT), based on script theory from cognitive psychology, uses authentic clinical scenarios to compare a trainee’s judgment skills with those of experts. The SCT has been validated in several medical disciplines, but has not been investigated in neurology. Methods: We developed an Internet-based neurology SCT (NSCT) comprising 24 clinical scenarios with three to four questions each. The scenarios were designed to reflect the uncertainty of real-life clinical encounters in adult neurology. The questions explored aspects of the scenario in which several responses might be acceptable; trainees were asked to judge which response they considered to be best. Forty-one PGY1-PGY5 neurology residents and eight medical students from three North American neurology programs (McGill, Calgary, and Mayo Clinic) completed the NSCT. The responses of trainees to each question were compared with the aggregate responses of an expert panel of 16 attending neurologists. Results: The NSCT demonstrated good reliability (Cronbach alpha = 0.79). Neurology residents scored higher than medical students and lower than attending neurologists, supporting the test’s construct validity. Furthermore, NSCT scores discriminated between senior (PGY3-5) and junior (PGY1-2) residents. Conclusions: Our NSCT is a practical and reliable instrument, and our findings support its construct validity for assessing judgment in neurology trainees. The NSCT has potentially widespread applications as an evaluation tool, both in neurology training and for licensing examinations.
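The abstract states that trainee responses were compared with the aggregate responses of a 16-member expert panel but does not spell out the scoring formula. SCT scoring is commonly described as aggregate (partial-credit) scoring, in which a response earns credit proportional to the number of panelists who chose it, normalized so the modal panel response is worth one point. The sketch below illustrates that commonly described scheme with a hypothetical item; it should not be read as the paper's exact method.

    # Minimal sketch of aggregate (partial-credit) SCT scoring: a response earns
    # credit proportional to the number of panelists who chose it, normalized so
    # the modal panel response is worth one point. The NSCT's exact formula is
    # not stated in the abstract; this is the commonly described SCT scheme.
    from collections import Counter

    def sct_item_score(trainee_response, panel_responses):
        """Score one SCT item against the expert panel's response distribution."""
        counts = Counter(panel_responses)
        modal_count = max(counts.values())
        return counts.get(trainee_response, 0) / modal_count

    # Hypothetical item on a -2..+2 judgment scale, answered by a 16-member panel.
    panel = [1] * 9 + [0] * 5 + [2] * 2
    print(sct_item_score(1, panel))    # 1.0   (matches the modal panel response)
    print(sct_item_score(0, panel))    # ~0.56 (partial credit)
    print(sct_item_score(-2, panel))   # 0.0   (no panelist chose this response)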


Author(s):  
Takashi Watari
Yasuharu Tokuda
Meiko Owada
Kazumichi Onigata

Virtual Patient Simulations (VPSs) have been cited as a novel learning strategy, but there is little evidence that VPSs yield improvements in clinical reasoning skills and medical knowledge. This study aimed to clarify the effectiveness of VPSs for improving clinical reasoning skills among medical students, and to compare improvements in knowledge and in clinical reasoning skills relevant to specific clinical scenarios. We enrolled 210 fourth-year medical students in March 2017 and March 2018 to participate in a real-time pre-post experimental design conducted in a large lecture hall using clickers. A VPS program (Body Interact®, Portugal) was implemented for one two-hour class session using the same methodology in both years. A pre-post 20-item multiple-choice questionnaire (10 knowledge and 10 clinical reasoning items) was used to evaluate learning outcomes. A total of 169 students completed the program. Participants showed significant increases in average total post-test scores, both on knowledge items (pre-test: median = 5, mean = 4.78, 95% CI (4.55–5.01); post-test: median = 5, mean = 5.12, 95% CI (4.90–5.43); p-value = 0.003) and on clinical reasoning items (pre-test: median = 5, mean = 5.30, 95% CI (4.98–5.58); post-test: median = 8, mean = 7.81, 95% CI (7.57–8.05); p-value < 0.001). Thus, VPS programs could help medical students improve their clinical decision-making skills without lecturer supervision.
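The abstract reports medians, means, confidence intervals and p-values for the paired pre/post comparison but does not name the statistical test used. As a hedged illustration only, the sketch below runs a Wilcoxon signed-rank test, one common choice for paired ordinal item scores, on synthetic data shaped like the study's 169 paired scores.

    # Illustrative sketch of a paired pre/post comparison on per-student scores.
    # The test choice (Wilcoxon signed-rank) and the data are assumptions; the
    # abstract does not state which test the authors used.
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)
    n_students = 169
    pre = rng.integers(2, 9, size=n_students)                          # synthetic pre-test scores out of 10
    post = np.clip(pre + rng.integers(0, 4, size=n_students), 0, 10)   # synthetic post-test improvement

    stat, p_value = wilcoxon(pre, post)
    print(f"median pre = {np.median(pre)}, median post = {np.median(post)}, p = {p_value:.3g}")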


2021
pp. 204275302098701
Author(s):  
Ünal Çakıroğlu
Mustafa Güler

This study attempts to determine whether gamification can be used as a pedagogical technique to overcome the challenges of teaching statistics. A post-test quasi-experimental design was carried out with gamified and non-gamified groups in order to reveal the effect of gamification elements on cultivating students’ statistical literacy skills. Students in the gamified group were also interviewed to understand how the gamification process functioned. The results suggest that although gamifying the instructional process had a positive impact on the statistical literacy of medium- and high-scoring students, its influence on low-scoring students was, surprisingly, not positive. The positive impact is discussed in relation to the gradual structure of statistical literacy, and suggestions for successful, context-appropriate gamification applications are included.


Author(s):  
Jordan D. Tayce
Ashley B. Saunders

The development of clinical reasoning skills is a high priority during clinical service, but an unpredictable case load and limited time for formal instruction make it challenging for faculty to foster and assess students’ individual clinical reasoning skills. We developed an assessment-for-learning activity that helps students build their clinical reasoning skills, based on a modified version of the script concordance test (SCT). To modify the standard SCT, we simplified it by limiting students to a 3-point Likert scale instead of a 5-point scale and added a free-text box for students to justify their answers. Students completed the modified SCT during clinical rounds to prompt a group discussion with the instructor. Student feedback was positive, and the instructor gained valuable insight into the students’ thought processes. A modified SCT can be adopted as part of a multimodal approach to teaching on the clinic floor. The purpose of this article is to describe our modifications to the standard SCT and our findings from implementing it in a clinical rounds setting as a method of formative assessment for learning and for developing clinical reasoning skills.


Diagnosis
2018
Vol 5 (4)
pp. 197-203
Author(s):
Satid Thammasitboon
Joseph J. Rencic
Robert L. Trowbridge
Andrew P.J. Olson
Moushumi Sur
...  

Background: Excellence in clinical reasoning is one of the most important outcomes of medical education programs, but assessing learners’ reasoning to inform corrective feedback is challenging and unstandardized. Methods: The Society to Improve Diagnosis in Medicine formed a multi-specialty team of medical educators to develop the Assessment of Reasoning Tool (ART). This paper describes the tool development process. The tool was designed to facilitate clinical teachers’ assessment of learners’ oral presentations for competence in clinical reasoning and to facilitate formative feedback. Reasoning frameworks (e.g. script theory), contemporary practice goals (e.g. high-value care [HVC]) and proposed error-reduction strategies (e.g. metacognition) guided the development of the tool. Results: The ART is a behaviorally anchored, three-point scale assessing five domains of reasoning: (1) hypothesis-directed data gathering, (2) articulation of a problem representation, (3) formulation of a prioritized differential diagnosis, (4) diagnostic testing aligned with HVC principles and (5) metacognition. Instructional videos, guided by principles of multimedia learning, were created for faculty development in each domain. Conclusions: The ART is a theory-informed assessment tool that allows teachers to assess clinical reasoning and structure feedback conversations.
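The abstract specifies the tool's structure (five named domains, each rated on a behaviorally anchored three-point scale) but not its anchors or any electronic form. Purely as a hedged illustration of how such a rating might be recorded, the sketch below models one learner's ART-style assessment; the level labels are placeholders, not the tool's published anchors.

    # Hedged illustration of recording an ART-style rating: the five domains from
    # the abstract, each scored on a three-point scale. Level labels are placeholders.
    from dataclasses import dataclass, field

    DOMAINS = (
        "hypothesis-directed data gathering",
        "problem representation",
        "prioritized differential diagnosis",
        "high-value diagnostic testing",
        "metacognition",
    )
    LEVELS = {1: "developing", 2: "competent", 3: "exemplary"}  # placeholder anchors

    @dataclass
    class ARTRating:
        learner: str
        scores: dict = field(default_factory=dict)

        def rate(self, domain: str, level: int) -> None:
            """Record a score for one domain, validating both domain and level."""
            if domain not in DOMAINS or level not in LEVELS:
                raise ValueError("unknown domain or level")
            self.scores[domain] = level

    rating = ARTRating("resident_01")
    rating.rate("problem representation", 2)
    print(rating.scores)  # {'problem representation': 2}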

