Are multiple-choice questions a good tool for the assessment of clinical competence in Internal Medicine?

2018, Vol 12 (2), pp. 88
Author(s): Flavio Tangianu, Antonino Mazzone, Franco Berti, Giuliano Pinna, Irene Bortolotti, ...

Many feasible tools exist for the assessment of clinical practice, but there is wide consensus that combining several different methods is the best strategy for a comprehensive judgment of clinical competence. Multiple-choice questions (MCQs) are a well-established, reliable method of assessing knowledge. Constructing effective MCQ tests and items requires scrupulous care at the design, review and validation stages, and writing high-quality questions demands deep experience, subject knowledge and a large amount of time. Here, after reviewing their construction, strengths and limitations, we discuss whether MCQs alone are sufficient for the assessment of professional competence.

Blood, 2021, Vol 138 (Supplement 1), pp. 4935-4935
Author(s): Jori E. May, Rita D Paschal, Jason L Morris, Lisa L Willett

Abstract
Introduction: Choosing Wisely® (CW®) is an initiative of the American Board of Internal Medicine Foundation created to guide the selection of care that is 1) supported by evidence, 2) not duplicative, 3) free from harm, and 4) truly necessary. Between 2013 and 2014, the American Society of Hematology (ASH) published 10 recommendations in accordance with CW® principles relevant to hematologic care. Previous studies have demonstrated that clinical exposure to non-malignant hematology (NMH) improves trainee understanding of evidence-based, cost-effective care as outlined by ASH CW®. However, dedicated clinical rotations in NMH for internal medicine (IM) residents are not consistently available. We therefore created a condensed curricular experience using a small-group, case-based structure designed to teach the fundamentals of ASH CW® in NMH to first-year IM residents. With a pilot of 6 sessions, we investigated residents' baseline knowledge, evaluated the curricular session, and assessed knowledge retention.
Methods: The educational intervention focused on 3 content areas in ASH CW®: venous thromboembolism (VTE), heparin-induced thrombocytopenia (HIT), and sickle cell disease (SCD) (Table 1). Participants were 21 first-year IM residents at a large academic medical center. A 1-hour small-group teaching session was scheduled monthly as an assigned didactic for the 2020-21 academic year. A total of 6 sessions were provided, each with 2-4 residents assigned. The first 4 sessions were in person and the final 2 were virtual due to the COVID-19 pandemic. The first author was the instructor at all sessions. To assess baseline knowledge of the 3 content areas, participants completed an online assessment of 5 case-based multiple-choice questions at the beginning of the session. The instructor then guided participants to complete the questionnaire again together, now with internet access via a personal computer and a recommended list of online resources, including ASH Clinical Practice Guidelines and Pocket Guides. The instructor then led a discussion of how each correct answer or guideline recommendation achieves the 4 CW® principles. At the conclusion of the session, participants completed an online survey evaluating the educational intervention on a modified Likert scale. To assess knowledge retention, participants received the original online multiple-choice assessment by email 3 months later.
Results: All participants (21/21, 100%) completed the baseline knowledge assessment. The average score was 3.3 of 5 questions correct (67%), with a range of 1 to 5. Table 2 lists the content area of each question and the number of correct responses. The question with the fewest correct responses (9/21, 43%) addressed the use of transfusion in an uncomplicated pain crisis in SCD. Seventeen participants (81%) completed the curricular evaluation. All respondents (17/17, 100%) agreed or strongly agreed 1) that the session filled a gap in their NMH training and 2) that they learned something that would change their clinical practice. Only 1 participant (5%) reported completing a rotation in NMH before the session. Six participants (29%) completed the repeat knowledge assessment at 3 months; all (100%) achieved a perfect score on the multiple-choice questions. When asked whether the knowledge gained had influenced their clinical practice, 3 (50%) strongly agreed, 2 (33%) agreed, 1 (17%) was neutral, and none disagreed or strongly disagreed.
Conclusion: Our results demonstrate a successful educational pilot to improve knowledge of ASH CW® initiatives in NMH among first-year IM residents using small-group, interactive, case-based learning. Participants were overwhelmingly receptive to the intervention, expressed high satisfaction, and confirmed that the session positively influenced their clinical practice. Although participation in the repeat knowledge assessment was limited, those who did participate demonstrated high retention. We intend to expand this pilot by providing the educational session to all incoming IM residents at our institution. We then plan to assess its impact on clinical practice (i.e., use of transfusion in SCD, use of thrombophilia testing, documentation of 4Ts score calculation) to apply the principles of ASH CW® for improved patient care.
Figure 1. Disclosures: No relevant conflicts of interest to declare.


Best of Five MCQs for the Acute Medicine SCE is a new revision resource designed specifically for this high-stakes exam. Containing over 350 Best of Five multiple-choice questions, this dedicated guide will help candidates prepare successfully. The content mirrors the SCE in Acute Medicine Blueprint to ensure candidates are fully prepared for all the topics that may appear in the exam. Topics range from managing acute problems in cardiology or neurology to managing acute conditions such as poisoning. All answers have full explanations and further reading to ensure high-quality self-assessment and quick recognition of areas that require further study.


‘Multiple Choice Questions in Musculoskeletal, Sport & Exercise Medicine’ is a compilation of 400 multiple-choice questions (MCQs) in the single-best-answer format, with a choice of five options. The book closely follows the curriculum of the ‘Membership of Faculty of Sport & Exercise Medicine’ (MFSEM) examination, with some questions clinically oriented and others knowledge based. It is not intended to substitute for extensive clinical reading but to complement the learning process. The questions have been carefully curated by 92 reputable subject-matter experts across ten countries and are intended to provide a structured learning experience. The book comprises 46 chapters: the first 23 present the questions and the next 23 provide the answers. Each answer includes a short explanation with a reference, intended to stimulate discussion, research and further learning. The book contains 33 high-quality images (MRI scans, plain radiographs, ECGs, ultrasound scans and photographs), 18 tables and 5 diagrams.


1979, Vol 1 (2), pp. 24-33
Author(s): James R. McMillan

Most educators agree that classroom evaluation practices need improvement. One way to improve testing is to use high-quality objective multiple-choice exams. Almost any understanding or ability that can be tested by another test form can also be tested with multiple-choice items. Yet, based on a survey of 173 respondents, it appears that marketing teachers are disenchanted with multiple-choice questions and use them sparingly. Further, their limited use is largely confined to the introductory marketing course, even though there are emerging pressures for universities to take a closer look at the quality of classroom evaluation at all levels.


2020, Vol 20 (1)
Author(s): Nahid Tabibzadeh, Jimmy Mullaert, Lara Zafrani, Pauline Balagny, Justine Frija-Masson, ...

Abstract
Background: Multiple-choice question (MCQ) tests are commonly used to evaluate medical students, but they neither assess self-confidence nor penalize lucky guesses or harmful behaviors. Using a scoring method based on the appropriateness of confidence in answers, this study aimed to assess knowledge self-monitoring and efficiency, and the determinants of self-confidence.
Methods: A cross-sectional study of 842 second- and third-year medical students who were asked to state their level of confidence (A: very confident, B: moderately confident, C: not confident) during 12 tests (106,806 events). A bonus was applied when the level of confidence matched the correctness of the answer, and a penalty was applied in the case of inappropriate confidence.
Results: Level A was selected more appropriately by the top 20% of students, whereas level C was selected more appropriately by the bottom 20%. Higher-performing students were more efficient when correct (rate of A statements among correct answers) but less efficient when incorrect (rate of C statements among incorrect answers) compared with the bottom 20% of students. B and C statements were independently associated with female and male gender, respectively (OR for male vs. female = 0.89 [0.82–0.96], p = 0.004, for level B and 1.15 [1.01–1.32], p = 0.047, for level C).
Conclusion: While both address the gender confidence gap, knowledge self-monitoring might improve students' awareness of their own knowledge, whereas efficiency might evaluate appropriate behavior in clinical practice. These results suggest the need for differential feedback during training for higher- versus lower-performing students, and point to potentially harmful decision-making behavior in clinical practice among higher-performing students.
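The bonus/penalty scheme described above can be sketched in a few lines. The abstract does not state the actual point values, so the weights below are purely illustrative assumptions; only the principle (reward appropriate confidence, penalize inappropriate confidence) comes from the study.

```python
# Illustrative sketch of confidence-weighted MCQ scoring.
# A bonus applies when stated confidence matches answer correctness,
# a penalty when it does not. The numeric weights are assumptions,
# not the values used in the study.

SCORE = {
    # (confidence level, answer correct?) -> points
    ("A", True): 2.0,   # very confident and correct: full bonus
    ("B", True): 1.5,
    ("C", True): 1.0,   # not confident but correct: small reward
    ("C", False): 0.0,  # not confident and wrong: no penalty
    ("B", False): -0.5,
    ("A", False): -2.0, # very confident but wrong: largest penalty
}

def grade(events):
    """Sum scores over (confidence, correct) events for one student."""
    return sum(SCORE[(conf, correct)] for conf, correct in events)

student = [("A", True), ("A", False), ("C", True), ("B", True)]
print(grade(student))  # 2.0 - 2.0 + 1.0 + 1.5 = 2.5
```

Under such a scheme, a student who answers correctly while claiming low confidence scores less than one who is appropriately confident, which is exactly the self-monitoring signal the study measures.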


Author(s): Jimmy Bourque, Haley Skinner, Jonathan Dupré, Maria Bacchus, Martha Ainslie, ...

Purpose: This study aimed to assess the performance of the Ebel standard-setting method for the spring 2019 Royal College of Physicians and Surgeons of Canada internal medicine certification examination, which consists of multiple-choice questions. Specifically, the following parameters were evaluated: inter-rater agreement, the correlations between Ebel scores and item facility indices, the impact of raters' knowledge of correct answers on the Ebel score, and the effects of raters' specialty on inter-rater agreement and Ebel scores.
Methods: Data were drawn from a Royal College of Physicians and Surgeons of Canada certification exam. The Ebel method was applied to 203 multiple-choice questions by 49 raters. Facility indices came from 194 candidates. We computed the Fleiss kappa and the Pearson correlations between Ebel scores and item facility indices. We investigated differences in the Ebel score according to whether correct answers were provided or not, and differences between internists and other specialists, using the t-test.
Results: The Fleiss kappa was below 0.15 for both facility and relevance. The correlation between Ebel scores and facility indices was low when correct answers were provided and negligible when they were not. The Ebel score was the same whether the correct answers were provided or not. Inter-rater agreement and Ebel scores were not significantly different between internists and other specialists.
Conclusion: Inter-rater agreement and correlations between item Ebel scores and facility indices were consistently low; furthermore, raters' knowledge of the correct answers and raters' specialty had no effect on Ebel scores in the present setting.
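The inter-rater agreement statistic reported above, Fleiss' kappa, has a standard closed-form computation. The sketch below is a minimal implementation of that textbook formula for reference, not the authors' code, and the example matrices are invented for illustration.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a matrix where ratings[i][j] is the number of
    raters assigning item i to category j (equal rater count per item)."""
    N = len(ratings)       # number of items
    n = sum(ratings[0])    # raters per item
    k = len(ratings[0])    # number of categories
    # proportion of all assignments falling into each category
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # observed per-item agreement
    P = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P) / N                 # mean observed agreement
    P_e = sum(pj * pj for pj in p)     # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Three items, three raters each, unanimous agreement -> kappa = 1
print(fleiss_kappa([[3, 0], [0, 3], [3, 0]]))  # → 1.0
```

Values near zero, like the sub-0.15 kappas reported here, indicate agreement barely above what chance alone would produce.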


2015, Vol 28 (3), pp. 194
Author(s): Tahra AlMahmoud, Margaret Ann Elzubeir, Sami Shaban, Frank Branicki
