Effects of Apophenia on Multiple-Choice Exam Performance

SAGE Open ◽  
2014 ◽  
Vol 4 (4) ◽  
pp. 215824401455662 ◽  
Author(s):  
Stephen T. Paul ◽  
Samantha Monda ◽  
S. Maria Olausson ◽  
Brenna Reed-Daley

2008 ◽  
Vol 9 (3-4) ◽  
pp. 184-195 ◽  
Author(s):  
Katherine R. Krohn ◽  
Megan R. Parker ◽  
Lisa N. Foster ◽  
Kathleen B. Aspiranti ◽  
Daniel F. McCleary ◽  
...  

2014 ◽  
Vol 20 (1) ◽  
pp. 3-21 ◽  
Author(s):  
Kathleen B. McDermott ◽  
Pooja K. Agarwal ◽  
Laura D'Antonio ◽  
Henry L. Roediger ◽  
Mark A. McDaniel

2019 ◽  
Vol 5 (1) ◽  
pp. e000495
Author(s):  
Danielle L Cummings ◽  
Matthew Smith ◽  
Brian Merrigan ◽  
Jeffrey Leggit

Background: Musculoskeletal (MSK) complaints comprise a large proportion of outpatient visits. However, multiple studies show that medical school curricula often fail to adequately prepare graduates to diagnose and manage common MSK problems. Current standardised exams inadequately assess trainees’ MSK knowledge, and other MSK-specific exams, such as Freedman and Bernstein’s (1998) exam, have limitations in implementation. We propose a new 30-question multiple choice exam for graduating medical students and primary care residents. Results highlight individual deficiencies and identify areas for curriculum improvement.
Methods/Results: We developed a bank of multiple choice questions based on 10 critical topics in MSK medicine. The questions were validated with subject-matter experts (SMEs) using a modified Delphi method to obtain consensus on the importance of each question. Based on the SME input, we compiled 30 questions into the assessment. In a large-scale pilot test with 167 post-clerkship medical students, the average score was 74% (range 53%–90%, SD 7.8%). In addition, detailed explanations and references were created for each question to allow an individual or group to review and enhance learning.
Summary: The proposed MSK30 exam evaluates clinically important topics and offers an assessment tool for the clinical MSK knowledge of medical students and residents. It fills a gap in current curricula and improves on previous MSK-specific assessments through better clinical relevance and consistent grading. Educators can use the results of the exam to guide curriculum development and individual education.
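
To make the reported pilot-test statistics concrete, the following minimal Python sketch computes a mean, range, and standard deviation from per-student percentage scores; the variable names and sample values are hypothetical and are not data from the study.

import statistics

# Hypothetical per-student percentage scores from a pilot administration
scores = [74.0, 68.5, 81.0, 59.5, 90.0, 53.0, 77.5]

mean_score = statistics.mean(scores)   # average score across students
low, high = min(scores), max(scores)   # observed score range
sd = statistics.stdev(scores)          # sample standard deviation

print(f"mean = {mean_score:.1f}%, range = {low:.0f}%-{high:.0f}%, SD = {sd:.1f}%")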


2015 ◽  
Vol 39 (4) ◽  
pp. 327-334 ◽  
Author(s):  
Brandon M. Franklin ◽  
Lin Xiang ◽  
Jason A. Collett ◽  
Megan K. Rhoads ◽  
Jeffrey L. Osborn

Student populations are diverse such that different types of learners struggle with traditional didactic instruction. Problem-based learning has existed for several decades, but there is still controversy regarding the optimal mode of instruction to ensure success at all levels of students' past achievement. The present study addressed this problem by dividing students into the following three instructional groups for an upper-level course in animal physiology: traditional lecture-style instruction (LI), guided problem-based instruction (GPBI), and open problem-based instruction (OPBI). Student performance was measured by three summative assessments consisting of 50% multiple-choice questions and 50% short-answer questions as well as a final overall course assessment. The present study also examined how students of different academic achievement histories performed under each instructional method. When student achievement levels were not considered, the effects of instructional methods on student outcomes were modest; OPBI students performed moderately better on short-answer exam questions than both LI and GPBI groups. High-achieving students showed no difference in performance for any of the instructional methods on any metric examined. In students with low-achieving academic histories, OPBI students largely outperformed LI students on all metrics (short-answer exam: P < 0.05, d = 1.865; multiple-choice question exam: P < 0.05, d = 1.166; and final score: P < 0.05, d = 1.265). They also outperformed GPBI students on short-answer exam questions (P < 0.05, d = 1.109) but not multiple-choice exam questions (P = 0.071, d = 0.716) or final course outcome (P = 0.328, d = 0.513). These findings strongly suggest that typically low-achieving students perform at a higher level under OPBI as long as the proper support systems (formative assessment and scaffolding) are provided to encourage student success.
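
For readers unfamiliar with the effect sizes reported above, the following minimal Python sketch shows one common way to compute Cohen's d for two independent groups using a pooled standard deviation; the group scores are hypothetical and the exact effect-size formula used by the study is not specified in this abstract.

import statistics

def cohens_d(group_a, group_b):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical short-answer exam scores for OPBI and LI students
opbi = [82, 78, 88, 75, 91]
li = [65, 70, 62, 68, 72]
print(f"d = {cohens_d(opbi, li):.3f}")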


2020 ◽  
Vol 302 (6) ◽  
pp. 1401-1406
Author(s):  
Sebastian M. Jud ◽  
Susanne Cupisti ◽  
Wolfgang Frobenius ◽  
Andrea Winkler ◽  
Franziska Schultheis ◽  
...  

Semantic Web ◽  
2020 ◽  
pp. 1-17
Author(s):  
Ghader Kurdi ◽  
Jared Leo ◽  
Nicolas Matentzoglu ◽  
Bijan Parsia ◽  
Uli Sattler ◽  
...  

Successful exams require a balance of easy, medium, and difficult questions. Question difficulty is generally either estimated by an expert or determined after an exam is taken. The latter provides no utility for the generation of new questions, and the former is expensive in both time and cost. Additionally, it is not known whether expert prediction is indeed a good proxy for question difficulty. In this paper, we analyse and compare two ontology-based measures for predicting the difficulty of multiple choice questions, and we compare each measure, as well as expert prediction (by 15 experts), against the exam performance of 12 residents on a corpus of 231 medical case-based questions in multiple choice format. We find one ontology-based measure (relation strength indicativeness) to be of comparable performance (accuracy = 47%) to expert prediction (average accuracy = 49%).
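
The accuracy figures above amount to comparing predicted difficulty labels against the difficulty actually observed from examinee performance. A minimal Python sketch of that kind of accuracy computation follows; the difficulty labels are hypothetical and do not reproduce the paper's corpus or its specific measures.

# Hypothetical difficulty labels per question: prediction vs. difficulty observed from exam performance
predicted = ["easy", "medium", "difficult", "medium", "easy", "difficult"]
observed  = ["easy", "difficult", "difficult", "medium", "medium", "difficult"]

correct = sum(p == o for p, o in zip(predicted, observed))
accuracy = correct / len(observed)
print(f"accuracy = {accuracy:.0%}")  # fraction of questions where the predicted label matched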

