Item analysis of multiple choice questions: Assessing an assessment tool in medical students

Author(s):  
Chandrika Rao ◽  
HL Kishan Prasad ◽  
K Sajitha ◽  
Harish Permi ◽  
Jayaprakash Shetty

2019 ◽  
Vol 5 (1) ◽  
pp. e000495
Author(s):  
Danielle L Cummings ◽  
Matthew Smith ◽  
Brian Merrigan ◽  
Jeffrey Leggit

Background: Musculoskeletal (MSK) complaints comprise a large proportion of outpatient visits. However, multiple studies show that medical school curricula often fail to adequately prepare graduates to diagnose and manage common MSK problems. Current standardised exams inadequately assess trainees’ MSK knowledge, and other MSK-specific exams, such as Freedman and Bernstein’s (1998) exam, have limitations in implementation. We propose a new 30-question multiple choice exam for graduating medical students and primary care residents. Results highlight individual deficiencies and identify areas for curriculum improvement. Methods/Results: We developed a bank of multiple choice questions based on 10 critical topics in MSK medicine. The questions were validated with subject-matter experts (SMEs) using a modified Delphi method to obtain consensus on the importance of each question. Based on the SME input, we compiled 30 questions into the assessment. In the large-scale pilot test (167 post-clerkship medical students), the average score was 74% (range 53%–90%, SD 7.8%). In addition, detailed explanations and references were created for each question to allow an individual or group to review and enhance learning. Summary: The proposed MSK30 exam evaluates clinically important topics and offers an assessment tool for the clinical MSK knowledge of medical students and residents. It fills a gap in current curricula and improves on previous MSK-specific assessments through better clinical relevance and consistent grading. Educators can use the results of the exam to guide curriculum development and individual education.


2021 ◽  
Vol 77 ◽  
pp. S85-S89
Author(s):  
Dharmendra Kumar ◽  
Raksha Jaipurkar ◽  
Atul Shekhar ◽  
Gaurav Sikri ◽  
V. Srinivas

Author(s):  
Amani H. Elgadal ◽  
Abdalbasit A. Mariod

Background: Integration of assessment with education is vital and ought to be performed regularly to enhance learning. There are many assessment methods, such as Multiple-choice Questions (MCQs), the Objective Structured Clinical Examination, and the Objective Structured Practical Examination; the appropriate method is selected based on the curriculum blueprint and the target competencies. Although MCQs have the capacity to test students’ higher cognition, critical appraisal, problem-solving, and data interpretation, and to cover curricular content in a short time, their analysis has constraints. The authors aim to highlight key points about psychometric analysis: its roles, its use in assessing the validity and reliability of items and their power to discriminate among examinees, and guidance for faculty members constructing their exam question banks. Methods: Databases such as Google Scholar and PubMed were searched for freely accessible English articles published since 2010, using synonyms and keywords. First, the abstracts of the articles were read to select suitable matches; then the full articles were perused and summarized. Finally, the relevant data were recapitulated to the best of the authors’ knowledge. Results: The retrieved articles demonstrated the capacity of MCQ item analysis to assess questions’ validity and reliability, to discriminate among examinees’ performance, and to correct technical flaws in question bank construction. Conclusion: Item analysis is a statistical tool used to assess students’ performance on a test, identify underperforming items, and determine the root causes of that underperformance so the items can be improved, ensuring effective and accurate judgment of students’ competency. Keywords: assessment, difficulty index, discrimination index, distractors, MCQ item analysis
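The two indices named in the keywords can be computed directly from a scored response matrix. Below is a minimal sketch in Python, assuming dichotomously scored (0/1) responses and the common upper/lower 27% split; the function name, the split fraction, and the sample data are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of classical item analysis on a 0/1 response
# matrix (rows = examinees, columns = items). The 27% tail split is
# a common convention, assumed here rather than taken from the text.
import numpy as np

def item_indices(scores: np.ndarray, tail: float = 0.27):
    """Return per-item (difficulty P, discrimination D).

    P = proportion of all examinees answering the item correctly.
    D = P in the upper tail minus P in the lower tail, where tails
        are formed by ranking examinees on total score.
    """
    totals = scores.sum(axis=1)
    order = np.argsort(totals)                  # ascending by total score
    n = max(1, int(round(tail * len(totals))))
    lower, upper = scores[order[:n]], scores[order[-n:]]
    difficulty = scores.mean(axis=0)            # P, in [0, 1]
    discrimination = upper.mean(axis=0) - lower.mean(axis=0)
    return difficulty, discrimination

# Hypothetical data: 10 examinees x 4 items
rng = np.random.default_rng(0)
responses = (rng.random((10, 4)) > 0.4).astype(int)
P, D = item_indices(responses)
print("P:", P.round(2), "D:", D.round(2))
```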


2021 ◽  
Author(s):  
S. Mehran Hosseini ◽  
Reza Rahmati ◽  
Hamid Sepehri ◽  
Vahid Tajari ◽  
Mahdi Habibi-koolaee

Abstract Background: The purpose of this pilot study was to compare multiple-choice test statistics between free-admission and tuition-paying medical and dental students’ exams. Methods: This descriptive-analytical study was conducted at Golestan University of Medical Sciences in Iran in 2020. The study population included students of medicine and dentistry. A total of 56 physiology-course exams were selected across the two admission groups (free and tuition-paying). The quantitative evaluation results of these tests served as the study data. The variables included the difficulty index, discrimination index, degree of difficulty, score variance, and the Kuder-Richardson correlation coefficient. Results: There were 32 medical and 24 dentistry exams, comprising a cumulative total of 437 and 330 multiple choice questions, respectively. The numbers of medical students in the tuition-free and tuition-paying admissions were 1336 and 1076, and for dental students 395 and 235, respectively. There were no significant differences in normalized adjusted exam scores between the two admission groups in either the medical or the dentistry tests. The mean discrimination index was higher in the tuition-free group than in the tuition-paying group. The interaction between the type of admission and the field of study was significant for the discrimination index: the difference was more pronounced in tuition-free dental students than in tuition-free medical students and tuition-paying dental students. Conclusion: The type of student admission has no significant effect on student assessment by multiple-choice exams under matched educational conditions.
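The Kuder-Richardson coefficient used as a reliability variable above is straightforward to compute. Here is a minimal sketch of KR-20, assuming dichotomous (0/1) item scores; the study does not state which Kuder-Richardson formula it used, so KR-20 is an assumption, and the sample data are invented.

```python
# A minimal sketch of the Kuder-Richardson 20 (KR-20) reliability
# coefficient for dichotomously scored items. Data are hypothetical.
import numpy as np

def kr20(scores: np.ndarray) -> float:
    """KR-20 = (k / (k - 1)) * (1 - sum(p * q) / var(total scores))."""
    k = scores.shape[1]                          # number of items
    p = scores.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Hypothetical data: 50 examinees x 20 items
rng = np.random.default_rng(1)
responses = (rng.random((50, 20)) > 0.35).astype(int)
print(f"KR-20 = {kr20(responses):.2f}")
```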


Author(s):  
Suryakar Vrushali Prabhunath ◽  
Surekha T. Nemade ◽  
Ganesh D. Ghuge

Introduction: Multiple Choice Questions (MCQs) are among the most preferred tools of assessment in medical education, as part of both formative and summative assessment. The performance of MCQs as an assessment tool can be statistically analysed by item analysis. Thus, the aim of this study was to assess the quality of MCQs by item analysis and to identify valid test items for inclusion in the question bank for further use. Materials and methods: A formative assessment of first-year MBBS students was carried out with 40 MCQs as part of an internal examination in Biochemistry. Item analysis was done by calculating the difficulty index (P), the discrimination index (d), and the number of non-functional distractors. Results: The difficulty index (P) of 65% (26) of items was well within the acceptable range; 7.5% (3) of items were too difficult, whereas 27.5% (11) were too easy. The discrimination index (d) of 70% (28) of items fell in the recommended category, whereas 10% (4) of items had an acceptable and 20% (8) a poor discrimination index. Of 120 distractors, 88.33% (106) were functional and 11.66% (14) were non-functional. After considering the difficulty index, discrimination index, and distractor effectiveness, 42.5% (17) of items were found suitable for inclusion in the question bank. Conclusion: Item analysis remains an essential tool to be practised regularly to improve the quality of assessment methods, and a means of obtaining feedback for instructors. Key Words: Difficulty index, Discrimination index, Item analysis, Multiple choice questions, Non-functional distractors
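Distractor functionality, counted above, follows a simple rule: a distractor is usually classed as non-functional when fewer than 5% of examinees select it. Below is a minimal sketch of that rule; the 5% threshold is the common convention, and the option labels and response data are illustrative, not the study’s.

```python
# A minimal sketch of counting non-functional distractors (NFDs).
# Convention assumed here: a distractor chosen by fewer than 5% of
# examinees is non-functional. Data and labels are hypothetical.
from collections import Counter

def nonfunctional_distractors(responses, key, options=("A", "B", "C", "D"),
                              threshold=0.05):
    """Return the distractors (non-key options) chosen by < threshold."""
    counts = Counter(responses)
    n = len(responses)
    return [opt for opt in options
            if opt != key and counts.get(opt, 0) / n < threshold]

# Hypothetical item: key is 'B'; 'D' attracts only 2% of examinees
picks = ["B"] * 60 + ["A"] * 20 + ["C"] * 18 + ["D"] * 2
print(nonfunctional_distractors(picks, key="B"))  # -> ['D']
```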


Author(s):  
Umayya Musharrafieh ◽  
Khalil Ashkar ◽  
Dima Dandashi ◽  
Maya Romani ◽  
Rana Houry ◽  
...  

Introduction: The Objective Structured Clinical Examination (OSCE) is considered a useful method of assessing clinical skills, alongside Multiple Choice Questions (MCQs) and clinical evaluations. Aim: To explore medical students’ acceptance of this assessment tool in medical education and to determine whether the assessment results of MCQs and faculty clinical evaluations agree with the respective OSCE scores of 4th-year medical students (Med IV). Methods: The performance of a total of 223 Med IV students across the academic years 2006-2007, 2007-2008, and 2008-2009 in OSCEs, MCQs, and faculty evaluations was compared. Of these, 93 students were randomly asked to fill in a questionnaire about their attitudes towards and acceptance of this tool. The OSCE was conducted every two months for two different groups of medical students who had completed their family medicine rotation, while the faculty evaluation, based on observation by assessors, was submitted monthly upon completion of the rotation. The final exam for the family medicine clerkship, consisting of MCQs, was held at the end of the 4th academic year. Results: Students highly commended the OSCE as an evaluation tool, as it provides a truer measure of the required clinical and communication skills than MCQs and faculty evaluation. The study showed a significant positive correlation between the OSCE scores and the clinical evaluation scores, while there was no association between the OSCE score and the final exam scores. Conclusion: Students showed high appreciation and acceptance of this type of clinical skills testing. Although OSCEs made them more stressed than other assessment modalities, the OSCE remained the preferred one.


2021 ◽  
pp. 9-10
Author(s):  
Bhoomika R. Chauhan ◽  
Jayesh Vaza ◽  
Girish R. Chauhan ◽  
Pradip R. Chauhan

Multiple choice questions are nowadays used in competitive examinations and formative assessment to assess students’ eligibility and for certification. Item analysis is the process of collecting, summarizing, and using information from students’ responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in medical students’ assessment. 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analysed for item difficulty index and item discriminating power, which were then analysed by statistical methods to identify any correlation. The discriminating power of items with a difficulty index of 40%-50% was the highest. Summary and Conclusion: Items with a good difficulty index, in the range of 30%-70%, are good discriminators.
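The correlation the study describes can be checked with Pearson’s r between the two indices; note that the reported peak at 40%-50% difficulty implies an inverted-U relationship, which a single linear coefficient only partly captures. A minimal sketch follows; the arrays are hypothetical illustrations, not the study’s 200 items.

```python
# A minimal sketch of correlating item difficulty (P) with item
# discrimination (D) across items. The values are hypothetical.
import numpy as np

P = np.array([0.35, 0.42, 0.48, 0.55, 0.70, 0.85, 0.92])  # difficulty
D = np.array([0.28, 0.38, 0.45, 0.41, 0.30, 0.18, 0.08])  # discrimination

r = np.corrcoef(P, D)[0, 1]
print(f"Pearson r between P and D: {r:.2f}")
```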


Author(s):  
Netravathi B. Angadi ◽  
Amitha Nagabhushana ◽  
Nayana K. Hashilkar

Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters: the difficulty index (DIF I), the discrimination index (DI), and distractor efficiency (DE). Item analysis is a valuable yet relatively simple procedure, performed after the examination, that provides information regarding the reliability and validity of a test item. The objective of this study was to perform an item analysis of MCQs to test their validity parameters. Methods: 50 items comprising 150 distractors were selected from the formative exams. A correct response to an item was awarded one mark, with no negative marking for an incorrect response. Each item was analysed for the three parameters DIF I, DI, and DE. Results: A total of 50 items comprising 150 distractors were analysed. The DIF I of 31 (62%) items was in the acceptable range (DIF I = 30-70%), and 30 items had ‘good to excellent’ discrimination (DI > 0.25). 10 (20%) items were too easy and 9 (18%) were too difficult (DIF I < 30%). There were 4 items with 6 non-functional distractors (NFDs), while the remaining 46 items had no NFDs. Conclusions: Item analysis is a valuable tool, as it helps us retain valuable MCQs and discard or modify items that are not useful. It also helps build skills in test construction and identifies the specific areas of course content that need greater emphasis or clarity.
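The three parameters reported above are typically combined into a simple retention rule when curating a question bank. Below is a minimal sketch of such a rule, using the thresholds quoted in this abstract (DIF I 30-70%, DI > 0.25, no NFDs); the decision labels and the discard cutoff are illustrative assumptions.

```python
# A minimal sketch of a question-bank retention rule combining the
# three item-analysis parameters. Thresholds mirror the abstract;
# the labels and the discard cutoff (DI <= 0) are assumptions.
def judge_item(dif_pct: float, di: float, nfd_count: int) -> str:
    """Classify an item as retain, revise, or discard."""
    if 30 <= dif_pct <= 70 and di > 0.25 and nfd_count == 0:
        return "retain"
    if di <= 0:            # negative discrimination: flawed item
        return "discard"
    return "revise"

# Hypothetical items: (difficulty %, discrimination, NFD count)
for item in [(55, 0.40, 0), (85, 0.15, 1), (50, -0.05, 2)]:
    print(item, "->", judge_item(*item))
```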

