Item analysis of multiple choice questions from an assessment of medical students in Bhubaneswar, India

Author(s): Surya Namdeo, Bandya Sahoo

2021
Author(s): S. Mehran Hosseini, Reza Rahmati, Hamid Sepehri, Vahid Tajari, Mahdi Habibi-koolaee
Abstract

Background: The purpose of this pilot study was to compare multiple-choice test statistics between the exams of tuition-free and tuition-paying medical and dental students.

Methods: This descriptive-analytical study was conducted at Golestan University of Medical Sciences, Iran, in 2020. The study population included students of medicine and dentistry. A total of 56 physiology-course exams were selected across the two admission groups (tuition-free and tuition-paying). The quantitative evaluations of these tests served as the study data. The variables included the difficulty index, discrimination index, degree of difficulty, score variance, and Kuder-Richardson coefficient.

Results: There were 32 medical and 24 dentistry exams, comprising 437 and 330 multiple-choice questions, respectively. The numbers of medical students in the tuition-free and tuition-paying admissions were 1336 and 1076; for dental students, these numbers were 395 and 235, respectively. There were no significant differences in normalized adjusted exam scores between the two admission groups in either the medical or the dentistry tests. The mean discrimination index was higher in the tuition-free group than in the tuition-paying group. The interaction between type of admission and field of study was significant for the discrimination index: the difference was greater for tuition-free dental students than for tuition-free medical students and tuition-paying dental students.

Conclusion: Under matched educational conditions, the type of student admission has no significant effect on student assessment with multiple-choice exams.
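The reliability statistic named in this abstract, the Kuder-Richardson coefficient, is computed from a binary student-by-item response matrix, as are the difficulty index and score variance. A minimal sketch of the KR-20 computation (the function name and the NumPy representation are illustrative assumptions, not the authors' code):

```python
import numpy as np

def kuder_richardson_20(responses: np.ndarray) -> float:
    """KR-20 reliability for a binary response matrix
    (rows = students, columns = items; 1 = correct, 0 = incorrect)."""
    k = responses.shape[1]                          # number of items
    p = responses.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p                                     # proportion incorrect per item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Example: 5 students x 4 items
scores = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 1],
                   [0, 0, 0, 0],
                   [1, 1, 1, 1],
                   [1, 0, 1, 1]])
print(round(kuder_richardson_20(scores), 3))   # ~0.87
```

Values near 1 indicate an internally consistent exam; the statistic is undefined for a single item or for zero score variance.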


Author(s): Chandrika Rao, HL Kishan Prasad, K Sajitha, Harish Permi, Jayaprakash Shetty

2021, pp. 9-10
Author(s): Bhoomika R. Chauhan, Jayesh Vaza, Girish R. Chauhan, Pradip R. Chauhan

Multiple-choice questions are nowadays used in competitive examinations and formative assessment to assess students' eligibility and for certification. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in medical students' assessment. 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analysed for item difficulty index and item discriminating power, and the two measures were then analysed by statistical methods to identify any correlation. The discriminating power of items with a difficulty index of 40%-50% was the highest.

Summary and Conclusion: Items with a good difficulty index, in the range of 30%-70%, are good discriminators.
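The two indices correlated in this study are conventionally computed from upper- and lower-scoring examinee groups. A sketch under the common upper/lower 27% convention (the abstract does not state its exact grouping; the function name and group fraction are assumptions):

```python
import numpy as np

def item_indices(item_scores: np.ndarray, total_scores: np.ndarray, frac: float = 0.27):
    """Difficulty index (%) and discrimination index for one 0/1-scored item,
    using upper/lower groups of examinees ranked by total test score."""
    order = np.argsort(total_scores)
    n = max(1, int(round(frac * len(total_scores))))
    low = item_scores[order[:n]]       # lowest-scoring group
    high = item_scores[order[-n:]]     # highest-scoring group
    difficulty = 100.0 * (high.sum() + low.sum()) / (2 * n)   # % answering correctly
    discrimination = (high.sum() - low.sum()) / n             # ranges from -1 to +1
    return difficulty, discrimination
```

With these definitions, an item everyone answers correctly has a difficulty index of 100% and a discrimination index of 0, which is why mid-range difficulty (such as the 40%-50% band reported above) leaves the most room for an item to discriminate.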


Author(s): Netravathi B. Angadi, Amitha Nagabhushana, Nayana K. Hashilkar

Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters: the difficulty index (DIF I), the discrimination index (DI), and distractor efficiency (DE). Item analysis is a valuable yet relatively simple procedure, performed after the examination, that provides information regarding the reliability and validity of a test item. The objective of this study was to perform an item analysis of MCQs to test their validity parameters.

Methods: 50 items comprising 150 distractors were selected from the formative exams. A correct response to an item was awarded one mark, with no negative marking for an incorrect response. Each item was analysed for the three parameters DIF I, DI, and DE.

Results: A total of 50 items with 150 distractors were analysed. The DIF I of 31 (62%) items was in the acceptable range (DIF I = 30-70%), and 30 items had 'good to excellent' discrimination (DI > 0.25). 10 (20%) items were too easy (DIF I > 70%) and 9 (18%) items were too difficult (DIF I < 30%). There were 4 items with 6 non-functional distractors (NFDs), while the remaining 46 items had no NFDs.

Conclusions: Item analysis is a valuable tool, as it helps us retain valuable MCQs and discard or modify the items which are not useful. It also helps in improving our skills in test construction and identifies the specific areas of course content that need greater emphasis or clarity.
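The distractor-efficiency figures above rest on the notion of a non-functional distractor. A small sketch using the commonly cited criterion that a distractor chosen by fewer than 5% of examinees is non-functional (the abstract does not state its threshold; the function and option names are assumptions):

```python
from collections import Counter

def distractor_analysis(choices, correct, options=("A", "B", "C", "D"), threshold=0.05):
    """Flag non-functional distractors (NFDs) and compute distractor
    efficiency (DE) as the percentage of distractors that function."""
    counts = Counter(choices)
    n = len(choices)
    distractors = [opt for opt in options if opt != correct]
    nfds = [d for d in distractors if counts.get(d, 0) / n < threshold]
    de = 100.0 * (len(distractors) - len(nfds)) / len(distractors)
    return nfds, de

# Example: 20 examinees, key "B"; nobody picks "D", so it is an NFD.
picks = ["B"] * 11 + ["A"] * 5 + ["C"] * 4
print(distractor_analysis(picks, "B"))   # (['D'], 66.66...)
```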


2021
Author(s): Joachim Neumann, Stephanie Simmrodt, Beatrice Bader, Bertram Opitz, Ulrich Gergs

BACKGROUND: Doubts remain about whether the single-choice (SC) multiple-choice format offers the best option to encourage deep learning, or whether SC formats simply lead to superficial learning or cramming. Moreover, cueing is always a drawback of the SC format. Another way to assess knowledge is true multiple-choice questions in which one or more answers can be true and the student does not know how many true answers to anticipate (the K' or Kprime question format).

OBJECTIVE: Here, we compared single-choice questions (one true answer, SC) with Kprime questions (one to four true answers out of four, Kprime) for the very same learning objectives in a study of pharmacology in medical students.

METHODS: Two groups of medical students were randomly subjected to a formative online test: group A was first given 15 SC questions (#1-15) followed by 15 different Kprime questions (#16-30). The opposite design was used for group B.

RESULTS: The mean number of right answers was higher for SC than for Kprime questions in group A (10.02 vs. 8.63, p < 0.05) and group B (9.98 vs. 6.66, p < 0.05). The number of right answers was higher for nine SC questions compared to Kprime in group A and for eight questions in group B (pairwise t-test, p < 0.05). Thus, SC questions are easier to answer than the same pharmacology learning objectives posed as Kprime questions. One year later, four groups were formed from the previous two groups and were given the same online test again, but in a different order. The main result was that all students fared better in the second test than in the initial test; however, the gain in points was highest if mode B was given initially.

CONCLUSIONS: Kprime is less popular with students, being more demanding, but could improve memory of subject matter and thus might be used more often by medical educators.
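The abstract does not spell out how Kprime answers were scored; a frequently used convention gives full credit for four correct true/false judgements and half credit for exactly one error. A sketch of that rule next to all-or-nothing SC scoring (both function names are illustrative, not the authors' implementation):

```python
def score_kprime(marked: set, key: set, options=("A", "B", "C", "D")) -> float:
    """Common Kprime partial-credit rule: 1 point for four correct
    true/false judgements, 0.5 for exactly one error, else 0."""
    errors = sum((opt in marked) != (opt in key) for opt in options)
    return 1.0 if errors == 0 else 0.5 if errors == 1 else 0.0

def score_sc(chosen: str, key: str) -> float:
    """Single-choice (SC): all-or-nothing on one selected option."""
    return 1.0 if chosen == key else 0.0

# Example: key marks A and C as true; examinee marks A, C, D -> one wrong judgement.
print(score_kprime({"A", "C", "D"}, {"A", "C"}))   # 0.5
```

Because a Kprime item requires four independent judgements, guessing full credit is far less likely than in SC, which is consistent with the lower Kprime scores reported above.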


2019, Vol 5 (1), pp. e000495
Author(s): Danielle L Cummings, Matthew Smith, Brian Merrigan, Jeffrey Leggit

Background: Musculoskeletal (MSK) complaints comprise a large proportion of outpatient visits. However, multiple studies show that medical school curricula often fail to adequately prepare graduates to diagnose and manage common MSK problems. Current standardised exams inadequately assess trainees' MSK knowledge, and other MSK-specific exams such as Freedman and Bernstein's (1998) exam have limitations in implementation. We propose a new 30-question multiple choice exam for graduating medical students and primary care residents. Results highlight individual deficiencies and identify areas for curriculum improvement.

Methods/Results: We developed a bank of multiple choice questions based on 10 critical topics in MSK medicine. The questions were validated with subject-matter experts (SMEs) using a modified Delphi method to obtain consensus on the importance of each question. Based on the SME input, we compiled 30 questions into the assessment. In a large-scale pilot test (167 post-clerkship medical students), the average score was 74% (range 53%-90%, SD 7.8%). In addition, detailed explanations and references were created for each question to allow an individual or group to review and enhance learning.

Summary: The proposed MSK30 exam evaluates clinically important topics and offers an assessment tool for the clinical MSK knowledge of medical students and residents. It fills a gap in the current curriculum and improves on previous MSK-specific assessments through better clinical relevance and consistent grading. Educators can use the results of the exam to guide curriculum development and individual education.

