Study on item and test analysis of multiple choice questions amongst undergraduate medical students

Author(s):  
Abhijeet S. Ingale ◽  
Purushottam A. Giri ◽  
Mohan K. Doibale

Background: Item analysis is the process of collecting, summarizing and using information from students' responses to assess the quality of test items. However, it is said that MCQs emphasize recall of factual information rather than conceptual understanding and interpretation of concepts, and there is more to writing good MCQs than writing good questions. The objectives of the study were to assess the item and test quality of multiple choice questions, to address the learning difficulties of students, and to identify the low achievers in the test. Methods: One hundred MBBS students from a government medical college were examined. A test comprising thirty MCQs was administered. All items were analysed for difficulty index, discrimination index and distractor efficiency. Data were entered in MS Excel 2007 and analysed in SPSS 21 with statistical tests of significance. Results: The difficulty index of the majority of items (80%) was within the acceptable range, and 63% of items showed an excellent discrimination index. Distractor efficiency was satisfactory overall. Conclusions: Multiple choice questions of average difficulty with high discriminating power and good distractor efficiency should be incorporated into students' examinations.
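The three indices recurring throughout these abstracts follow standard definitions: difficulty index as the percentage of correct responses in the combined high- and low-scoring groups, discrimination index as the difference in correct responses between those groups, and distractor efficiency as the share of distractors chosen by at least about 5% of examinees. A minimal sketch, with function names, example counts and the 5% cutoff as illustrative assumptions rather than data from any of the studies:

```python
# Hedged sketch of the standard item-analysis indices. The group
# sizes, counts and 5% non-functional-distractor cutoff below are
# common conventions, not figures taken from these abstracts.

def difficulty_index(correct_high, correct_low, group_size):
    """Percentage of students (high + low groups) answering correctly."""
    return 100.0 * (correct_high + correct_low) / (2 * group_size)

def discrimination_index(correct_high, correct_low, group_size):
    """How much better the high group did than the low group (-1 to +1)."""
    return (correct_high - correct_low) / group_size

def distractor_efficiency(choice_counts, correct_option, total_students, cutoff=0.05):
    """Share of distractors picked by at least `cutoff` of all students."""
    distractors = [n for opt, n in choice_counts.items() if opt != correct_option]
    functional = sum(1 for n in distractors if n / total_students >= cutoff)
    return 100.0 * functional / len(distractors)

# Example: 27 of 33 high scorers and 15 of 33 low scorers answered correctly.
p = difficulty_index(27, 15, 33)       # ~63.6% -> within the 30-70% acceptable band
d = discrimination_index(27, 15, 33)   # ~0.36 -> a good discriminator
# Option D drew only 2 of 60 students (<5%), so 2 of 3 distractors are functional.
de = distractor_efficiency({"A": 42, "B": 10, "C": 6, "D": 2}, "A", 60)
```

An item like this one would be retained under the criteria most of these studies use: average difficulty, good discrimination, and one non-functional distractor to revise.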

Author(s):  
Manju K. Nair ◽  
Dawnji S. R.

Background: Carefully constructed, high-quality multiple choice questions can serve as effective tools to improve the standard of teaching. This item analysis was performed to find the difficulty index, discrimination index and number of non-functional distractors in single best response type questions. Methods: 40 single best response type questions with four options, each carrying one mark for the correct response, were taken for item analysis. There was no negative marking, and the maximum mark was 40. Based on the scores, the evaluated answer scripts were arranged from the highest score at the top to the lowest at the bottom, and only the upper third and lower third were included. The response to each item was entered in Microsoft Excel 2010, and the difficulty index, discrimination index and number of non-functional distractors per item were calculated. Results: 40 multiple choice questions and 120 distractors were analysed in this study. 72.5% of items were good, with a difficulty index between 30% and 70%; 25% of items were difficult and 2.5% were easy. 27.5% of items showed excellent discrimination between high-scoring and low-scoring students. One item had a negative discrimination index (-0.1). There were 9 items with non-functional distractors. Conclusions: This study emphasises the need for improving the quality of multiple choice questions. Repeated evaluation by item analysis and modification of non-functional distractors may be performed to enhance the standard of teaching in Pharmacology.


Author(s):  
Suryakar Vrushali Prabhunath ◽  
Surekha T. Nemade ◽  
Ganesh D. Ghuge

Introduction: Multiple Choice Questions (MCQs) are one of the most preferred tools of assessment in medical education, as part of both formative and summative assessment. The performance of MCQs as an assessment tool can be statistically analysed by item analysis. Thus, the aim of this study was to assess the quality of MCQs by item analysis and identify the valid test items to be included in the question bank for further use. Materials and methods: Formative assessment of first-year MBBS students was carried out with 40 MCQs as part of an internal examination in Biochemistry. Item analysis was done by calculating the difficulty index (P), discrimination index (d) and number of non-functional distractors. Results: The difficulty index (P) of 65% (26) of items was well within the acceptable range, 7.5% (3) of items were too difficult, whereas 27.5% (11) of items were too easy. The discrimination index (d) of 70% (28) of items fell in the recommended category, whereas 10% (4) of items had an acceptable and 20% (8) a poor discrimination index. Out of 120 distractors, 88.33% (106) were functional and 11.66% (14) were non-functional. After considering the difficulty index, discrimination index and distractor effectiveness, 42.5% (17) of items were found ideal for inclusion in the question bank. Conclusion: Item analysis remains an essential tool to be practiced regularly to improve the quality of assessment methods, as well as a tool for obtaining feedback for instructors. Key Words: Difficulty index, Discrimination index, Item analysis, Multiple choice questions, Non-functional distractors


Author(s):  
Richa Garg ◽  
Vikas Kumar ◽  
Jyoti Maria

Background: Assessment is a dominant motivator to direct and drive students' learning. Different methods of assessment are used to assess medical knowledge in undergraduate medical education. Multiple choice questions (MCQs) are being used increasingly due to their higher reliability, validity, and ease of scoring. Item analysis enables identifying good MCQs based on difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). Methods: Students of second-year MBBS appeared in a formative assessment test comprising 50 "one best response type" MCQs of 50 marks without negative marking. All MCQs had a single stem with four options: one correct answer and three incorrect alternatives (distractors). Three question paper sets were prepared by shuffling the sequence of questions, and one of the three sets was given to each student to avoid copying from neighboring students. In total, 50 MCQs and 150 distractors were analyzed, and the indices DIF I, DI, and DE were calculated. Results: The total scores of 87 students ranged from 17 to 48 (out of 50). The mean difficulty index (DIF I) (%) was 71.6±19.4; 28% of MCQs were average and "recommended" (DIF I 30-70%). The mean discrimination index (DI) was 0.3±0.17; 16% of MCQs were "good" and 50% were "excellent", while the rest were "discard/poor" according to DI criteria. The mean distractor efficiency (DE) (%) was 63.4±33.3, and 90% of items had a DE from 100% to 33%. MCQs with a lower difficulty index (<70) had higher distractor efficiency (93.8% vs. 6.2%, p=0.004). Conclusions: Item analysis provided the data necessary for improvement in question formulation and helped in revising and improving the quality of both the items and the test. Questions with a lower difficulty index (<70) were significantly associated with a higher discrimination index (>0.15) and higher distractor efficiency.
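An association such as "lower difficulty index vs. higher distractor efficiency, p=0.004" is the kind of result a Pearson chi-square test on a 2x2 table yields. A hedged sketch with invented counts (the abstract reports only percentages, so the table below is purely illustrative, and the function names are assumptions):

```python
import math

# Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
# e.g. items cross-classified as (DIF I < 70 vs >= 70) x (high vs low DE).
def chi_square_2x2(a, b, c, d):
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Two-sided p-value for a chi-square statistic with 1 degree of freedom,
# using the identity P(X > x) = erfc(sqrt(x / 2)) for 1 df.
def p_value_1df(chi2):
    return math.erfc(math.sqrt(chi2 / 2))

# Invented 2x2 counts for illustration only (not the study's data).
stat = chi_square_2x2(30, 5, 8, 7)
p = p_value_1df(stat)
```

Note this sketch omits the Yates continuity correction often applied to small 2x2 tables; the abstract does not say which variant was used.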


Author(s):  
Ajeet Kumar Khilnani ◽  
Rekha Thaddanee ◽  
Gurudas Khilnani

<p class="abstract"><strong>Background:</strong> Multiple choice questions (MCQs) are routinely used for formative and summative assessment in medical education. Item analysis is a process of post-validation of MCQ tests, whereby items are analyzed for difficulty index, discrimination index and distractor efficiency, to obtain a range of items of varying difficulty and discrimination indices. This study was done to understand the process of item analysis and analyze an MCQ test so that a valid and reliable MCQ bank in otorhinolaryngology is developed.</p><p class="abstract"><strong>Methods:</strong> 158 students of the 7<sup>th</sup> semester were given an 8-item MCQ test. Based on the marks achieved, the high achievers (top 33%, 52 students) and low achievers (bottom 33%, 52 students) were included in the study. The responses were tabulated in a Microsoft Excel sheet and analyzed for difficulty index, discrimination index and distractor efficiency.</p><p class="abstract"><strong>Results:</strong> The mean (SD) difficulty index (Diff-I) of the 8-item test was 61.41% (11.81%). 5 items had a very good difficulty index (41% to 60%), while 3 items were easy (Diff-I &gt;60%). There was no item with Diff-I &lt;30%, i.e. a difficult item, in this test. The mean (SD) discrimination index (DI) of the test was 0.48 (0.15), and all items had very good discrimination indices of more than 0.25. Out of 24 distractors, 6 (25%) were non-functional distractors (NFDs). The mean (SD) distractor efficiency (DE) of the test was 74.62% (23.79%).</p><p class="abstract"><strong>Conclusions:</strong> Item analysis should be an integral and regular activity in each department so that a valid and reliable MCQ question bank is developed.</p>


Author(s):  
Amit P. Date ◽  
Archana S. Borkar ◽  
Rupesh T. Badwaik ◽  
Riaz A. Siddiqui ◽  
Tanaji R. Shende ◽  
...  

Background: Multiple choice questions (MCQs) are a common method for formative and summative assessment of medical students. Item analysis enables identifying good MCQs based on difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). The objective of this study was to assess the quality of MCQs currently in use in pharmacology by item analysis and to develop an MCQ bank with quality items. Methods: This cross-sectional study was conducted among 148 second-year MBBS students at NKP Salve Institute of Medical Sciences from January 2018 to August 2018. Forty MCQs, twenty from each of the two term examinations of pharmacology, were taken for item analysis. A correct response to an item was awarded one mark and each incorrect response was awarded zero. Each item was analyzed using a Microsoft Excel sheet for three parameters: DIF I, DI, and DE. Results: In the present study, the mean and standard deviation (SD) for difficulty index (%), discrimination index and distractor efficiency (%) were 64.54±19.63, 0.26±0.16 and 66.54±34.59, respectively. Out of 40 items, a large number of MCQs had an acceptable level of DIF I (70%) and were good at discriminating between higher- and lower-ability students (DI: 77.5%). Distractor efficiency related to the presence of zero or one non-functional distractor (NFD) was 80%. Conclusions: The study showed that item analysis is a valid tool to identify quality items which, regularly incorporated, can help to develop a very useful, valid and reliable question bank.


2020 ◽  
Vol 19 (1) ◽  
Author(s):  
Surajit Kundu ◽  
Jaideo M Ughade ◽  
Anil R Sherke ◽  
Yogita Kanwar ◽  
Samta Tiwari ◽  
...  

Background: Multiple-choice questions (MCQs) are the most frequently accepted tool for the evaluation of comprehension, knowledge, and application among medical students. In single best response MCQs (items), a high order of cognition of students can be assessed. It is essential to develop valid and reliable MCQs, as flawed items will interfere with unbiased assessment. The present paper attempts to discuss the art of framing well-structured items, drawing on the cited references, and puts forth a practice for committed medical educators to improve the skill of writing quality MCQs through enhanced Faculty Development Programs (FDPs). Objectives: The objective of the study was also to test the quality of MCQs by item analysis. Methods: In this study, 100 MCQs of set I or set II were distributed to 200 MBBS students of Late Shri Lakhiram Agrawal Memorial Govt. Medical College Raigarh (CG) for item analysis. Set I and set II comprised MCQs written by 60 medical faculty before and after the FDP, respectively. All MCQs had a single stem with three wrong answers and one correct answer. The data were entered in Microsoft Excel 2016 for analysis. The difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE) were the item analysis parameters used to evaluate the impact of adhering to the guidelines for framing MCQs. Results: The mean difficulty index, discrimination index, and distractor efficiency were 56.54%, 0.26, and 89.93%, respectively. Among the 100 items, 14 were of higher difficulty (DIF I < 30%), 70 were of moderate difficulty, and 16 were easy (DIF I > 60%). A total of 10 items had a very good DI (≥0.40), 32 had recommended values (0.30-0.39), and 25 were acceptable with changes (0.20-0.29). Of the 100 MCQs, 27 had a DE of 66.66% and 11 had a DE of 33.33%.
Conclusions: In this study, higher cognitive-domain MCQs increased after training, recurrent-type MCQs decreased, and MCQs with item-writing flaws were reduced; these changes were statistically significant. Nine MCQs satisfied all the criteria of item analysis.
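The banding used in the abstract above (DIF I below 30% difficult, above 60% easy; DI 0.40 and up very good, 0.30-0.39 recommended, 0.20-0.29 acceptable with changes, below that poor, with the 0.40 lower bound inferred from the adjacent ranges) can be sketched as a small classifier; the function name is an assumption:

```python
# Illustrative classifier using the cutoffs stated in the abstract
# above. The labels mirror the abstract's wording; the 0.40 lower
# bound for "very good" is inferred from the adjacent DI bands.
def classify_item(dif_i, di):
    if dif_i < 30:
        difficulty = "difficult"
    elif dif_i > 60:
        difficulty = "easy"
    else:
        difficulty = "moderate"
    if di >= 0.40:
        discrimination = "very good"
    elif di >= 0.30:
        discrimination = "recommended"
    elif di >= 0.20:
        discrimination = "acceptable with changes"
    else:
        discrimination = "poor"
    return difficulty, discrimination

# The study's mean item (DIF I 56.54%, DI 0.26) lands in the
# moderate / acceptable-with-changes cell.
band = classify_item(56.54, 0.26)
```

Other abstracts in this listing use slightly different cutoffs (e.g. a 30-70% acceptable difficulty range), so the bands would need adjusting per study.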


Author(s):  
Durgesh Prasad Sahoo ◽  
Rakesh Singh

Background: Multiple choice questions (MCQs), or items, form an important part of assessing students in different educational streams. MCQs are an objective mode of assessment whose validity and reliability depend on the characteristics of their items, i.e. difficulty index, discrimination index and distractor efficiency. The aim was to evaluate MCQs and build a bank of high-quality test items by assessing the difficulty index, discrimination index and distractor efficiency, and to revise, store or remove errant items based on the results obtained. Methods: A preliminary examination of third MBBS Part-1 was conducted by the Department of Community Medicine for 100 students. Two separate papers, each with 30 MCQs and 90 distractors, were analyzed and compared. Descriptive as well as inferential statistics were used to analyze the data. Results: Most items fell in the acceptable range of difficulty; however, some items were rejected due to a poor discrimination index. Overall, paper I was more difficult and more discriminatory, but its distractor efficiency was slightly lower than that of paper II. Conclusions: The analysis helped in the selection of quality MCQs of average difficulty, with high discrimination and three functional distractors. These should be incorporated into future evaluations to improve the test score and properly discriminate among students.


2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Imtiaz Uddin ◽  
Iftikhar Uddin ◽  
Izaz Ur Rehman ◽  
Muhammad Siyar ◽  
Usman Mehboob

Background: MCQ-type assessment in medical education is replacing the old theory style, but there are concerns regarding the quality of the multiple choice questions. Objectives: To determine the quality of multiple choice questions by item analysis. Material and Methods: This was a cross-sectional descriptive study. Fifty multiple choice questions from the final internal evaluation exams of Pharmacology in 2015 at Bacha Khan Medical College were analyzed. The quality of each MCQ item was assessed by the difficulty index (Dif.I), discrimination index (D.I) and distractor efficiency (D.E). Results: 66% of the MCQs were of moderate difficulty, 4% were easy and 30% were highly difficult. The reasons for highly difficult MCQs were analyzed as item-writing flaws (41%), irrelevant difficulty (36%) and C2 level (23%). The discrimination index showed that the majority of MCQs were excellent (DI greater than 0.25), i.e. 52%, while 32% were good (DI = 0.15-0.25) and 16% were poor. MCQs with distractor effectiveness (DE) of 4, 3, 2 and 1 were 52%, 34%, 14%, and 0%, respectively. Conclusion: Item analysis gives us different parameters, with reasons, to recheck the MCQ pool and teaching programme. A high proportion of difficult MCQs and a sizable number with poor discriminative indices were the findings of this study and need to be resolved.


Author(s):  
Donald S. Christian ◽  
Arpit C. Prajapati ◽  
Bhavik M. Rana ◽  
Viral R. Dave

Background: Multiple choice question (MCQ) assessments are becoming a popular means to assess knowledge in many screening examinations across several fields, including medicine. Single best answer MCQs may also test higher-order thinking skills; hence, MCQs remain a useful assessment tool. Objectives: 1) To evaluate multiple choice questions for quality. 2) To explore the association of the difficulty index (p-value) and discrimination index (DI) with distractor efficiency (DE). 3) To study the occurrence of functioning distractors for MCQs. Methods: A total of five MCQ test sessions were conducted among interns of a medical institute of Ahmedabad city, Gujarat, between April 2016 and March 2017, as part of their compulsory rotating postings in the department. The average participation in each session was 17 interns, with a total of 85 interns enrolled. For each test session, the questionnaire consisted of forty MCQs with 4 options including a single best answer. The MCQs were analyzed for difficulty index (DIF-I, p-value), discrimination index (DI), and distractor efficiency (DE). Results: In total, 85 interns attended the tests, which comprised 200 MCQ items (questions) from four major medical disciplines, namely Medicine, Surgery, Obstetrics & Gynecology and Community Medicine. Mean test scores of each test ranged from 36.0% to 45.8%. The reliability of the tests, the Kuder-Richardson (KR) 20, ranged from 0.29 to 0.52, and the standard error of measurement ranged from 2.59 to 2.79. Out of 200 MCQs, seventy-nine (n=79) had a discrimination index (DI) <0.15 (poor), and 61 had DI ≥0.35 (excellent). The average DE of easy items across all tests was 20.1%. Conclusions: Items of average difficulty and high discrimination with functioning distractors should be incorporated into tests to improve the validity of the assessment.
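The Kuder-Richardson 20 reliability and standard error of measurement quoted above follow the standard formulas KR-20 = k/(k-1) · (1 - Σ p_i·q_i / σ²_total) and SEM = SD · √(1 - KR-20). A minimal sketch over a students-by-items matrix of 0/1 marks (the toy matrix below is invented for illustration, not the study's data):

```python
import statistics

# Hedged sketch of KR-20 test reliability and the standard error of
# measurement (SEM). `scores` is a list of per-student rows of 0/1 marks.
def kr20(scores):
    k = len(scores[0])                        # number of items
    totals = [sum(row) for row in scores]     # per-student raw scores
    var_total = statistics.pvariance(totals)  # variance of total scores
    pq = 0.0
    for item in range(k):
        p = sum(row[item] for row in scores) / len(scores)  # item difficulty
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / var_total)

def sem(scores):
    totals = [sum(row) for row in scores]
    sd = statistics.pstdev(totals)
    return sd * (1 - kr20(scores)) ** 0.5

# Toy 4-student, 3-item matrix, invented for illustration.
r = kr20([[1, 1, 1], [1, 1, 0], [1, 0, 0], [0, 0, 0]])  # 0.75 for this matrix
```

A KR-20 of 0.29-0.52, as reported above, is generally considered low for a high-stakes test, which is consistent with the abstract's call to improve item quality.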


2017 ◽  
Author(s):  
Abdulaziz Alamri ◽  
Omer Abdelgadir Elfaki ◽  
Karimeldin A Salih ◽  
Suliman Al Humayed ◽  
Fatmah Mohammed Ahmad Althebat ◽  
...  

BACKGROUND Multiple choice questions represent one of the commonest methods of assessment in medical education. They are believed to be reliable and efficient, and their quality depends on good item construction. Item analysis is used to assess their quality by computing the difficulty index, discrimination index, distractor efficiency and test reliability. OBJECTIVE The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia. METHODS Design: cross-sectional study. Item analysis data of 21 MCQ exams were collected. Values for difficulty index, discrimination index, distractor efficiency and reliability coefficient were entered in MS Excel 2010, and descriptive statistical parameters were computed. RESULTS Twenty-one tests were analyzed. Overall, 7% of the items among all the tests were difficult, 35% were easy and 58% were acceptable. The mean difficulty of all the tests was in the acceptable range of 0.3-0.85. Items with an acceptable discrimination index ranged from 39% to 98% across tests. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items. The mean number of functioning distractors ranged from 0.77 to 2.25. The KR-20 scores lay between 0.47 and 0.97. CONCLUSIONS Overall, the quality of the items and tests was found to be acceptable. Some items were identified as problematic and need to be revised. The quality of a few tests of specific courses was questionable; these tests need to be revised and steps taken to improve the situation.

