Item Analysis: The impact of distractor efficiency on the discrimination power of multiple choice items

2019 ◽  
Author(s):  
Assad Ali Rezigalla ◽  
Elwathiq Khalid Ibrahim ◽  
Amar Babiker ElHussein

Abstract Background: Distractor efficiency of multiple choice item responses is a component of item analysis used by examiners to evaluate the credibility and functionality of the distractors. Objective: To evaluate the impact of the functionality (efficiency) of the distractors on the difficulty and discrimination indices. Methods: A cross-sectional study in which standard item analysis of an 80-item test consisting of A-type MCQs was performed. Correlation and significance of variance among the difficulty index (DIF), discrimination index (DI), and distractor efficiency (DE) were measured. Results: There is a significant moderate positive correlation between difficulty index and distractor efficiency; a high difficulty index tends to go with high distractor efficiency (and vice versa). There is a weak positive correlation between distractor efficiency and discrimination index. Conclusions: Non-functional distractors can reduce the discrimination power of multiple choice questions. More training and effort in constructing plausible options for MCQ items is essential for the validity and reliability of tests.
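The indices correlated here have simple closed forms: DIF is the percentage of examinees answering an item correctly, and DE is the share of an item's distractors chosen by at least ~5% of examinees. A minimal Python sketch of the DIF-DE correlation on simulated data; the names and the data are illustrative, not the study's code or dataset:

```python
# Minimal sketch (not the authors' code): per-item difficulty index (DIF),
# distractor efficiency (DE), and their Pearson correlation.
import numpy as np
from scipy.stats import pearsonr

def difficulty_index(choices, key):
    """DIF per item: percentage of examinees choosing the keyed option."""
    return (choices == key).mean(axis=0) * 100

def distractor_efficiency(choices, key, options=("A", "B", "C", "D"), cutoff=0.05):
    """DE per item: percentage of distractors that are functional.

    A distractor is non-functional (NFD) if chosen by fewer than
    `cutoff` (conventionally 5%) of examinees.
    """
    de = []
    for j in range(choices.shape[1]):
        distractors = [o for o in options if o != key[j]]
        functional = sum((choices[:, j] == o).mean() >= cutoff for o in distractors)
        de.append(functional / len(distractors) * 100)
    return np.array(de)

# Example: simulated responses for an 80-item, 4-option test.
rng = np.random.default_rng(0)
key = rng.choice(list("ABCD"), size=80)
choices = rng.choice(list("ABCD"), size=(200, 80))  # (examinees x items)
dif = difficulty_index(choices, key)
de = distractor_efficiency(choices, key)
r, p = pearsonr(dif, de)
print(f"DIF-DE correlation: r = {r:.2f}, p = {p:.3f}")
```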

Author(s):  
Ajeet Kumar Khilnani ◽  
Rekha Thaddanee ◽  
Gurudas Khilnani

<p class="abstract"><strong>Background:</strong> Multiple choice questions (MCQs) are routinely used for formative and summative assessment in medical education. Item analysis is a process of post validation of MCQ tests, whereby items are analyzed for difficulty index, discrimination index and distractor efficiency, to obtain a range of items of varying difficulty and discrimination indices. This study was done to understand the process of item analysis and analyze MCQ test so that a valid and reliable MCQ bank in otorhinolaryngology is developed.</p><p class="abstract"><strong>Methods:</strong> 158 students of 7<sup>th</sup> Semester were given an 8 item MCQ test. Based on the marks achieved, the high achievers (top 33%, 52 students) and low achievers (bottom 33%, 52 students) were included in the study. The responses were tabulated in Microsoft Excel Sheet and analyzed for difficulty index, discrimination index and distractor efficiency.  </p><p class="abstract"><strong>Results:</strong> The mean (SD) difficulty index (Diff-I) of 8 item test was 61.41% (11.81%). 5 items had a very good difficulty index (41% to 60%), while 3 items were easy (Diff-I &gt;60%). There was no item with Diff-I &lt;30%, i.e. a difficult item, in this test. The mean (SD) discrimination index (DI) of the test was 0.48 (0.15), and all items had very good discrimination indices of more than 0.25. Out of 24 distractors, 6 (25%) were non-functional distractors (NFDs). The mean (SD) distractor efficiency (DE) of the test was 74.62% (23.79%).</p><p class="abstract"><strong>Conclusions:</strong> Item analysis should be an integral and regular activity in each department so that a valid and reliable MCQ question bank is developed.</p>


Author(s):  
Amit P. Date ◽  
Archana S. Borkar ◽  
Rupesh T. Badwaik ◽  
Riaz A. Siddiqui ◽  
Tanaji R. Shende ◽  
...  

Background: Multiple choice questions (MCQs) are a common method for formative and summative assessment of medical students. Item analysis enables identifying good MCQs based on the difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). The objective of this study was to assess the quality of MCQs currently in use in pharmacology by item analysis and to develop an MCQ bank with quality items. Methods: This cross-sectional study was conducted in 148 second-year MBBS students at NKP Salve Institute of Medical Sciences from January 2018 to August 2018. Forty MCQs, twenty from each of the two term examinations of pharmacology, were taken for item analysis. A correct response to an item was awarded one mark and each incorrect response was awarded zero. Each item was analyzed using a Microsoft Excel sheet for three parameters: DIF I, DI, and DE. Results: In the present study, the mean ± standard deviation (SD) for difficulty index (%), discrimination index, and distractor efficiency (%) were 64.54 ± 19.63, 0.26 ± 0.16 and 66.54 ± 34.59, respectively. Out of 40 items, a large number of MCQs had an acceptable level of DIF I (70%) and were good at discriminating between higher- and lower-ability students (DI, 77.5%). The proportion of items with zero or one non-functional distractor (NFD), and hence good distractor efficiency, was 80%. Conclusions: The study showed that item analysis is a valid tool to identify quality items; regularly incorporated, it can help to develop a very useful, valid and reliable question bank.
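The NFD tallies quoted here follow the usual convention that a distractor is non-functional when fewer than 5% of examinees choose it. A small sketch of that tally from per-option selection counts; the matrix layout, keying, and 5% cutoff are assumptions for illustration, not details published by the authors:

```python
# Minimal sketch (assumed layout, not the study's workbook): count
# non-functional distractors (NFDs) per item from option-selection counts.
import numpy as np

def nfd_counts(option_counts, key_index, cutoff=0.05):
    """NFDs per item; a distractor is non-functional if chosen by < cutoff."""
    frac = option_counts / option_counts.sum(axis=1, keepdims=True)
    nfd = []
    for j, k in enumerate(key_index):
        distractor_frac = np.delete(frac[j], k)  # drop the keyed option
        nfd.append(int((distractor_frac < cutoff).sum()))
    return np.array(nfd)

# Example: 40 four-option items answered by 148 examinees, as in the study.
rng = np.random.default_rng(2)
counts = rng.multinomial(148, [0.55, 0.20, 0.15, 0.10], size=40)
key_index = np.zeros(40, dtype=int)  # illustrative: option 0 keyed correct
nfd = nfd_counts(counts, key_index)
print(f"Items with 0 or 1 NFD: {(nfd <= 1).mean():.0%}")
```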


2018 ◽  
Vol 18 (1) ◽  
pp. 68 ◽  
Author(s):  
Deena Kheyami ◽  
Ahmed Jaradat ◽  
Tareq Al-Shibani ◽  
Fuad A. Ali

Objectives: The current study aimed to carry out a post-validation item analysis of multiple choice questions (MCQs) in medical examinations in order to evaluate correlations between item difficulty, item discrimination and distraction effectiveness so as to determine whether questions should be included, modified or discarded. In addition, the optimal number of options per MCQ was analysed. Methods: This cross-sectional study was performed in the Department of Paediatrics, Arabian Gulf University, Manama, Bahrain. A total of 800 MCQs and 4,000 distractors were analysed between November 2013 and June 2016. Results: The mean difficulty index ranged from 36.70% to 73.14%. The mean discrimination index ranged from 0.20 to 0.34. The mean distractor efficiency ranged from 66.50% to 90.00%. Of the items, 48.4%, 35.3%, 11.4%, 3.9% and 1.1% had zero, one, two, three and four nonfunctional distractors (NFDs), respectively. Using three or four rather than five options in each MCQ resulted in 95% or 83.6% of items having zero NFDs, respectively. The distractor efficiency was 91.87%, 85.83% and 64.13% for difficult, acceptable and easy items, respectively (P <0.005). Distractor efficiency was 83.33%, 83.24% and 77.56% for items with excellent, acceptable and poor discrimination, respectively (P <0.005). The average Kuder-Richardson formula 20 reliability coefficient was 0.76. Conclusion: A considerable number of the MCQ items were within acceptable ranges. However, some items needed to be discarded or revised. Using three or four rather than five options in MCQs is recommended to reduce the number of NFDs and improve the overall quality of the examination.
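The Kuder-Richardson formula 20 coefficient reported above is computable directly from a 0/1 score matrix: KR-20 = k/(k−1) · (1 − Σpᵢqᵢ/σ²), with k items, pᵢ the proportion correct on item i, qᵢ = 1 − pᵢ, and σ² the variance of total scores. A minimal sketch on simulated data, not the authors' software:

```python
# Minimal sketch (assumed implementation): the KR-20 reliability coefficient.
import numpy as np

def kr20(scores):
    """KR-20 = k/(k-1) * (1 - sum(p*q) / variance of total scores)."""
    k = scores.shape[1]                       # number of items
    p = scores.mean(axis=0)                   # proportion correct per item
    q = 1 - p
    var_total = scores.sum(axis=1).var(ddof=1)  # sample variance of totals
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)

# Example on simulated 0/1 scores; a real exam would use the actual matrix.
rng = np.random.default_rng(3)
ability = rng.normal(size=(300, 1))
scores = (rng.normal(size=(300, 50)) < ability).astype(int)
print(f"KR-20 = {kr20(scores):.2f}")
```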


2021 ◽  
Vol 20 (2) ◽  
Author(s):  
Siti Khadijah Adam ◽  
Faridah Idris ◽  
Puteri Shanaz Jahn Kassim ◽  
Nor Fadhlina Zakaria ◽  
Rafidah Hod

Background: Multiple-choice questions (MCQs) are used for measuring students' progress, and they should be analyzed properly to guarantee an item's appropriateness. The analysis usually determines three indices of an item: the difficulty or passing index (PI), discrimination index (DI), and distractor efficiency (DE). Objectives: This study aimed to analyze multiple-choice questions with different numbers of options in the preclinical and clinical examinations of the medical program of Universiti Putra Malaysia. Methods: This is a cross-sectional study. Forty multiple-choice questions with four options from the preclinical examination and 80 multiple-choice questions with five options from the clinical examination in 2017 and 2018 were analyzed using an optical mark recognition machine and MS Excel. The parameters included PI, DI, and DE. Results: The average difficulty levels of the multiple-choice questions for the preclinical and clinical phase examinations were similar in 2017 and 2018, ranging from 0.55 to 0.60, and were considered 'acceptable' and 'ideal', respectively. The average DIs were similar and considered 'good' in all examinations (ranging from 0.25 to 0.31), except in the 2018 clinical phase examination, which showed 'poor' items (DI = 0.20 ± 0.11). The preclinical phase questions showed an increase in the number of 'excellent' and 'good' items in 2018, from 37.5% to 70.0%. There was an increase of 10.0% for the preclinical phase, and 6.25% for the clinical phase, in the number of items with no non-functioning distractors in 2018. Among all, the preclinical multiple-choice questions in 2018 showed the highest mean DE (71.67%). Conclusions: Our findings suggest that there was an improvement in the questions from the preclinical phase, while more training on question preparation and continuous feedback should be given to clinical phase teachers. A higher number of options did not affect the difficulty of a question; however, the discrimination power and distractor efficiency might differ.
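The quoted labels ('ideal', 'acceptable', 'good', 'poor') come from banding the two indices. A small illustrative classifier in the same vein; the cutoffs below are common rubric values and an assumption on my part, not the cutoffs published in this study:

```python
# Illustrative banding of PI and DI (assumed cutoffs, not the study's rubric).
def classify_item(pi: float, di: float) -> tuple[str, str]:
    # Difficulty / passing index band
    if 0.50 <= pi <= 0.60:
        difficulty = "ideal"
    elif 0.30 <= pi <= 0.70:
        difficulty = "acceptable"
    else:
        difficulty = "too easy" if pi > 0.70 else "too hard"
    # Discrimination index band
    if di >= 0.40:
        discrimination = "excellent"
    elif di >= 0.30:
        discrimination = "good"
    elif di >= 0.20:
        discrimination = "acceptable"
    else:
        discrimination = "poor"
    return difficulty, discrimination

print(classify_item(pi=0.58, di=0.28))  # -> ('ideal', 'acceptable')
```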


2020 ◽  
Vol 19 (1) ◽  
Author(s):  
Surajit Kundu ◽  
Jaideo M Ughade ◽  
Anil R Sherke ◽  
Yogita Kanwar ◽  
Samta Tiwari ◽  
...  

Background: Multiple-choice questions (MCQs) are the most widely accepted tool for the evaluation of comprehension, knowledge, and application among medical students. Single-best-response MCQs (items) can assess a high order of cognition in students. It is essential to develop valid and reliable MCQs, as flawed items will interfere with unbiased assessment. The present paper attempts to discuss the art of framing well-structured items, drawing on the references provided. It also puts forth a practice for committed medical educators to improve the skill of writing quality MCQs through enhanced faculty development programs (FDPs). Objectives: A further objective of the study was to test the quality of MCQs by item analysis. Methods: In this study, 100 MCQs from set I or set II were distributed to 200 MBBS students of Late Shri Lakhiram Agrawal Memorial Govt. Medical College Raigarh (CG) for item analysis. Sets I and II comprised MCQs written by 60 medical faculty before and after the FDP, respectively. All MCQs had a single stem with three incorrect options and one correct answer. The data were entered into Microsoft Excel 2016 for analysis. The difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE) were the item analysis parameters used to evaluate the impact of adhering to the guidelines for framing MCQs. Results: The mean difficulty index, discrimination index, and distractor efficiency were 56.54%, 0.26, and 89.93%, respectively. Among the 100 items, 14 were of higher difficulty (DIF I < 30%), 70 were of moderate difficulty, and 16 were easy (DIF I > 60%). A total of 10 items had very good DI (0.40), 32 had recommended values (0.30 - 0.39), and 25 were acceptable with changes (0.20 - 0.29). Of the 100 MCQs, 27 had a DE of 66.66% and 11 had a DE of 33.33%. Conclusions: In this study, higher cognitive-domain MCQs increased after training, recurrent-type MCQs decreased, and MCQs with item-writing flaws were reduced, making our results much more statistically significant. Nine MCQs satisfied all the criteria of item analysis.


2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Imtiaz Uddin ◽  
Iftikhar Uddin ◽  
Izaz Ur Rehman ◽  
Muhammad Siyar ◽  
Usman Mehboob

Background: MCQ-type assessment in medical education is replacing the old theory style. There are concerns regarding the quality of the multiple choice questions. Objectives: To determine the quality of multiple choice questions by item analysis. Material and Methods: The study was cross-sectional and descriptive. Fifty multiple choice questions from the final internal evaluation exams of Pharmacology in 2015 at Bacha Khan Medical College were analyzed. The quality of each MCQ item was assessed by the difficulty index (Dif.I), discrimination index (D.I) and distractor efficiency (D.E). Results: 66% of the MCQs were of moderate difficulty, 4% were easy and 30% were highly difficult. The reasons for the highly difficult MCQs were analyzed as item-writing flaws (41%), irrelevant difficulty (36%) and C2 level (23%). The discrimination index showed that the majority of MCQs were of excellent level (DI greater than 0.25), i.e. 52%; 32% were good (DI = 0.15-0.25) and 16% were poor. The proportions of MCQs with distractor effectiveness (DE) of 4, 3, 2 and 1 were 52%, 34%, 14% and 0%, respectively. Conclusion: Item analysis gives us different parameters, with reasons, to recheck the MCQ pool and teaching programme. High proportions of difficult MCQs and a sizable number with poor discrimination indices were the findings of this study and need to be resolved.


2017 ◽  
Author(s):  
Abdulaziz Alamri ◽  
Omer Abdelgadir Elfaki ◽  
Karimeldin A Salih ◽  
Suliman Al Humayed ◽  
Fatmah Mohammed Ahmad Althebat ◽  
...  

BACKGROUND Multiple choice questions represent one of the commonest methods of assessment in medical education. They are believed to be reliable and efficient. Their quality depends on good item construction. Item analysis is used to assess their quality by computing the difficulty index, discrimination index, distractor efficiency and test reliability. OBJECTIVE The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia. METHODS Cross-sectional study design. Item analysis data from 21 MCQ exams were collected. Values for the difficulty index, discrimination index, distractor efficiency and reliability coefficient were entered into MS Excel 2010, and descriptive statistical parameters were computed. RESULTS Twenty-one tests were analyzed. Overall, 7% of the items across all tests were difficult, 35% were easy and 58% were acceptable. The mean difficulty of all tests was in the acceptable range of 0.3-0.85. Items with an acceptable discrimination index ranged from 39% to 98% across tests. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items. The mean number of functioning distractors ranged from 0.77 to 2.25. The KR-20 scores lay between 0.47 and 0.97. CONCLUSIONS Overall, the quality of the items and tests was found to be acceptable. Some items were identified as problematic and need to be revised. The quality of a few tests of specific courses was questionable. These tests need to be revised and steps taken to improve the situation.
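The negatively discriminating items flagged in nearly every test here are items that low scorers answer correctly more often than high scorers (DI < 0), often a sign of a miskeyed or ambiguous question. A hedged sketch of how such items can be flagged from a 0/1 score matrix; the helper name, split fraction, and simulated data are illustrative, not the authors' workflow:

```python
# Hypothetical helper (not the authors' workflow): flag items with DI < 0.
import numpy as np

def negative_discriminators(scores, fraction=0.27):
    """Return indices of items with a negative discrimination index."""
    totals = scores.sum(axis=1)
    order = np.argsort(totals)               # examinees sorted by total mark
    n = max(1, int(len(totals) * fraction))  # top/bottom group size
    low, high = scores[order[:n]], scores[order[-n:]]
    di = (high.sum(axis=0) - low.sum(axis=0)) / n
    return np.flatnonzero(di < 0)

# Example: a deliberately miskeyed item shows up as a negative discriminator.
rng = np.random.default_rng(4)
ability = rng.normal(size=(200, 1))
scores = (rng.normal(size=(200, 10)) < ability).astype(int)
scores[:, 3] = 1 - scores[:, 3]              # simulate a miskeyed item
print(negative_discriminators(scores))       # expected to include item 3
```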


Author(s):  
Amani H. Elgadal ◽  
Abdalbasit A. Mariod

Background: Integration of assessment with education is vital and ought to be performed regularly to enhance learning. There are many assessment methods, such as multiple-choice questions, the Objective Structured Clinical Examination, the Objective Structured Practical Examination, etc. The selection of the appropriate method is based on the curricular blueprint and the target competencies. Although MCQs have the capacity to test students' higher cognition, critical appraisal, problem-solving and data interpretation, and to test curricular contents in a short time, there are constraints in their analysis. The authors aim to accentuate some consequential points about psychometric analysis, displaying its roles, assessing its validity and reliability in discriminating between examinees' performance, and to impart some guidance to faculty members when constructing their exam question banks. Methods: Databases such as Google Scholar and PubMed were searched for freely accessible English articles published since 2010. Synonyms and keywords were used in the search. First, the abstracts of the articles were viewed and read to select suitable matches; then full articles were perused and summarized. Finally, recapitulation of the relevant data was done to the best of the authors' knowledge. Results: The searched articles showed the capacity of MCQ item analysis to assess questions' validity and reliability, its capacity to discriminate between examinees' performance, and its use in correcting technical flaws for question bank construction. Conclusion: Item analysis is a statistical tool used to assess students' performance on a test, identify underperforming items, and determine the root causes of this underperformance for improvement, to ensure effective and accurate judgment of students' competency. Keywords: assessment, difficulty index, discrimination index, distractors, MCQ item analysis


2021 ◽  
Vol 71 (4) ◽  
pp. 1308-10
Author(s):  
Musarat Ramzan ◽  
Khola Waheed Khan ◽  
Saana Bibi ◽  
Shezadi Sabah Imran

Objective: To perform a post-examination analysis of the multiple-choice questions given in the 2nd term and send-up examinations of the years 2016 to 2018, to establish the relationship between the difficulty (DF) and discrimination (DI) indices, and to find out whether there is a significant mean difference between the two. Study Design: Cross-sectional study. Place and Duration of Study: Community Medicine Department, Wah Medical College, Wah, from Nov 2018 to Mar 2019. Methodology: A total of 390 multiple-choice questions from the second-term and send-up examinations of the years 2016, 2017 and 2018 were taken for the study. The response sheets were assessed by an optical mark reader (OMR), and the level of difficulty, power of discrimination and reliability were obtained. The data were entered into SPSS version 22. Results: A total of 315 test items were included. The results of the study showed that the reliability (KR-20) for all the examined items was in the acceptable range, i.e. ≥0.7, and no association was found between difficulty index and year (p=0.310). The mean difficulty index was found to be 0.48 ± 0.22 and the mean discrimination index 0.24 ± 0.14. Conclusion: The analysis of the 390 test items showed that most of the questions were acceptable in terms of difficulty and discrimination. There is still a need to modify and improve the testing ability of the MCQs with negative discrimination and a higher difficulty index.


2015 ◽  
Vol 05 (04) ◽  
pp. 058-061
Author(s):  
Sajitha K. ◽  
Harish S. Permi ◽  
Chandrika Rao ◽  
Kishan Prasad H. L.

Abstract Background: Multiple choice questions (MCQs) are used in the assessment of students in various fields. With this method of assessment it is possible to cover a wide range of topics in a short amount of time. However, the reliability of the test depends on the quality of the MCQs. MCQs can be evaluated based on the difficulty index (DIFI), discrimination index (DI) and distractor efficiency (DE). Objectives: To evaluate the MCQs based on the difficulty index (DIFI), discrimination index (DI) and distractor efficiency (DE) and develop a valid pool of questions; also to assess learner performance and discriminate between students of higher and lower abilities. Materials and Methods: A total of 120 students were assessed based on multiple choice questions in pathology. The number of items was 20 and the number of distractors was 60. Data were entered and analyzed in MS Excel 2007, and simple proportions, means and standard deviations were calculated. Results: Means and standard deviations for DIFI, DI and DE were 57.8 ± 17.4%, 0.27 ± 0.17 and 84.98 ± 20.2%, respectively. Out of the 20 items, 11 items had a good level of DIFI (31-60%), eight items were considered easy (DIFI >61%) and one item was considered difficult (DIFI <30%). The mean DI in the present study was 0.27 ± 0.17. Analysis of the DI showed good discrimination power in eighteen of the items. Out of the 60 distractors, nine were non-functional distractors (NFDs) and were seen in eight items; seven of these items had one NFD each and one item had two NFDs. Conclusions: The study emphasizes the importance of item analysis in the construction of good-quality MCQs and in the evaluation of learner performance.

