Impact of distractors in item analysis of multiple choice questions

Author(s):  
Ismail Burud ◽  
Kavitha Nagandla ◽  
Puneet Agarwal

Background: Item analysis is a quality assurance process that examines the performance of individual test items and measures the validity and reliability of exams. This study was performed to evaluate the quality of test items with respect to their difficulty index (DFI), discrimination index (DI), and the presence of functional and non-functional distractors (FDs and NFDs). Methods: This study was performed on a summative examination undertaken by 113 students. The analysis included 120 one-best-answer items (OBAs) and 360 distractors. Results: Of the 360 distractors, 85 were chosen by fewer than 5% of students, giving a distractor efficiency of 23.6%. About 47 (13%) items had no NFDs, while 51 (14%), 30 (8.3%), and 4 (1.1%) items contained 1, 2, and 3 NFDs respectively. The majority of items showed an excellent difficulty index (50.4%, n=42) and fair discrimination (37%, n=33). Questions with an excellent difficulty index and discriminatory index showed a statistically significant association with 1 NFD and 2 NFDs (p=0.03). Conclusions: Post-hoc evaluation of item performance in any exam is one of the quality assurance methods for identifying the best-performing items for a quality question bank. Distractor efficiency gives information on the overall quality of an item.
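As a rough illustration of the first of these indices, the following is a minimal Python sketch computing a per-item difficulty index from a 0/1 scored response matrix; the matrix and values are invented for illustration, not taken from the study.

```python
import numpy as np

# Minimal sketch: difficulty index per item from a 0/1 scored response
# matrix. The data below are illustrative, not the study's responses.
responses = np.array([
    [1, 0, 1, 1],   # each row = one student, each column = one item
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 0, 0],
])

# Difficulty index = proportion of examinees answering the item correctly.
difficulty_index = responses.mean(axis=0)
print(difficulty_index)  # -> [0.75 0.5  0.5  0.75]
```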

Author(s):  
Suryakar Vrushali Prabhunath ◽  
Surekha T. Nemade ◽  
Ganesh D. Ghuge

Introduction: Multiple choice questions (MCQs) are one of the most preferred tools of assessment in medical education, as part of both formative and summative assessment. The performance of MCQs as an assessment tool can be statistically analysed by item analysis. The aim of this study was therefore to assess the quality of MCQs by item analysis and to identify valid test items for inclusion in the question bank for further use. Materials and methods: A formative assessment of first MBBS students was carried out with 40 MCQs as part of an internal examination in Biochemistry. Item analysis was done by calculating the difficulty index (P), the discrimination index (d), and the number of non-functional distractors. Results: The difficulty index (P) of 65% (26) of items was well within the acceptable range; 7.5% (3) of items were too difficult, whereas 27.5% (11) fell in the too-easy category. The discrimination index (d) of 70% (28) of items fell in the recommended category, whereas 10% (4) had an acceptable and 20% (8) a poor discrimination index. Of 120 distractors, 88.33% (106) were functional and 11.66% (14) were non-functional. After considering the difficulty index, discrimination index, and distractor effectiveness, 42.5% (17) of items were found ideal for inclusion in the question bank. Conclusion: Item analysis remains an essential tool to be practised regularly, both to improve the quality of assessment methods and to obtain feedback for the instructors. Key Words: Difficulty index, Discrimination index, Item analysis, Multiple choice questions, Non-functional distractors
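The non-functional-distractor count used here can be sketched as follows; the 5% cut-off is the convention these abstracts use, but the function name, option labels, and sample response data are assumptions for illustration.

```python
from collections import Counter

# Hedged sketch: classify distractors as functional (chosen by >= 5% of
# examinees) or non-functional (NFD), then compute distractor efficiency.
# Function name and sample data are illustrative assumptions.
def distractor_efficiency(choices, key, n_options=4):
    counts = Counter(choices)
    n = len(choices)
    distractors = [o for o in "ABCD"[:n_options] if o != key]
    nfd = sum(1 for o in distractors if counts.get(o, 0) / n < 0.05)
    # DE drops by roughly a third for each NFD in a four-option item.
    de = (len(distractors) - nfd) / len(distractors) * 100
    return nfd, de

choices = ["A"] * 60 + ["B"] * 25 + ["C"] * 14 + ["D"] * 1  # key is "A"
print(distractor_efficiency(choices, key="A"))  # -> (1, 66.7): D is an NFD
```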


Author(s):  
Manju K. Nair ◽  
Dawnji S. R.

Background: Carefully constructed, high-quality multiple choice questions can serve as effective tools to improve the standard of teaching. This item analysis was performed to find the difficulty index, discrimination index, and number of non-functional distractors in single-best-response type questions. Methods: 40 single-best-response type questions with four options, each carrying one mark for the correct response, were taken for item analysis. There was no negative marking, and the maximum mark was 40. Based on the scores, the evaluated answer scripts were arranged from the highest score at the top to the lowest at the bottom, and only the upper third and lower third were included. The response to each item was entered in Microsoft Excel 2010, and the difficulty index, discrimination index, and number of non-functional distractors per item were calculated. Results: 40 multiple choice questions and 120 distractors were analysed in this study. 72.5% of items were good, with a difficulty index between 30% and 70%; 25% of items were difficult and 2.5% were easy. 27.5% of items showed excellent discrimination between high-scoring and low-scoring students. One item had a negative discrimination index (-0.1). There were 9 items with non-functional distractors. Conclusions: This study emphasises the need to improve the quality of multiple choice questions. Repeated evaluation by item analysis and modification of non-functional distractors may be performed to enhance the standard of teaching in Pharmacology.
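The upper-third/lower-third procedure described in these Methods can be expressed as a short Python sketch; the scores and item responses are simulated, and the function is an illustrative reading of the method, not the authors' code.

```python
import numpy as np

# Sketch of the upper-third / lower-third discrimination index.
# total_scores and item_correct are simulated, not study data.
def discrimination_index(total_scores, item_correct):
    order = np.argsort(total_scores)[::-1]      # rank students high to low
    k = len(total_scores) // 3                  # size of each extreme group
    upper, lower = order[:k], order[-k:]
    # DI = (correct in upper group - correct in lower group) / group size
    return (item_correct[upper].sum() - item_correct[lower].sum()) / k

rng = np.random.default_rng(0)
total_scores = rng.integers(0, 41, size=30)        # marks out of 40
item_correct = (rng.random(30) < 0.6).astype(int)  # 0/1 responses to one item
print(round(discrimination_index(total_scores, item_correct), 2))
```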


Author(s):  
Durgesh Prasad Sahoo ◽  
Rakesh Singh

Background: Multiple choice questions (MCQs), or items, form an important part of assessing students in different educational streams. They are an objective mode of assessment whose validity and reliability depend on the characteristics of the items, i.e. the difficulty index, discrimination index, and distractor efficiency. The aims were to evaluate MCQ items and build a bank of high-quality test items by assessing the difficulty index, discrimination index, and distractor efficiency, and to revise, store, or remove errant items based on the results obtained. Methods: A preliminary examination of Third MBBS Part-1, conducted by the Department of Community Medicine, was undertaken for 100 students. Two separate papers, each with 30 MCQ items and 90 distractors, were analyzed and compared, as sketched below. Descriptive as well as inferential statistics were used to analyze the data. Results: Most of the items fell in the acceptable range of difficulty; however, some items were rejected due to a poor discrimination index. Overall, paper I was found to be more difficult and more discriminatory, but its distractor efficiency was slightly low compared with paper II. Conclusions: The analysis helped in selecting quality MCQs with high discrimination, average difficulty, and three functional distractors. This should be incorporated into future evaluations to improve the test score and properly discriminate among the students.
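A paper-to-paper comparison of this kind might, for example, apply an independent-samples t-test to per-item difficulty indices; the choice of test and the index values below are assumptions for illustration, not the study's actual analysis.

```python
from scipy import stats

# Illustrative sketch: per-item difficulty indices of two papers
# (values assumed, not the study's data) compared with an
# independent-samples t-test.
paper1_dif = [0.42, 0.55, 0.38, 0.61, 0.47, 0.33, 0.58, 0.49]
paper2_dif = [0.63, 0.71, 0.58, 0.66, 0.54, 0.69, 0.60, 0.72]

t, p = stats.ttest_ind(paper1_dif, paper2_dif)
print(f"t = {t:.2f}, p = {p:.4f}")  # lower mean difficulty => harder paper
```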


2017 ◽  
Vol 7 (1) ◽  
pp. 2-7
Author(s):  
Md Ahsan Habib ◽  
Humayun Kabir Talukder ◽  
Md Mahbubur Rahman ◽  
Shahnila Ferdousi

Multiple choice questions (MCQs) have a considerable role in preclinical medical assessment, both formative and summative. This cross-sectional descriptive study was conducted to assess the quality of multiple-choice items (completion type) in anatomy, biochemistry, and physiology used in the 2012 and 2013 preclinical undergraduate medical examinations of a public university of Bangladesh. Each MC item had a stem and 5 options, and 1200 options were analyzed for difficulty and discrimination indices. In total, 556 options were false statements (distracters) and were analyzed for their effectiveness as distracters. The study revealed that 18.67% of options were of appropriate difficulty (0.66-0.80). The highest frequency (43.5%) of difficulty indices was in the easy class interval (0.91-1). Overall, the frequencies of items in the three subjects, in ascending order, were difficult, appropriate, marginal, and easy as per their difficulty indices. Satisfactory or better discrimination indices (≥0.20) were observed in 29.33% of options. The mean difficulty and discrimination indices observed were respectively 0.82±0.18 (95% confidence interval [CI] 0.81 to 0.83) and 0.13±0.14 (95% CI 0.122 to 0.138). Of the options, 6.75% had negative discrimination indices. Items with a difficulty index around 0.60 had maximum discriminatory power (up to 0.68), and both more difficult and easier items had less discriminatory ability. Of the distracters, 83.45% were observed to be effective, and the mean effectiveness was 22.3±18.7% (95% CI 20.75% to 23.85%). The study recommended using the method and findings to improve the quality of the items, leading to the development of a standard question bank.
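The mean ± SD with 95% confidence interval reporting used here can be reproduced in outline as follows; the index values are simulated stand-ins for the 1200 analyzed options, and the normal-approximation interval is an assumption.

```python
import numpy as np

# Sketch of mean ± SD and 95% CI reporting for difficulty indices.
# Values are simulated around the abstract's figures, not the study's data.
dif = np.clip(np.random.default_rng(1).normal(0.82, 0.18, 1200), 0, 1)

mean, sd, n = dif.mean(), dif.std(ddof=1), len(dif)
se = sd / np.sqrt(n)
ci = (mean - 1.96 * se, mean + 1.96 * se)  # normal-approximation 95% CI
print(f"{mean:.2f} ± {sd:.2f}, 95% CI {ci[0]:.3f} to {ci[1]:.3f}")
```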


Author(s):  
Abhijeet S. Ingale ◽  
Purushottam A. Giri ◽  
Mohan K. Doibale

Background: Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. However, it is said that MCQs emphasize recall of factual information rather than conceptual understanding and interpretation of concepts, and there is more to writing good MCQs than writing good questions. The objectives of the study were to assess the item and test quality of multiple choice questions, to address the learning difficulties of students, and to identify the low achievers in the test. Methods: One hundred MBBS students from a Government medical college were examined. A test comprising thirty MCQs was administered. All items were analysed for difficulty index, discrimination index, and distractor efficiency. Data were entered in MS Excel 2007 and analysed in SPSS 21 with statistical tests of significance. Results: The difficulty index of the majority (80%) of items was within the acceptable range, and 63% of items showed an excellent discrimination index. Distractor efficiency was satisfactory overall. Conclusions: Multiple choice questions of average difficulty that also have high discriminating power and good distractor efficiency should be incorporated into students' examinations.


2019 ◽  
Author(s):  
Assad Ali Rezigalla ◽  
Elwathiq Khalid Ibrahim ◽  
Amar Babiker ElHussein

Abstract Background: Distractor efficiency of multiple choice item responses is a component of item analysis used by examiners to evaluate the credibility and functionality of the distractors. Objective: To evaluate the impact of the functionality (efficiency) of distractors on the difficulty and discrimination indices. Methods: A cross-sectional study in which standard item analysis of an 80-item test consisting of A-type MCQs was performed. Correlation and significance of variance among the difficulty index (DIF), discrimination index (DI), and distractor efficiency (DE) were measured. Results: There is a significant moderate positive correlation between difficulty index and distractor efficiency, meaning that a high difficulty index tends to go with high distractor efficiency (and vice versa), and a weak positive correlation between distractor efficiency and discrimination index. Conclusions: Non-functional distractors can reduce the discriminating power of multiple choice questions. More training and effort in constructing plausible options for MCQ items is essential for the validity and reliability of tests.
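A correlation analysis of this kind might look like the following Pearson's r sketch; the simulated 80-item arrays and the use of scipy.stats.pearsonr are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

# Sketch: Pearson correlation between per-item difficulty index and
# distractor efficiency. Arrays are simulated stand-ins for an 80-item
# test, not the study's data.
rng = np.random.default_rng(2)
dif = rng.uniform(0.2, 0.9, 80)                          # difficulty per item
de = np.clip(100 * dif + rng.normal(0, 20, 80), 0, 100)  # DE loosely tied to DIF

r, p = stats.pearsonr(dif, de)
print(f"r = {r:.2f}, p = {p:.4f}")  # a positive r mirrors the reported finding
```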


2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Imtiaz Uddin ◽  
Iftikhar Uddin ◽  
Izaz Ur Rehman ◽  
Muhammad Siyar ◽  
Usman Mehboob

Background: MCQ-type assessment in medical education is replacing the old theory style, but there are concerns regarding the quality of the multiple choice questions. Objectives: To determine the quality of multiple choice questions by item analysis. Material and Methods: The study was cross-sectional and descriptive. Fifty multiple choice questions from the 2015 final internal evaluation exams of Pharmacology at Bacha Khan Medical College were analyzed. The quality of each MCQ item was assessed by the difficulty index (Dif.I), discrimination index (D.I), and distractor efficiency (D.E). Results: 66% of the MCQs were of moderate difficulty, 4% were easy, and 30% were highly difficult. The reasons identified for highly difficult MCQs were item-writing flaws (41%), irrelevant difficulty (36%), and C2 level (23%). The discrimination index showed that the majority of MCQs, 52%, were of excellent level (DI greater than 0.25); 32% were good (DI = 0.15-0.25); and 16% were poor. MCQs with a distractor effectiveness (DE) of 4, 3, 2, and 1 functioning distractors were 52%, 34%, 14%, and 0% respectively. Conclusion: Item analysis gives us different parameters, with reasons, for rechecking the MCQ pool and the teaching programme. High proportions of difficult MCQs and a sizable number with poor discrimination indices were the findings of this study and need to be resolved.


2020 ◽  
Vol 27 (12) ◽  
pp. 2749-2754
Author(s):  
Anila Jaleel ◽  
Zohra Khanum

Objectives: To evaluate the quality of MCQs and SEQs before and after a Mock examination of physiology and biochemistry, and the correlation between the scores of both, at a private medical college in Lahore. Study Design: Descriptive study; convenient sampling was done. Setting: Fatima Memorial College of Medicine and Dentistry, Lahore. Period: September 2016 to September 2017. Material & Methods: 149 students in physiology and 143 in biochemistry took the Mock examination. 45 MCQs and 9 SEQs each in biochemistry and physiology were prepared according to the table of specifications provided by the University of Health Sciences (UHS) Lahore, Pakistan. Item assessment according to Bloom's taxonomy was done, and item flaws were identified, with a cover test for the structural validity of the paper performed before the exam by two medical educationists. Item analysis with difficulty index, discrimination index, and distractor efficiency was done after the examination. Results: 84.4% of MCQs in physiology and 51.1% in biochemistry were of recall type (C1 level according to Bloom's taxonomy), and 58% of SEQs in physiology and 50% in biochemistry were C1. 20% and 28% of MCQs in physiology and biochemistry respectively passed the cover test and were without item-writing flaws. The difficulty index showed that 53.3% of MCQs in physiology and 48.8% in biochemistry needed modification, and 48.8% of MCQs in physiology and 15.5% in biochemistry needed modification on the discriminatory index. Similarly, 59.5% and 64.0% of MCQs had functional distractors in physiology and biochemistry respectively. Conclusion: The study concluded that the Mock examination in physiology and biochemistry had item-writing flaws and more MCQs of C1 level, while the majority of items showed good difficulty and discriminatory indices, with two-thirds having functional distractors.


2017 ◽  
Author(s):  
Abdulaziz Alamri ◽  
Omer Abdelgadir Elfaki ◽  
Karimeldin A Salih ◽  
Suliman Al Humayed ◽  
Fatmah Mohammed Ahmad Althebat ◽  
...  

BACKGROUND Multiple choice questions represent one of the commonest methods of assessment in medical education. They are believed to be reliable and efficient, and their quality depends on good item construction. Item analysis is used to assess their quality by computing the difficulty index, discrimination index, distractor efficiency, and test reliability. OBJECTIVE The aim of this study was to evaluate the quality of MCQs used in the College of Medicine, King Khalid University, Saudi Arabia. METHODS Design: cross-sectional study. Setting, materials and methods: item analysis data of 21 MCQ exams were collected. Values for the difficulty index, discrimination index, distractor efficiency, and reliability coefficient were entered in MS Excel 2010, and descriptive statistical parameters were computed. RESULTS Twenty-one tests were analyzed. Overall, 7% of the items among all the tests were difficult, 35% were easy, and 58% were acceptable. The mean difficulty of all the tests was in the acceptable range of 0.3-0.85. Items with an acceptable discrimination index ranged from 39% to 98% across tests. Negatively discriminating items were identified in all tests except one. All distractors were functioning in 5%-48% of items, and the mean number of functioning distractors ranged from 0.77 to 2.25. The KR-20 scores lay between 0.47 and 0.97. CONCLUSIONS Overall, the quality of the items and tests was found to be acceptable. Some items were identified as problematic and need to be revised. The quality of a few tests of specific courses was questionable; these tests need to be revised and steps taken to improve the situation.
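The KR-20 reliability coefficient cited in these Results can be computed from a 0/1 scored response matrix, as in this hedged sketch; the simulated ability model is an assumption, not the exam data.

```python
import numpy as np

# Sketch of the KR-20 reliability coefficient from a 0/1 response matrix.
def kr20(responses):
    k = responses.shape[1]                          # number of items
    p = responses.mean(axis=0)                      # per-item difficulty
    q = 1 - p
    var_total = responses.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - (p * q).sum() / var_total)

# Simulated exam: 100 students x 40 items, with a crude latent-ability
# model so responses correlate within students (an assumption).
rng = np.random.default_rng(3)
ability = rng.normal(0, 1, (100, 1))
p_correct = 1 / (1 + np.exp(-(ability - 0.2)))      # logistic link to ability
responses = (rng.random((100, 40)) < p_correct).astype(int)
print(round(kr20(responses), 2))
```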


Author(s):  
Amani H. Elgadal ◽  
Abdalbasit A. Mariod

Background: Integration of assessment with education is vital and ought to be performed regularly to enhance learning. There are many assessment methods, such as multiple-choice questions (MCQs), the Objective Structured Clinical Examination, and the Objective Structured Practical Examination; the selection of the appropriate method is based on the curricular blueprint and the target competencies. Although MCQs have the capacity to test students' higher cognition, critical appraisal, problem-solving, and data interpretation, and to test curricular content in a short time, there are constraints in their analysis. The authors aim to accentuate some consequential points about psychometric analysis, displaying its roles, assessing its validity and reliability in discriminating examinees' performance, and imparting some guidance to faculty members when constructing their exam question banks. Methods: Databases such as Google Scholar and PubMed were searched for freely accessible English articles published since 2010. Synonyms and keywords were used in the search. First, the abstracts of the articles were viewed and read to select suitable matches; then the full articles were perused and summarized. Finally, recapitulation of the relevant data was done to the best of the authors' knowledge. Results: The searched articles showed the capacity of MCQ item analysis to assess questions' validity and reliability, to discriminate examinees' performance, and to correct technical flaws for question bank construction. Conclusion: Item analysis is a statistical tool used to assess students' performance on a test, identify underperforming items, and determine the root causes of this underperformance so that improvements ensure effective and accurate judgment of students' competency. Keywords: assessment, difficulty index, discrimination index, distractors, MCQ item analysis

