Item Analysis of Multiple Choice Questions in Pharmacology

2020 ◽  
Vol 10 (2) ◽  
Author(s):  
Imtiaz Uddin ◽  
Iftikhar Uddin ◽  
Izaz Ur Rehman ◽  
Muhammad Siyar ◽  
Usman Mehboob

Background: MCQ-based assessment in medical education is replacing the old essay-style theory examination, but there are concerns regarding the quality of the multiple choice questions (MCQs). Objectives: To determine the quality of MCQs by item analysis. Material and Methods: This was a cross-sectional descriptive study. Fifty MCQs from the 2015 final internal evaluation examination of Pharmacology at Bacha Khan Medical College were analyzed. The quality of each MCQ item was assessed by the difficulty index (Dif.I), discrimination index (D.I) and distractor efficiency (D.E). Results: 66% of MCQs were of moderate difficulty, 4% were easy and 30% were highly difficult. The reasons identified for the highly difficult MCQs were item writing flaws (41%), irrelevant difficulty (36%) and C2 cognitive level (23%). The discrimination index showed that the majority of MCQs, 52%, were excellent (DI > 0.25); 32% were good (DI = 0.15-0.25) and 16% were poor. MCQs with a distractor effectiveness (DE) of 4, 3, 2 and 1 were 52%, 34%, 14% and 0% respectively. Conclusion: Item analysis yields several parameters, with reasons, for rechecking the MCQ pool and the teaching programme. A high proportion of difficult MCQs and a sizable number with poor discrimination indices were the findings of this study and need to be addressed.
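The two indices used throughout these studies can be computed directly from upper- and lower-group response counts. A minimal sketch, with illustrative group sizes and function names of my own (not taken from the paper):

```python
def difficulty_index(correct_upper, correct_lower, n_upper, n_lower):
    """Dif.I = percentage of examinees (upper + lower groups combined)
    who answered the item correctly."""
    return 100.0 * (correct_upper + correct_lower) / (n_upper + n_lower)

def discrimination_index(correct_upper, correct_lower, group_size):
    """D.I = (upper-group correct - lower-group correct) / size of one group."""
    return (correct_upper - correct_lower) / group_size

# Example: 27 examinees per group; 20 of the upper and 8 of the lower
# group answered the item correctly.
dif_i = difficulty_index(20, 8, 27, 27)   # ≈ 51.9 → moderate difficulty
di = discrimination_index(20, 8, 27)      # ≈ 0.44 → excellent (DI > 0.25)
```

The same two functions, applied item by item over a whole paper, reproduce the distribution of difficulty and discrimination categories the abstracts report.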

2017 ◽  
Vol 24 (09) ◽  
pp. 1409-1414
Author(s):  
Muhammad Zafar Iqbal ◽  
Shumaila Irum ◽  
Muhammad Sohaib Yousaf

Objectives: The main objective of this study was to judge the quality of MCQs developed by the faculty of a public sector medical college, in terms of their cognition level and item writing flaws. Setting: This study was conducted in Sheikh Zayed Medical College, Rahim Yar Khan. Duration with Dates: Data were collected between June 2014 and March 2015, and the study was completed in July 2016. Sample Size: A sample of 500 MCQs collected from 25 faculty members was included in the study. Study Design: Quantitative method. Study Type: Cross-sectional descriptive analysis. Material and Methods: This quantitative study was conducted in Sheikh Zayed Medical College Rahim Yar Khan over a six-month period after approval of the study proposal. Every faculty member is supposed to write 25 MCQs in order to become a supervisor. I collected 500 multiple choice questions, ready for submission to CPSP, from 25 faculty members. The quality of all MCQs was checked in terms of item writing flaws and cognition level by a panel of experts. Results: Absolute terms were observed in 10 (2%), vague terms in 15 (3%), implausible distractors in 75 (15%), extra detail in the correct option in 15 (3%), unfocused stem in 63 (12.6%), grammatical clues in 39 (7.8%), logical clues in 18 (3.6%), word repeats in 19 (3.8%), more than one correct answer in 21 (4.2%), unnecessary information in the stem in 37 (7.4%), lost sequence in data in 15 (3%), "all of the above" in 16 (3.2%), "none of the above" in 12 (2.4%) and negative stem in 23 (4.6%). Cognition level I (recall) was observed in 363 (72.6%), level II (interpretation) in 115 (23%) and level III (problem solving) in 22 (4.4%) items. In total, 378 (75.6%) flaws were identified; the four commonest were implausible distractors 75 (15%), unfocused stem 63 (12.6%), grammatical clues 39 (7.8%) and unnecessary information in the stem 37 (7.4%). Conclusion: It is concluded that assessment of medical students is very demanding and a need of the time. A well-constructed, peer-reviewed single-best-answer MCQ is the best instrument for this task because of its cost effectiveness, better reliability and computerized marking. It is very important to start a faculty development program in order to decrease the number of item writing flaws and to shift the cognition level towards problem solving and application of knowledge.


2020 ◽  
Vol 36 (5) ◽  
Author(s):  
Madiha Sajjad ◽  
Samina Iltaf ◽  
Rehan Ahmed Khan

Objectives: To analyze the low to medium distractor efficiency items in a multiple-choice question (MCQ) paper for item writing flaws. Methods: This qualitative study was conducted at Islamic International Medical College Rawalpindi, in October 2019. An archived item-analysis report from a midyear, medium-stakes MCQ paper of the 2nd year MBBS class was analyzed to determine the non-functional distractors (NFDs) and distractor efficiency (DE) of items, for a total of 181 MCQs. DE was categorized as low (3-4 NFDs), medium (1-2 NFDs) and high (0 NFDs). Subsequently, a qualitative document analysis of the same MCQ paper was conducted to investigate the item flaws in the low to medium DE items. The flaws identified were coded and grouped as within-option flaws, alignment flaws between options and stem/lead-in, and other flaws. Results: Distractor efficiency was high in 69 items (38%), moderate in 75 items (42%) and low in 37 items (20%). The item-writing flaws identified within distractors of low to moderate DE items included non-homogeneous length (1.8%), non-homogeneous content (8%) and repeats in distractors (1.7%). Alignment flaws between distractors and stem/lead-in were linguistic cues (10%), logic cues (12.5%) and irrelevant distractors (16%). Flaws unrelated to distractors were low cognitive level items (40%) and unnecessarily complicated stems (11.6%). Conclusions: Analyzing the low to medium DE items for item writing flaws provides valuable information about item writing errors which negatively impact distractor efficiency. doi: https://doi.org/10.12669/pjms.36.5.2439 How to cite this: Sajjad M, Iltaf S, Khan RA. Nonfunctional distractor analysis: An indicator for quality of multiple choice questions. Pak J Med Sci. 2020;36(5):---------.
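The NFD counting step that drives these DE categories can be sketched as follows, assuming the conventional threshold under which a distractor is non-functional when fewer than 5% of examinees select it (the threshold and the example counts are illustrative, not taken from the paper):

```python
def classify_distractors(option_counts, correct_option, threshold=0.05):
    """Count non-functional distractors (NFDs): distractors chosen by
    fewer than `threshold` of all examinees."""
    total = sum(option_counts.values())
    return sum(1 for opt, n in option_counts.items()
               if opt != correct_option and n / total < threshold)

def de_category(nfds):
    """Map NFD count to the DE bands used above: 0 → high, 1-2 → medium, 3-4 → low."""
    if nfds == 0:
        return "high"
    return "medium" if nfds <= 2 else "low"

# Example item: option C is the key; A and D attract almost nobody.
counts = {"A": 2, "B": 30, "C": 60, "D": 1, "E": 7}
nfds = classify_distractors(counts, "C")  # A and D fall below 5% → 2 NFDs
print(de_category(nfds))                  # medium
```

Run over every item in a paper, this yields the high/moderate/low DE distribution the study reports.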


Author(s):  
Richa Garg ◽  
Vikas Kumar ◽  
Jyoti Maria

Background: Assessment is a dominant motivator to direct and drive students' learning. Different methods of assessment are used to assess medical knowledge in undergraduate medical education. Multiple choice questions (MCQs) are used increasingly because of their higher reliability, validity, and ease of scoring. Item analysis enables identifying good MCQs based on difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). Methods: Second year MBBS students appeared in a formative assessment test comprising 50 "one best response type" MCQs of 50 marks, without negative marking. Each MCQ had a single stem with four options: one correct answer and three incorrect alternatives (distractors). Three question paper sets were prepared by shuffling the sequence of questions, and one of the three sets was given to each student to avoid copying from neighbouring students. In total 50 MCQs and 150 distractors were analyzed, and the indices DIF I, DI, and DE were calculated. Results: Total scores of the 87 students ranged from 17 to 48 (out of 50). The mean difficulty index (DIF I) (%) was 71.6±19.4; 28% of MCQs were average and "recommended" (DIF I 30-70%). The mean discrimination index (DI) was 0.3±0.17; 16% of MCQs were "good" and 50% were "excellent", while the rest fell in the "discard/poor" category according to DI criteria. The mean distractor efficiency (DE) (%) was 63.4±33.3, and 90% of items had a DE between 100% and 33%. MCQs with a lower difficulty index (<70) had a higher distractor efficiency (93.8% vs. 6.2%, p=0.004). Conclusions: Item analysis provided the data needed to improve question formulation and helped in revising and improving the quality of both the items and the test. Questions with a lower difficulty index (<70) were significantly associated with a higher discrimination index (>0.15) and higher distractor efficiency.
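The association between per-item difficulty and discrimination reported in several of these studies can be examined with a plain Pearson correlation over the item indices. A self-contained sketch (the sample values below are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Invented per-item difficulty index (%) and discrimination index values:
dif = [35, 48, 55, 62, 75, 88]
di = [0.32, 0.35, 0.30, 0.28, 0.18, 0.10]
r = pearson_r(dif, di)  # strongly negative: in this toy data, easier items discriminate less
```

A real analysis would also test `r` for significance, as the studies here do at p < 0.05.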


2020 ◽  
Vol 1 (2) ◽  
pp. 4
Author(s):  
Musarat Ramzan ◽  
Shezadi Sabah Imran ◽  
Sana Bibi ◽  
Khola Waheed Khan ◽  
Imrana Maqsood

Objective: The objective of the study was to assess the quality of multiple-choice questions (MCQs) from three different assessments in the subject of Community Medicine by computing the difficulty index, discrimination index and reliability, and to estimate the relationship between the difficulty and discrimination indices. Study Design: Retrospective observational study. Place and Duration of Study: Department of Community Medicine at Wah Medical College, from August to December 2018. Materials and Methods: Three sets of MCQs were included in the study. Means and standard deviations of the difficulty and discrimination indices were calculated, and one-way analysis of variance and the Kruskal-Wallis test were applied to the difficulty and discrimination indices. Association was determined by Pearson correlation and considered significant at a p value of < 0.05. Results: The mean difficulty indices of the first term, second term and send-up examinations were 41.5, 48.8 and 51.9 respectively; the mean discrimination indices were 0.28, 0.27 and 0.26; and the reliabilities were 0.83, 0.81 and 0.79. In the study, 72% of MCQs of the first term, 61.5% of the second term and 63% of the send-up examinations were in the 30-70% difficulty range. There was a significant difference in the difficulty indices of the three question papers. The correlation between the discrimination and difficulty indices was curvilinear and positive. Conclusion: It is concluded that all three question papers have acceptable reliability, more than 65% of MCQs have an acceptable difficulty index and about 69% have good discriminatory power.
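The reliability figures quoted (0.83, 0.81, 0.79) are typically KR-20 coefficients for dichotomously scored papers; the abstract does not name the formula, so that is an assumption of mine. A minimal sketch with made-up 0/1 score data:

```python
def kr20(score_matrix):
    """Kuder-Richardson 20 reliability.
    score_matrix: rows = examinees, columns = dichotomous (0/1) item scores."""
    n_items = len(score_matrix[0])
    n = len(score_matrix)
    # Proportion answering each item correctly, and the sum of p*q terms.
    p = [sum(row[i] for row in score_matrix) / n for i in range(n_items)]
    pq = sum(pi * (1 - pi) for pi in p)
    # Population variance of the total scores.
    totals = [sum(row) for row in score_matrix]
    m = sum(totals) / n
    var = sum((t - m) ** 2 for t in totals) / n
    return (n_items / (n_items - 1)) * (1 - pq / var)

# Perfectly consistent responses (strong examinees get everything right,
# weak examinees everything wrong) give a reliability of 1.0.
scores = [[1, 1, 1], [1, 1, 1], [0, 0, 0], [0, 0, 0]]
print(kr20(scores))  # 1.0
```

Whether population or sample variance is used in the denominator varies between texts; the choice shifts the coefficient only slightly for realistic class sizes.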


Author(s):  
Vijaya K. Suryadevara ◽  
Zaheda Bano

Background: In medical education, multiple choice questions (items) are the most frequently used assessment tools for assessing the knowledge, abilities and skills of medical students, owing to their objectivity and wide coverage in little time. However, only quality items give a valid and reliable assessment. The quality of an item is determined by the difficulty index (DIF I), discrimination index (DI) and distractor efficiency (DE). The aim of the study was to assess the quality of items in pharmacology by item analysis and to develop an MCQ bank with quality items.Methods: The present study was conducted on 150 second-year MBBS students of Guntur Medical College, AP, India. A class test containing 50 items with 150 distractors from the topic of chemotherapy was conducted. An item with the correct response was awarded one mark and a wrong response zero marks, with no negative marking. Each test item was analysed for DIF I, DI and DE, and the results were tabulated and tested statistically with the unpaired t test.Results: The mean DIF I, DI and DE values with standard deviations were 44.72±17.63%, 0.30±0.12 and 84.48±24.65% respectively. The DIF I of 32 (64%) items was in the good to excellent range (31%-60%), 9 (18%) items were easy (>61%) and 9 (18%) items were difficult (<31%). The DI of 10 (20%) items was good (0.15 to 0.24), 29 (58%) items were excellent (DI > 0.25) and 11 (22%) items were poor (DI < 0.15). Among the 150 distractors, 127 (85%) were functional distractors (FDs) and 23 (15%) were non-functional distractors (NFDs). The DE of the 33 (66%) items with no NFDs was 100%; for the 12 (24%) items with one NFD it was 66.6%; for the 4 (8%) items with 2 NFDs it was 33.3%; and for the 1 (2%) item with 3 NFDs it was 0%. When the unpaired t test was applied to the mean scores on the "difficult" and "easy" items (96.22±11.33% and 51.44±29.31% respectively), the p-value obtained was 0.00058, which was highly significant.Conclusions: The study showed that item analysis is a valid tool to identify quality items, which assess students' knowledge and abilities and discriminate effectively between different levels of student performance.
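The unpaired t test applied above reduces, for two independent groups, to the pooled-variance Student's t statistic. A sketch with illustrative data (a full analysis would additionally need the t distribution to turn the statistic into a p-value):

```python
import math
from statistics import mean, variance  # variance() is the sample (n-1) variance

def t_statistic(a, b):
    """Pooled-variance Student's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Identical groups give t = 0; well-separated means give a large |t|.
print(t_statistic([2, 4, 6], [1, 2, 3]))
```

For unequal group variances, Welch's variant (separate variances and adjusted degrees of freedom) is the safer choice.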


Author(s):  
Netravathi B. Angadi ◽  
Amitha Nagabhushana ◽  
Nayana K. Hashilkar

Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters: difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). Item analysis is a valuable yet relatively simple procedure, performed after the examination, that provides information regarding the reliability and validity of a test item. The objective of this study was to perform an item analysis of MCQs to test their validity parameters.Methods: 50 items comprising 150 distractors were selected from the formative exams. A correct response to an item was awarded one mark, with no negative marking for an incorrect response. Each item was analysed for the three parameters DIF I, DI, and DE.Results: A total of 50 items comprising 150 distractors were analysed. The DIF I of 31 (62%) items was in the acceptable range (DIF I = 30-70%), and 30 items had 'good to excellent' discrimination (DI >0.25). 10 (20%) items were too easy (DIF I >70%) and 9 (18%) items were too difficult (DIF I <30%). There were 4 items with 6 non-functional distractors (NFDs) among them, while the remaining 46 items had none.Conclusions: Item analysis is a valuable tool, as it helps us retain the valuable MCQs and discard or modify the items which are not useful. It also helps in building test-construction skills and identifies the specific areas of course content which need greater emphasis or clarity.


Author(s):  
Ajeet Kumar Khilnani ◽  
Rekha Thaddanee ◽  
Gurudas Khilnani

<p class="abstract"><strong>Background:</strong> Multiple choice questions (MCQs) are routinely used for formative and summative assessment in medical education. Item analysis is a process of post validation of MCQ tests, whereby items are analyzed for difficulty index, discrimination index and distractor efficiency, to obtain a range of items of varying difficulty and discrimination indices. This study was done to understand the process of item analysis and analyze MCQ test so that a valid and reliable MCQ bank in otorhinolaryngology is developed.</p><p class="abstract"><strong>Methods:</strong> 158 students of 7<sup>th</sup> Semester were given an 8 item MCQ test. Based on the marks achieved, the high achievers (top 33%, 52 students) and low achievers (bottom 33%, 52 students) were included in the study. The responses were tabulated in Microsoft Excel Sheet and analyzed for difficulty index, discrimination index and distractor efficiency.  </p><p class="abstract"><strong>Results:</strong> The mean (SD) difficulty index (Diff-I) of 8 item test was 61.41% (11.81%). 5 items had a very good difficulty index (41% to 60%), while 3 items were easy (Diff-I &gt;60%). There was no item with Diff-I &lt;30%, i.e. a difficult item, in this test. The mean (SD) discrimination index (DI) of the test was 0.48 (0.15), and all items had very good discrimination indices of more than 0.25. Out of 24 distractors, 6 (25%) were non-functional distractors (NFDs). The mean (SD) distractor efficiency (DE) of the test was 74.62% (23.79%).</p><p class="abstract"><strong>Conclusions:</strong> Item analysis should be an integral and regular activity in each department so that a valid and reliable MCQ question bank is developed.</p>


Author(s):  
Manju K. Nair ◽  
Dawnji S. R.

Background: Carefully constructed, high quality multiple choice questions can serve as effective tools to improve the standard of teaching. This item analysis was performed to find the difficulty index, discrimination index and number of non-functional distractors in single best response type questions.Methods: 40 single best response type questions with four options, each carrying one mark for the correct response, were taken for item analysis. There was no negative marking, and the maximum mark was 40. Based on the scores, the evaluated answer scripts were arranged from the highest score at the top to the lowest at the bottom, and only the upper third and lower third were included. The response to each item was entered in Microsoft Excel 2010, and the difficulty index, discrimination index and number of non-functional distractors per item were calculated.Results: 40 multiple choice questions and 120 distractors were analysed in this study. 72.5% of items were good, with a difficulty index between 30% and 70%; 25% of items were difficult and 2.5% were easy. 27.5% of items showed excellent discrimination between high-scoring and low-scoring students. One item had a negative discrimination index (-0.1). There were 9 items with non-functional distractors.Conclusions: This study emphasises the need for improving the quality of multiple choice questions. Hence repeated evaluation by item analysis, and modification of non-functional distractors, may be performed to enhance the standard of teaching in Pharmacology.
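The script-ranking procedure described here, including the check that flags a negative discrimination index, can be sketched as below (the data layout and names are my assumptions, not from the paper):

```python
def upper_lower_thirds(score_rows):
    """score_rows: list of per-student 0/1 item-score lists.
    Returns (upper, lower) thirds after ranking scripts by total score."""
    ranked = sorted(score_rows, key=sum, reverse=True)
    k = len(ranked) // 3
    return ranked[:k], ranked[-k:]

def negative_di_items(upper, lower):
    """Indices of items where low scorers outperform high scorers (DI < 0)."""
    flagged = []
    for i in range(len(upper[0])):
        di = (sum(r[i] for r in upper) - sum(r[i] for r in lower)) / len(upper)
        if di < 0:
            flagged.append(i)
    return flagged

# Six answer scripts, three items; the third item is answered correctly
# mainly by the weakest students, so it shows a negative DI.
rows = [[1, 1, 1], [1, 1, 0], [1, 0, 1], [0, 1, 0], [0, 0, 1], [0, 0, 1]]
upper, lower = upper_lower_thirds(rows)
print(negative_di_items(upper, lower))  # [2]
```

An item flagged this way, like the DI = -0.1 item above, is a prime candidate for review: its key may be wrong or a distractor may be arguably correct.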


Author(s):  
Amit P. Date ◽  
Archana S. Borkar ◽  
Rupesh T. Badwaik ◽  
Riaz A. Siddiqui ◽  
Tanaji R. Shende ◽  
...  

Background: Multiple choice questions (MCQs) are a common method for formative and summative assessment of medical students. Item analysis enables identifying good MCQs based on difficulty index (DIF I), discrimination index (DI) and distractor efficiency (DE). The objective of this study was to assess the quality of MCQs currently in use in pharmacology by item analysis and to develop an MCQ bank with quality items.Methods: This cross-sectional study was conducted on 148 second year MBBS students at NKP Salve Institute of Medical Sciences from January 2018 to August 2018. Forty MCQs, twenty from each of the two term examinations of pharmacology, were taken for item analysis. A correct response to an item was awarded one mark and each incorrect response zero. Each item was analyzed using a Microsoft Excel sheet for the three parameters DIF I, DI, and DE.Results: In the present study the mean and standard deviation (SD) for difficulty index (%), discrimination index and distractor efficiency (%) were 64.54±19.63, 0.26±0.16 and 66.54±34.59 respectively. Of the 40 items, a large proportion had an acceptable level of DIF (70%) and were good at discriminating between higher and lower ability students by DI (77.5%). The proportion with high distractor efficiency, corresponding to zero or one non-functional distractor (NFD), was 80%.Conclusions: The study showed that item analysis is a valid tool to identify quality items which, incorporated regularly, can help to develop a very useful, valid and reliable question bank.


2017 ◽  
Vol 7 (1) ◽  
pp. 2-7
Author(s):  
Md Ahsan Habib ◽  
Humayun Kabir Talukder ◽  
Md Mahbubur Rahman ◽  
Shahnila Ferdousi

Multiple choice questions (MCQs) have a considerable role in preclinical medical assessment, both formative and summative. This cross-sectional descriptive study was conducted to observe the quality of MC items (completion type) of anatomy, biochemistry and physiology used in the 2012 and 2013 preclinical undergraduate medical examinations of a public university of Bangladesh. Each MC item had a stem and 5 options, and 1200 options were analyzed for difficulty and discrimination indices. In total, 556 options were false statements (distracters) and were analyzed to observe their effectiveness as distracters. The study revealed that 18.67% of options were of appropriate difficulty (0.66-0.80). The highest frequency of difficulty indices (43.5%) fell in the easy class interval (0.91-1). Overall, the frequencies of items of the three subjects in ascending order were difficult, appropriate, marginal and easy as per their difficulty indices. Satisfactory or better discrimination indices (≥0.20) were observed in 29.33% of options. The mean difficulty and discrimination indices observed were respectively 0.82±0.18 (95% confidence interval [CI] 0.81 to 0.83) and 0.13±0.14 (95% CI 0.122 to 0.138). Of the options, 6.75% had negative discrimination indices. Items with a difficulty index around 0.60 had maximum discriminatory power (up to 0.68), and both more difficult and easier items had less discriminatory ability. Of the distracters, 83.45% were observed to be effective, and the mean effectiveness was 22.3±18.7% (95% CI 20.75% to 23.85%). The study recommended using the method and findings to improve the quality of the items, leading to the development of a standard Question Bank. Bangladesh Journal of Medical Education Vol.7(1) 2016: 2-7

