The Effect of the Most-Attractive-Distractor Location on Multiple-Choice Item Difficulty

2019 ◽  
Vol 88 (4) ◽  
pp. 643-659
Author(s):  
Jinnie Shin ◽  
Okan Bulut ◽  
Mark J. Gierl


Author(s):  
Bettina Hagenmüller

Abstract. The multiple-choice item format is widely used in test construction and large-scale assessment. So far, there has been little research on the impact of the position of the solution among the response options, and the few existing results are inconsistent. Since altering the order of the response options would be an easy way to create parallel items for group settings, the influence of the response options' position on item difficulty should be examined. The Linear Logistic Test Model (Fischer, 1972) was used to analyze the data of 829 students aged 8–20 years who worked on general knowledge items. It was found that the position of the solution among the response options influences item difficulty: items are easiest when the solution is in first place and more difficult when the solution is placed in a middle position or at the end of the set of response options.
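For context, the Linear Logistic Test Model referenced above decomposes Rasch item difficulty into a weighted sum of basic parameters (here, for example, the position of the solution). A minimal statement of the model in standard notation, assuming the usual symbols rather than anything given in the abstract:

```latex
% Linear Logistic Test Model (Fischer, 1972), standard formulation.
% \theta_v : ability of person v        \beta_i : difficulty of item i
% q_{ij}   : weight of basic parameter j in item i (e.g., a solution-position indicator)
% \eta_j   : effect of basic parameter j   c : normalization constant
\[
  P(X_{vi} = 1 \mid \theta_v) = \frac{\exp(\theta_v - \beta_i)}{1 + \exp(\theta_v - \beta_i)},
  \qquad
  \beta_i = \sum_{j=1}^{m} q_{ij}\,\eta_j + c .
\]
```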


2011 ◽  
Vol 35 (4) ◽  
pp. 396-401 ◽  
Author(s):  
Jonathan D. Kibble ◽  
Teresa Johnson

The purpose of this study was to evaluate whether multiple-choice item difficulty could be predicted either by a subjective judgment by the question author or by applying a learning taxonomy to the items. Eight physiology faculty members teaching an upper-level undergraduate human physiology course consented to participate in the study. The faculty members annotated questions before exams with the descriptors “easy,” “moderate,” or “hard” and classified them according to whether they tested knowledge, comprehension, or application. Overall analysis showed a statistically significant, but relatively low, correlation between the intended item difficulty and actual student scores (ρ = −0.19, P < 0.01), indicating that, as intended item difficulty increased, the resulting student scores on items tended to decrease. Although this expected inverse relationship was detected, faculty members were correct only 48% of the time when estimating difficulty. There was also significant individual variation among faculty members in the ability to predict item difficulty (χ2 = 16.84, P = 0.02). With regard to the cognitive level of items, no significant correlation was found between the item cognitive level and either actual student scores (ρ = −0.09, P = 0.14) or item discrimination (ρ = 0.05, P = 0.42). Despite the inability of faculty members to accurately predict item difficulty, the examinations were of high quality, as evidenced by reliability coefficients (Cronbach's α) of 0.70–0.92, the rejection of only 4 of 300 items in the postexamination review, and a mean item discrimination (point biserial) of 0.37. In conclusion, the effort of assigning annotations describing intended difficulty and cognitive levels to multiple-choice items is of doubtful value in terms of controlling examination difficulty. However, we also report that the process of annotating questions may enhance examination validity and can reveal aspects of the hidden curriculum.
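The psychometric indices cited above (point-biserial discrimination and Cronbach's α) are standard and straightforward to reproduce; the sketch below shows how they are commonly computed from a scored 0/1 response matrix. The data and variable names are synthetic illustrations, not the study's materials.

```python
# Illustrative computation of the item statistics named in the abstract
# (point-biserial discrimination and Cronbach's alpha) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
responses = (rng.random((120, 30)) > 0.4).astype(int)  # 120 examinees x 30 scored items

total = responses.sum(axis=1)

# Point-biserial discrimination: correlation of each item with the rest score
# (total minus the item itself, so the item does not correlate with itself).
point_biserial = np.array([
    np.corrcoef(responses[:, i], total - responses[:, i])[0, 1]
    for i in range(responses.shape[1])
])

# Cronbach's alpha: internal-consistency reliability of the whole examination.
k = responses.shape[1]
alpha = (k / (k - 1)) * (1 - responses.var(axis=0, ddof=1).sum() / total.var(ddof=1))

print(point_biserial.round(2), round(alpha, 2))
```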


Author(s):  
Ahmad S. Audeh

The original Guilford formula for estimating multiple-choice item difficulty was based on a penalty for guessing. This penalty assumed completely random (blind) guessing, which means it rests purely on mathematical estimation and on significantly violated assumptions. An authentic and fair estimation is instead expected to rest on a mixed scoring formula that adds another correction factor, integrating measurement theory with decision theory based on partial knowledge and risk-taking behavior. A new formula with two correction factors, related to guessing, partial knowledge, and risk-taking, is presented in this paper. Further studies are suggested to review the validation of the main assumptions of item theory models.
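For reference, the classical correction-for-guessing formula that the abstract starts from can be written as follows; the paper's proposed two-factor extension is not given in the abstract, so only the standard form is shown:

```latex
% Classical correction-for-guessing ("Guilford") scoring formula.
% R : number of items answered correctly
% W : number of items answered incorrectly (omitted items are not penalized)
% k : number of response options per item
\[
  S = R - \frac{W}{k-1}
\]
% Under purely random guessing among k options, the expected score gain from
% guessing is cancelled; the abstract argues this assumption is violated in
% practice because examinees guess with partial knowledge and varying risk-taking.
```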


2010 ◽  
Vol 65 (4) ◽  
pp. 257-282
Author(s):  
전유아 ◽  
신택수

1952 ◽  
Vol 43 (6) ◽  
pp. 364-368 ◽  
Author(s):  
Scarvia B. Anderson

2021 ◽  
pp. 9-10
Author(s):  
Bhoomika R. Chauhan ◽  
Jayesh Vaza ◽  
Girish R. Chauhan ◽  
Pradip R. Chauhan

Multiple-choice questions are nowadays used in competitive examinations and formative assessment to assess students' eligibility and for certification. Item analysis is the process of collecting, summarizing, and using information from students' responses to assess the quality of test items. The goal of the study was to identify the relationship between the item difficulty index and the item discrimination index in medical students' assessment. 400 final-year medical students from various medical colleges responded to 200 items constructed for the study. The responses were assessed and analysed for item difficulty index and item discriminating power, and the two indices were analysed statistically to identify their correlation. The discriminating power of the items with a difficulty index of 40%-50% was the highest. Summary and Conclusion: Items with a difficulty index in the range of 30%-70% are good discriminators.
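The difficulty index and discrimination index discussed above have standard textbook definitions; the sketch below shows one common way to compute them (percentage correct, and the upper-versus-lower 27% group difference) on synthetic data. It illustrates the definitions only, not the study's analysis.

```python
# Illustrative item analysis: difficulty index (P, percent correct) and
# discrimination index (D, upper-minus-lower group difference) on synthetic data.
import numpy as np

rng = np.random.default_rng(1)
scores = (rng.random((400, 200)) > 0.5).astype(int)  # 400 examinees x 200 items

total = scores.sum(axis=1)
order = np.argsort(total)
n = int(round(0.27 * scores.shape[0]))          # conventional 27% extreme groups
lower, upper = order[:n], order[-n:]

# Difficulty index: percentage of all examinees answering the item correctly.
difficulty_index = 100 * scores.mean(axis=0)

# Discrimination index: proportion correct in the high-scoring group minus
# proportion correct in the low-scoring group.
discrimination_index = scores[upper].mean(axis=0) - scores[lower].mean(axis=0)

# Compare discrimination for items in the 30%-70% difficulty band vs. the rest.
in_band = (difficulty_index >= 30) & (difficulty_index <= 70)
print(discrimination_index[in_band].mean(), discrimination_index[~in_band].mean())
```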

