Analyzing the Distractors of Multiple-Choice Test Items or Partitioning Multinomial Cell Probabilities with Respect to a Standard

1981 ◽  
Vol 41 (4) ◽  
pp. 1051-1068 ◽  
Author(s):  
Rand R. Wilcox

When analyzing the distractors of multiple-choice test items, it is sometimes desired to determine which of the distractors has a small probability of being chosen by a typical examinee. At present, this problem is handled in an informal manner. In particular, using an arbitrary number of examinees, the probabilities associated with the distractors are estimated and then sorted according to whether the estimated values are above or below a known constant p0. In this paper a more formal framework for solving this problem is described. The first portion of the paper considers the problem from the point of view of designing an experiment. The solution is based on a procedure similar to an indifference zone formulation of a ranking and selection problem. A later section considers methods that might be employed in a retrospective study. Brief consideration is also given to how an analysis might proceed when a test item has been altered in some way.
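
The informal rule the abstract describes, estimating the multinomial cell probabilities and sorting distractors according to whether the estimates fall above or below p0, can be sketched in a few lines. The snippet below is only an illustration of that informal procedure; the function name, the delta half-width used to mimic an indifference zone, and the example counts are assumptions for illustration, not material from the paper, whose formal treatment instead fixes the number of examinees in advance.

```python
import numpy as np

def partition_distractors(counts, p0, delta=0.0):
    """Estimate distractor-selection probabilities from examinee choice counts
    and sort them relative to a standard p0; distractors whose estimates fall
    within +/- delta of p0 are left undecided (an indifference-zone-like band)."""
    counts = np.asarray(counts, dtype=float)
    p_hat = counts / counts.sum()        # estimated multinomial cell probabilities
    below = [i for i, p in enumerate(p_hat) if p < p0 - delta]
    above = [i for i, p in enumerate(p_hat) if p > p0 + delta]
    undecided = [i for i in range(len(p_hat)) if i not in below and i not in above]
    return p_hat, below, above, undecided

# Example: 200 examinees spread over four distractors, standard p0 = 0.10
print(partition_distractors([12, 45, 30, 113], p0=0.10, delta=0.02))
```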

1998 ◽  
Vol 14 (3) ◽  
pp. 197-201 ◽  
Author(s):  
Ana R. Delgado ◽  
Gerardo Prieto

This study examined the validity of an item-writing rule concerning the optimal number of options in the design of multiple-choice test items. Although measurement textbooks typically recommend the use of four or five options - and most ability and achievement tests still follow this rule - theoretical papers as well as empirical research over a period of more than half a century reveal that three options may be more suitable for most ability and achievement test items. Previous results show that three-option items, compared with their four-option versions, tend to be slightly easier (i.e., with higher traditional difficulty indexes) without showing any decrease in discrimination. In this study, two versions (with four and three options) of 90 items comprising three computerized examinations were applied in successive years, showing the expected trend. In addition, there were no systematic changes in reliability for the tests, which adds to the evidence favoring the use of the three-option test item.
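
The comparison rests on the standard classical test theory indices: the traditional difficulty index (proportion correct) and a discrimination index. A minimal sketch of how these might be computed from a 0/1 scored response matrix is given below; the function name, the corrected point-biserial form of the discrimination index, and the toy data are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

def item_statistics(scores):
    """Classical item analysis for a 0/1 scored response matrix
    (rows = examinees, columns = items): traditional difficulty index
    (proportion correct) and corrected point-biserial discrimination."""
    scores = np.asarray(scores, dtype=float)
    difficulty = scores.mean(axis=0)                 # higher value = easier item
    total = scores.sum(axis=1)
    discrimination = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        rest = total - scores[:, j]                  # total score excluding item j
        discrimination[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return difficulty, discrimination

# Example: 5 examinees, 3 items
difficulty, discrimination = item_statistics([[1, 0, 1],
                                              [1, 1, 1],
                                              [0, 0, 1],
                                              [1, 1, 0],
                                              [0, 0, 1]])
print(difficulty, discrimination)
```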


2019 ◽  
Vol 5 (1) ◽  
pp. 10-20
Author(s):  
Kartika Pramudita ◽  
R. Rosnawati ◽  
Socheath Mam

The study aimed to describe five methods of developing parallel multiple-choice test items in mathematics at the primary education level in Yogyakarta. The study was descriptive research involving 22 mathematics teachers as respondents. Data collection was conducted through interviews and document reviews concerning the developed test packages. A questionnaire was used to gather data about the procedure the teachers employed in developing the tests. Findings show that the teachers used five methods in developing the test items, namely (1) randomizing the item numbers; (2) randomizing the sequences of response options; (3) writing items using the same contexts but different figures; (4) using anchor items; and (5) writing different items based on the same specification table. All of the respondents stated that they developed a table of specifications before writing the test items, and most of them (77%) validated the instruments with respect to content and language.
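
Two of the five reported methods, randomizing item order and randomizing the sequence of response options, are straightforward to automate. The sketch below is only an illustration of that idea under assumed names and data structures; it is not code or a procedure drawn from the study.

```python
import random

def build_parallel_form(items, seed):
    """Build one parallel test package from a common item bank by
    (1) shuffling the order of the items and (2) shuffling the order of
    the response options within each item, while tracking the answer key."""
    rng = random.Random(seed)
    form = []
    for stem, options, key in rng.sample(items, len(items)):   # method 1: new item order
        shuffled = options[:]
        rng.shuffle(shuffled)                                   # method 2: new option order
        form.append((stem, shuffled, shuffled.index(options[key])))
    return form

# Example item bank: (stem, options, index of the correct option)
bank = [
    ("3 x 4 = ?", ["7", "12", "34", "1"], 1),
    ("12 / 3 = ?", ["4", "9", "36", "15"], 0),
]
print(build_parallel_form(bank, seed=1))
print(build_parallel_form(bank, seed=2))
```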


2010 ◽  
Vol 35 (1) ◽  
pp. 12-16 ◽  
Author(s):  
Sandra L. Clifton ◽  
Cheryl L. Schriner

1988 ◽  
Vol 25 (3) ◽  
pp. 247-250 ◽  
Author(s):  
Rand R. Wilcox ◽  
Karen Thompson Wilcox ◽  
Jacob Chung
