Random Item Response Model Approaches to Evaluating Item Format Effects

2018 ◽  
Vol 8 (3) ◽  
pp. 98
Author(s):  
Yongsang Lee ◽  
Inyong Park

The PISA 2006 science assessment is composed of open-response, multiple-choice, and constructed multiple-choice items. The current study introduces random item response models to investigate item format effects on item difficulties; these models include the linear logistic test model with random item effects (the LLTM-R) and the hierarchical item response model (the hierarchical IRM). In this study the models were applied to the PISA 2006 science data set to explore the relationship between item format and item difficulty. The empirical analysis first finds that the LLTM-R and the hierarchical IRM provide item difficulty estimates equivalent to those from the Rasch model and the LLTM, and then clearly shows that item difficulties are substantially affected by item format. This result implies that items dealing with the same content may differ in difficulty depending on their format.
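The LLTM-R described above decomposes each item's difficulty into a fixed effect of its format plus a normally distributed random item residual. The following is a minimal simulation sketch of that decomposition; the format labels, effect sizes, and residual standard deviation are invented for illustration, not taken from the PISA 2006 analysis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical format effects on difficulty (logit scale):
# open response, multiple choice, constructed multiple choice.
format_effects = {"open": 0.8, "mc": -0.5, "cmc": 0.2}  # illustrative values
sigma_eps = 0.3  # SD of the random item residual

# LLTM-R: beta_i = eta_{format(i)} + eps_i,  eps_i ~ N(0, sigma^2)
formats = rng.choice(list(format_effects), size=30)
beta = np.array([format_effects[f] for f in formats]) + rng.normal(0, sigma_eps, 30)

def p_correct(theta, beta):
    """Rasch response probability for ability theta and difficulty beta."""
    return 1.0 / (1.0 + np.exp(-(theta - beta)))

# Mean difficulty by format approximately recovers the fixed format effects.
for f in format_effects:
    print(f, round(beta[formats == f].mean(), 2))
```

Under this model, two items covering the same content but assigned different formats receive systematically different expected difficulties, which is the effect the study reports.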

2001 ◽  
Vol 26 (4) ◽  
pp. 381-409 ◽  
Author(s):  
Daniel M. Bolt ◽  
Allan S. Cohen ◽  
James A. Wollack

A mixture item response model is proposed for investigating individual differences in the selection of response categories in multiple-choice items. The model accounts for local dependence among response categories by assuming that examinees belong to discrete latent classes that have different propensities towards those responses. Varying response category propensities are captured by allowing the category intercept parameters in a nominal response model (Bock, 1972) to assume different values across classes. A Markov chain Monte Carlo algorithm for the estimation of model parameters and classification of examinees is described. A real-data example illustrates how the model can be used to distinguish examinees who are disproportionately attracted to different types of distractors in a test of English usage. A simulation study evaluates item parameter recovery and classification accuracy in a hypothetical multiple-choice test designed to be diagnostic. Implications for test construction and the use of multiple-choice tests to perform cognitive diagnoses of item response patterns are discussed.
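In the nominal response model, category probabilities are a softmax over linear functions of ability, and the mixture model above lets the intercepts vary by latent class. A small sketch of that mechanism, with made-up parameters for a single 4-option item (only the intercepts differ across the two hypothetical classes):

```python
import numpy as np

def nominal_probs(theta, a, c):
    """Nominal response model (Bock, 1972): category probabilities
    p_k = softmax(a_k * theta + c_k) over an item's response categories."""
    z = np.asarray(a) * theta + np.asarray(c)
    ez = np.exp(z - z.max())  # numerically stabilized softmax
    return ez / ez.sum()

# Illustrative parameters for one 4-option item (values are invented).
# Slopes are shared across classes; class-specific intercepts capture
# differential attraction to particular distractors.
a = [1.0, -0.2, -0.4, -0.4]
c_by_class = {
    "class 1": [0.5, 0.8, -0.6, -0.7],   # drawn to distractor B
    "class 2": [0.5, -0.6, 0.9, -0.8],   # drawn to distractor C
}

for cls, c in c_by_class.items():
    print(cls, np.round(nominal_probs(theta=0.0, a=a, c=c), 2))
```

At the same ability level, the two classes produce visibly different distractor-choice probabilities, which is the signal the mixture model uses to classify examinees.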


2014 ◽  
Vol 28 (1) ◽  
pp. 1-23 ◽  
Author(s):  
Jorge Luis Bazán ◽  
Márcia D. Branco ◽  
Heleno Bolfarine

2011 ◽  
Vol 6 (3) ◽  
pp. 354-398 ◽  
Author(s):  
Katharine O. Strunk

Increased spending and decreased student performance have been attributed in part to teachers' unions and to the collective bargaining agreements (CBAs) they negotiate with school boards. However, only recently have researchers begun to examine impacts of specific aspects of CBAs on student and district outcomes. This article uses a unique measure of contract restrictiveness generated through the use of a partial independence item response model to examine the relationships between CBA strength and district spending on multiple areas and district-level student performance in California. I find that districts with more restrictive contracts have higher spending overall, but that this spending appears not to be driven by greater compensation for teachers but by greater expenditures on administrators' compensation and instruction-related spending. Although districts with stronger CBAs spend more overall and on these categories, they spend less on books and supplies and on school board–related expenditures. In addition, I find that contract restrictiveness is associated with lower average student performance, although not with decreased achievement growth.


1989 ◽  
Vol 68 (3) ◽  
pp. 987-1000 ◽  
Author(s):  
Elisabeth Tenvergert ◽  
Johannes Kingma ◽  
Terry Taerum

MOKSCAL is a program for Mokken (1971) scale analysis based on a nonparametric item response model that makes no assumptions about the functional form of the item trace lines. The only constraint the Mokken model puts on the trace lines is the assumption of double monotony; that is, the item trace lines must be nondecreasing and the lines are not allowed to cross. MOKSCAL provides three procedures of scaling: a search procedure, an evaluation of the whole set of items, and an extension of an existing scale. All procedures provide a coefficient of scalability for all items that meet the criteria of the Mokken model and an item coefficient of scalability for every item. A test of robustness of the resulting scale can be performed to analyze whether the scale is invariant across different subgroups or samples. This robustness test may serve as a goodness-of-fit test for the established scale. The program is written in FORTRAN 77 and is suitable for both mainframe and microcomputers.
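The double monotony condition described above can be sketched directly: trace lines must be nondecreasing in ability, and the difficulty ordering of items must be the same at every ability level (no crossing). Below is a minimal check on tabulated trace lines; the items and probability values are invented for illustration, and MOKSCAL itself works from observed response data rather than pre-tabulated curves:

```python
import numpy as np

def double_monotony_ok(trace):
    """Check Mokken's double monotony on tabulated trace lines.

    `trace` is an (items x ability-levels) array of P(correct).
    Condition 1: each trace line is nondecreasing in ability.
    Condition 2: lines never cross, i.e. the ordering of items by
    difficulty is identical at every ability level.
    """
    trace = np.asarray(trace)
    nondecreasing = bool(np.all(np.diff(trace, axis=1) >= 0))
    order = np.argsort(trace, axis=0)                # item order per level
    invariant = bool(np.all(order == order[:, :1]))  # same order everywhere
    return nondecreasing and invariant

# Three hypothetical items tabulated at five ability levels:
good = [[0.2, 0.3, 0.5, 0.7, 0.9],   # easiest item
        [0.1, 0.2, 0.4, 0.6, 0.8],
        [0.0, 0.1, 0.2, 0.4, 0.6]]   # hardest item
bad = [[0.2, 0.3, 0.5, 0.7, 0.9],
       [0.1, 0.4, 0.3, 0.6, 0.8]]    # dips, and crosses the first item

print(double_monotony_ok(good))  # True
print(double_monotony_ok(bad))   # False
```

Items violating either condition would be excluded from a Mokken scale, which is essentially what the program's search and evaluation procedures automate via scalability coefficients.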

