A Sharing Item Response Theory Model for Computerized Adaptive Testing

2004, Vol. 29(4), pp. 439-460
Author(s): Daniel O. Segall

A new sharing item response theory (SIRT) model is presented that explicitly models the effects of sharing item content between informants and test takers. This model is used to construct adaptive item selection and scoring rules that provide increased precision and reduced score gains when sharing occurs. The adaptive item selection rules are expressed as functions of each item's exposure rate in addition to its commonly used properties (difficulty, discrimination, and guessing parameters). Based on simulated item responses, the new item selection and scoring algorithms compare favorably to the Sympson–Hetter exposure control method, providing higher reliability and lower score gains when item content has been shared.
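As a rough sketch of the quantities this abstract refers to, the Python below computes the 3PL response probability and item information, and discounts an item's information by its exposure rate. The discount form and the `penalty` parameter are illustrative assumptions, not Segall's actual SIRT selection rule.

```python
import numpy as np

def p_3pl(theta, a, b, c):
    """3PL probability of a correct response at ability theta
    (a: discrimination, b: difficulty, c: guessing)."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def info_3pl(theta, a, b, c):
    """Fisher information of a 3PL item at ability theta."""
    p = p_3pl(theta, a, b, c)
    return a**2 * ((1.0 - p) / p) * ((p - c) / (1.0 - c))**2

def exposure_weighted_index(theta, a, b, c, exposure_rate, penalty=1.0):
    """Hypothetical selection index: item information discounted by the
    item's exposure rate (an illustrative stand-in, not the SIRT rule)."""
    return info_3pl(theta, a, b, c) * (1.0 - exposure_rate) ** penalty
```

Items would then be ranked by such an index at the current ability estimate, so that heavily exposed items are penalized even when they are highly informative.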

2012, Vol. 40(10), pp. 1679-1694
Author(s): Wen-Wei Liao, Rong-Guey Ho, Yung-Chin Yen, Hsu-Chen Cheng

In computerized adaptive testing (CAT), aberrant responses such as careless errors and lucky guesses may introduce substantial bias into ability estimation during the dynamic administration of test items. We investigated the robustness of the 4-parameter logistic item response theory (4PL IRT) model (Barton & Lord, 1981) in comparison with the 3-parameter logistic (3PL) IRT model (Birnbaum, 1968), applying additional precision and efficiency measures to evaluate the 4PL IRT model. Precision was measured by the estimation bias and the mean absolute differences (MAD) between estimated and actual abilities; administrative efficiency was measured by the number of items required to satisfy the stopping rule, with fewer items indicating greater efficiency. Our results indicate that the 4PL IRT model provides more efficient and robust ability estimation than the 3PL model.
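For concreteness, here is a minimal sketch (function names are illustrative) of the 4PL response function and the two precision measures named above; the 3PL model is the special case with upper asymptote d = 1.

```python
import numpy as np

def p_4pl(theta, a, b, c, d):
    """4PL probability of a correct response: the lower asymptote c absorbs
    lucky guesses, the upper asymptote d < 1 absorbs careless errors."""
    return c + (d - c) / (1.0 + np.exp(-a * (theta - b)))

def bias_and_mad(theta_hat, theta_true):
    """Estimation bias and mean absolute difference (MAD) between
    estimated and actual abilities."""
    err = np.asarray(theta_hat, dtype=float) - np.asarray(theta_true, dtype=float)
    return err.mean(), np.abs(err).mean()
```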


2021, Article No. 001316442199841
Author(s): Pere J. Ferrando, David Navarro-González

Item response theory “dual” models (DMs), in which both items and individuals are viewed as sources of differential measurement error, have so far been proposed only for unidimensional measures. This article proposes two multidimensional extensions of existing DMs: the M-DTCRM (multidimensional dual Thurstonian continuous response model), intended for (approximately) continuous responses, and the M-DTGRM (multidimensional dual Thurstonian graded response model), intended for ordered-categorical responses (including binary). A rationale for the extension to the multiple-content-dimensions case, based on the concept of the multidimensional location index, is first proposed and discussed. The models are then described using both the factor-analytic and the item response theory parameterizations. Procedures for (a) calibrating the items, (b) scoring individuals, (c) assessing model appropriateness, and (d) assessing measurement precision are finally discussed. Simulation results suggest that the proposal is quite feasible, and an illustrative example based on personality data is provided. The proposals should be of particular interest for multidimensional questionnaires in which the number of items per scale would not be enough to arrive at stable estimates if the existing unidimensional DMs were fitted on a separate-scale basis.
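For orientation, the sketch below implements the standard unidimensional Samejima graded response model on which the graded-response variant builds; it is not the authors' dual Thurstonian parameterization, and the function name is an illustrative assumption.

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Category probabilities P(X = k | theta), k = 0..K-1, under the
    graded response model; thresholds must be strictly increasing."""
    # Cumulative probabilities P(X >= k): 1 below the lowest category,
    # 0 above the highest.
    cum = np.concatenate((
        [1.0],
        1.0 / (1.0 + np.exp(-a * (theta - np.asarray(thresholds)))),
        [0.0],
    ))
    return -np.diff(cum)  # P(X = k) = P(X >= k) - P(X >= k + 1)
```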


1991, Vol. 1991(1), pp. i-31
Author(s): Martha L. Stocking, Len Swanson, Mari Pearlman

2019, Vol. 80(4), pp. 695-725
Author(s): Leah M. Feuerstahler, Niels Waller, Angus MacDonald

Although item response models have grown in popularity in many areas of educational and psychological assessment, there are relatively few applications of these models in experimental psychopathology. In this article, we explore the use of item response models in the context of a computerized cognitive task designed to assess visual working memory capacity in people with psychosis as well as healthy adults. We begin our discussion by describing how item response theory can be used to evaluate and improve unidimensional cognitive assessment tasks in various examinee populations. We then suggest how computerized adaptive testing can be used to improve the efficiency of cognitive task administration. Finally, we explore how these ideas might be extended to multidimensional item response models that better represent the complex response processes underlying task performance in psychopathological populations.
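As a sketch of the adaptive administration idea mentioned above, the following generic maximum-information selection rule for a 2PL item bank shows how a CAT picks the next item at the current ability estimate; it is a textbook rule with hypothetical names, not the authors' task-specific procedure.

```python
import numpy as np

def info_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def select_next_item(theta_hat, item_bank, administered):
    """Index of the unadministered item with maximum information at theta_hat."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates, key=lambda i: info_2pl(theta_hat, *item_bank[i]))

# Example: a tiny bank of (a, b) pairs with two items already administered.
bank = [(1.2, -0.5), (0.8, 0.0), (1.5, 0.7), (1.0, 1.2)]
next_item = select_next_item(theta_hat=0.3, item_bank=bank, administered={0, 2})
```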

