Effects of Item Calibration Errors on Computerized Adaptive Testing under Cognitive Diagnosis Models

Hung-Yu Huang (2018). Vol 35 (3), pp. 437-465.

Xiao-Juan Tang, Shu-Liang Ding, Zong-Huo Yu (2013). Vol 20 (4), pp. 616-626.

Rebecca D. Hetter, Daniel O. Segall, Bruce M. Bloxom (1994). Vol 18 (3), pp. 197-204.

Miguel A. Sorrel, Francisco José Abad, Pablo Nájera (2020). pp. 014662162097768.

Decisions about how to calibrate an item bank can have major implications for the subsequent performance of adaptive algorithms. One such decision is model selection, which becomes problematic in cognitive diagnosis computerized adaptive testing given the wide range of models available. This article aims to determine whether model selection indices can be used to improve the performance of adaptive tests. A simulation study considered three factors: calibration sample size, Q-matrix complexity, and item bank length. Results based on the true item parameters, on a general model, and on single reduced models were compared with those obtained from a combination of appropriate models. The findings indicate that fitting a single reduced model or a general model will not generally provide optimal results, whereas results based on the combination of models selected by the fit index were consistently closest to those obtained with the true item parameters. The practical implications include improved classification accuracy and, consequently, shorter testing time, as well as a more balanced use of the item bank. An R package, cdcatR, was developed to facilitate adaptive applications in this context.
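The item-level model-selection step described above can be sketched as follows. This is a hypothetical illustration, not code from the cdcatR package: for each item, candidate cognitive diagnosis models (e.g., a reduced model such as DINA versus a general model such as G-DINA) are compared by an information criterion, and the best-fitting model is retained. The model names, log-likelihoods, and parameter counts below are made up for illustration.

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: smaller values indicate better fit
    after penalizing model complexity."""
    return 2 * n_params - 2 * log_likelihood

def select_model(candidates):
    """candidates: dict mapping model name -> (log_likelihood, n_params).
    Returns the name of the model with the lowest AIC."""
    return min(candidates, key=lambda name: aic(*candidates[name]))

# Illustrative (made-up) calibration results for a single item:
item_fits = {
    "DINA":   (-480.2, 3),  # reduced model, fewest parameters
    "ACDM":   (-474.9, 5),  # additive reduced model
    "G-DINA": (-474.1, 8),  # general model, most parameters
}

best = select_model(item_fits)  # AIC: 966.4, 959.8, 964.2 -> "ACDM"
```

Applied item by item over the bank, this yields the "combination of appropriate models" the abstract compares against fitting a single reduced or general model throughout.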

