Computerized Adaptive Tests
Recently Published Documents


TOTAL DOCUMENTS

74
(FIVE YEARS 12)

H-INDEX

18
(FIVE YEARS 2)

2020 ◽  
pp. 001316442097965
Author(s):  
Adam E. Wyse

An essential question when computing test–retest and alternate forms reliability coefficients is how many days there should be between tests. This article uses data from reading and math computerized adaptive tests to explore how the number of days between tests affects alternate forms reliability coefficients. Results suggest that the highest alternate forms reliability coefficients were obtained when the second test was administered at least 2 to 3 weeks after the first test. Although reliability coefficients remained similar beyond that point, results suggested a potential tradeoff in waiting longer to retest, as student ability tended to grow with time. These findings indicate that if keeping student ability similar is a concern, the best time to retest is shortly after 3 weeks have passed since the first test. Additional analyses suggested that alternate forms reliability coefficients were lower when tests were shorter and that narrowing the ability distribution of examinees on the first test also affected estimates. Results did not appear to be strongly affected by differences in first-test average ability, student demographics, or whether students took the test under standard or extended time. For math and reading tests like the ones analyzed in this article, the optimal retest interval is therefore shortly after 3 weeks have passed since the first test.
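The quantity at the heart of this abstract is easy to sketch: the hedged Python illustration below (not the article's code; the column names and simulated data are hypothetical) estimates alternate forms reliability as the Pearson correlation between scores on the two forms, binned by the number of days between administrations. The simulated growth term mimics the tradeoff the abstract describes, where a longer wait can stabilize the coefficient while ability drifts upward.

```python
# Illustrative sketch only: alternate forms reliability as the Pearson
# correlation between form-1 and form-2 scores within retest-interval bins.
# Column names and the simulated data are hypothetical.
import numpy as np
import pandas as pd

def alternate_forms_reliability(scores: pd.DataFrame) -> pd.Series:
    """Correlate form-1 and form-2 scores within weekly retest-interval bins."""
    weeks = (scores["days_between_tests"] // 7).rename("weeks_between_tests")
    return (
        scores.groupby(weeks)
        .apply(lambda g: g["score_form1"].corr(g["score_form2"]))
        .rename("alternate_forms_reliability")
    )

# Simulated data: true ability grows slightly with elapsed days, so a longer
# wait trades a more stable coefficient against ability growth.
rng = np.random.default_rng(0)
n = 5000
days = rng.integers(1, 60, size=n)
theta1 = rng.normal(0, 1, size=n)
theta2 = theta1 + 0.01 * days + rng.normal(0, 0.2, size=n)
df = pd.DataFrame({
    "days_between_tests": days,
    "score_form1": theta1 + rng.normal(0, 0.4, size=n),  # measurement error
    "score_form2": theta2 + rng.normal(0, 0.4, size=n),
})
print(alternate_forms_reliability(df))
```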


2020 ◽  
Vol 59 (11) ◽  
pp. 1264-1273 ◽  
Author(s):  
Robert D. Gibbons ◽  
David J. Kupfer ◽  
Ellen Frank ◽  
Benjamin B. Lahey ◽  
Brandie A. George-Milford ◽  
...  

2020 ◽  
Vol 44 (7-8) ◽  
pp. 531-547
Author(s):  
Johan Braeken ◽  
Muirne C. S. Paap

Fixed-precision between-item multidimensional computerized adaptive tests (MCATs) are becoming increasingly popular. The current generation of item-selection rules used in these types of MCATs typically optimizes a single-valued objective criterion for multivariate precision (e.g., Fisher information volume). In contrast, when all dimensions are of interest, the stopping rule is typically defined in terms of a required fixed marginal precision per dimension. This asymmetry between multivariate precision for selection and marginal precision for stopping, which is not present in unidimensional computerized adaptive tests, has received little attention thus far. In this article, we discuss this selection-stopping asymmetry and its consequences, and introduce and evaluate three alternative item-selection approaches. These alternatives are computationally inexpensive, easy to communicate and implement, and result in effective fixed-marginal-precision MCATs with shorter test lengths than those produced by the current generation of item-selection approaches.
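A minimal sketch of the asymmetry, assuming between-item 2PL items and ignoring any latent-correlation term in the information matrix: selection below maximizes a single-valued multivariate criterion (the determinant of the Fisher information matrix, a common D-optimality rule), while stopping checks a fixed marginal precision per dimension. The 0.3 standard-error target, the toy pool, and the skipped ability update are illustrative assumptions, not the article's three alternative approaches.

```python
# Hedged sketch of selection vs. stopping in a between-item MCAT.
import numpy as np

def item_information(theta, a, b, dim, n_dims):
    """Fisher information contribution of one between-item 2PL item."""
    p = 1.0 / (1.0 + np.exp(-a * (theta[dim] - b)))
    info = np.zeros((n_dims, n_dims))
    info[dim, dim] = a**2 * p * (1.0 - p)
    return info

def select_item(theta, test_info, pool):
    """Selection: maximize det(test_info + candidate info) (D-optimality)."""
    dets = [np.linalg.det(test_info + item_information(theta, a, b, d, test_info.shape[0]))
            for (a, b, d) in pool]
    return int(np.argmax(dets))

def stop(test_info, target_se=0.3):
    """Stopping: every marginal standard error below the fixed target."""
    marginal_se = np.sqrt(np.diag(np.linalg.inv(test_info)))
    return bool(np.all(marginal_se < target_se))

# Toy 2-dimensional pool; items are (discrimination a, difficulty b, dimension).
pool = [(1.2, 0.0, 0), (0.8, -0.5, 1), (1.5, 0.3, 1), (1.0, 0.7, 0)]
theta_hat = np.zeros(2)          # ability re-estimation omitted for brevity
test_info = np.eye(2)            # prior precision from a standard normal prior
while pool and not stop(test_info):
    k = select_item(theta_hat, test_info, pool)
    a, b, d = pool.pop(k)
    test_info += item_information(theta_hat, a, b, d, 2)
```

The point of the sketch is that `select_item` and `stop` summarize the same information matrix in different ways, which is exactly the asymmetry the abstract targets.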


2020 ◽  
Vol 9 (7) ◽  
pp. 3 ◽  
Author(s):  
Eva K. Fenwick ◽  
John Barnard ◽  
Alfred Gan ◽  
Bao Sheng Loe ◽  
Jyoti Khadka ◽  
...  

2020 ◽  
Vol 143 ◽  
pp. 113066 ◽  
Author(s):  
Javier Rodríguez-Cuadrado ◽  
David Delgado-Gómez ◽  
Juan C. Laria ◽  
Sara Rodríguez-Cuadrado

2020 ◽  
Vol 80 (5) ◽  
pp. 955-974
Author(s):  
Lihong Yang ◽  
Mark D. Reckase

The present study extended the p-optimality method to the multistage computerized adaptive test (MST) context to develop optimal item pools that support different MST panel designs under different test configurations. Using the Rasch model, simulated optimal item pools were generated with and without practical exposure-control constraints. A total of 72 simulated optimal item pools were generated and evaluated, both for the overall sample and conditionally on ability, using various statistical measures. Results showed that the optimal item pools built with the p-optimality method provided sufficient measurement accuracy under all simulated MST panel designs. Exposure control affected item pool size, but not item distributions or item pool characteristics. This study demonstrated that the p-optimality method can adapt to MST item pool design, facilitate the MST assembly process, and improve scoring accuracy.
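A rough sketch of the intuition behind this kind of pool design, under stated assumptions (the bin width, routing noise, and three-stage structure below are invented for illustration and are not the study's implementation): under the Rasch model an item is most informative when its difficulty matches the examinee's ability, so an optimal pool blueprint can be built by tallying, over simulated examinees, how many items each difficulty bin must supply.

```python
# Hedged illustration of bin-based optimal pool design for the Rasch model.
import numpy as np

def rasch_information(theta, b):
    """Fisher information of a Rasch item with difficulty b at ability theta."""
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return p * (1.0 - p)          # maximal (0.25) when b == theta

def optimal_bin_counts(needed_difficulties, bin_width=0.4):
    """Tally how many items each difficulty bin must supply."""
    bins = np.round(np.asarray(needed_difficulties) / bin_width) * bin_width
    centers, counts = np.unique(np.round(bins, 2), return_counts=True)
    return dict(zip(centers.tolist(), counts.tolist()))

# Simulated examinees routed through a hypothetical three-stage panel: each
# stage "needs" items whose difficulty tracks the provisional ability estimate.
rng = np.random.default_rng(1)
abilities = rng.normal(0, 1, size=1000)
needed = np.concatenate([abilities + rng.normal(0, 0.2, size=1000)
                         for _ in range(3)])
pool_blueprint = optimal_bin_counts(needed)
```

Exposure control would enter such a sketch by inflating the tallied counts for heavily used bins, which is consistent with the abstract's finding that it changes pool size rather than item distributions.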

