Item Selection and Hypothesis Testing for the Adaptive Measurement of Change

2010 ◽  
Vol 34 (4) ◽  
pp. 238-254 ◽  
Author(s):  
Matthew D. Finkelman ◽  
David J. Weiss ◽  
Gyenam Kim-Kang


2008 ◽
Vol 216 (1) ◽  
pp. 49-58 ◽  
Author(s):  
Gyenam Kim-Kang ◽  
David J. Weiss

Adaptive measurement of change (AMC) was investigated by examining the recovery of true change. Monte Carlo simulation was used to compare three conventional testing (CT) methods with AMC. The CTs estimated individual change moderately well when the test was highly discriminating and when the θ level matched the test difficulty. However, AMC measured individual change equally well across the entire range of θ. AMC with more discriminating items produced the most precise estimates of individual change. AMC was shown to be superior to CTs under all conditions examined. In addition, AMC is efficient – it can dramatically reduce the number of items necessary to measure individual change. The results indicate that AMC is a viable and effective method for measuring individual change.
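The comparison above hinges on how well each method recovers true change in simulation. A minimal sketch of the usual Monte Carlo recovery summaries (bias, RMSE, and the correlation between true and estimated change); the function, data, and noise level here are illustrative, not the study's design:

```python
import math
import random

def change_recovery(true_change, estimated_change):
    """Bias, RMSE, and correlation between true and estimated change
    scores across simulees in a Monte Carlo study."""
    n = len(true_change)
    errors = [e - t for t, e in zip(true_change, estimated_change)]
    bias = sum(errors) / n
    rmse = math.sqrt(sum(err ** 2 for err in errors) / n)
    mt = sum(true_change) / n
    me = sum(estimated_change) / n
    cov = sum((t - mt) * (e - me)
              for t, e in zip(true_change, estimated_change)) / n
    sd_t = math.sqrt(sum((t - mt) ** 2 for t in true_change) / n)
    sd_e = math.sqrt(sum((e - me) ** 2 for e in estimated_change) / n)
    corr = cov / (sd_t * sd_e)
    return bias, rmse, corr

# True change varies across simulees; estimates add measurement noise
rng = random.Random(0)
true = [rng.gauss(0.5, 0.5) for _ in range(1000)]
est = [t + rng.gauss(0, 0.3) for t in true]
bias, rmse, corr = change_recovery(true, est)
```

A more discriminating item pool corresponds to a smaller noise standard deviation in this sketch, which drives RMSE down and the true-estimated correlation up.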


2021 ◽  
pp. 001316442110339
Author(s):  
Allison W. Cooperman ◽  
David J. Weiss ◽  
Chun Wang

Adaptive measurement of change (AMC) is a psychometric method for measuring intra-individual change on one or more latent traits across testing occasions. Three hypothesis tests—a Z test, likelihood ratio test, and score ratio index—have demonstrated desirable statistical properties in this context, including low false positive rates and high true positive rates. However, the extant AMC research has assumed that the item parameter values in the simulated item banks were devoid of estimation error. This assumption is unrealistic for applied testing settings, where item parameters are estimated from a calibration sample before test administration. Using Monte Carlo simulation, this study evaluated the robustness of the common AMC hypothesis tests to the presence of item parameter estimation error when measuring omnibus change across four testing occasions. Results indicated that item parameter estimation error had at most a small effect on false positive rates and latent trait change recovery, and these effects were largely explained by the computerized adaptive testing item bank information functions. Differences in AMC performance as a function of item parameter estimation error and choice of hypothesis test were generally limited to simulees with particularly low or high latent trait values, where the item bank provided relatively lower information. These simulations highlight how AMC can accurately measure intra-individual change in the presence of item parameter estimation error when paired with an informative item bank. Limitations and future directions for AMC research are discussed.
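The Z test named above compares occasion-specific trait estimates against the standard error of their difference. A minimal sketch of the standard two-occasion form (the study's omnibus test spans four occasions; the function name and cutoff here are illustrative):

```python
import math

def amc_z_test(theta_1, se_1, theta_2, se_2):
    """Two-occasion Z statistic: the difference between occasion-specific
    trait estimates divided by the standard error of that difference."""
    return (theta_2 - theta_1) / math.sqrt(se_1 ** 2 + se_2 ** 2)

# An examinee estimated at theta = -0.2 (SE 0.30) at time 1 and
# theta = 0.5 (SE 0.25) at time 2
z = amc_z_test(-0.2, 0.30, 0.5, 0.25)
significant = abs(z) > 1.96  # two-tailed decision at alpha = .05
```

Because the standard errors come from the adaptive tests' information at each occasion, a less informative item bank inflates the denominator and lowers power, consistent with the boundary-of-θ results reported above.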


2017 ◽  
Vol 33 (6) ◽  
pp. 409-421 ◽  
Author(s):  
Anne B. Janssen ◽  
Martin Schultze ◽  
Adrian Grötsch

Abstract. Employees’ innovative work behavior is a facet of proactive work behavior that is of increasing interest to industrial and organizational psychologists. Because proactive personality and supervisor support are key predictors of innovative work behavior, reliable and valid employee ratings of these two constructs are crucial for organizations’ planning of personnel development measures. However, the time available for assessments is often limited. The present study therefore aimed at constructing reliable short scales for two measures of proactive personality and supervisor support. For this purpose, we compared an innovative approach to item selection, namely Ant Colony Optimization (ACO; Leite, Huang, & Marcoulides, 2008), with classical item selection procedures. For proactive personality, the two item selection approaches provided similar results: both five-item short forms showed satisfactory reliability and a small, negligible loss of criterion validity. For a two-dimensional supervisor support scale, ACO found a reliable and valid short form whose psychometric properties were in accordance with those of the parent form, whereas a manually constructed supervisor support short form showed rather poor model fit and a serious loss of validity. We discuss the benefits and shortcomings of ACO compared with classical item selection approaches, along with recommendations for its application.
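Ant Colony Optimization selects items the way ants select paths: candidate short forms are sampled in proportion to pheromone, and items that appear in well-fitting forms receive extra pheromone. A self-contained sketch, assuming Cronbach's alpha as the sole optimization criterion (the study also weighed criterion validity; every name and parameter here is illustrative):

```python
import random

def cronbach_alpha(data, items):
    """Cronbach's alpha for the selected item columns of a
    respondents-by-items score matrix."""
    k = len(items)

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var([row[i] for row in data]) for i in items)
    total_var = var([sum(row[i] for i in items) for row in data])
    return k / (k - 1) * (1 - item_vars / total_var)

def weighted_sample(rng, weights, size):
    """Draw `size` distinct indices with probability proportional to weight."""
    chosen, avail, w = [], list(range(len(weights))), list(weights)
    for _ in range(size):
        r = rng.random() * sum(w)
        acc = 0.0
        for pos, wi in enumerate(w):
            acc += wi
            if r <= acc:
                break
        chosen.append(avail.pop(pos))
        w.pop(pos)
    return chosen

def aco_select(data, n_items, short_len, n_ants=20, n_iter=30,
               evaporation=0.9, seed=0):
    """Each ant samples a candidate short form guided by pheromone;
    the best form of each iteration deposits pheromone on its items."""
    rng = random.Random(seed)
    pheromone = [1.0] * n_items
    best_items, best_alpha = None, float("-inf")
    for _ in range(n_iter):
        it_items, it_alpha = None, float("-inf")
        for _ in range(n_ants):
            items = weighted_sample(rng, pheromone, short_len)
            a = cronbach_alpha(data, items)
            if a > it_alpha:
                it_items, it_alpha = items, a
        if it_alpha > best_alpha:
            best_items, best_alpha = it_items, it_alpha
        pheromone = [p * evaporation for p in pheromone]
        for i in it_items:  # reinforce the iteration's best form
            pheromone[i] += max(it_alpha, 0.0)
    return sorted(best_items), best_alpha

# Simulate 200 respondents answering 12 items driven by one common trait
rng = random.Random(1)
data = []
for _ in range(200):
    trait = rng.gauss(0, 1)
    data.append([trait + rng.gauss(0, 1) for _ in range(12)])
short, alpha = aco_select(data, n_items=12, short_len=5)
```

In applied use the sampled scale would be scored against the full target function (reliability plus validity and model fit), not alpha alone; the pheromone mechanism is unchanged.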


Methodology ◽  
2018 ◽  
Vol 14 (4) ◽  
pp. 177-188 ◽  
Author(s):  
Martin Schultze ◽  
Michael Eid

Abstract. In the construction of scales intended for use in cross-cultural studies, the selection of items needs to be guided not only by traditional criteria of item quality but must also take information about the measurement invariance of the scale into account. We present an approach to automated item selection that frames the process as a combinatorial optimization problem and aims at finding a scale that fulfils predefined target criteria, such as measurement invariance across cultures. The search for an optimal solution is performed using an adaptation of the [Formula: see text] Ant System algorithm. The approach is illustrated with an application to item selection for a personality scale assuming measurement invariance across multiple countries.
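The algorithm name above is an un-rendered formula placeholder; it presumably refers to the MAX-MIN Ant System of Stützle and Hoos (an assumption from the surrounding description, not stated in the text). That variant's distinctive feature is clamping pheromone to a bounded interval so that no item's selection probability collapses to zero or dominates the search early. A minimal sketch of that update rule (all names and parameter values are illustrative):

```python
def mmas_update(pheromone, best_items, best_score,
                evaporation=0.9, tau_min=0.1, tau_max=5.0):
    """MAX-MIN Ant System pheromone update (assumed variant): evaporate,
    reinforce only the best solution found, then clamp every value to
    [tau_min, tau_max] to delay premature convergence."""
    updated = []
    for i, tau in enumerate(pheromone):
        tau *= evaporation
        if i in best_items:
            tau += best_score
        updated.append(min(tau_max, max(tau_min, tau)))
    return updated

# Items 0 and 2 form the current best scale (fit score 0.9)
new = mmas_update([1.0, 0.05, 6.0, 1.0], {0, 2}, 0.9)
# item 1 is floored at tau_min; item 2 is capped at tau_max
```

The floor keeps rarely chosen items eligible, which matters when the target criterion (here, measurement invariance across groups) can make an initially unattractive item valuable in combination with others.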


PsycCRITIQUES ◽  
2012 ◽  
Vol 57 (4) ◽  
Author(s):  
David J. Pittenger
