Multidimensional Computerized Adaptive Testing Using Non-Compensatory Item Response Theory Models

2018 ◽  
Vol 43 (6) ◽  
pp. 464-480
Author(s):  
Chia-Ling Hsu ◽  
Wen-Chung Wang

To date, multidimensional computerized adaptive testing (MCAT) has been developed in conjunction with compensatory multidimensional item response theory (MIRT) models rather than with non-compensatory ones. In recognition of the usefulness of MCAT and the complications associated with non-compensatory data, this study aimed to develop MCAT algorithms using non-compensatory MIRT models and to evaluate their performance. Three item selection methods were adapted and compared: the Fisher information method, the mutual information method, and the Kullback–Leibler information method. A series of simulations showed that the Fisher information and mutual information methods performed similarly, and both outperformed the Kullback–Leibler information method. In addition, the more stringent the termination criterion and the higher the correlation between the latent traits, the higher the resulting measurement precision and test reliability. Test reliability was very similar across dimensions, regardless of the correlation between the latent traits and the termination criterion. On average, the difficulties of the administered items were at a lower level than the examinees’ abilities, which sheds light on item bank construction for non-compensatory items.
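As an illustration of the kind of selection rule the Fisher information method entails, the sketch below computes the Fisher information matrix of a dichotomous item under a simple two-dimensional non-compensatory (conjunctive) 2PL model, where the response probability is the product of per-dimension logistic terms, and picks the unadministered item that maximizes the determinant of the accumulated information (D-optimality). This is a minimal stand-in under stated assumptions, not the authors' algorithm; the model form, the D-optimal criterion, and all function names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def noncomp_prob(theta, a, b):
    """Non-compensatory (conjunctive) response probability:
    the product of per-dimension 2PL terms."""
    return np.prod(sigmoid(a * (theta - b)))

def item_info_matrix(theta, a, b):
    """Fisher information matrix of one dichotomous item at theta."""
    s = sigmoid(a * (theta - b))
    p = np.prod(s)
    grad = p * a * (1.0 - s)      # dP/dtheta_d = P * a_d * (1 - s_d)
    return np.outer(grad, grad) / (p * (1.0 - p))

def select_item(theta, bank, administered, acc_info):
    """D-optimal selection: pick the unused item maximizing
    det(accumulated info + candidate item info)."""
    best, best_det = None, -np.inf
    for j, (a, b) in enumerate(bank):
        if j in administered:
            continue
        det = np.linalg.det(acc_info + item_info_matrix(theta, a, b))
        if det > best_det:
            best, best_det = j, det
    return best
```

A determinant-based criterion summarizes a full information matrix in one number, which is one common way to rank candidate items when the trait is multidimensional.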

2020 ◽  
Vol 44 (3) ◽  
pp. 142-181

The current study investigated the feasibility of developing a computerized adaptive form of Raven’s Colored Progressive Matrices, one of the most important culture-free intelligence tests, using item response theory. The test consists of 36 items divided into three groups. The data were adapted from the study by Kadhim et al. (2008), which included 1,042 subjects of both genders, aged 5 to 10, distributed across 11 Omani governorates. After the item response theory assumptions were verified, the mirtCAT package was used to evaluate a computerized adaptive form of the test. Raven’s test items were compatible with the three-parameter model, which was used to scale the items, and the Maximum Fisher Information method was used to select items in the adaptive form. The full and adaptive forms were compared across the different simulated conditions of the study. The results indicated that 17 items of the adaptive form could accurately estimate the subjects’ abilities without a substantial loss of information. This finding is a preliminary indication that an adaptive form of the Colored Progressive Matrices can be developed for the various assessments practitioners may need in order to assess, classify, or diagnose children.
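For intuition about Maximum Fisher Information selection under the three-parameter logistic model used here, the sketch below applies Birnbaum's item information formula, I(θ) = a²·((P − c)/(1 − c))²·(1 − P)/P. It is not the mirtCAT implementation; the item bank values and function names are illustrative.

```python
import numpy as np

def p3pl(theta, a, b, c):
    """3PL response probability with guessing parameter c."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def info3pl(theta, a, b, c):
    """Fisher information of a 3PL item (Birnbaum's formula)."""
    p = p3pl(theta, a, b, c)
    return a**2 * ((p - c) / (1.0 - c))**2 * (1.0 - p) / p

def next_item(theta, bank, used):
    """Maximum Fisher Information rule: the unused item with the
    largest information at the current ability estimate."""
    infos = [(-np.inf if j in used else info3pl(theta, *item))
             for j, item in enumerate(bank)]
    return int(np.argmax(infos))
```

With c = 0 the formula reduces to the familiar 2PL information a²·P·(1 − P), which peaks at θ = b.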


2017 ◽  
Vol 41 (7) ◽  
pp. 530-544 ◽  
Author(s):  
Dubravka Svetina ◽  
Arturo Valdivia ◽  
Stephanie Underhill ◽  
Shenghai Dai ◽  
Xiaolin Wang

Information about the psychometric properties of items can be highly useful in assessment development, for example, in item response theory (IRT) applications and computerized adaptive testing. Although the literature on parameter recovery in unidimensional IRT abounds, less is known about parameter recovery in multidimensional IRT (MIRT), notably when tests exhibit complex structures or when latent traits are nonnormal. The current simulation study investigates the effects of complex item structures and the shape of examinees’ latent trait distributions on item parameter recovery in compensatory MIRT models for dichotomous items. Outcome variables included bias and root mean square error (RMSE). Results indicated that when latent traits were skewed, item parameter recovery was generally adversely affected. In addition, the presence of complexity decreased the precision of parameter recovery, particularly for discrimination parameters along one dimension when at least one latent trait was generated as skewed.
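The two outcome variables, bias and RMSE over replications, can be computed as in the short helper below (an illustrative sketch, not the study's code):

```python
import numpy as np

def recovery_stats(true_params, estimates):
    """Bias and RMSE of item-parameter estimates over replications.
    true_params: (n_items,) generating values;
    estimates: (n_reps, n_items) estimates, one row per replication."""
    err = estimates - true_params            # broadcasts over replications
    bias = err.mean(axis=0)                  # signed average error
    rmse = np.sqrt((err**2).mean(axis=0))    # overall recovery precision
    return bias, rmse
```

Bias can be near zero while RMSE is large, which is why simulation studies typically report both.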


2019 ◽  
Vol 80 (4) ◽  
pp. 695-725
Author(s):  
Leah M. Feuerstahler ◽  
Niels Waller ◽  
Angus MacDonald

Although item response models have grown in popularity in many areas of educational and psychological assessment, there are relatively few applications of these models in experimental psychopathology. In this article, we explore the use of item response models in the context of a computerized cognitive task designed to assess visual working memory capacity in people with psychosis as well as healthy adults. We begin our discussion by describing how item response theory can be used to evaluate and improve unidimensional cognitive assessment tasks in various examinee populations. We then suggest how computerized adaptive testing can be used to improve the efficiency of cognitive task administration. Finally, we explore how these ideas might be extended to multidimensional item response models that better represent the complex response processes underlying task performance in psychopathological populations.


2020 ◽  
Vol 35 (7) ◽  
pp. 1094-1108
Author(s):  
Morgan E Nitta ◽  
Brooke E Magnus ◽  
Paul S Marshall ◽  
James B Hoelzle

There are many challenges associated with the assessment and diagnosis of ADHD in adulthood. Utilizing the graded response model (GRM) from item response theory (IRT), a comprehensive item-level analysis of adult ADHD rating scales in a clinical population was conducted with Barkley’s Adult ADHD Rating Scale-IV, Self-Report of Current Symptoms (CSS), a self-report diagnostic checklist, and a similar self-report measure quantifying retrospective report of childhood symptoms, Barkley’s Adult ADHD Rating Scale-IV, Self-Report of Childhood Symptoms (BAARS-C). Differences in item functioning were also considered after identifying and excluding individuals with suspect effort. Items associated with symptoms of inattention (IA) and hyperactivity/impulsivity (H/I) are endorsed differently across the lifespan, and these data suggest that they vary in their relationship to the theoretical constructs of IA and H/I. Screening for sufficient effort did not meaningfully change item-level functioning. The application of IRT to direct item-to-symptom measures allows for a unique psychometric assessment of how the current DSM-5 symptoms represent the latent traits of IA and H/I. Meeting a threshold of five or more symptoms may be misleading; closer attention to specific symptoms in the context of the clinical interview and reported difficulties across domains may lead to more informed diagnosis.
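The graded response model underlying this analysis assigns each ordered response category a probability equal to the difference of adjacent cumulative logistic curves. A minimal sketch (parameter values are illustrative, not the fitted scales):

```python
import numpy as np

def grm_probs(theta, a, b):
    """Graded response model category probabilities for one item.
    a: discrimination; b: ordered thresholds (length K-1).
    Returns the K category probabilities at trait level theta."""
    b = np.asarray(b, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= k), k = 1..K-1
    star = np.concatenate(([1.0], cum, [0.0]))     # boundary terms
    return star[:-1] - star[1:]                    # adjacent differences
```

Because the category probabilities are differences of monotone curves, they always sum to one, and each middle category peaks between its two thresholds.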


2019 ◽  
Vol 45 (3) ◽  
pp. 274-296
Author(s):  
Yang Liu ◽  
Xiaojing Wang

Parametric methods, such as autoregressive models or latent growth modeling, are usually too inflexible to model the dependence and nonlinear effects among changes in latent traits when the time gaps are irregular and the recorded time points vary across individuals. Often in practice, the growth trend of latent traits is subject to certain monotone and smooth conditions. To incorporate such conditions and to relax the strong parametric assumptions on the latent trajectories, a flexible nonparametric prior is introduced to model the dynamic changes of latent traits in item response theory models over the study period. Suitable Bayesian computation schemes are developed for the analysis of longitudinal, dichotomous item responses. Simulation studies and a real data example from educational testing illustrate the proposed methods.
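To see what a monotone latent trajectory at irregular, person-specific time points might look like, the toy sketch below accumulates nonnegative increments whose spread grows with the gap between observations. This is only a simple stand-in for the idea of a monotone growth path; it does not reproduce the paper's nonparametric prior, and all names and scales are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_monotone_path(times, scale=0.3):
    """Draw one monotone latent trajectory at irregular time points:
    a random starting level followed by nonnegative increments whose
    standard deviation grows with the gap between observations."""
    times = np.asarray(times, dtype=float)
    gaps = np.diff(times, prepend=times[0])            # first gap is 0
    steps = np.abs(rng.normal(0.0, scale * np.sqrt(gaps), size=len(times)))
    steps[0] = rng.normal(0.0, 1.0)                    # starting trait level
    return np.cumsum(steps)                            # nondecreasing after t0
```

Tying the increment variance to the elapsed time is what lets such a construction handle individually varying, irregularly spaced measurement occasions.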


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e12100
Author(s):  
Marco Tullio Liuzza ◽  
Rocco Spagnuolo ◽  
Gabriella Antonucci ◽  
Rosa Daniela Grembiale ◽  
Cristina Cosco ◽  
...  

Background There has recently been growing interest in the role of inflammation in the development of anxiety in people with immune-mediated inflammatory diseases (IMID). Patient-reported outcome measures can facilitate the assessment of physical and psychological functioning. The National Institutes of Health (NIH)’s Patient-Reported Outcomes Measurement Information System (PROMIS®) is a set of Patient-Reported Outcomes (PROs) that cover physical, mental, and social health. The PROMIS was built through an Item Response Theory (IRT) approach, a model-based measurement framework in which trait-level estimates depend both on persons’ responses and on the properties of the items that were administered. The aim of this study is to test the psychometric properties of an Italian custom four-item short form of the PROMIS Anxiety item bank in a cohort of outpatients with IMIDs. Methods We selected four items from the Italian standard Short Form Anxiety 8a and administered them to consecutive outpatients affected by inflammatory bowel disease (n = 246), rheumatological (n = 100) and dermatological (n = 43) diseases, and to healthy volunteers (n = 280). Data were analyzed through an Item Response Theory (IRT) analysis to evaluate the psychometric properties of the Italian adaptation of the PROMIS anxiety short form. Results Taken together, confirmatory and exploratory factor analyses suggest that the unidimensionality assumption of the instrument holds. The instrument has excellent reliability from a Classical Test Theory (CTT) standpoint (Cronbach’s α = 0.93, McDonald’s ω = 0.92). The 2PL Graded Response Model (GRM) showed better goodness of fit than the 1PL GRM, and the local independence assumption appears to be met overall.
We did not find signs of differential item functioning (DIF) for age or gender, but evidence of uniform (though not non-uniform) DIF was found in three of the four items for the patient vs. control comparison. Analysis of the test reliability curve suggested that the instrument is most reliable at higher levels of the latent anxiety trait. The patient groups exhibited higher levels of anxiety than the control group (ps < 0.001, Bonferroni-corrected) and were not different from one another (p = 1, Bonferroni-corrected). T-scores based on the estimated latent trait and raw scores were highly correlated (Pearson’s r = 0.98) and led to similar results. Discussion The Italian custom four-item short form of the PROMIS Anxiety 8a shows acceptable psychometric properties from both a CTT and an IRT standpoint. The test reliability curve shows that this instrument is most informative for people with higher levels of anxiety, making it particularly suitable for clinical populations such as IMID patients.
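The T-score metric reported here rescales the latent trait (mean 0, SD 1 in the calibration sample) to a mean of 50 and a standard deviation of 10, which is the standard PROMIS convention; a trivial sketch:

```python
import numpy as np

def t_scores(thetas):
    """PROMIS-style T-scores: theta metric (mean 0, SD 1 in the
    calibration sample) rescaled to mean 50, SD 10."""
    return 50.0 + 10.0 * np.asarray(thetas, dtype=float)
```

On this scale a respondent one standard deviation above the calibration mean scores 60, which makes scores comparable across PROMIS instruments.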


2021 ◽  
Vol 39 (1) ◽  
pp. 206
Author(s):  
Naiara Caroline Aparecido dos SANTOS ◽  
Jorge Luiz BAZÁN

A Rasch Poisson counts (RPC) model is described that identifies individual latent traits and item facilities for tests that record the error (or success) count on several tasks over time, instead of modeling correct responses to items as in the dichotomous item response theory (IRT) model. These types of tests can be more informative than traditional tests. To estimate the model parameters, we consider a Bayesian approach using the integrated nested Laplace approximation (INLA). We develop residual analysis to assess model fit by introducing randomized quantile residuals for items. The data used to illustrate the method come from 228 people who took a selective attention test. The test has 20 blocks (items), with a time limit of 15 seconds for each block. The results of the residual analysis of the RPC were promising and indicated that the studied attention data are not well fitted by the RPC model.
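In an RPC-style model each count is Poisson with a rate that is log-linear in the person trait and the item facility, and a randomized quantile (Dunn–Smyth) residual maps each observed count through its fitted CDF onto a standard-normal scale, where lack of fit shows up as non-normal residuals. A small standard-library sketch (the parameterization and names are illustrative, not the paper's INLA implementation):

```python
import math
from random import Random
from statistics import NormalDist

def poisson_cdf(y, lam):
    """P(Y <= y) for Y ~ Poisson(lam); y < 0 maps to 0."""
    if y < 0:
        return 0.0
    term = total = math.exp(-lam)
    for k in range(1, y + 1):
        term *= lam / k
        total += term
    return total

def rqr(y, theta, beta, rng=Random(0)):
    """Dunn–Smyth randomized quantile residual for an RPC count:
    rate exp(theta + beta), u ~ Uniform(F(y-1), F(y)), r = Phi^{-1}(u)."""
    lam = math.exp(theta + beta)
    lo, hi = poisson_cdf(y - 1, lam), poisson_cdf(y, lam)
    u = rng.uniform(lo, hi)
    return NormalDist().inv_cdf(u)
```

The uniform jitter between F(y − 1) and F(y) is what makes the residuals continuous (and hence exactly standard normal under a correct model) despite the discreteness of the counts.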


2006 ◽  
Author(s):  
Daniel A. Sass ◽  
Cindy M. Walker ◽  
Thomas A. Schmitt

2012 ◽  
Vol 17 (1-2) ◽  
pp. 61-68
Author(s):  
Ryszard Gmoch

New trends in computer-based testing of learners’ achievements are presented in the paper. It describes adaptive testing methods and results of studies in this problem area. Essential questions connected with Item Response Theory (IRT) are also discussed. The presented data indicate that computer-based adaptive testing should be popularized in Poland to its fullest extent.

