Estimating Skewed Latent Traits Within Item-Response Theory

2006 ◽  
Author(s):  
Daniel A. Sass ◽  
Cindy M. Walker ◽  
Thomas A. Schmitt


2020 ◽  
Vol 35 (7) ◽  
pp. 1094-1108
Author(s):  
Morgan E Nitta ◽  
Brooke E Magnus ◽  
Paul S Marshall ◽  
James B Hoelzle

Abstract There are many challenges associated with the assessment and diagnosis of ADHD in adulthood. Using the graded response model (GRM) from item response theory (IRT), a comprehensive item-level analysis of adult ADHD rating scales was conducted in a clinical population with two self-report measures: Barkley's Adult ADHD Rating Scale-IV, Self-Report of Current Symptoms (CSS), a diagnostic checklist, and the similar Barkley's Adult ADHD Rating Scale-IV, Self-Report of Childhood Symptoms (BAARS-C), which quantifies retrospective report of childhood symptoms. Differences in item functioning were also considered after identifying and excluding individuals with suspect effort. Items associated with symptoms of inattention (IA) and hyperactivity/impulsivity (H/I) are endorsed differently across the lifespan, and these data suggest that they vary in their relationship to the theoretical constructs of IA and H/I. Screening for sufficient effort did not meaningfully change item-level functioning. The application of IRT to direct item-to-symptom measures allows for a unique psychometric assessment of how the current DSM-5 symptoms represent latent traits of IA and H/I. Meeting a threshold of five or more symptoms may be misleading. Closer attention to specific symptoms in the context of the clinical interview and reported difficulties across domains may lead to more informed diagnosis.
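The graded response model referenced above assigns each ordered response category a probability built from cumulative logistic curves. A minimal sketch, with illustrative discrimination and threshold values (not estimates from the study):

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def grm_category_probs(theta, a, thresholds):
    """Graded response model: probability of each ordered category.

    thresholds b_1 < b_2 < ... define cumulative category boundaries;
    returns P(X = k) for k = 0..len(thresholds)."""
    # Cumulative P(X >= k) for k = 1..m, with P(X >= 0) = 1 and P(X > m) = 0.
    cum = [1.0] + [logistic(a * (theta - b)) for b in thresholds] + [0.0]
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Example: a 4-category symptom item (0 = never ... 3 = very often)
probs = grm_category_probs(theta=0.5, a=1.8, thresholds=[-1.0, 0.0, 1.2])
print([round(p, 3) for p in probs])
```

The category probabilities always sum to one, and for a respondent with trait level between two thresholds the corresponding middle category is the most likely response.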


2019 ◽  
Vol 45 (3) ◽  
pp. 274-296
Author(s):  
Yang Liu ◽  
Xiaojing Wang

Parametric methods, such as autoregressive models or latent growth modeling, are usually too inflexible to model the dependence and nonlinear effects among changes in latent traits when time gaps are irregular and recorded time points vary across individuals. Often in practice, the growth trend of latent traits is subject to certain monotone and smooth conditions. To incorporate such conditions and to relax the strong parametric assumptions on latent trajectories, a flexible nonparametric prior is introduced to model the dynamic changes of latent traits in item response theory models over the study period. Suitable Bayesian computation schemes are developed for this analysis of longitudinal, dichotomous item responses. Simulation studies and a real data example from educational testing illustrate the proposed methods.
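One common way to encode the monotonicity constraint mentioned above is to build a trajectory from nonnegative increments between occasions. This is only a minimal illustration of that device, not the authors' nonparametric prior:

```python
import math

def monotone_trajectory(theta0, log_increments):
    """Build a monotone latent-trait trajectory over irregular occasions.

    Monotonicity is enforced by adding exp(log-increment) >= 0 between
    consecutive (possibly irregularly spaced) measurement occasions."""
    traj = [theta0]
    for z in log_increments:
        traj.append(traj[-1] + math.exp(z))
    return traj

# One examinee measured at four irregularly spaced occasions
traj = monotone_trajectory(theta0=-1.0, log_increments=[-0.5, -2.0, 0.1])
print([round(t, 3) for t in traj])
```

Because each step adds a strictly positive quantity, the resulting trait values are strictly increasing regardless of the spacing of the occasions; a Bayesian treatment would place a prior (e.g., a smooth process) over the log-increments.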


2017 ◽  
Vol 19 (1) ◽  
pp. 91-102 ◽  
Author(s):  
Jacob Kean ◽  
Erica F. Bisson ◽  
Darrel S. Brodke ◽  
Joshua Biber ◽  
Paul H. Gross

Item response theory has its origins in educational measurement and is now commonly applied in health-related measurement of latent traits, such as function and symptoms. This application is due, in large part, to gains in the precision of measurement attributable to item response theory and corresponding decreases in response burden, study costs, and study duration. The purpose of this paper is twofold: to introduce basic concepts of item response theory and to demonstrate this analytic approach in a worked example, a Rasch model (1PL) analysis of the Eating Assessment Tool (EAT-10), a commonly used measure for oropharyngeal dysphagia. The results of the analysis were largely concordant with previous studies of the EAT-10 and illustrate for brain impairment clinicians and researchers how IRT analysis can yield greater precision of measurement.
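The Rasch (1PL) model used in the worked example has a single difficulty parameter per item, and a person's ability can be estimated by maximum likelihood. A minimal sketch with hypothetical item difficulties (not the EAT-10's estimated values):

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL): probability of endorsing an item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def ml_theta(responses, difficulties, iters=50):
    """Newton-Raphson maximum-likelihood estimate of theta for one person."""
    theta = 0.0
    for _ in range(iters):
        probs = [rasch_prob(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, probs))   # score function
        hess = -sum(p * (1 - p) for p in probs)               # observed info (negated)
        theta -= grad / hess
    return theta

# Ten hypothetical items with spread-out difficulties, one response pattern
diffs = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
resp = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
theta_hat = ml_theta(resp, diffs)
print(round(theta_hat, 3))
```

At the ML estimate the expected total score equals the observed total score, which is the defining first-order condition of Rasch person estimation.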


2016 ◽  
Vol 16 (2) ◽  
pp. 163-174 ◽  
Author(s):  
Justyna Brzezińska

Abstract Item Response Theory (IRT) is a modern statistical approach that uses latent variables to model the interaction between a subject’s ability and item-level stimuli (difficulty, guessing). Item responses are treated as the outcome (dependent) variables, while the examinee’s ability and the items’ characteristics are the latent predictor (independent) variables. IRT models the relationship between a respondent’s trait (ability, attitude) and the pattern of item responses; as a result, estimates of individual latent traits can differ even for two individuals with the same total score. The additional benefits of IRT scoring are discussed in detail. This paper presents both the theory and its application in R, using packages designed for IRT modelling.
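The claim that two people with the same total score can receive different trait estimates follows directly from unequal item discriminations. A minimal sketch under the two-parameter logistic (2PL) model, with illustrative item parameters:

```python
import math

def p2pl(theta, a, b):
    """Two-parameter logistic: P(correct | theta) for item (a, b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def ml_theta_2pl(responses, items, iters=60):
    """Newton-Raphson maximum-likelihood ability estimate under the 2PL."""
    theta = 0.0
    for _ in range(iters):
        probs = [p2pl(theta, a, b) for a, b in items]
        grad = sum(a * (x - p) for x, (a, _), p in zip(responses, items, probs))
        hess = -sum(a * a * p * (1 - p) for (a, _), p in zip(items, probs))
        theta -= grad / hess
    return theta

# Four items: (discrimination a, difficulty b); the last is most discriminating
items = [(0.6, -1.0), (0.8, 0.0), (1.0, 1.0), (2.2, 0.5)]
r1 = [1, 1, 0, 0]  # total score 2, misses the highly discriminating item
r2 = [0, 1, 0, 1]  # total score 2, gets the highly discriminating item
t1, t2 = ml_theta_2pl(r1, items), ml_theta_2pl(r2, items)
print(round(t1, 3), round(t2, 3))
```

Both patterns have the same total score, yet the pattern that succeeds on the highly discriminating item yields a markedly higher ability estimate.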


2017 ◽  
Vol 41 (8) ◽  
pp. 600-613 ◽  
Author(s):  
Wen-Chung Wang ◽  
Xue-Lan Qiu ◽  
Chia-Wen Chen ◽  
Sage Ro ◽  
Kuan-Yu Jin

There is re-emerging interest in adopting forced-choice items to address the issue of response bias in Likert-type items for noncognitive latent traits. Multidimensional pairwise comparison (MPC) items are commonly used forced-choice items. However, few studies have been aimed at developing item response theory models for MPC items owing to the challenges associated with ipsativity. Acknowledging that the absolute scales of latent traits are not identifiable in ipsative tests, this study developed a Rasch ipsative model for MPC items that has desirable measurement properties, yields a single utility value for each statement, and allows for comparing psychological differentiation between and within individuals. The simulation results showed a good parameter recovery for the new model with existing computer programs. This article provides an empirical example of an ipsative test on work style and behaviors.
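The pairwise-comparison mechanics can be sketched as a logistic comparison of statement "appeals" (person trait on the statement's dimension plus a statement utility). This is only in the spirit of a Rasch-type ipsative model, with illustrative values, not the authors' exact parameterization:

```python
import math

def mpc_choice_prob(theta, dim_s, dim_t, u_s, u_t):
    """Probability of preferring statement s over t in a forced-choice pair.

    Each statement's appeal is the person's trait level on its dimension
    plus a statement utility; the choice follows a logistic comparison."""
    appeal_s = theta[dim_s] + u_s
    appeal_t = theta[dim_t] + u_t
    return 1.0 / (1.0 + math.exp(-(appeal_s - appeal_t)))

# Person high on trait 0, low on trait 1
theta = [1.2, -0.8]
p = mpc_choice_prob(theta, dim_s=0, dim_t=1, u_s=0.1, u_t=0.3)

# Shifting every trait by a constant leaves the choice probability
# unchanged: only trait differences are identified in ipsative data.
shifted = mpc_choice_prob([t + 0.7 for t in theta],
                          dim_s=0, dim_t=1, u_s=0.1, u_t=0.3)
print(round(p, 3))
```

The shift-invariance check makes concrete the abstract's point that the absolute scales of the latent traits are not identifiable in ipsative tests.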


2017 ◽  
Vol 41 (7) ◽  
pp. 530-544 ◽  
Author(s):  
Dubravka Svetina ◽  
Arturo Valdivia ◽  
Stephanie Underhill ◽  
Shenghai Dai ◽  
Xiaolin Wang

Information about the psychometric properties of items can be highly useful in assessment development, for example, in item response theory (IRT) applications and computerized adaptive testing. Although literature on parameter recovery in unidimensional IRT abounds, less is known about parameter recovery in multidimensional IRT (MIRT), notably when tests exhibit complex structures or when latent traits are nonnormal. The current simulation study focuses on investigation of the effects of complex item structures and the shape of examinees’ latent trait distributions on item parameter recovery in compensatory MIRT models for dichotomous items. Outcome variables included bias and root mean square error. Results indicated that when latent traits were skewed, item parameter recovery was generally adversely impacted. In addition, the presence of complexity contributed to decreases in the precision of parameter recovery, particularly for discrimination parameters along one dimension when at least one latent trait was generated as skewed.
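The compensatory MIRT response function studied in such simulations combines dimensions additively, so strength on one trait can offset weakness on another. A minimal sketch with illustrative parameters:

```python
import math

def compensatory_mirt_prob(theta, a, d):
    """Compensatory MIRT (M2PL): P(X = 1) = logistic(a . theta + d).

    Because discriminations weight a linear combination of traits, a
    deficit on one dimension can be offset by strength on another."""
    z = sum(ai * ti for ai, ti in zip(a, theta)) + d
    return 1.0 / (1.0 + math.exp(-z))

# A complex-structure item loading on both dimensions
a = [1.2, 0.8]
d = -0.3
p_low_high = compensatory_mirt_prob([-1.0, 1.5], a, d)  # weak dim 1, strong dim 2
p_balanced = compensatory_mirt_prob([0.0, 0.0], a, d)
print(round(p_low_high, 3), round(p_balanced, 3))
```

With these particular values the two profiles produce identical linear predictors, so their success probabilities coincide exactly, which is the compensation property in its purest form.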


2018 ◽  
Vol 79 (4) ◽  
pp. 665-687
Author(s):  
Marcelo A. da Silva ◽  
Ren Liu ◽  
Anne C. Huggins-Manley ◽  
Jorge L. Bazán

Multidimensional item response theory (MIRT) models use data from individual item responses to estimate multiple latent traits of interest, making them useful in educational and psychological measurement, among other areas. When MIRT models are applied in practice, it is not uncommon to see that some items are designed to measure all latent traits while other items may only measure one or two traits. In order to facilitate a clear expression of which items measure which traits and to formulate such relationships as a mathematical function in MIRT models, we applied the concept of the Q-matrix, commonly used in diagnostic classification models, to MIRT models. In this study, we introduced how to incorporate a Q-matrix into an existing MIRT model, and demonstrated the benefits of the proposed hybrid model through two simulation studies and an applied study. In addition, we showed the relative ease of modeling educational and psychological data through a Bayesian approach via the NUTS algorithm.
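A Q-matrix row can be folded into a compensatory MIRT item simply by zeroing out the discriminations of traits the item does not measure. A minimal sketch with illustrative parameters (not the paper's specification):

```python
import math

def q_constrained_prob(theta, a, q, d):
    """M2PL success probability with a Q-matrix row (0/1 entries)
    zeroing out the discriminations of traits the item does not measure."""
    z = sum(qk * ak * tk for qk, ak, tk in zip(q, a, theta)) + d
    return 1.0 / (1.0 + math.exp(-z))

theta = [0.5, -1.0, 1.5]
a = [1.1, 0.9, 1.4]
d = 0.2
q_all = [1, 1, 1]      # item measures every trait
q_single = [0, 0, 1]   # item measures only the third trait
p_full = q_constrained_prob(theta, a, q_all, d)
p_simple = q_constrained_prob(theta, a, q_single, d)
print(round(p_full, 3), round(p_simple, 3))
```

Under the single-trait row, the response probability depends only on the third trait, so changing the other two trait values leaves it untouched; this is exactly the "which items measure which traits" constraint made explicit.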


2019 ◽  
Vol 6 (4) ◽  
pp. 205316801987956 ◽  
Author(s):  
Kyle L. Marquardt ◽  
Daniel Pemstein ◽  
Brigitte Seim ◽  
Yi-ting Wang

Experts code latent quantities for many influential political science datasets. Although scholars are aware of the importance of accounting for variation in expert reliability when aggregating such data, they have not systematically explored either the factors affecting expert reliability or the degree to which these factors influence estimates of latent concepts. Here we provide a template for examining potential correlates of expert reliability, using coder-level data for six randomly selected variables from a cross-national panel dataset. We aggregate these data with an ordinal item response theory model that parameterizes expert reliability, and regress the resulting reliability estimates on both expert demographic characteristics and measures of their coding behavior. We find little evidence of a consistent substantial relationship between most expert characteristics and reliability, and these null results extend to potentially problematic sources of bias in estimates, such as gender. The exceptions to these results are intuitive, and provide baseline guidance for expert recruitment and retention in future expert coding projects: attentive and confident experts who have contextual knowledge tend to be more reliable. Taken as a whole, these findings reinforce arguments that item response theory models are a relatively safe method for aggregating expert-coded data.
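The role of reliability in such a model can be sketched as an expert-specific discrimination in an ordinal IRT response function. This is a deliberately simplified illustration with made-up values; the actual measurement model used for expert-coded panel data is considerably richer:

```python
import math

def expert_rating_probs(z, beta, thresholds):
    """Ordinal IRT sketch for expert-coded data: an expert with
    reliability (discrimination) beta rates a case with latent value z
    on an ordinal scale defined by ordered thresholds."""
    cum = ([1.0]
           + [1.0 / (1.0 + math.exp(-beta * (z - g))) for g in thresholds]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

z = 0.8                        # latent concept value for one case
thresholds = [-1.0, 0.0, 1.0]  # shared ordinal cutpoints
reliable = expert_rating_probs(z, beta=3.0, thresholds=thresholds)
noisy = expert_rating_probs(z, beta=0.5, thresholds=thresholds)
print([round(p, 2) for p in reliable])
print([round(p, 2) for p in noisy])
```

The high-reliability expert concentrates probability on the category containing the true latent value, while the low-reliability expert spreads it widely, which is why parameterizing reliability matters when aggregating ratings.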


2016 ◽  
Vol 37 (1) ◽  
pp. 85-128 ◽  
Author(s):  
Isabella Sulis ◽  
Michael D. Toland

Item response theory (IRT) models are the main psychometric approach for the development, evaluation, and refinement of multi-item instruments and the scaling of latent traits, whereas multilevel models are the primary statistical method for handling the dependence among person responses that arises when primary units (e.g., students) are nested within clusters (e.g., classes). This article introduces multilevel IRT (MLIRT) modeling, and provides the basic information needed to conduct, interpret, and report results based on an MLIRT analysis. The procedures are demonstrated using a sample data set based on the National Institute for the Evaluation of School System survey completed in Italy by fifth-grade students nested in classrooms to assess math achievement. The data and command files (Stata, Mplus, flexMIRT) needed to reproduce all analyses and plots in this article are available as supplemental online materials at http://jea.sagepub.com/supplemental .
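The multilevel idea can be sketched by decomposing a student's ability into a classroom-level effect plus a within-classroom deviation before it enters a Rasch response function. Illustrative values only, not estimates from the survey:

```python
import math

def mlirt_prob(class_effect, student_dev, b):
    """Multilevel Rasch sketch: ability is a classroom-level effect plus
    a within-classroom deviation; response follows a Rasch model."""
    theta = class_effect + student_dev
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Two students with identical within-class standing in different classrooms
b = 0.0
p_strong_class = mlirt_prob(class_effect=0.6, student_dev=0.2, b=b)
p_weak_class = mlirt_prob(class_effect=-0.6, student_dev=0.2, b=b)
print(round(p_strong_class, 3), round(p_weak_class, 3))
```

Two students who occupy the same position within their classrooms still have different success probabilities because the classroom effects differ, which is the dependence structure MLIRT models are built to capture.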


Author(s):  
Gomaa Said Mohamed Abdelhamid ◽  
Marwa Gomaa Abdelghani Bassiouni ◽  
Juana Gómez-Benito

Background: The Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) has been adapted to 28 different cultures and there has been considerable interest in examining its structure through exploratory and confirmatory factor analysis. This study investigates item and scale properties of the Egyptian WAIS-IV using item response theory (IRT) models. Methods: The sample consisted of 250 adults from Egypt. The item-level and subtest statistical properties of the Egyptian WAIS-IV were established using a combination of four dichotomous IRT models and four polytomous IRT models. In addition, factor analysis was performed to investigate the dimensionality of each subtest. Results: Factor analysis indicated the unidimensionality of each subtest. Among the IRT models, the two-parameter logistic model provided a good fit for dichotomous subtests, while the graded response model fitted the polytomous data. Most items of the Egyptian WAIS-IV showed high discrimination, and the scale was adequately informative across the levels of the latent traits (i.e., cognitive variables). However, each subtest included at least some items with limited ability to distinguish between individuals with differing levels of the cognitive variable being measured. Furthermore, most subtests contain items that do not follow the difficulty ranking ascribed to them in the WAIS-IV manual. Conclusions: Overall, the results suggest that the Egyptian WAIS-IV offers a highly valid assessment of intellectual abilities, despite the need for some improvements.
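Item discrimination drives how informative an item is: under the two-parameter logistic model reported to fit the dichotomous subtests, Fisher information is a² P(1−P). A quick sketch with illustrative parameters (not the Egyptian WAIS-IV estimates):

```python
import math

def p2pl(theta, a, b):
    """Two-parameter logistic response probability."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P).

    Low-discrimination items have flat information curves, i.e., they
    are poor at separating individuals on the latent trait."""
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# A discriminating item vs. a weakly discriminating one, same difficulty
sharp = item_information(theta=0.0, a=2.0, b=0.0)
flat = item_information(theta=0.0, a=0.5, b=0.0)
print(round(sharp, 3), round(flat, 3))
```

Information peaks where ability matches item difficulty and shrinks with the square of the discrimination, which is why items with limited discrimination contribute little to distinguishing examinees.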

