Commentary: Matching IRT Models to PRO Constructs—Modeling Alternatives, and Some Thoughts on What Makes a Model Different

Psychometrika, 2021. Author: Matthias von Davier.

2017, Vol 33(3), pp. 181-189. Authors: Christoph J. Kemper, Michael Hock.

Abstract. Anxiety Sensitivity (AS) denotes the tendency to fear anxiety-related sensations. Trait AS is an established risk factor for anxiety pathology. The Anxiety Sensitivity Index-3 (ASI-3) is a widely used measure of AS and its three most robust dimensions, with well-established construct validity. At present, the dimensional conceptualization of AS, and thus the construct validity of the ASI-3, is being challenged: a latent class structure with two distinct and qualitatively different forms, an adaptive form (normative AS) and a maladaptive form (the AS taxon, predisposing for anxiety pathology), has been postulated. Item response theory (IRT) models were applied to item-level ASI-3 data in an attempt to replicate previous findings in a large nonclinical sample (N = 2,603) and to examine possible interpretations of the latent discontinuity observed. Two latent classes with distinct patterns of responses to ASI-3 items were found. However, the classes reflected participants' differential use of the response scale (midpoint and extreme response styles) rather than differences in AS content (adaptive and maladaptive forms). These findings support a dimensional structure of AS and the construct validity of the ASI-3.
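For readers who want to probe the response-style interpretation in their own data, the lines below sketch a purely descriptive check in base R; it is not the mixture-IRT analysis used in the study, and the object name asi (a hypothetical N x 18 matrix of ASI-3 item scores on the 0-4 scale) is an assumption for illustration.

# Descriptive sketch: do two crude clusters separate on scale use rather than on AS content?
# `asi` is a hypothetical N x 18 numeric matrix of item scores (0-4).
midpoint <- rowMeans(asi == 2)              # share of midpoint responses per person
extreme  <- rowMeans(asi == 0 | asi == 4)   # share of extreme responses per person
content  <- rowMeans(asi)                   # mean item score, i.e., overall AS level

set.seed(1)
cl <- kmeans(cbind(midpoint, extreme), centers = 2)

# If the two clusters differ mainly in midpoint/extreme use but little in mean AS level,
# the split looks like response style rather than a qualitative AS taxon.
aggregate(data.frame(midpoint, extreme, content),
          by = list(class = cl$cluster), FUN = mean)

If such a split tracked the content score instead, that pattern would be more consistent with the taxon interpretation discussed in the abstract.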


Death Studies, 2021, pp. 1-11. Authors: Tomás Caycho-Rodríguez, Lindsey W. Vilca, Carlos Carbajal-León, José Heredia-Mongrut, Miguel Gallegos, et al.

2021, Vol 117, pp. 106849. Authors: Danilo Carrozzino, Kaj Sparle Christensen, Giovanni Mansueto, Fiammetta Cosci.

2002, Vol 26(3), pp. 302-320. Authors: Els De Koning, Klaas Sijtsma, Jo H. M. Hamers.

2021. Author: Masaki Uto.

Abstract. Performance assessment, in which human raters assess examinee performance on a practical task, often involves a scoring rubric consisting of multiple evaluation items, used to increase the objectivity of evaluation. However, even when a rubric is used, assigned scores are known to depend on characteristics of the rubric's evaluation items and of the raters, which decreases the accuracy of ability measurement. To resolve this problem, item response theory (IRT) models that estimate examinee ability while accounting for the effects of these characteristics have been proposed. These IRT models assume unidimensionality, meaning that a rubric measures a single latent ability. In practice, however, this assumption might not be satisfied, because a rubric's evaluation items are often designed to measure multiple sub-abilities that constitute the targeted ability. To address this issue, this study proposes a multidimensional IRT model for rubric-based performance assessment. Specifically, the proposed model is formulated as a multidimensional extension of a generalized many-facet Rasch model. Moreover, a No-U-Turn variant of the Hamiltonian Markov chain Monte Carlo algorithm is adopted for parameter estimation. The proposed model is useful not only for improving ability measurement accuracy but also for detailed analysis of rubric quality and rubric construct validity. The study demonstrates the effectiveness of the proposed model through simulation experiments and an application to real data.
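To make the modeling idea concrete, one plausible adjacent-category formulation of a multidimensional many-facet Rasch-type model is sketched below; the notation is illustrative and need not match the paper's exact parameterization. Writing X_{ijr} for the category that rater r assigns to examinee j on evaluation item i,

\Pr(X_{ijr} = k \mid \boldsymbol{\theta}_j)
  = \frac{\exp \sum_{m=1}^{k} \bigl( \sum_{d} a_{id}\,\theta_{jd} - \beta_i - \rho_r - \tau_{im} \bigr)}
         {\sum_{l=0}^{K} \exp \sum_{m=1}^{l} \bigl( \sum_{d} a_{id}\,\theta_{jd} - \beta_i - \rho_r - \tau_{im} \bigr)},

where \theta_{jd} is examinee j's ability on dimension d, a_{id} weights (or selects) the dimension measured by evaluation item i, \beta_i is the item's difficulty, \rho_r is rater r's severity, \tau_{im} are category step parameters, and the empty sum for k = 0 is taken to be zero. With a single dimension this reduces to a unidimensional generalized many-facet Rasch model, which is what motivates the multidimensional extension described in the abstract.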


2021, Vol 8(3), pp. 672-695. Author: Thomas DeVaney.

This article presents a discussion and illustration of Mokken scale analysis (MSA), a nonparametric form of item response theory (IRT), in relation to other common scaling approaches such as Rasch modeling and Guttman scaling. The procedure can be used with the dichotomous and ordinal polytomous data commonly collected with questionnaires. The assumptions of MSA are discussed, as well as the characteristics that differentiate a Mokken scale from a Guttman scale. MSA is illustrated using the mokken package in RStudio and a data set of more than 3,340 responses to a modified version of the Statistical Anxiety Rating Scale. Issues addressed in the illustration include monotonicity, scalability, and invariant ordering. The R script for the illustration is included.
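As a pointer to the kind of workflow the article illustrates, a minimal sketch using the mokken package is shown below. The object name resp (a respondents-by-items matrix of dichotomous or ordinal polytomous item scores) is hypothetical, and the 0.3 lower bound is simply the conventional default for scalability.

library(mokken)

# resp: hypothetical respondents x items matrix of item scores
coefH(resp)                              # scalability coefficients: Hij, Hi, and total H
scales <- aisp(resp, lowerbound = 0.3)   # automated item selection into Mokken scales

mono <- check.monotonicity(resp)         # manifest monotonicity checks per item
summary(mono)

iio <- check.iio(resp)                   # invariant item ordering across the latent trait
summary(iio)

These calls correspond to the monotonicity, scalability, and invariant ordering checks mentioned in the abstract; the article's own R script should be consulted for the full illustration.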

