An IRT Analysis of the Personal Optimism Scale

2008 ◽  
Vol 24 (1) ◽  
pp. 49-56 ◽  
Author(s):  
Wolfgang A. Rauch ◽  
Karl Schweizer ◽  
Helfried Moosbrugger

Abstract. In this study, the psychometric properties of the Personal Optimism scale of the POSO-E questionnaire (Schweizer & Koch, 2001) for the assessment of dispositional optimism are evaluated by applying Samejima's (1969) graded response model, a parametric item response theory (IRT) model for polytomous data. Model fit is evaluated extensively via fit checks on the lower-order margins of the contingency table of observed and expected responses and via visual checks of fit plots comparing observed and expected category response functions. The model proves appropriate for the data; a small amount of misfit is interpreted in terms of previous research using other measures of optimism. Item parameters and information functions show that optimism can be measured accurately, especially at moderately low to middle levels of the latent trait scale, and particularly by the negatively worded items.
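Samejima's graded response model defines each category's probability as the difference between adjacent cumulative logistic curves. A minimal sketch (parameter values are illustrative, not taken from the study):

```python
import math

def grm_category_probs(theta, a, b_thresholds):
    """Category response probabilities under Samejima's graded response model.

    theta: latent trait value
    a: item discrimination
    b_thresholds: ordered category thresholds b_1 < ... < b_{m-1}
    Returns a list of m probabilities, one per response category.
    """
    # Cumulative probabilities P*(X >= k); P*(X >= 0) = 1 by definition,
    # and the probability of exceeding the top category is 0.
    cum = ([1.0]
           + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in b_thresholds]
           + [0.0])
    # Category probability = difference of adjacent cumulative probabilities.
    return [cum[k] - cum[k + 1] for k in range(len(b_thresholds) + 1)]

# A 4-category item with evenly spaced thresholds, evaluated at theta = 0.
probs = grm_category_probs(theta=0.0, a=1.5, b_thresholds=[-1.0, 0.0, 1.0])
```

Because the thresholds are ordered, the cumulative curves never cross, so every category probability stays nonnegative and the categories sum to one.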

2018 ◽  
Vol 79 (3) ◽  
pp. 545-557 ◽  
Author(s):  
Dimiter M. Dimitrov ◽  
Yong Luo

An approach to scoring tests with binary items, referred to as D-scoring method, was previously developed as a classical analog to basic models in item response theory (IRT) for binary items. As some tests include polytomous items, this study offers an approach to D-scoring of such items and parallels the results with those obtained under the graded response model (GRM) for ordered polytomous items in the framework of IRT. The proposed design of using D-scoring with “virtual” binary items generated from polytomous items provides (a) ability scores that are consistent with their GRM counterparts and (b) item category response functions analogous to those obtained under the GRM. This approach provides a unified framework for D-scoring and psychometric analysis of tests with binary and/or polytomous items that can be efficient in different scenarios of educational and psychological assessment.
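The "virtual" binary items of the abstract can be pictured with a cumulative coding — assumed here for illustration, not necessarily the authors' exact construction — in which the j-th virtual item is answered "correctly" whenever the polytomous response reaches category j or higher, so the virtual items sum back to the original score:

```python
def to_virtual_binary(response, n_categories):
    """Expand one polytomous response (scored 0..n_categories-1) into
    n_categories-1 'virtual' binary items: the j-th virtual item is
    scored 1 iff the response reaches category j or higher.
    (Illustrative cumulative coding, not the article's exact design.)"""
    return [1 if response >= j else 0 for j in range(1, n_categories)]

# A 5-category item (scores 0-4) yields 4 virtual binary items.
pattern = to_virtual_binary(3, 5)
```

Under this coding the virtual items inherit the ordering of the categories, which is what lets binary D-scoring machinery apply to polytomous data.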


2017 ◽  
Vol 78 (3) ◽  
pp. 384-408 ◽  
Author(s):  
Yong Luo ◽  
Hong Jiao

Stan is a new Bayesian statistical software program that implements the powerful and efficient Hamiltonian Monte Carlo (HMC) algorithm. To date, no source has systematically provided Stan code for the various item response theory (IRT) models. This article provides Stan code for three representative IRT models: the three-parameter logistic IRT model, the graded response model, and the nominal response model. We demonstrate how IRT model comparison can be conducted with Stan and how the provided Stan code for simple IRT models can be easily extended to their multidimensional and multilevel cases.
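The three-parameter logistic model mentioned above adds a lower (pseudo-guessing) asymptote c to the two-parameter logistic curve; the response probability one would encode in a Stan likelihood has the following form (illustrative Python sketch with made-up parameter values, not the article's Stan code):

```python
import math

def three_pl(theta, a, b, c):
    """Three-parameter logistic IRT model: probability of a correct
    response given trait theta, with discrimination a, difficulty b,
    and pseudo-guessing lower asymptote c."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# At theta = b the curve sits halfway between c and 1.
p = three_pl(theta=0.0, a=1.2, b=0.0, c=0.2)
```

Setting c = 0 recovers the 2PL, which is why the 3PL is a convenient base case for extending code to other IRT models.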


2021 ◽  
pp. 014662162110131
Author(s):  
Leah Feuerstahler ◽  
Mark Wilson

In between-item multidimensional item response models, it is often desirable to compare individual latent trait estimates across dimensions. These comparisons are only justified if the model dimensions are scaled relative to each other. Traditionally, this scaling is done using approaches such as standardization—fixing the latent mean and standard deviation to 0 and 1 for all dimensions. However, approaches such as standardization do not guarantee that Rasch model properties hold across dimensions. Specifically, for between-item multidimensional Rasch family models, the unique ordering of items holds within dimensions, but not across dimensions. Previously, Feuerstahler and Wilson described the concept of scale alignment, which aims to enforce the unique ordering of items across dimensions by linearly transforming item parameters within dimensions. In this article, we extend the concept of scale alignment to the between-item multidimensional partial credit model and to models fit using incomplete data. We illustrate this method in the context of the Kindergarten Individual Development Survey (KIDS), a multidimensional survey of kindergarten readiness used in the state of Illinois. We also present simulation results that demonstrate the effectiveness of scale alignment in the context of polytomous item response models and missing data.


Assessment ◽  
2016 ◽  
Vol 23 (6) ◽  
pp. 655-671 ◽  
Author(s):  
James J. Li ◽  
Steven P. Reise ◽  
Andrea Chronis-Tuscano ◽  
Amori Yee Mikami ◽  
Steve S. Lee

Item response theory (IRT) was separately applied to parent- and teacher-rated symptoms of attention-deficit/hyperactivity disorder (ADHD) from a pooled sample of 526 six- to twelve-year-old children with and without ADHD. The dimensional structure of ADHD was first examined using confirmatory factor analyses, including the bifactor model. A general ADHD factor and two group factors, representing inattentive and hyperactive/impulsive dimensions, optimally fit the data. Using the graded response model, we estimated discrimination and location parameters and information functions for all 18 symptoms of ADHD. Parent- and teacher-rated symptoms demonstrated adequate discrimination and location values, although these estimates varied substantially. For parent ratings, the test information curve peaked between −2 and +2 SD, suggesting that ADHD symptoms exhibited excellent overall reliability at measuring children in the low to moderate range of the general ADHD factor, but not in the extreme ranges. Similar results emerged for teacher ratings, in which the peak range of measurement precision was from −1.40 to 1.90 SD. Several symptoms were comparatively more informative than others; for example, is often easily distracted (“Distracted”) was the most informative parent- and teacher-rated symptom across the latent trait continuum. Clinical implications for the assessment of ADHD as well as relevant considerations for future revisions to diagnostic criteria are discussed.


2020 ◽  
Vol 44 (6) ◽  
pp. 465-481
Author(s):  
Carl F. Falk

We present a monotonic polynomial graded response (GRMP) model that subsumes the unidimensional graded response model for ordered categorical responses and results in flexible category response functions. We suggest improvements in the parameterization of the polynomial underlying similar models, expand upon an underlying response variable derivation of the model, and, in lieu of an overall discrimination parameter, we propose an index to aid in interpreting the strength of the relationship between the latent variable and underlying item responses. In applications, the GRMP is compared to two approaches: (a) a previously developed monotonic polynomial generalized partial credit (GPCMP) model; and (b) logistic and probit variants of the heteroscedastic graded response (HGR) model that we estimate using maximum marginal likelihood with the expectation–maximization algorithm. Results suggest that the GRMP can fit real data better than the GPCMP and the probit variant of the HGR, but is slightly outperformed by the logistic HGR. Two simulation studies compared the ability of the GRMP and logistic HGR to recover category response functions. While the GRMP showed some ability to recover HGR response functions and those based on kernel smoothing, the HGR was more specific in the types of response functions it could recover. In general, the GRMP and HGR make different assumptions regarding the underlying response variables, and can result in different category response function shapes.
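The GRMP's defining move is replacing the linear predictor a(θ − b_k) of the graded response model with a monotonic polynomial of θ. The paper's actual parameterization is not reproduced here; as a deliberately simplified stand-in, an odd cubic with positive coefficients is guaranteed monotone increasing, which keeps the category response functions proper:

```python
import math

def monotone_predictor(theta, c0, c1, c3):
    """Monotonic cubic predictor m(theta) = c0 + c1*theta + c3*theta**3.
    With c1, c3 > 0 it is strictly increasing. This is a hypothetical
    stand-in for the paper's monotonic-polynomial parameterization."""
    assert c1 > 0 and c3 > 0
    return c0 + c1 * theta + c3 * theta ** 3

def grmp_probs(theta, c0, c1, c3, thresholds):
    """Graded-response category probabilities with the linear predictor
    a*(theta - b_k) replaced by m(theta) - d_k."""
    m = monotone_predictor(theta, c0, c1, c3)
    cum = ([1.0]
           + [1.0 / (1.0 + math.exp(-(m - d))) for d in thresholds]
           + [0.0])
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Illustrative values only: mild curvature (c3 = 0.3) bends the
# response functions away from the standard logistic shape.
probs = grmp_probs(0.5, 0.0, 1.0, 0.3, [-1.0, 0.0, 1.0])
```

Because m(θ) is shared across a given item's thresholds, monotonicity of the cumulative curves — and hence nonnegative category probabilities — is preserved no matter how the polynomial bends.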


Author(s):  
Cai Xu ◽  
Mark V. Schaverien ◽  
Joani M. Christensen ◽  
Chris J. Sidey-Gibbons

Abstract. Purpose: This study aimed to evaluate and improve the accuracy and efficiency of the QuickDASH for use in assessment of limb function in patients with upper extremity lymphedema using modern psychometric techniques.

Method: We conducted confirmatory factor analysis (CFA) and Mokken analysis to examine the assumption of unidimensionality for the IRT model on data from 285 patients who completed the QuickDASH, then fit the data to Samejima's graded response model (GRM), assessed the assumption of local independence of items, and calibrated the item responses for computerized adaptive testing (CAT) simulation.

Results: Initial CFA and Mokken analyses demonstrated good scalability of items and unidimensionality. However, the assumption of local independence of items was violated between items 9 (severity of pain) and 11 (sleeping difficulty due to pain) (Yen's Q3 = 0.46), and disordered thresholds were evident for item 5 (cutting food). After addressing these breaches of assumptions, the re-analyzed GRM with the remaining 10 items achieved an improved fit. Simulation of CAT administration demonstrated a high correlation between scores on the CAT and the QuickDASH (r = 0.98). Items 2 (doing heavy chores) and 8 (limiting work or daily activities) were the most frequently used. The correlation among factor scores derived from the QuickDASH version with 11 items and the Ultra-QuickDASH version with items 2 and 8 was as high as 0.91.

Conclusion: By administering just these two best-performing QuickDASH items, we can obtain estimates that are very similar to those obtained from the full-length QuickDASH without the need for CAT technology.
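The CAT simulation reported above rests on the standard adaptive rule of administering, at each step, the not-yet-administered item that is most informative at the current trait estimate. A minimal sketch using a binary 2PL information function as a stand-in (the study itself calibrates a GRM; all item parameters below are made up):

```python
import math

def item_info_2pl(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def next_item(theta_hat, item_bank, administered):
    """Standard maximum-information CAT selection: among items not yet
    administered, pick the one most informative at theta_hat."""
    candidates = [i for i in range(len(item_bank)) if i not in administered]
    return max(candidates,
               key=lambda i: item_info_2pl(theta_hat, *item_bank[i]))

# Hypothetical (a, b) pairs; a highly discriminating item located at
# the current estimate wins first.
bank = [(1.8, 0.0), (1.2, -1.0), (0.9, 2.0)]
choice = next_item(theta_hat=0.0, item_bank=bank, administered=set())
```

The study's finding that two items dominate the simulated CATs is exactly what this rule produces when a couple of items carry most of the information near the trait levels where respondents concentrate.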

