Item Response Theory, Computerized Adaptive Testing, and PROMIS: Assessment of Physical Function

2013 ◽  
Vol 41 (1) ◽  
pp. 153-158 ◽  
Author(s):  
James F. Fries ◽  
James Witter ◽  
Matthias Rose ◽  
David Cella ◽  
Dinesh Khanna ◽  
...  

Objective. Patient-reported outcome (PRO) questionnaires record health information directly from research participants because observers may not accurately represent the patient perspective. The Patient-Reported Outcomes Measurement Information System (PROMIS) is a US National Institutes of Health cooperative group charged with bringing PRO measurement to a new level of precision and standardization across diseases through item development and the use of item response theory (IRT). Methods. With IRT methods, improved items are calibrated on an underlying concept to form an item bank for a "domain" such as physical function (PF). The most informative items can be combined to construct efficient "instruments" such as 10-item or 20-item PF static forms. Each item is calibrated on the basis of the probability that a given person will respond at a given level, and on the ability of the item to discriminate people from one another. Tailored forms may cover any desired level of the domain being measured. Computerized adaptive testing (CAT) selects the best items to sharpen the estimate of a person's functional ability, based on responses to earlier questions. PROMIS item banks have been refined with experience from several thousand items and are calibrated on over 21,000 respondents. Results. In areas tested to date, PROMIS PF instruments are superior or equal to the Health Assessment Questionnaire and Medical Outcomes Study Short Form-36 Survey legacy instruments in clarity, translatability, patient importance, reliability, and sensitivity to change. Conclusion. Precise measures such as PROMIS efficiently incorporate patient self-report of health into research, potentially reducing research costs by lowering sample size requirements. The advent of routine IRT applications has the potential to transform PRO measurement.
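As a rough illustration of the mechanics this abstract describes, the sketch below implements a two-parameter logistic (2PL) IRT item and a maximum-information rule for choosing the next item, the core idea behind CAT. The item bank values and the next_item helper are invented for illustration and are not PROMIS calibrations.

```python
# A minimal sketch (not the PROMIS implementation) of how a CAT might pick
# the next physical-function item under a two-parameter logistic (2PL) IRT model.
# Item parameters (discrimination a, difficulty b) are made-up illustrative values.
import numpy as np

def p_endorse(theta, a, b):
    """2PL probability of endorsing an item at trait level theta."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = p_endorse(theta, a, b)
    return a**2 * p * (1.0 - p)

# Hypothetical item bank: (discrimination, difficulty) pairs.
bank = np.array([(1.8, -1.0), (2.2, 0.0), (1.5, 0.5), (2.0, 1.2), (1.2, -0.3)])

def next_item(theta_hat, administered):
    """Pick the unadministered item with maximum information at the current
    trait estimate -- the core of adaptive item selection."""
    info = [item_information(theta_hat, a, b) if i not in administered else -np.inf
            for i, (a, b) in enumerate(bank)]
    return int(np.argmax(info))

# After a few responses the provisional estimate might be theta_hat = 0.4:
print(next_item(theta_hat=0.4, administered={1}))
```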

Psychometrika ◽  
2021 ◽  
Author(s):  
Ron D. Hays ◽  
Karen L. Spritzer ◽  
Steven P. Reise

Abstract The reliable change index has been used to evaluate the significance of individual change in health-related quality of life. We estimate reliable change for two measures (physical function and emotional distress) in the Patient-Reported Outcomes Measurement Information System (PROMIS®) 29-item health-related quality of life measure (PROMIS-29 v2.1). Using two waves of data collected 3 months apart in a longitudinal observational study of chronic low back pain and chronic neck pain patients receiving chiropractic care, and simulations, we compare estimates of reliable change based on the fixed standard errors of classical test theory with those based on item response theory standard errors from the graded response model. We find that unless true change on the PROMIS physical function and emotional distress scales is substantial, classical test theory estimates of significant individual change are much more optimistic than estimates of change based on item response theory.
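A minimal sketch of the contrast being tested, assuming hypothetical scale values: the classical test theory reliable change index uses one fixed standard error of measurement for everyone, whereas an IRT-based index uses person-specific standard errors (e.g., from the graded response model's information at each score level). The functions and numbers below are illustrative, not the PROMIS-29 parameters.

```python
# Illustrative sketch of the contrast the abstract describes: a fixed classical
# test theory standard error of measurement versus IRT standard errors that can
# be larger at a given score level. All numbers are hypothetical.
import math

def rci_ctt(score1, score2, sd, reliability):
    """Reliable change index with a single fixed SEM for every respondent."""
    sem = sd * math.sqrt(1.0 - reliability)
    se_diff = sem * math.sqrt(2.0)
    return (score2 - score1) / se_diff

def rci_irt(theta1, theta2, se1, se2):
    """Reliable change index using person-specific IRT standard errors
    (e.g., from a graded response model's information at each theta)."""
    se_diff = math.sqrt(se1**2 + se2**2)
    return (theta2 - theta1) / se_diff

# A 7-point T-score change looks significant under CTT with reliability 0.95 ...
print(rci_ctt(score1=40, score2=47, sd=10, reliability=0.95))   # ~2.21 > 1.96
# ... but the equivalent 0.7 SD change may not be, once person-specific IRT
# standard errors at that score level are used (0.40 and 0.38 here).
print(rci_irt(theta1=-1.0, theta2=-0.3, se1=0.40, se2=0.38))    # ~1.27 < 1.96
```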


2017 ◽  
Vol 24 (5) ◽  
pp. 897-902 ◽  
Author(s):  
Scott Morris ◽  
Mike Bass ◽  
Mirinae Lee ◽  
Richard E Neapolitan

Abstract Objective: The Patient-Reported Outcomes Measurement Information System (PROMIS) initiative developed an array of patient-reported outcome (PRO) measures. To reduce the number of questions administered, PROMIS utilizes unidimensional item response theory and unidimensional computer adaptive testing (UCAT), which means a separate set of questions is administered for each measured trait. Multidimensional item response theory (MIRT) and multidimensional computer adaptive testing (MCAT) assess correlated traits simultaneously. The objective was to investigate the extent to which MCAT reduces patient burden relative to UCAT in the case of PROs. Methods: One MIRT model and 3 unidimensional item response theory models were developed using the related traits anxiety, depression, and anger. Using these models, MCAT and UCAT performance was compared on simulated individuals. Results: Surprisingly, the root mean squared error for both methods increased with the number of items. These results were driven by large errors for individuals with low trait levels. A second analysis focused on individuals aligned with the item content. For these individuals, both MCAT and UCAT accuracy improved with additional items. Furthermore, MCAT reduced the test length by 50%. Discussion: For the PROMIS Emotional Distress banks, neither UCAT nor MCAT provided accurate estimates for individuals at low trait levels. Because the items in these banks were designed to detect clinical levels of distress, there is little information for individuals with low trait values. However, trait estimates for individuals targeted by the banks were accurate, and MCAT asked substantially fewer questions. Conclusion: By reducing the number of items administered, MCAT can allow clinicians and researchers to assess a wider range of PROs with less patient burden.
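The kind of simulation study described here can be sketched compactly: generate responses from known trait values, estimate each trait, and score recovery with root mean squared error. The sketch below uses a unidimensional 2PL model with an EAP estimator and invented item parameters shifted toward the clinical range, so it mirrors the design only loosely and is not the authors' MIRT/MCAT code.

```python
# Compact sketch of trait-recovery simulation: simulate responses from "true"
# traits, estimate each trait with an EAP estimator under a 2PL model, and
# summarize accuracy with root mean squared error. Parameters are illustrative,
# not the PROMIS Emotional Distress banks.
import numpy as np

rng = np.random.default_rng(0)
a = rng.uniform(1.0, 2.5, size=20)         # discriminations
b = rng.uniform(0.0, 2.5, size=20)         # difficulties shifted toward the clinical range
theta_true = rng.normal(0.0, 1.0, size=500)

# Simulate dichotomous responses under the 2PL model.
p = 1.0 / (1.0 + np.exp(-a * (theta_true[:, None] - b[None, :])))
responses = (rng.uniform(size=p.shape) < p).astype(int)

# EAP estimation on a quadrature grid with a standard normal prior.
grid = np.linspace(-4, 4, 81)
prior = np.exp(-0.5 * grid**2)
pg = 1.0 / (1.0 + np.exp(-a[None, :] * (grid[:, None] - b[None, :])))  # grid x items

def eap(resp):
    like = np.prod(np.where(resp[None, :] == 1, pg, 1.0 - pg), axis=1)
    post = like * prior
    return np.sum(grid * post) / np.sum(post)

theta_hat = np.array([eap(r) for r in responses])
print("RMSE, all simulees:", np.sqrt(np.mean((theta_hat - theta_true) ** 2)))
# Restricting to simulees aligned with the item content (theta_true > 0 here)
# mirrors the paper's second analysis, where accuracy improves.
aligned = theta_true > 0
print("RMSE, aligned simulees:", np.sqrt(np.mean((theta_hat[aligned] - theta_true[aligned]) ** 2)))
```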


2017 ◽  
Vol 118 (5) ◽  
pp. 383-391 ◽  
Author(s):  
Josh B. Kazman ◽  
Jonathan M. Scott ◽  
Patricia A. Deuster

Abstract The limitations of self-reporting of dietary patterns are widely recognised as a major vulnerability of FFQ and of the dietary screeners/scales derived from FFQ. Such instruments can yield inconsistent results and produce questionable interpretations. The present article discusses the value of psychometric approaches and standards in addressing these drawbacks for instruments used to estimate dietary habits and nutrient intake. We argue that a FFQ or screener that treats diet as a 'latent construct' can be optimised for both internal consistency and the value of the research results. Latent constructs, a foundation for item response theory (IRT)-based scales (e.g. the Patient Reported Outcomes Measurement Information System), are typically introduced in the design stage of an instrument to elicit critical factors that cannot be observed or measured directly. We propose an iterative approach that uses such modelling to refine FFQ and similar instruments. To that end, we illustrate the benefits of psychometric modelling using items and data from a sample of 12 370 Soldiers who completed the 2012 US Army Global Assessment Tool (GAT). We used factor analysis to build the scale, incorporating five of the eleven survey items. An IRT-driven assessment of response category properties indicates likely problems in the ordering or wording of several response categories. Group comparisons, examined with differential item functioning (DIF), provided evidence of scale validity across each Army sub-population (sex, service component and officer status). Such an approach holds promise for future FFQ.
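Before the IRT modelling and DIF testing described above, a scale built this way would typically be screened for internal consistency. The sketch below computes Cronbach's alpha and item-rest correlations on simulated Likert-type responses; the five-item layout and all data are hypothetical stand-ins for the GAT items.

```python
# Internal-consistency checks that would precede IRT and DIF analyses of a
# latent-construct FFQ/screener. Data are simulated for illustration only.
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_persons x n_items) matrix of ordinal scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

def item_rest_correlations(items):
    """Correlation of each item with the sum of the remaining items; low values
    flag items that may not belong on the latent construct."""
    items = np.asarray(items, dtype=float)
    out = []
    for j in range(items.shape[1]):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out.append(np.corrcoef(items[:, j], rest)[0, 1])
    return np.array(out)

# Fake five-item, 1-5 Likert responses driven by one latent factor.
rng = np.random.default_rng(1)
latent = rng.normal(size=1000)
items = np.clip(np.round(3 + latent[:, None] + rng.normal(0, 0.8, (1000, 5))), 1, 5)
print(cronbach_alpha(items))
print(item_rest_correlations(items))
```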


PeerJ ◽  
2021 ◽  
Vol 9 ◽  
pp. e12100
Author(s):  
Marco Tullio Liuzza ◽  
Rocco Spagnuolo ◽  
Gabriella Antonucci ◽  
Rosa Daniela Grembiale ◽  
Cristina Cosco ◽  
...  

Background There has recently been growing interest in the role of inflammation in the development of anxiety in people with immune-mediated inflammatory diseases (IMID). Patient-reported outcome measures can facilitate the assessment of physical and psychological functioning. The National Institutes of Health (NIH)'s Patient-Reported Outcomes Measurement Information System (PROMIS®) is a set of Patient-Reported Outcomes (PROs) that cover physical health, mental health, and social health. PROMIS has been built through an Item Response Theory (IRT) approach, a model-based measurement framework in which trait level estimates depend both on persons' responses and on the properties of the items that were administered. The aim of this study is to test the psychometric properties of an Italian custom four-item Short Form of the PROMIS Anxiety item bank in a cohort of outpatients with IMIDs. Methods We selected four items from the Italian standard Short Form Anxiety 8a and administered them to consecutive outpatients affected by inflammatory bowel disease (n = 246), rheumatological (n = 100) and dermatological (n = 43) diseases, and to healthy volunteers (n = 280). Data were analyzed through an Item Response Theory (IRT) analysis to evaluate the psychometric properties of the Italian adaptation of the PROMIS anxiety short form. Results Taken together, Confirmatory Factor Analysis and Exploratory Factor Analysis suggest that the unidimensionality assumption of the instrument holds. The instrument has excellent reliability from a Classical Test Theory (CTT) standpoint (Cronbach's α = 0.93, McDonald's ω = 0.92). The 2PL Graded Response Model (GRM) showed better goodness of fit than the 1PL GRM, and the local independence assumption appears to be met overall. We did not find signs of differential item functioning (DIF) for age and gender, but evidence of uniform (though not non-uniform) DIF was found in three out of four items for the patient vs. control group. Analysis of the test reliability curve suggested that the instrument is most reliable at higher levels of the latent trait of anxiety. The groups of patients exhibited higher levels of anxiety than the control group (ps < 0.001, Bonferroni-corrected) and did not differ from one another (p = 1, Bonferroni-corrected). T-scores based on the estimated latent trait and raw scores were highly correlated (Pearson's r = 0.98) and led to similar results. Discussion The Italian custom four-item short form from the PROMIS Anxiety Short Form 8a shows acceptable psychometric properties from both a CTT and an IRT standpoint. The test reliability curve shows that this instrument is most informative for people with higher levels of anxiety, making it particularly suitable for clinical populations such as IMID patients.
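The test reliability curve finding can be illustrated with a small sketch: under a graded response model, item information functions are summed into test information and converted to a conditional reliability, which peaks where the item thresholds sit. The four items' discriminations and thresholds below are invented (deliberately placed at higher trait levels), not the calibrations of the Italian short form.

```python
# Hedged sketch of a GRM test information / conditional reliability curve.
# Item parameters are invented; higher thresholds make the form most reliable
# at higher anxiety levels, as the abstract reports.
import numpy as np

def grm_boundary_probs(theta, a, b):
    """Cumulative boundary probabilities P*_k for one GRM item; b holds the
    ordered category thresholds. P*_0 = 1 and P*_m = 0 are prepended/appended."""
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - np.asarray(b))))
    return np.concatenate(([1.0], p_star, [0.0]))

def grm_item_information(theta, a, b):
    """Expected Fisher information at theta: sum_k (dP_k/dtheta)^2 / P_k,
    with category probabilities P_k = P*_k - P*_{k+1}."""
    ps = grm_boundary_probs(theta, a, b)
    p_cat = ps[:-1] - ps[1:]
    dps = a * ps * (1.0 - ps)          # derivative of each boundary probability
    dp_cat = dps[:-1] - dps[1:]
    return np.sum(dp_cat**2 / np.maximum(p_cat, 1e-12))

# Hypothetical parameters for four anxiety items targeted at higher trait levels.
items = [
    (2.8, [0.2, 0.9, 1.6, 2.3]),
    (3.1, [0.0, 0.8, 1.5, 2.2]),
    (2.5, [0.4, 1.1, 1.8, 2.5]),
    (2.9, [0.1, 1.0, 1.7, 2.4]),
]

thetas = np.linspace(-3, 3, 121)
test_info = np.array([sum(grm_item_information(t, a, b) for a, b in items)
                      for t in thetas])
# One common approximation of conditional reliability when the trait variance is 1.
reliability = test_info / (test_info + 1.0)
peak = thetas[np.argmax(reliability)]
print(f"Peak reliability {reliability.max():.2f} at theta = {peak:.1f}")
```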


2012 ◽  
Vol 17 (1-2) ◽  
pp. 61-68
Author(s):  
Ryszard Gmoch

Abstract The paper presents new trends in computer-based testing of learners' achievements. It describes adaptive testing methods and the results of studies in this problem area, and discusses essential questions connected with Item Response Theory (IRT). The presented data indicate that computer-based adaptive testing should be popularized in Poland to the fullest possible extent.

