A-08 Examination of Three Functional Living Scales Using Item Response Theory Modeling in a Mixed Sample

2020 ◽  
Vol 35 (6) ◽  
p. 781
Author(s):  
W Goette ◽  
A Carlew ◽  
J Schaffert ◽  
H Rossetti ◽  
L Lacritz

Abstract
Objective: Characterize three functional living scales under item response theory (IRT) and examine these scales for evidence of differential item functioning (DIF) by participant and/or informant ethnicity and education.
Method: Baseline data from 3,155 participants [Mage = 70.59 (9.55); Medu = 13.3 (4.26); 61.72% female] enrolled in the Texas Alzheimer’s Research and Care Consortium with data from the Clinical Dementia Rating Scale (CDR; functional items), Physical Self-Maintenance Scale (PSMS), and Instrumental Activities of Daily Living Scale (IADL) were used. The sample was predominantly white (93.94%), and 35.97% identified as Hispanic. Graded response models fit all three tests best. DIF was examined by iteratively dropping item-by-item constraints and then testing model fit.
Results: The CDR demonstrated overall good item functioning, with clear separation between all of the rating categories for each item, while the PSMS and IADL did not, suggesting those item ratings should be reconsidered. DIF was observed by ethnicity (Hispanic vs. non-Hispanic) and education (separated into low, average, and high) for every item on all three scales (all ps ≤ .01 after adjustment for multiple comparisons). Participants of Hispanic ethnicity and those with higher education were more likely to be rated as more impaired.
Conclusions: Results suggest these three commonly used functional scales show DIF depending on the ethnicity and education of the patient. This finding has implications for understanding functional change in certain populations, particularly the potential for mischaracterization of impairment in minority samples. The finding that individuals with higher education tended to be rated as more functionally impaired warrants further investigation.
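The graded response model that best fit these scales can be sketched numerically. In this minimal Python illustration (all item parameters are invented for demonstration, not estimated from the study data), DIF corresponds to two groups having different item parameters at the same ability level:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: probability of each rating category
    for a respondent at ability level `theta`.

    a          -- item discrimination
    thresholds -- increasing category boundaries b_1 < ... < b_{K-1}
    """
    # Cumulative probability of responding in category k or higher:
    # P*(k) = 1 / (1 + exp(-a * (theta - b_k)))
    cum = [1.0] + [1.0 / (1.0 + math.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    # Category probabilities are differences of adjacent cumulative curves.
    return [cum[k] - cum[k + 1] for k in range(len(cum) - 1)]

# DIF in this framework: the same theta yields different category
# probabilities because the groups' item parameters differ.
probs_reference = grm_category_probs(0.0, a=1.5, thresholds=[-1.0, 0.0, 1.0])
probs_focal = grm_category_probs(0.0, a=1.5, thresholds=[-1.5, -0.5, 0.5])
```

Each returned list sums to 1 across the rating categories; comparing the two lists at a fixed theta shows how shifted thresholds (one possible form of DIF) change the expected rating.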

2011 ◽  
Vol 32 (5) ◽  
pp. 362-366 ◽  
Author(s):  
Tyler M. Miller ◽  
Steve Balsis ◽  
Deborah A. Lowe ◽  
Jared F. Benge ◽  
Rachelle S. Doody

2021 ◽  
Vol 8 (3) ◽  
pp. 672-695
Author(s):  
Thomas DeVaney

This article presents a discussion and illustration of Mokken scale analysis (MSA), a nonparametric form of item response theory (IRT), in relation to common IRT models such as Rasch and Guttman scaling. The procedure can be used for the dichotomous and ordinal polytomous data common in questionnaires. The assumptions of MSA are discussed, as well as the characteristics that differentiate a Mokken scale from a Guttman scale. MSA is illustrated using the mokken package in RStudio and a data set of over 3,340 responses to a modified version of the Statistical Anxiety Rating Scale. Issues addressed in the illustration include monotonicity, scalability, and invariant ordering. The R script for the illustration is included.
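The scalability idea at the heart of MSA can be made concrete with Loevinger's pairwise H coefficient for dichotomous items: H = 1 − F/E, where F is the observed count of Guttman errors for an item pair and E the count expected under independence. A minimal Python sketch (the toy response matrix is invented; in practice the R mokken package computes this, along with polytomous extensions):

```python
def pairwise_h(responses, i, j):
    """Loevinger's H for an item pair in a 0/1 response matrix.

    A Guttman error is endorsing the harder (less popular) item while
    failing the easier one; H = 1 - observed_errors / expected_errors,
    with the expectation taken under marginal independence.
    Assumes both items have nondegenerate popularity (0 < p < 1).
    """
    n = len(responses)
    p_i = sum(r[i] for r in responses) / n
    p_j = sum(r[j] for r in responses) / n
    # Order the pair so `easy` is the more popular item.
    easy, hard = (i, j) if p_i >= p_j else (j, i)
    p_easy, p_hard = max(p_i, p_j), min(p_i, p_j)
    observed = sum(1 for r in responses if r[easy] == 0 and r[hard] == 1)
    expected = n * (1.0 - p_easy) * p_hard
    return 1.0 - observed / expected

# A perfect Guttman pattern has no errors, so H = 1.
data = [[1, 1], [1, 0], [1, 0], [0, 0]]
```

Mokken's scaling criteria build on this quantity: item pairs (and whole scales) are retained only when H clears a chosen lower bound, conventionally 0.3.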


2011 ◽  
Vol 35 (8) ◽  
pp. 604-622 ◽  
Author(s):  
Hirotaka Fukuhara ◽  
Akihito Kamata

A differential item functioning (DIF) detection method for testlet-based data was proposed and evaluated in this study. The proposed DIF model is an extension of a bifactor multidimensional item response theory (MIRT) model for testlets. Unlike traditional item response theory (IRT) DIF models, the proposed model takes testlet effects into account, thus estimating DIF magnitude appropriately when a test is composed of testlets. A fully Bayesian estimation method was adopted for parameter estimation. The recovery of parameters was evaluated for the proposed DIF model. Simulation results revealed that the proposed bifactor MIRT DIF model produced better estimates of DIF magnitude and higher DIF detection rates than the traditional IRT DIF model for all simulation conditions. A real data analysis was also conducted by applying the proposed DIF model to a statewide reading assessment data set.
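The bifactor testlet structure described above can be sketched as an item response function with one general dimension and one testlet-specific dimension. A minimal Python illustration (parameters invented; the study itself uses fully Bayesian estimation, not shown here):

```python
import math

def bifactor_2pl(theta_gen, theta_testlet, a_gen, a_testlet, d):
    """Bifactor 2PL item response function: each item loads on the
    general ability and on the specific factor of its testlet."""
    logit = a_gen * theta_gen + a_testlet * theta_testlet + d
    return 1.0 / (1.0 + math.exp(-logit))

# With a zero testlet loading the model collapses to the ordinary
# unidimensional 2PL, which is what a traditional IRT DIF model fits.
# A nonzero loading shifts probabilities for respondents who share a
# testlet effect -- the nuisance variance the bifactor DIF model
# absorbs so that DIF magnitude is not confounded with it.
p_unidimensional = bifactor_2pl(0.5, 0.8, a_gen=1.2, a_testlet=0.0, d=0.0)
p_with_testlet = bifactor_2pl(0.5, 0.8, a_gen=1.2, a_testlet=0.7, d=0.0)
```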


Author(s):  
Alexandra Foubert-Samier ◽  
Anne Pavy-Le Traon ◽  
Tiphaine Saulnier ◽  
Mélanie Le-Goff ◽  
Margherita Fabbri ◽  
...  

2001 ◽  
Vol 27 (2) ◽  
Author(s):  
Pieter Schaap

The objective of this article is to present the results of an investigation into the item and test characteristics of two tests of the Potential Index Batteries (PIB) in terms of differential item functioning (DIF) and the effect thereof on the test scores of different race groups. The English Vocabulary (Index 12) and Spelling (Index 22) Tests of the PIB were analysed for white, black and coloured South Africans. Item response theory (IRT) methods were used to identify items which function differentially for the white, black and coloured race groups.

Summary (translated from Afrikaans): The purpose of this article is to report the results of an investigation into the item and test characteristics of two PIB (Potential Index Batteries) tests in terms of item bias and the influence it has on the test scores of race groups. The English Vocabulary (Index 12) and Spelling (Index 22) Tests of the Potential Index Batteries (PIB) were analysed for white, black and coloured South Africans. Item response theory (IRT) was used to identify items that can be regarded as biased (DIF) for the respective race groups.
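One common way to quantify the kind of between-group item differences screened for in such IRT DIF analyses is the area between group-specific item characteristic curves (ICCs). A minimal numerical sketch in Python (2PL parameters invented for illustration; the article does not specify this particular statistic):

```python
import math

def icc(theta, a, b):
    """2PL item characteristic curve: probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def dif_area(a_ref, b_ref, a_focal, b_focal, lo=-4.0, hi=4.0, steps=400):
    """Unsigned area between two groups' ICCs for the same item,
    approximated by a Riemann sum over a grid of ability values."""
    step = (hi - lo) / steps
    return sum(
        abs(icc(lo + k * step, a_ref, b_ref) - icc(lo + k * step, a_focal, b_focal)) * step
        for k in range(steps)
    )
```

An item with identical parameters in both groups has (numerically) zero area; a difficulty shift between groups produces a positive area that grows with the size of the shift, flagging the item as a DIF candidate.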

