Item Response Theory Analyses of Barkley’s Adult Attention-Deficit/Hyperactivity Disorder Rating Scales

2020 · Vol 35 (7) · pp. 1094-1108
Author(s): Morgan E Nitta, Brooke E Magnus, Paul S Marshall, James B Hoelzle

Abstract There are many challenges associated with the assessment and diagnosis of ADHD in adulthood. Utilizing the graded response model (GRM) from item response theory (IRT), a comprehensive item-level analysis of adult ADHD rating scales was conducted in a clinical population with two measures: Barkley’s Adult ADHD Rating Scale-IV, Self-Report of Current Symptoms (CSS), a self-report diagnostic checklist, and a similar self-report measure quantifying retrospective report of childhood symptoms, Barkley’s Adult ADHD Rating Scale-IV, Self-Report of Childhood Symptoms (BAARS-C). Differences in item functioning were also considered after identifying and excluding individuals with suspect effort. Items associated with symptoms of inattention (IA) and hyperactivity/impulsivity (H/I) are endorsed differently across the lifespan, and these data suggest that they vary in their relationship to the theoretical constructs of IA and H/I. Screening for sufficient effort did not meaningfully change item-level functioning. The application of IRT to direct item-to-symptom measures allows for a unique psychometric assessment of how the current DSM-5 symptoms represent the latent traits of IA and H/I. Meeting a threshold of five or more symptoms may be misleading; closer attention to specific symptoms in the context of the clinical interview and reported difficulties across domains may lead to more informed diagnosis.
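
The analysis described above relied on the graded response model. As a rough, hypothetical sketch (not the authors' actual code or data), a unidimensional GRM can be fit to polytomous symptom ratings in R with the mirt package, where responses is an assumed data frame of item scores:

```r
# Minimal sketch, assuming 'responses' holds one column per symptom item
# scored on an ordered 0-3 scale (hypothetical data, not the BAARS/CSS data).
library(mirt)

# Fit a unidimensional graded response model
grm_fit <- mirt(responses, model = 1, itemtype = "graded")

# Discrimination (a) and threshold (b) parameters in conventional IRT metric
coef(grm_fit, IRTpars = TRUE, simplify = TRUE)

# Item information curves: which symptom items are most informative, and
# where along the latent inattention/hyperactivity continuum
plot(grm_fit, type = "infotrace")
```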

2020 · Vol 10 (11) · pp. 173
Author(s): Paul J. Silvia, Rebekah M. Rodriguez

The Humor Styles Questionnaire (HSQ) is one of the most popular self-report scales in humor research. The present research conducted a forward-looking psychometric analysis grounded in Rasch and item response theory models, which have not been applied to the HSQ thus far. Regarding strengths, the analyses found very good evidence for reliability and dimensionality and essentially zero gender-based differential item functioning, indicating no gender bias in the items. Regarding opportunities for future development, the analyses suggested that (1) the seven-point rating scale performs poorly relative to a five-point scale; (2) the affiliative subscale is far too easy to endorse and much easier than the other subscales; (3) the four subscales show problematic variation in their readability and proportion of reverse-scored items; and (4) a handful of items with poor discrimination and high local dependence are easy targets for scale revision. Taken together, the findings suggest that the HSQ, as it nears the two-decade mark, has many strengths but would benefit from light remodeling.
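
The gender DIF analysis reported above can be approximated, purely as an illustrative sketch under assumed data (an hsq item matrix and a gender grouping vector, both hypothetical), by fitting a multiple-group IRT model and freeing item parameters one at a time:

```r
# Illustrative sketch only; 'hsq' (7-point item responses) and 'gender'
# are hypothetical objects, not the authors' data.
library(mirt)

# Multiple-group graded response model with item parameters constrained
# equal across gender; group means and variances are freely estimated.
mg_fit <- multipleGroup(hsq, model = 1, group = gender, itemtype = "graded",
                        invariance = c("slopes", "intercepts",
                                       "free_means", "free_var"))

# Free each item's slope and six thresholds in turn and test the change in
# fit; a significant result flags that item for differential item functioning.
dif_res <- DIF(mg_fit, which.par = c("a1", paste0("d", 1:6)),
               scheme = "drop", p.adjust = "BH")
dif_res
```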


2021 · Vol 8 (3) · pp. 672-695
Author(s): Thomas DeVaney

This article presents a discussion and illustration of Mokken scale analysis (MSA), a nonparametric form of item response theory (IRT), in relation to common IRT models such as Rasch and Guttman scaling. The procedure can be used for the dichotomous and ordinal polytomous data commonly collected with questionnaires. The assumptions of MSA are discussed, as well as characteristics that differentiate a Mokken scale from a Guttman scale. MSA is illustrated using the mokken package in RStudio and a data set that included over 3,340 responses to a modified version of the Statistical Anxiety Rating Scale. Issues addressed in the illustration include monotonicity, scalability, and invariant ordering. The R script for the illustration is included.
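
Although the article's own R script is not reproduced here, the three checks it names map onto standard mokken package calls. The following is a minimal sketch assuming a hypothetical matrix sars of ordinal item responses:

```r
# Minimal sketch, assuming 'sars' is a matrix of ordinal responses to the
# modified Statistical Anxiety Rating Scale (hypothetical object name).
library(mokken)

# Scalability: Loevinger's H coefficients for item pairs, items, and the
# total scale (H >= .30 is the conventional lower bound for a Mokken scale)
coefH(sars)

# Monotonicity: item step response functions should not decrease as the
# rest score increases
summary(check.monotonicity(sars))

# Invariant item ordering for polytomous items
summary(check.iio(sars))

# Automated item selection: partition items into Mokken scales
aisp(sars, lowerbound = 0.3)
```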


Author(s): Alexandra Foubert-Samier, Anne Pavy-Le Traon, Tiphaine Saulnier, Mélanie Le-Goff, Margherita Fabbri, ...

2020 · Vol 35 (6) · pp. 781-781
Author(s): W Goette, A Carlew, J Schaffert, H Rossetti, L Lacritz

Abstract Objective Characterize three functional living scales under item response theory and examine these scales for evidence of differential item functioning (DIF) by participant and/or informant ethnicity and education. Method Baseline data from 3155 participants [Mage = 70.59 (9.55); Medu = 13.3 (4.26); 61.72% female] enrolled in the Texas Alzheimer’s Research and Care Consortium with data from the Clinical Dementia Rating Scale (CDR; functional items), Physical Self-Maintenance Scale (PSMS), and Instrumental Activities of Daily Living Scale (IADL) were used. The sample was predominantly white (93.94%), and 35.97% identified as Hispanic. Graded response models fit all three tests best. DIF was examined by iteratively dropping item-by-item constraints and then testing model fit. Results The CDR demonstrated overall good item functioning with clear separation between all of the rating categories for each item, while the PSMS and IADL did not, suggesting their item ratings should be reconsidered. DIF was observed by ethnicity (Hispanic vs. non-Hispanic) and education (separated into low, average, and high) for every item on all three scales (all ps ≤ .01 after adjustment for multiple comparisons). Participants of Hispanic ethnicity and those with higher education were more likely to be rated as more impaired. Conclusions Results suggest these three commonly used functional scales show DIF depending on the ethnicity and education of the patient. This finding has implications for understanding functional change in certain populations, particularly the potential for mischaracterization of impairment in minority samples. The finding that individuals with higher education tended to be rated as more functionally impaired warrants further investigation.
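
The constraint-dropping procedure described in the Method can be sketched, again only as a hypothetical illustration (items and ethnicity are assumed object names, not the TARCC data), by comparing a fully constrained multiple-group model against one with a single item freed:

```r
# Hypothetical sketch of item-by-item constraint dropping; 'items' and
# 'ethnicity' are assumed objects, not the consortium data.
library(mirt)

# Fully constrained model: all item parameters equal across groups
constrained <- multipleGroup(items, model = 1, group = ethnicity,
                             itemtype = "graded",
                             invariance = c("slopes", "intercepts",
                                            "free_means", "free_var"))

# Drop the equality constraints for the first item only (all other items
# remain anchored across groups), then refit.
freed_item1 <- multipleGroup(items, model = 1, group = ethnicity,
                             itemtype = "graded",
                             invariance = c(colnames(items)[-1],
                                            "free_means", "free_var"))

# A significant likelihood-ratio test indicates DIF for the freed item
anova(constrained, freed_item1)
```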


2020 · Vol 35 (6) · pp. 790-790
Author(s): W Goette, A Carlew, J Schaffert, H Rossetti, L Lacritz

Abstract Objective Examine prediction of functional ability with neuropsychological tests using latent item response theory. Method The sample included 3155 individuals (Mage = 69.72, SD = 9.41; Median education = 13.15, SD = 4.40; white = 92.81%; female = 62.03%; MCI = 25.13%; Dementia = 28.87%) from the Texas Alzheimer’s Research and Care Consortium who completed functional and cognitive assessments [Mini Mental State Examination (MMSE), Logical Memory (LM), Visual Reproduction (VR), Controlled Oral Word Association Test (COWAT), Trail Making Test (TMT), Boston Naming Test, and Digit Span]. Functional measures [Clinical Dementia Rating Scale, Physical Self-Maintenance Scale, and Instrumental Activities of Daily Living] were combined into a single outcome variable using confirmatory factor analysis. Item response theory (IRT) was used to fit the data, and latent regression was used to predict the latent trait score from the neuropsychological data. Results All three functional scales loaded onto a single factor and demonstrated good construct coverage and measurement reliability (Supporting Figure). A graded response IRT model best fit the functional ability composite measure. MMSE (b = −1.08, p < .001), LM II (b = −0.58, p < .001), VR I and II (b = −0.09, p = .02 and b = −0.43, p < .001, respectively), COWAT (b = −0.10, p = .003), and TMT-B (b = −0.30, p < .001) all significantly predicted functional abilities, as did age (b = 0.61, p < .001) and education (b = 0.31, p < .001). Conclusions Global cognition, memory, and executive function tests predicted functional abilities, while attention and language tasks did not. These results suggest that certain neuropsychological tests meaningfully predict functional abilities in elderly cognitively normal and cognitively impaired individuals. Further research is needed to determine whether these cognitive domains are predictive of functional abilities in other clinical disorders.
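
The latent regression step can be illustrated, with heavy caveats (functional_items and the covariate names below are assumed, and the sketch skips the confirmatory factor analysis used to combine the three scales), via mirt's latent regression interface:

```r
# Hypothetical sketch of a latent-regression graded response model;
# 'functional_items' and 'covariates' (columns mmse, lm2, vr1, vr2, cowat,
# tmtb, age, educ) are assumed objects, not the consortium data.
library(mirt)

lr_fit <- mirt(functional_items, model = 1, itemtype = "graded",
               covdata = covariates,
               formula = ~ mmse + lm2 + vr1 + vr2 + cowat + tmtb + age + educ)

# The latent regression slopes are reported alongside the item parameters
coef(lr_fit, simplify = TRUE)
```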

