Computer Adaptive Testing for the Assessment of Anomia Severity

2021 ◽  
Vol 42 (03) ◽  
pp. 180-191 ◽  
Author(s):  
Gerasimos Fergadiotis ◽  
Marianne Casilio ◽  
William D. Hula ◽  
Alexander Swiderski

Abstract
Anomia assessment is a fundamental component of clinical practice and research inquiries involving individuals with aphasia, and confrontation naming tasks are among the most commonly used tools for quantifying anomia severity. While currently available confrontation naming tests possess many ideal properties, they are ultimately limited by the overarching psychometric framework they were developed within. Here, we discuss the challenges inherent to confrontation naming tests and present a modern alternative to test development called item response theory (IRT). Key concepts of IRT approaches are reviewed in relation to their relevance to aphasiology, highlighting the ability of IRT to create flexible and efficient tests that yield precise measurements of anomia severity. Empirical evidence from our research group on the application of IRT methods to a commonly used confrontation naming test is discussed, along with future avenues for test development.
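The core mechanic behind a computer adaptive test like the one this abstract describes is selecting, at each step, the item that is most informative at the examinee's current ability estimate. The sketch below is a minimal illustration of that selection rule under a two-parameter logistic (2PL) model, not the authors' implementation; the item bank and function names are hypothetical.

```python
import math

def p_correct(theta, a, b):
    """2PL probability of a correct response at ability theta,
    given item discrimination a and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item at theta: a^2 * P * (1 - P)."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, items, administered):
    """Maximum-information item selection: among unadministered items
    (each an (a, b) pair), pick the one most informative at theta."""
    candidates = [i for i in range(len(items)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *items[i]))
```

After each response, the ability estimate is updated and the rule is applied again, which is why adaptive tests can reach a target precision with far fewer items than a fixed-form test.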

2009 ◽  
Vol 15 (5) ◽  
pp. 758-768 ◽  
Author(s):  
OTTO PEDRAZA ◽  
NEILL R. GRAFF-RADFORD ◽  
GLENN E. SMITH ◽  
ROBERT J. IVNIK ◽  
FLOYD B. WILLIS ◽  
...  

Abstract
Scores on the Boston Naming Test (BNT) are frequently lower for African American adults than for Caucasian adults. Although demographically based norms can mitigate the impact of this discrepancy on the likelihood of erroneous diagnostic impressions, a growing consensus suggests that group norms do not sufficiently address or advance our understanding of the underlying psychometric and sociocultural factors that lead to between-group score discrepancies. Using item response theory and methods to detect differential item functioning (DIF), the current investigation moves beyond comparisons of the summed total score to examine whether the conditional probability of responding correctly to individual BNT items differs between African American and Caucasian adults. Participants included 670 adults age 52 and older who took part in Mayo’s Older Americans and Older African Americans Normative Studies. Under a two-parameter logistic item response theory framework and after correction for the false discovery rate, 12 items were shown to demonstrate DIF. Of these 12 items, 6 (“dominoes,” “escalator,” “muzzle,” “latch,” “tripod,” and “palette”) were also identified in additional analyses using hierarchical logistic regression models and represent the strongest evidence for race/ethnicity-based DIF. These findings afford a finer characterization of the psychometric properties of the BNT and expand our understanding of between-group performance. (JINS, 2009, 15, 758–768.)
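The logic of a DIF screen is to compare groups on individual items while holding overall ability constant. As a minimal stand-in for the hierarchical logistic-regression approach the authors used, the sketch below computes the Mantel-Haenszel common odds ratio across ability strata, a classic uniform-DIF statistic; the data layout is hypothetical.

```python
def mantel_haenszel_or(strata):
    """Mantel-Haenszel common odds ratio for one item across ability strata.

    Each stratum is a 2x2 table flattened as
    (ref_correct, ref_incorrect, focal_correct, focal_incorrect),
    where strata group examinees of comparable total score.
    Values near 1.0 suggest no uniform DIF; marked deviations flag the item.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den
```

Because examinees are matched on ability within each stratum, an odds ratio far from 1.0 indicates that the item behaves differently for the two groups beyond what overall performance explains, which is the pattern flagged for items such as "dominoes" and "escalator" above.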


2015 ◽  
Vol 58 (3) ◽  
pp. 865-877 ◽  
Author(s):  
Gerasimos Fergadiotis ◽  
Stacey Kellough ◽  
William D. Hula

Purpose: In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015).

Method: Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity.

Results: The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty.

Conclusions: Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.
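Under the 1-parameter logistic (Rasch) model the abstract finds adequate, a person's ability score and its standard error can be recovered from their item responses by maximum likelihood. The sketch below is a toy illustration of that estimation step, not the authors' software; note that pure maximum likelihood fails for all-correct or all-incorrect response patterns.

```python
import math

def rasch_p(theta, b):
    """Rasch (1PL) probability of a correct response at ability theta
    for an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_ability(responses, difficulties, iters=25):
    """Newton-Raphson maximum-likelihood ability estimate under the Rasch model.

    responses: 0/1 item scores; difficulties: matching difficulty estimates.
    Returns (theta, standard_error), where SE = 1 / sqrt(test information).
    Assumes a mixed response pattern (not all 0s or all 1s).
    """
    theta = 0.0
    for _ in range(iters):
        ps = [rasch_p(theta, b) for b in difficulties]
        grad = sum(x - p for x, p in zip(responses, ps))   # score function
        info = sum(p * (1.0 - p) for p in ps)              # test information
        theta += grad / info
    info = sum(rasch_p(theta, b) * (1.0 - rasch_p(theta, b)) for b in difficulties)
    return theta, info ** -0.5
```

The standard error falling out of the test information is what allows statements about score precision of the kind this abstract reports.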


2016 ◽  
Vol 29 (1) ◽  
Author(s):  
Cristian Zanon ◽  
Claudio S. Hutz ◽  
Hanwook Yoo ◽  
Ronald K. Hambleton

Assessment ◽  
2017 ◽  
Vol 25 (3) ◽  
pp. 360-373 ◽  
Author(s):  
Steve Balsis ◽  
Tabina K. Choudhury ◽  
Lisa Geraci ◽  
Jared F. Benge ◽  
Christopher J. Patrick

Alzheimer’s disease (AD) affects neurological, cognitive, and behavioral processes. Thus, to accurately assess this disease, researchers and clinicians need to combine and incorporate data across these domains. This presents not only distinct methodological and statistical challenges but also unique opportunities for the development and advancement of psychometric techniques. In this article, we describe relatively recent research using item response theory (IRT) that has been used to make progress in assessing the disease across its various symptomatic and pathological manifestations. We focus on applications of IRT to improve scoring, test development (including cross-validation and adaptation), and linking and calibration. We conclude by describing potential future multidimensional applications of IRT techniques that may improve the precision with which AD is measured.
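Linking and calibration, mentioned above, place parameter estimates from different test forms or samples onto a common scale. One simple classical approach (a sketch, not the specific method these authors review) is the mean/sigma transformation computed from items common to both forms; the variable names here are illustrative.

```python
import statistics

def mean_sigma_link(b_old, b_new):
    """Mean/sigma linking from anchor-item difficulties.

    b_old, b_new: difficulty estimates for the same anchor items on the
    old and new scales. Returns (slope, intercept) such that
    b_old ~= slope * b_new + intercept, i.e. the transformation that
    places new-form estimates on the old metric.
    """
    slope = statistics.stdev(b_old) / statistics.stdev(b_new)
    intercept = statistics.mean(b_old) - slope * statistics.mean(b_new)
    return slope, intercept
```

Once the slope and intercept are known, every new-form item difficulty (and ability estimate) can be rescaled, which is what makes scores from different instruments or administrations directly comparable.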


2011 ◽  
Vol 21 (19-20) ◽  
pp. 2736-2746 ◽  
Author(s):  
Roger Watson ◽  
L Andries van der Ark ◽  
Li-Chan Lin ◽  
Robert Fieo ◽  
Ian J Deary ◽  
...  
