Holonomic function of 2-parameter logistic model item response theory parameter estimation
Author(s): Kazuhisa Noguchi, Eisuke Ito

2015, Vol 58 (3), pp. 865-877
Author(s): Gerasimos Fergadiotis, Stacey Kellough, William D. Hula

Purpose: In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015).

Method: Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity.

Results: The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty.

Conclusions: Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and interpretation of anomia severity scores in the context of current word-finding models.
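The 1- and 2-parameter logistic models compared above have a standard closed form. The sketch below illustrates it; the function names and parameter values are illustrative and are not taken from the PNT analysis.

```python
import math

def p_correct_2pl(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability that a person with
    ability theta responds correctly to an item with discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_correct_1pl(theta: float, b: float) -> float:
    """1PL model: a special case of the 2PL in which every item shares
    a single common discrimination (fixed here at 1.0)."""
    return p_correct_2pl(theta, 1.0, b)
```

When ability equals difficulty (theta == b), the probability of a correct response is exactly 0.5, which is what makes the difficulty estimates directly comparable to person ability on the same scale.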


1991, Vol 8 (4), pp. 317-332
Author(s): Emily Cole, Terry M. Wood, John M. Dunn

Tests constructed using item response theory (IRT) produce invariant item and test parameters, making it possible to construct tests and test items useful over many populations. This paper heuristically and empirically compares the utility of classical test theory (CTT) and IRT using psychomotor skill data. Data from the Test of Gross Motor Development (TGMD) (Ulrich, 1985) were used to assess the feasibility of fitting existing IRT models to dichotomously scored psychomotor skill data. As expected, CTT and IRT analyses yielded parallel interpretations of item and subtest difficulty and discrimination. However, IRT provided significant additional analysis of the error associated with estimating examinee ability. The IRT two-parameter logistic model provided a superior model fit to the one-parameter logistic model. Although both TGMD subtests estimated ability for examinees of low to average ability, the object control subtest estimated examinee ability more precisely at higher difficulty levels than the locomotor subtest. The results suggest that IRT is particularly well suited to construct tests that can meet the challenging measurement demands of adapted physical education.
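The abstract's point that IRT quantifies estimation error as a function of ability level, so that one subtest can be more precise than another at high difficulty, follows from the item information function. A minimal sketch under the 2PL model (illustrative parameter values, not TGMD estimates):

```python
import math

def p_2pl(theta: float, a: float, b: float) -> float:
    """2PL probability of a correct response."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information of a 2PL item at ability theta:
    I(theta) = a^2 * P * (1 - P), maximal when theta == b."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def ability_se(theta: float, items: list[tuple[float, float]]) -> float:
    """Standard error of the ability estimate at theta for a test made
    of (discrimination, difficulty) pairs: 1 / sqrt(total information)."""
    total = sum(item_information(theta, a, b) for a, b in items)
    return 1.0 / math.sqrt(total)
```

Because each item contributes most information near its own difficulty, a subtest whose items cluster at high b values measures high-ability examinees with a smaller standard error, which is the pattern reported for the object control subtest.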

