Item Response Theory: A Useful Test Theory for Adapted Physical Education

1991 · Vol 8 (4) · pp. 317-332
Author(s): Emily Cole, Terry M. Wood, John M. Dunn

Tests constructed using item response theory (IRT) produce invariant item and test parameters, making it possible to construct tests and test items that remain useful across many populations. This paper heuristically and empirically compares the utility of classical test theory (CTT) and IRT using psychomotor skill data. Data from the Test of Gross Motor Development (TGMD; Ulrich, 1985) were used to assess the feasibility of fitting existing IRT models to dichotomously scored psychomotor skill data. As expected, CTT and IRT analyses yielded parallel interpretations of item and subtest difficulty and discrimination. However, IRT provided substantial additional information about the error associated with estimating examinee ability. The two-parameter logistic IRT model fit the data better than the one-parameter logistic model. Although both TGMD subtests estimated ability for examinees of low to average ability, the object control subtest estimated examinee ability more precisely at higher difficulty levels than the locomotor subtest. The results suggest that IRT is particularly well suited to constructing tests that can meet the challenging measurement demands of adapted physical education.
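To make the contrast concrete, the sketch below (in Python, with hypothetical item parameters rather than TGMD estimates) shows how the two-parameter logistic model yields a test information function and hence an ability-dependent standard error of measurement, which is the extra error analysis IRT adds over CTT; the one-parameter model is the special case in which all discriminations are constrained to be equal.

```python
import numpy as np

def p_2pl(theta, a, b):
    """Two-parameter logistic item response function: probability of a
    correct (mastered) response at ability theta, given item
    discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def test_information(theta, a, b):
    """Test information I(theta) = sum_i a_i^2 * P_i * (1 - P_i);
    the standard error of the ability estimate is 1 / sqrt(I(theta))."""
    p = p_2pl(theta[:, None], a, b)      # shape: (n_theta, n_items)
    return ((a ** 2) * p * (1.0 - p)).sum(axis=1)

# Hypothetical parameters for a short dichotomously scored subtest
a = np.array([1.2, 0.8, 1.5, 1.0])      # discriminations (a 1PL fit would fix these equal)
b = np.array([-1.0, -0.5, 0.0, 0.7])    # difficulties
theta = np.linspace(-3.0, 3.0, 7)
se = 1.0 / np.sqrt(test_information(theta, a, b))
for t, s in zip(theta, se):
    print(f"theta = {t:+.1f}  SE(theta) = {s:.2f}")
```

Comparing SE(theta) curves across subtests is essentially the precision comparison reported above: a subtest measures more precisely wherever its information is higher, which is how one subtest can outperform another at the upper end of the scale.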

Psychometrika · 2021
Author(s): Ron D. Hays, Karen L. Spritzer, Steven P. Reise

The reliable change index has been used to evaluate the significance of individual change in health-related quality of life. We estimate reliable change for two measures (physical function and emotional distress) in the Patient-Reported Outcomes Measurement Information System (PROMIS®) 29-item health-related quality of life measure (PROMIS-29 v2.1). Using two waves of data collected 3 months apart in a longitudinal observational study of chronic low back pain and chronic neck pain patients receiving chiropractic care, and simulations, we compare estimates of reliable change based on the fixed standard errors of classical test theory with estimates based on item response theory standard errors from the graded response model. We find that unless true change in the PROMIS physical function and emotional distress scales is substantial, classical test theory estimates of significant individual change are much more optimistic than estimates of change based on item response theory.
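A minimal sketch of the two reliable-change computations being compared, using hypothetical scores and standard errors rather than the PROMIS-29 data: under CTT the standard error of measurement is the same for everyone, whereas under IRT scoring (e.g., the graded response model) each person gets a conditional standard error, so the same raw change can cross the significance threshold under one approach but not the other.

```python
import numpy as np

def rci_ctt(x1, x2, sd_baseline, reliability):
    """Classical reliable change index with a fixed standard error of
    measurement: SEM = SD * sqrt(1 - reliability), SE_diff = SEM * sqrt(2)."""
    sem = sd_baseline * np.sqrt(1.0 - reliability)
    return (x2 - x1) / (sem * np.sqrt(2.0))

def rci_irt(theta1, theta2, se1, se2):
    """IRT-based reliable change: person-specific standard errors from
    the scoring model replace the single fixed SEM."""
    return (theta2 - theta1) / np.sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical T-score change on a PROMIS-like metric (mean 50, SD 10)
print(round(rci_ctt(x1=40.0, x2=45.0, sd_baseline=10.0, reliability=0.90), 2))
# Hypothetical theta estimates with their conditional standard errors
print(round(rci_irt(theta1=-1.0, theta2=-0.5, se1=0.35, se2=0.33), 2))
```

With the usual |RCI| ≥ 1.96 criterion, neither hypothetical change would count as reliable. Because IRT standard errors are typically larger toward the extremes of the score range, CTT-based indices tend to flag more changes as significant than IRT-based ones, which is the pattern the study reports.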


Author(s): Mehmet Barış Horzum, Gülden Kaya Uyanik

The aim of this study is to examine the validity and reliability of the Community of Inquiry Scale, which is widely used in online learning, by means of Item Response Theory (IRT). For this purpose, version 14 of the Community of Inquiry Scale was administered over the internet to 1,499 students enrolled in the online learning programs of a distance education center at a Turkish state university. The collected data were analyzed with a statistical software package in three stages: checking model assumptions, checking model-data fit, and item analysis. Item and test characteristics of the scale were examined by means of the Graded Response Model (GRM). After the IRT assumptions were tested on the data from the 1,499 participants, model-data fit was examined; given the affirmative results, all data were analyzed with the GRM. As a result, the Community of Inquiry Scale adapted to Turkish by Horzum (in press) was found to be reliable and valid under both Classical Test Theory and Item Response Theory.
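For reference, the graded response model scores each polytomous (Likert-type) item with one discrimination parameter and a set of ordered thresholds; category probabilities are differences of adjacent cumulative 2PL curves. The sketch below uses made-up parameters for a single 5-point item, not estimates from the Community of Inquiry data.

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Graded response model for one polytomous item: the cumulative
    probability of responding in category k or above is a 2PL curve with
    threshold b_k; category probabilities are differences of adjacent
    cumulative curves."""
    b = np.asarray(thresholds, dtype=float)        # ordered thresholds b_1 < ... < b_{K-1}
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= 1), ..., P(X >= K-1)
    cum = np.concatenate(([1.0], cum, [0.0]))      # P(X >= 0) = 1, P(X >= K) = 0
    return cum[:-1] - cum[1:]                      # P(X = k) for k = 0 .. K-1

# Hypothetical parameters for one 5-point Likert item
probs = grm_category_probs(theta=0.5, a=1.8, thresholds=[-1.5, -0.5, 0.4, 1.3])
print(probs, probs.sum())   # the five category probabilities sum to 1
```

In a full analysis these probabilities are combined across items and respondents to estimate the item parameters and the item and test information reported in the study; the snippet only illustrates the response function itself.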

