Nonparametric Item Response
Recently Published Documents

Total documents: 46 (five years: 7)
H-index: 13 (five years: 1)

2021 · pp. 001316442110142
Author(s): Carl F. Falk, Leah M. Feuerstahler

Large-scale assessments often use a computer adaptive test (CAT) to select items and to score respondents. Such tests often assume a parametric form for the relationship between item responses and the underlying construct. Although semi- and nonparametric response functions could be used instead, there is scant research on their performance in a CAT. In this work, we compare parametric response functions with those estimated using kernel smoothing and with a logistic function of a monotonic polynomial. Monotonic polynomial items can be used with traditional CAT item selection algorithms that rely on analytical derivatives. We compared these approaches in CAT simulations with a variety of item selection algorithms. Our simulations also varied features of the calibration sample and item pool: sample size, the presence of missing data, and the percentage of nonstandard items. In general, the results support the use of semi- and nonparametric item response functions in a CAT.
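To make the kernel-smoothing idea concrete, here is a minimal sketch (not the authors' implementation) of a Nadaraya-Watson estimate of an item response function. It assumes provisional ability estimates are already available and uses a Gaussian kernel with an arbitrary fixed bandwidth:

```python
import numpy as np

def kernel_smoothed_irf(theta, responses, grid, bandwidth=0.3):
    """Nadaraya-Watson kernel estimate of a single item's response function.

    theta:     provisional ability estimates, one per respondent
    responses: 0/1 responses to the item
    grid:      ability values at which to evaluate the estimated IRF
    """
    # Gaussian kernel weights: rows index grid points, columns respondents
    w = np.exp(-0.5 * ((grid[:, None] - theta[None, :]) / bandwidth) ** 2)
    # Weighted proportion correct at each grid point
    return (w @ responses) / w.sum(axis=1)

# Toy data: responses generated from a 2PL item, abilities standard normal
rng = np.random.default_rng(0)
theta = rng.standard_normal(2000)
p_true = 1 / (1 + np.exp(-1.2 * (theta - 0.5)))
y = rng.binomial(1, p_true)

grid = np.linspace(-3, 3, 61)
irf_hat = kernel_smoothed_irf(theta, y, grid)  # should track p_true's shape
```

In practice the provisional abilities would themselves be estimated (e.g., from rest scores) and the bandwidth chosen by cross-validation; both steps are glossed over here.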


2020
Author(s): Víthor Rosa Franco, Marie Wiberg

Nonparametric procedures are used to add flexibility to models. Three nonparametric item response models have been proposed but not directly compared: the kernel smoothing model (KS-IRT), the Davidian-curve model (DC-IRT), and the Bayesian semiparametric Rasch model (SP-Rasch). The main aim of the present study is to compare how well these procedures recover simulated true scores, using sum scores as benchmarks. The secondary aim is to compare their practical equivalence on real data. Overall, the results show that, apart from the DC-IRT, which performs the worst, all the models give results quite similar to those obtained with sum scores. These results are followed by a discussion of practical implications and recommendations for future studies.
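A hypothetical sketch of the benchmarking step described above: whatever model produces the latent score estimates, recovery can be summarized by correlating them with the simulated true scores, alongside the same correlation for raw sum scores. The function name and the choice of Spearman correlation are illustrative assumptions, not the authors' code:

```python
import numpy as np
from scipy.stats import spearmanr

def score_recovery(theta_hat, true_scores, y):
    """Rank correlations of model-based and sum scores with the
    simulated true scores (known only in a simulation study).

    theta_hat:   latent score estimates from any of the IRT models
    true_scores: simulated true scores
    y:           (n_persons, n_items) response matrix
    """
    sum_scores = y.sum(axis=1)
    rho_model, _ = spearmanr(theta_hat, true_scores)
    rho_sum, _ = spearmanr(sum_scores, true_scores)
    return {"model_vs_truth": rho_model, "sum_vs_truth": rho_sum}
```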


2020 · Vol 44 (5) · pp. 331-345
Author(s): Wenhao Wang, Neal Kingston

Previous studies indicated that the assumed logistic form of parametric item response functions (IRFs) is violated often enough to be worth checking. Pairing nonparametric item response theory (IRT) estimation methods with posterior predictive model checking yields significance probabilities of fit statistics in a Bayesian framework, accounts for uncertainty in parameter estimation, and can indicate the location and magnitude of misfit for an item. The purpose of this study is to evaluate the Bayesian nonparametric method for assessing the IRF fit of parametric IRT models in mixed-format tests and to compare it with the existing bootstrapping nonparametric method under various conditions. The simulation results show that, relative to the bootstrapping method, the Bayesian nonparametric method detects misfit items with higher power and lower Type I error rates when the sample size is large, and with lower Type I error rates in conditions with nonmonotonic items. In the real-data study, several dichotomous and polytomous misfit items were identified, along with the location and magnitude of their misfit.
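The core of posterior predictive model checking is easy to sketch. The fragment below (a sketch under assumed inputs, not the study's code) computes a posterior predictive p-value (PPP) for one item under a 2PL model: for each posterior draw, an item-fit statistic is computed on the observed data and on a replicated data set, and the PPP-value is the proportion of draws in which the replicated statistic is at least as large as the observed one:

```python
import numpy as np

def ppmc_item_fit(y_item, theta_draws, a_draws, b_draws, seed=0):
    """Posterior predictive p-value for one item under a 2PL model.

    y_item:      observed 0/1 responses to the item (n_persons,)
    theta_draws: posterior draws of abilities (n_draws, n_persons)
    a_draws, b_draws: posterior draws of the item's parameters (n_draws,)
    """
    rng = np.random.default_rng(seed)
    obs_stats, rep_stats = [], []
    for theta, a, b in zip(theta_draws, a_draws, b_draws):
        p = 1 / (1 + np.exp(-a * (theta - b)))  # model-implied probabilities
        y_rep = rng.binomial(1, p)              # replicated responses
        # Discrepancy: mean squared residual between responses and model
        obs_stats.append(np.mean((y_item - p) ** 2))
        rep_stats.append(np.mean((y_rep - p) ** 2))
    return np.mean(np.array(rep_stats) >= np.array(obs_stats))
```

PPP-values near .5 indicate adequate fit, while values near 0 or 1 flag misfit; the mean squared residual here stands in for whatever fit statistic the study actually used.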


2019 · Vol 28 (3S) · pp. 806-809
Author(s): Christy Cassarly, Lois J. Matthews, Annie N. Simpson, Judy R. Dubno

Purpose: The purpose of this report was to demonstrate the value of incorporating nonparametric item response theory in the development and refinement of patient-reported outcome measures for hearing.

Conclusions: Nonparametric item response theory can be useful in the development and refinement of patient-reported outcome measures for hearing. These methods are particularly useful as an alternative to exploratory factor analysis for determining the number of underlying abilities or traits represented by a scale when the items have ordered-categorical responses.
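Nonparametric IRT scale analysis of this kind is typically done with Mokken scaling, whose central quantity is Loevinger's scalability coefficient H. Below is a minimal sketch for dichotomous items only; the hearing measures described above use ordered-categorical items, for which H generalizes (the R package mokken implements that case):

```python
import numpy as np

def loevinger_H(y):
    """Scale-level Loevinger H for a 0/1 response matrix
    (rows = respondents, columns = items)."""
    p = y.mean(axis=0)
    # Population covariances, so they match the p-based maximum below
    cov = np.cov(y, rowvar=False, bias=True)
    num = den = 0.0
    k = y.shape[1]
    for i in range(k):
        for j in range(i + 1, k):
            num += cov[i, j]
            # Largest covariance two Bernoulli items can have
            # given their marginal proportions correct
            den += min(p[i], p[j]) - p[i] * p[j]
    return num / den
```

H compares the observed item covariances with their maximum given the item marginals; a value of at least .3 is the conventional lower bound for treating a set of items as a Mokken scale.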

