Effectiveness of Item Response Theory (IRT) Proficiency Estimation Methods Under Adaptive Multistage Testing

2015, Vol 2015 (1), pp. 1-19
Author(s): Sooyeon Kim, Tim Moses, Hanwook Henry Yoo


2017, Vol 43 (1), pp. 116-129
Author(s): Ji Seung Yang, Xiaying Zheng

The purpose of this article is to introduce and review the capability and performance of the Stata item response theory (irt) package, available since Stata version 14 (2015). Using a simulated data set and a publicly available item response data set extracted from the Programme for International Student Assessment (PISA), we review the irt package from applied and methodological researchers’ perspectives. After discussing the item response models and estimation methods implemented in the package, we demonstrate the accuracy of its estimates compared with results from other commonly used software packages. Additional features for differential item functioning analysis, scoring, and graph generation are also reviewed.
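
Since the review relies in part on a simulated item response data set, a brief sketch of how such data might be generated can make the setup concrete. The Python snippet below is not from the article; the sample size, parameter ranges, variable names, and output file are illustrative assumptions. It simulates binary responses under a two-parameter logistic (2PL) model and writes them to a CSV file that could, for example, be imported into Stata and fit with the package's irt 2pl command.

```python
# A minimal sketch (not from the article): simulate binary responses under a
# two-parameter logistic (2PL) model. All parameter values below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2015)

n_persons, n_items = 2000, 10
theta = rng.standard_normal(n_persons)        # latent abilities ~ N(0, 1)
a = rng.uniform(0.8, 2.0, size=n_items)       # discrimination parameters
b = rng.uniform(-2.0, 2.0, size=n_items)      # difficulty parameters

# P(X_ij = 1 | theta_i) = logistic(a_j * (theta_i - b_j))
logits = a * (theta[:, None] - b)
prob = 1.0 / (1.0 + np.exp(-logits))
responses = rng.binomial(1, prob)             # n_persons x n_items matrix of 0/1

# Save as CSV; in Stata one could then run, e.g., import delimited followed by irt 2pl item*
np.savetxt("sim_2pl.csv", responses, fmt="%d", delimiter=",",
           header=",".join(f"item{j+1}" for j in range(n_items)), comments="")
```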


1998, Vol 23 (3), pp. 236-243
Author(s): Eric T. Bradlow, Neal Thomas

Examinations that permit students to choose a subset of the items are popular despite the possibility that, as a result of their choices, students may take examinations of varying difficulty. We provide a set of conditions for the validity of inference for Item Response Theory (IRT) models applied to data collected from choice-based examinations. Valid likelihood and Bayesian inference using standard estimation methods require (except in extraordinary circumstances) that there is no dependence, after conditioning on the observed item responses, between the examinees’ choices and their (potential but unobserved) responses to omitted items, as well as their latent abilities. These independence assumptions are typical of those required in much more general settings. Common low-dimensional IRT models estimated by standard methods, though potentially useful tools for educational data, do not resolve the difficult problems posed by choice-based data.
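
In notation introduced here for illustration (the article's own notation may differ), let C denote an examinee's item choices, Y^{obs} the observed responses, Y^{mis} the potential but unobserved responses to omitted items, and theta the latent ability. The independence condition described above can then be written schematically as:

```latex
% Schematic statement of the conditional-independence condition described above;
% the symbols C, Y^{obs}, Y^{mis}, and \theta are introduced here for illustration.
\[
  C \;\perp\; \bigl(Y^{\mathrm{mis}}, \theta\bigr) \;\Bigm|\; Y^{\mathrm{obs}}
  \qquad\Longleftrightarrow\qquad
  p\bigl(C \mid Y^{\mathrm{obs}}, Y^{\mathrm{mis}}, \theta\bigr)
  = p\bigl(C \mid Y^{\mathrm{obs}}\bigr).
\]
```

When this holds, standard likelihood and Bayesian estimation that conditions only on the observed responses remains valid, as described above.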


2021
Author(s): Kazuhiro Yamaguchi

This research reviewed recent developments in parameter estimation methods for item response theory models. Various new methods were introduced to manage the computational burden of item factor analysis and multidimensional item response models with high-dimensional latent factors. These estimation methods employ Monte Carlo integration, approximations to the marginal likelihood, new optimization methods, and techniques drawn from machine learning. Theoretically, a new type of asymptotic setting was considered in which both the sample size and the number of items tend to infinity. Several methods were classified as falling outside the traditional maximum likelihood and Bayesian frameworks. Interval estimation methods for individual latent traits were also proposed, and they provide highly accurate intervals.
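
To make one of these ideas concrete, the sketch below illustrates what a Monte Carlo approximation of the marginal likelihood looks like for a simple 2PL model. It is a generic Python illustration, not an implementation of any specific method from the reviewed literature; the function name and all defaults are assumptions.

```python
# A minimal sketch (assumptions of this example, not of any reviewed method):
# approximate the marginal log-likelihood of a 2PL model by plain Monte Carlo
# integration over a standard-normal latent trait, in place of fixed quadrature.
import numpy as np

def mc_marginal_loglik(responses, a, b, n_draws=5000, seed=0):
    """responses: (n_persons, n_items) 0/1 array; a, b: item parameter arrays."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_draws)          # draws from the N(0, 1) prior

    # P(X_j = 1 | theta) for every draw and item: shape (n_draws, n_items)
    p = 1.0 / (1.0 + np.exp(-a * (theta[:, None] - b)))

    # log-likelihood of each person's response pattern at each draw: (n_persons, n_draws)
    log_lik = responses @ np.log(p).T + (1 - responses) @ np.log1p(-p).T

    # log of the Monte Carlo average over draws, computed stably per person
    m = log_lik.max(axis=1, keepdims=True)
    log_marg = m.squeeze() + np.log(np.exp(log_lik - m).mean(axis=1))
    return log_marg.sum()
```

With the item parameters a and b treated as the quantities to be estimated, maximizing an approximated marginal log-likelihood of this kind is the basic idea behind marginal maximum likelihood estimation; the quadrature-free Monte Carlo average becomes attractive when the latent trait is high dimensional.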

