Recent developments of parameter estimation methods in item response theory models

2021
Author(s):  
Kazuhiro Yamaguchi

This research reviewed recent developments in parameter estimation methods for item response theory models. Various new methods were introduced to manage the computational burden of item factor analysis and multidimensional item response models, which involve high-dimensional latent factors. These estimation methods draw on Monte Carlo integration, approximations to the marginal likelihood, new optimization algorithms, and techniques from the machine learning field. Theoretically, a new type of asymptotic setting, in which both the sample size and the number of items grow without bound, was considered. Several methods that fall outside the traditional maximum likelihood and Bayesian frameworks were also classified. Newly developed interval estimation methods for individual latent traits were reviewed as well; these methods provide highly accurate intervals.
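The review mentions Monte Carlo integration and marginal-likelihood approximation only in passing; as a hedged illustration (not taken from the review itself), the sketch below computes the marginal log-likelihood of a two-parameter logistic (2PL) model with a standard normal latent trait, once by Gauss-Hermite quadrature and once by plain Monte Carlo. All item parameters and response patterns are invented for the example.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss  # probabilists' Hermite rule

def irf_2pl(theta, a, b):
    """2PL item response function P(X = 1 | theta)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def marginal_loglik_quadrature(responses, a, b, n_nodes=41):
    """Marginal log-likelihood with theta ~ N(0, 1) integrated out
    by Gauss-Hermite quadrature."""
    nodes, weights = hermegauss(n_nodes)      # weight function exp(-x^2 / 2)
    weights = weights / np.sqrt(2 * np.pi)    # normalize to the N(0, 1) density
    p = irf_2pl(nodes[None, :], a[:, None], b[:, None])  # items x nodes
    total = 0.0
    for x in responses:                       # one 0/1 vector per examinee
        lik_at_nodes = np.prod(np.where(x[:, None] == 1, p, 1 - p), axis=0)
        total += np.log(lik_at_nodes @ weights)
    return total

def marginal_loglik_montecarlo(responses, a, b, n_draws=200_000, seed=0):
    """Same integral approximated by plain Monte Carlo over theta draws."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_draws)
    p = irf_2pl(theta[None, :], a[:, None], b[:, None])
    total = 0.0
    for x in responses:
        lik = np.prod(np.where(x[:, None] == 1, p, 1 - p), axis=0)
        total += np.log(lik.mean())
    return total

a = np.array([1.0, 1.5])
b = np.array([0.0, 0.5])
responses = np.array([[1, 0], [1, 1], [0, 0]])
print(marginal_loglik_quadrature(responses, a, b))
print(marginal_loglik_montecarlo(responses, a, b))
```

With a unidimensional trait, quadrature is fast and accurate; the Monte Carlo variant is the one that scales to the high-dimensional factors the review discusses, which is where the computational burden arises.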

1998
Vol 23 (3)
pp. 236-243
Author(s):  
Eric T. Bradlow
Neal Thomas

Examinations that permit students to choose a subset of the items are popular despite the potential that students may take examinations of varying difficulty as a result of their choices. We provide a set of conditions for the validity of inference for Item Response Theory (IRT) models applied to data collected from choice-based examinations. Valid likelihood and Bayesian inference using standard estimation methods requires (except in extraordinary circumstances) that, after conditioning on the observed item responses, there be no dependence between the examinees' choices and either their (potential but unobserved) responses to omitted items or their latent abilities. These independence assumptions are typical of those required in much more general settings. Common low-dimensional IRT models estimated by standard methods, though potentially useful tools for educational data, do not resolve the difficult problems posed by choice-based data.


2014
Vol 22 (2)
pp. 323-341
Author(s):  
Dheeraj Raju
Xiaogang Su
Patricia A. Patrician

Background and Purpose: The purpose of this article is to introduce different types of item response theory models and to demonstrate their usefulness by evaluating the Practice Environment Scale. Methods: Item response theory models, including the constrained and unconstrained graded response models, the partial credit model, the Rasch model, and the one-parameter logistic model, are demonstrated. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) are used as model selection criteria. Results: The unconstrained graded response and partial credit models fit the data best. Almost all items in the instrument performed well. Conclusions: Although most of the items strongly measure the construct, a few items could be eliminated without substantially altering the instrument. The analysis revealed that the instrument may function differently when administered to different unit types.
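The AIC/BIC model comparison described above can be sketched in a few lines. This is a generic illustration, not the authors' analysis: the model names echo the abstract, but the log-likelihoods, parameter counts, and sample size are hypothetical placeholders.

```python
import numpy as np

def aic(loglik, n_params):
    """Akaike information criterion: smaller is better."""
    return -2.0 * loglik + 2.0 * n_params

def bic(loglik, n_params, n_obs):
    """Bayesian information criterion: penalizes extra parameters
    more heavily as the sample size grows."""
    return -2.0 * loglik + n_params * np.log(n_obs)

# Hypothetical fits: (model name, maximized log-likelihood, parameter count)
fits = [
    ("Rasch",           -5210.4, 31),
    ("Partial credit",  -5102.7, 93),
    ("Graded response", -5089.3, 124),
]
n_obs = 500  # hypothetical number of respondents
for name, ll, k in fits:
    print(f"{name:16s} AIC={aic(ll, k):9.1f}  BIC={bic(ll, k, n_obs):9.1f}")
```

Because BIC's penalty is `k * log(n)` rather than `2k`, it can prefer a more parsimonious model (e.g., the partial credit model) even when AIC favors the richer graded response model, which is consistent with reporting both indices as the abstract does.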


2017
Vol 78 (3)
pp. 517-529
Author(s):  
Yong Luo

Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and full-information estimation methods.

