The Impact of Markov Chain Convergence on Estimation of Mixture IRT Model Parameters

2020, Vol. 80(5), pp. 975-994
Author(s): Yoonsun Jang, Allan S. Cohen

A nonconverged Markov chain can potentially lead to invalid inferences about model parameters. The purpose of this study was to assess the effect of a nonconverged Markov chain on the estimation of parameters for mixture item response theory models using a Markov chain Monte Carlo algorithm. A simulation study was conducted to investigate the accuracy of model parameters estimated under different degrees of convergence. Results indicated that the accuracy of the estimated model parameters for the mixture item response theory models decreased as the number of iterations of the Markov chain decreased. In particular, increasing the number of burn-in iterations resulted in more accurate estimation of mixture IRT model parameters. In addition, the different methods for monitoring convergence of a Markov chain indicated different degrees of convergence despite almost identical accuracy of estimation.
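The convergence monitoring and burn-in discussed above can be illustrated with a short sketch. The Python snippet below computes the Gelman-Rubin potential scale reduction factor (R-hat) for a single parameter from several chains after discarding a burn-in period; the chain values are simulated and purely illustrative, not the mixture IRT posteriors from the study.

```python
import numpy as np

def gelman_rubin_rhat(chains):
    """Potential scale reduction factor (R-hat) for one parameter.

    chains: 2-D array of shape (n_chains, n_iterations), post burn-in draws.
    Values close to 1.0 suggest the chains have converged.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape                      # m chains, n iterations each
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)          # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()    # within-chain variance
    var_hat = (n - 1) / n * W + B / n        # pooled posterior variance estimate
    return np.sqrt(var_hat / W)

# Illustrative use: discard a burn-in period before computing R-hat.
rng = np.random.default_rng(0)
draws = rng.normal(size=(4, 2000))           # 4 hypothetical chains of draws
burn_in = 500
print(gelman_rubin_rhat(draws[:, burn_in:]))
```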

2005, Vol. 30(2), pp. 189-212
Author(s): Jean-Paul Fox

The randomized response (RR) technique is often used to obtain answers to sensitive questions. Because direct questioning leads to biased results, a new method is developed to measure latent variables using the RR technique. Within the RR technique, the probability of the true response is modeled by an item response theory (IRT) model, and the technique links the observed item response to the true item response. Attitudes can thus be measured without knowing the true individual answers. This approach also makes a hierarchical analysis with explanatory variables possible, given the observed RR data. All model parameters can be estimated simultaneously using Markov chain Monte Carlo. The randomized item response technique was applied in a study on cheating behavior of students at a Dutch university. In this study, it is of interest whether students' cheating behavior differs across studies and whether there are indicators that can explain differences in cheating behavior.
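A minimal sketch of how a forced-response RR design can link the observed answer to an IRT-modeled true answer is given below. The Rasch form and the randomization probabilities (p_truth, p_forced_yes) are illustrative assumptions, not necessarily the exact specification used in the article.

```python
import numpy as np

def rasch_prob(theta, b):
    """True-response probability under a Rasch (1PL) IRT model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def rr_observed_prob(theta, b, p_truth=0.8, p_forced_yes=0.5):
    """Forced-response randomized response link.

    With probability p_truth the respondent answers truthfully;
    otherwise a 'yes' is forced with probability p_forced_yes.
    The observed 'yes' probability therefore mixes the IRT-modeled
    true response with the known randomization probabilities.
    """
    return p_truth * rasch_prob(theta, b) + (1 - p_truth) * p_forced_yes

# Example: ability 0.5, item difficulty 0.0
print(rr_observed_prob(0.5, 0.0))
```

Because the randomization probabilities are known by design, the true-response model remains identified even though individual answers stay private.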


2018, Vol. 29(1), pp. 35-44
Author(s): Nell Sedransk

This article is about FMCSA data and its analysis. The article responds to the two-part question: How does an Item Response Theory (IRT) model work differently . . . or better than any other model? The response to the first part is a careful, completely non-technical exposition of the fundamentals for IRT models. It differentiates IRT models from other models by providing the rationale underlying IRT modeling and by using graphs to illustrate two key properties for data items. The response to the second part of the question about superiority of an IRT model is, “it depends.” For FMCSA data, serious challenges arise from complexity of the data and from heterogeneity of the carrier industry. Questions are posed that will need to be addressed to determine the success of the actual model developed and of the scoring system.
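For readers who prefer a formula to prose, the two item properties typically illustrated graphically in such expositions, difficulty and discrimination, can be sketched with a two-parameter logistic item characteristic curve; the example below is generic and not the model developed for the FMCSA data.

```python
import numpy as np

def icc_2pl(theta, a, b):
    """Item characteristic curve for a two-parameter logistic (2PL) IRT model.

    theta: latent ability; a: item discrimination; b: item difficulty.
    Returns the probability of a correct/positive response.
    """
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Two hypothetical items: same difficulty, different discrimination.
abilities = np.linspace(-3, 3, 7)
print(icc_2pl(abilities, a=1.5, b=0.0))   # steeper curve, more discriminating item
print(icc_2pl(abilities, a=0.5, b=0.0))   # flatter curve, less discriminating item
```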


2015
Author(s): Ting Wang

Measurement invariance is a fundamental assumption in item response theory models, where the relationship between a latent construct (ability) and observed item responses is of interest. Violation of this assumption can lead to misinterpretation of the scale or systematic bias against certain groups of people. While a number of methods have been proposed to detect measurement invariance violations, they all require that the problematic model parameters and the respondent grouping information be defined in advance. However, this "locating" information is typically unknown in practice. As an alternative, this dissertation focuses on a family of recently proposed tests based on stochastic processes of casewise derivatives of the likelihood function (i.e., scores). These score-based tests only require estimation of the null model (under which the measurement invariance assumption holds), and they have been shown to identify problematic subgroups of respondents and model parameters in a factor-analytic, continuous-data context. In this dissertation, I aim to generalize these tests to item response theory models for categorical data. The tests' theoretical background and implementation are detailed, their ability to identify problematic subgroups and model parameters is studied via simulation, and an empirical example involving the tests is provided. Finally, potential applications and future development are discussed.
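The idea behind the score-based tests can be sketched with a toy example: casewise score (gradient) contributions are evaluated at the null-model estimate, ordered along a covariate, and cumulated; large excursions of the scaled process hint at parameter instability, i.e., a measurement invariance violation. The snippet below uses a Gaussian mean as a stand-in for an IRT parameter, so the scaling and the informal threshold are purely illustrative.

```python
import numpy as np

def cumulative_score_process(scores):
    """Scaled cumulative sum of casewise score (gradient) contributions.

    scores: per-person score contributions evaluated at the null-model
    estimate (they sum to approximately zero by construction).
    """
    scores = np.asarray(scores, dtype=float)
    n = scores.shape[0]
    scale = scores.std(ddof=0) * np.sqrt(n)   # rough information-based scaling
    return np.cumsum(scores) / scale

# Toy example: a mean parameter that shifts halfway through the ordered sample.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1, 200), rng.normal(0.8, 1, 200)])
scores = x - x.mean()                          # casewise score for a Gaussian mean
process = cumulative_score_process(scores)
print(np.abs(process).max())                   # a large maximum suggests a violation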


2014, Vol. 22(2), pp. 323-341
Author(s): Dheeraj Raju, Xiaogang Su, Patricia A. Patrician

Background and Purpose: The purpose of this article is to introduce different types of item response theory models and to demonstrate their usefulness by evaluating the Practice Environment Scale. Methods: Item response theory models such as the constrained and unconstrained graded response models, the partial credit model, the Rasch model, and the one-parameter logistic model are demonstrated. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) are used as model selection criteria. Results: The unconstrained graded response and partial credit models showed the best fit to the data. Almost all items in the instrument performed well. Conclusions: Although most of the items strongly measure the construct, a few items could be eliminated without substantially altering the instrument. The analysis revealed that the instrument may function differently when administered to different unit types.
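The AIC and BIC comparison used for model selection reduces to two simple formulas once each model's maximized log-likelihood and parameter count are known. The log-likelihood values, parameter counts, and sample size below are hypothetical placeholders, not the Practice Environment Scale results.

```python
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike information criterion: smaller values indicate a better trade-off."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion: penalizes extra parameters more heavily."""
    return n_params * np.log(n_obs) - 2 * log_likelihood

# Hypothetical fitted log-likelihoods for two competing IRT models.
models = {
    "constrained GRM":   {"loglik": -10250.0, "k": 32},
    "unconstrained GRM": {"loglik": -10180.0, "k": 62},
}
n_respondents = 1200
for name, m in models.items():
    print(name,
          round(aic(m["loglik"], m["k"]), 1),
          round(bic(m["loglik"], m["k"], n_respondents), 1))
```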


2017, Vol. 6(4), pp. 113
Author(s): Esin Yilmaz Kogar, Hülya Kelecioglu

The purpose of this research is first to estimate the item and ability parameters, and the standard errors of those parameters, obtained from unidimensional item response theory (UIRT), bifactor (BIF), and testlet response theory (TRT) models in tests containing testlets, as the number of testlets, the number of independent items, and the sample size change, and then to compare the results. The mathematics test in PISA 2012 was employed as the data collection tool, and 36 items were used to constitute six data sets containing different numbers of testlets and independent items. From these data sets, three sample sizes of 250, 500, and 1,000 persons were then selected randomly. The findings showed that the lowest mean error values were generally those obtained from UIRT, and that TRT yielded a lower mean estimation error than BIF. Under all conditions, models that account for local dependency provided better model-data fit than UIRT; there was generally no meaningful difference between BIF and TRT, and both models can be used for these data sets. When there is a meaningful difference between the two models, BIF generally yields the better result. In addition, in each sample size and data set, the correlations of the item and ability parameters, and of the errors of those parameters, were generally high.
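A brief sketch of the testlet structure underlying the comparison may help: in a 2PL testlet response model, a person-specific testlet effect is added to the linear predictor of every item sharing a passage, inducing the local dependence that a unidimensional model ignores. All parameter values below are illustrative; setting the testlet-effect variance to zero recovers an ordinary unidimensional 2PL.

```python
import numpy as np

rng = np.random.default_rng(2)

def testlet_prob(theta, a, b, gamma):
    """2PL testlet response model: gamma is the person-specific testlet effect
    shared by all items belonging to the same testlet (passage)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b - gamma)))

n_persons, n_testlets, items_per_testlet = 500, 6, 4
theta = rng.normal(0, 1, n_persons)                        # general ability
gamma = rng.normal(0, 0.5, (n_persons, n_testlets))        # testlet effects
a = rng.uniform(0.8, 2.0, n_testlets * items_per_testlet)  # discriminations
b = rng.normal(0, 1, n_testlets * items_per_testlet)       # difficulties

probs = np.empty((n_persons, a.size))
for j in range(a.size):
    d = j // items_per_testlet                              # testlet containing item j
    probs[:, j] = testlet_prob(theta, a[j], b[j], gamma[:, d])
responses = rng.binomial(1, probs)                          # simulated item responses
print(responses.shape, responses.mean())
```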

