Item Response and Response Time Model for Personality Assessment via Linear Ballistic Accumulation

2019 ◽  
Author(s):  
Kyosuke Bunji ◽  
Kensuke Okada

On the basis of a combination of linear ballistic accumulation (LBA) and item response theory (IRT), this paper proposes a new class of item response models, namely LBA IRT, which incorporates observed response times by means of LBA. Our main objective is to develop a simple yet effective alternative to the diffusion IRT model, one of the best-known response time (RT)-incorporating IRT models that explicitly model the underlying psychological process of the elicited item response. Through a simulation study, we show that the proposed model yields parameter estimates comparable to those of the diffusion IRT model while achieving much faster convergence. Furthermore, the application of the proposed model to real personality measurement data indicates that it fits the data better than the diffusion IRT model in terms of predictive performance. Thus, the proposed model exhibits good performance and promising modeling capabilities for capturing the cognitive and psychometric processes underlying the observed data.
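
As a rough illustration of how LBA can be coupled with IRT person and item parameters (a generic sketch under assumed linking functions, not necessarily the authors' exact specification), the drift rates of the two accumulators for endorsing versus not endorsing item i might be tied to the trait of person p, with the observed response and its RT produced by a race to threshold:

\[
d_{pi}^{(k)} \sim \mathrm{N}\!\big(v_{pi}^{(k)}, s^{2}\big), \qquad
v_{pi}^{(\text{endorse})} = \alpha_i(\theta_p - \beta_i), \qquad
v_{pi}^{(\text{reject})} = -\alpha_i(\theta_p - \beta_i),
\]
\[
U_{pi}^{(k)} \sim \mathrm{Uniform}(0, A_i), \qquad
\mathrm{RT}_{pi} = t_{0i} + \min_{k}\ \frac{b_i - U_{pi}^{(k)}}{d_{pi}^{(k)}},
\]

where each accumulator k starts at U, rises linearly at rate d, and the first accumulator to reach the threshold b_i determines both the observed response category and the response time.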

2006 ◽  
Vol 31 (1) ◽  
pp. 63-79 ◽  
Author(s):  
Henry May

A new method is presented and implemented for deriving a scale of socioeconomic status (SES) from international survey data using a multilevel Bayesian item response theory (IRT) model. The proposed model incorporates both international anchor items and nation-specific items and is able to (a) produce student family SES scores that are internationally comparable, (b) reduce the influence of irrelevant national differences in culture on the SES scores, and (c) effectively and efficiently deal with the problem of missing data in a manner similar to Rubin’s (1987) multiple imputation approach. The results suggest that this model is superior to conventional models in terms of its fit to the data and its ability to use information collected via international surveys.
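
A hedged sketch of the kind of multilevel IRT structure described here (illustrative only; the paper's exact parameterization may differ), with students i nested in nations c and SES indicator items j, is

\[
P(y_{ijc}=1 \mid \theta_{ic}) = \operatorname{logit}^{-1}\!\big(a_j\theta_{ic} - b_j\big), \qquad
\theta_{ic} = \mu_c + \varepsilon_{ic}, \quad \varepsilon_{ic}\sim \mathrm{N}(0,\sigma^{2}), \quad \mu_c \sim \mathrm{N}(0,\tau^{2}),
\]

where the international anchor items hold (a_j, b_j) invariant across nations, the nation-specific items enter the likelihood only for the nation that administered them, and items a student was never asked simply drop out of the likelihood, so that missingness is handled within the Bayesian posterior much as multiple imputation would handle it.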


2018 ◽  
Vol 79 (3) ◽  
pp. 462-494 ◽  
Author(s):  
Ken A. Fujimoto

Advancements in item response theory (IRT) have led to models for dual dependence, which control for cluster and method effects during a psychometric analysis. Currently, however, this class of models does not include one that accounts for method effects stemming from two method sources in which one source functions differently across the levels of another source (i.e., a nested method–source interaction). This study therefore proposes a Bayesian IRT model that accounts for such an interaction among method sources while controlling for the clustering of individuals within the sample. The proposed model accomplishes these tasks by specifying a multilevel trifactor structure for the latent trait space. Reported simulations demonstrate that the model can identify when item response data reflect a multilevel trifactor structure, even in samples as small as 250 cases nested within 50 clusters. The simulations also show that misleading estimates of the item discriminations can arise when the trifactor structure in the data is not correctly accounted for. The utility of the model is further illustrated through the analysis of empirical data.
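
One way such a multilevel trifactor structure can be written (a generic sketch; the paper's parameterization may differ) is with each item loading on the general trait, on a factor for its primary method source, and on a factor for the interaction of that source with the second source, plus a cluster-level effect:

\[
P(y_{pi}=1) = \Phi\!\big(\lambda_i^{G}\theta_p^{G} + \lambda_i^{M}\eta_p^{m(i)} + \lambda_i^{I}\xi_p^{m(i),s(i)} + \nu_{c(p)} - \delta_i\big),
\]

where theta^G is the general trait, eta^{m(i)} is the factor for item i's primary method source, xi^{m(i),s(i)} captures how that source functions within level s(i) of the second source (the nested interaction), and nu_{c(p)} is the effect of the cluster containing person p.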


2006 ◽  
Vol 43 (1) ◽  
pp. 19-38 ◽  
Author(s):  
Steven L. Wise ◽  
Christine E. DeMars

2020 ◽  
Vol 44 (7-8) ◽  
pp. 563-565
Author(s):  
Hwanggyu Lim ◽  
Craig S. Wells

The R package irtplay provides practical tools for unidimensional item response theory (IRT) models, enabling users to conveniently conduct many IRT-related analyses. For example, irtplay includes functions for online item calibration, scoring test-takers' proficiencies, evaluating IRT model-data fit, and importing item and/or proficiency parameter estimates from the output of popular IRT software. In addition, irtplay supports mixed-item formats consisting of dichotomous and polytomous items.
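
A minimal usage sketch in R, assuming functions and arguments as described in the package manual (bring.flexmirt, est_score, and irtfit); the file name and response matrix below are hypothetical placeholders, and the calls should be checked against the current irtplay documentation:

```r
library(irtplay)

## Import item parameter estimates from flexMIRT output
## (hypothetical file name; "par" requests the parameter file).
item_meta <- bring.flexmirt(file = "calibration-prm.txt", "par")$Group1$full_df

## Score test-takers' proficiencies with EAP, given a response matrix `resp`
## whose columns correspond to the rows of `item_meta` (mixed formats allowed).
scores <- est_score(x = item_meta, data = resp, D = 1.702, method = "EAP")

## Evaluate IRT model-data fit for the same items
## (assuming the score object exposes the estimates as `est.theta`).
fit <- irtfit(x = item_meta, score = scores$est.theta, data = resp,
              group.method = "equal.width", n.width = 10)
```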


2020 ◽  
pp. 001316442094989
Author(s):  
Joseph A. Rios ◽  
James Soland

As low-stakes testing contexts increase, low test-taking effort may serve as a serious validity threat. One common solution to this problem is to identify noneffortful responses and treat them as missing during parameter estimation via the effort-moderated item response theory (EM-IRT) model. Although this model has been shown to outperform traditional IRT models (e.g., the two-parameter logistic [2PL] model) in parameter estimation under simulated conditions, prior research has failed to examine its performance under violations of the model's assumptions. Therefore, the objective of this simulation study was to examine item and mean ability parameter recovery when violating the assumptions that noneffortful responding occurs randomly (Assumption 1) and is unrelated to the underlying ability of examinees (Assumption 2). Results demonstrated that, across conditions, the EM-IRT model provided item parameter estimates that were robust to violations of Assumption 1. However, bias values greater than 0.20 SDs were observed for the EM-IRT model when Assumption 2 was violated; nonetheless, these values were still lower than those of the 2PL model. In terms of mean ability estimates, the two models performed equally across conditions. For both models, mean ability estimates were biased by more than 0.25 SDs when Assumption 2 was violated. However, our accompanying empirical study suggested that this biasing occurred under extreme conditions that may not be present in some operational settings. Overall, these results suggest that, under realistic violations of its assumptions, the EM-IRT model provides superior item parameter estimates and comparable mean ability estimates relative to the 2PL model.
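
For reference, the commonly cited form of the effort-moderated model (a sketch of the standard specification, not necessarily the exact variant estimated in this study) mixes a 2PL component for effortful responses with a chance-level component for rapid guesses flagged by a response-time threshold:

\[
P(y_{ij}=1 \mid \theta_i) = \Delta_{ij}\,\frac{\exp\{a_j(\theta_i-b_j)\}}{1+\exp\{a_j(\theta_i-b_j)\}} + (1-\Delta_{ij})\,\frac{1}{k_j},
\]

where Delta_ij = 1 if the response time on item j exceeds the item's effort threshold and 0 otherwise, and k_j is the number of response options; in the "treat as missing" variant described above, responses with Delta_ij = 0 are simply dropped from the likelihood rather than modeled at chance level.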


2005 ◽  
Author(s):  
Yanyan Sheng

As item response theory models gain increased popularity in large-scale educational testing and measurement settings, many studies have been conducted on the development and application of unidimensional and multidimensional models. However, to date, no study has examined IRT models with an overall ability dimension underlying all test items together with several ability dimensions each specific to a subtest. This study proposes such a model and compares it with conventional IRT models using Bayesian methodology. The results suggest that the proposed model offers a better way to represent test situations not captured by existing models. The specification of the proposed model also has implications for test developers when designing tests. In addition, the proposed IRT model can be applied in other areas, such as intelligence testing or psychology.
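
A hedged sketch of such a structure (illustrative; the exact link function and priors used in the study may differ), for person i answering item j in subtest v, is

\[
P(y_{vij}=1 \mid \theta_{0i}, \theta_{vi}) = \Phi\!\big(\alpha_{0vj}\,\theta_{0i} + \alpha_{vj}\,\theta_{vi} - \gamma_{vj}\big),
\]

where theta_{0i} is the overall ability underlying all items, theta_{vi} is the ability specific to subtest v, and each item carries separate discriminations on the overall and subtest-specific dimensions.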


2020 ◽  
Author(s):  
Murat Kasli ◽  
Cengiz Zopluoglu ◽  
Sarah Linnea Toton

Response time (RT) information has recently attracted a significant amount of attention in the literature as it may provide meaningful information about item preknowledge. In this study, a Deterministic Gated Lognormal Response Time (DG-LNRT) model is proposed to identify examinees with potential item preknowledge using RT information. The proposed model is applied to a real experimental dataset provided by Toton and Maynes (2019) in which item preknowledge was manipulated, and its performance is demonstrated. Then, the performance of the DG-LNRT model is investigated through a simulation study. The model is estimated using the Bayesian framework via Stan. The results indicate that the proposed model is viable and has the potential to be useful in detecting cheating by using response time differences between compromised and uncompromised items.
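
A plausible way to write such a gated lognormal RT structure (a sketch under assumed notation, not necessarily the authors' exact parameterization) is to give each examinee two latent speeds and switch between them deterministically on compromised items:

\[
\log t_{ij} \sim \mathrm{N}\!\big(\beta_j - \tau_{ij},\ \alpha_j^{-2}\big), \qquad
\tau_{ij} = G_i\,C_j\,\tau_i^{c} + (1 - G_i C_j)\,\tau_i^{t},
\]

where beta_j and alpha_j are the item's time intensity and time discrimination, tau_i^t and tau_i^c are the examinee's regular and preknowledge speeds, C_j indicates whether item j is compromised, and G_i is the latent preknowledge indicator estimated within the Bayesian model.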


2021 ◽  
Author(s):  
Yiqin Pan ◽  
Edison M. Choe

Most psychometric models of response times are primarily theory-driven, meaning they are built on assumptions about how the data should behave. Although useful in certain contexts, such models are often inadequate for the complexities of realistic testing situations and fit empirical data poorly. Therefore, as a functional alternative, the present study proposes a data-driven approach, an autoencoder-based response time model, for modeling the response times of correctly answered items. This study also introduces an application of the proposed model to anomaly detection, including the detection of aberrant examinees and items. The results show that the model achieves acceptable performance in both response time modeling and anomaly detection.
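
In generic terms (a sketch of the usual autoencoder anomaly-detection setup, not necessarily the authors' architecture), the model compresses each examinee's vector of log response times on correctly answered items and flags large reconstruction errors:

\[
\hat{\mathbf{t}}_i = g_{\phi}\big(f_{\psi}(\mathbf{t}_i)\big), \qquad
s_i = \lVert \mathbf{t}_i - \hat{\mathbf{t}}_i \rVert_2^{2},
\]

where f_psi is the encoder, g_phi is the decoder, t_i collects examinee i's (log) response times, and s_i serves as an anomaly score for the examinee; aggregating the per-item residuals across examinees gives an analogous score for flagging aberrant items.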

