Effect of Item Difficulty and Sample Size on the Accuracy of Equating by Using Item Response Theory

Author(s): Yousef A. Al Mahrouq

This study explored the effect of item difficulty and sample size on the accuracy of equating under item response theory, using simulated data. The equating method was evaluated against a criterion equating with two accuracy measures: the standard error of equating (SEE) between the criterion scores and the equated scores, and the root mean square error of equating (RMSE). The results indicated that larger sample sizes reduce the standard error of equating and the residuals. The results also showed that test forms of similar difficulty tend to produce smaller standard errors and RMSE values than forms of differing difficulty.
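
As a point of reference for the two accuracy criteria named above, here is a minimal sketch, in Python, of how SEE and RMSE can be computed against a criterion equating. This is illustrative code, not the study's own; the function name and the reading of SEE as the standard deviation of the equating residuals are assumptions.

```python
import numpy as np

def equating_accuracy(equated, criterion):
    """Compare equated scores with criterion equating scores.

    Returns (SEE, RMSE): SEE is read here as the standard deviation
    of the equating residuals, while RMSE also captures systematic
    bias between the two sets of scores.
    """
    residuals = np.asarray(equated) - np.asarray(criterion)
    see = residuals.std(ddof=1)              # spread of the residuals
    rmse = np.sqrt(np.mean(residuals ** 2))  # bias plus spread
    return see, rmse
```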

2021, Vol. ahead-of-print (ahead-of-print)
Author(s): Yunsoo Lee, Ji Hoon Song, Soo Jung Kim

Purpose: This paper aims to validate the Korean version of the decent work scale and examine the relationship between decent work and work engagement.

Design/methodology/approach: After completing translation and back translation, the authors surveyed 266 Korean employees from various organizations via network sampling. They applied Rasch's model, based on item response theory, and additionally used classical test theory to evaluate the decent work scale's validity and reliability.

Findings: The authors found that the current version of the decent work scale has good validity, reliability and item difficulty, and that decent work has a positive relationship with work engagement. However, the item response theory assessment showed that three of the items are extremely similar to another item within the same dimension, implying that these items cannot discriminate among individual trait levels.

Originality/value: This study validated the decent work scale in a Korean work environment using Rasch's (1960) model from the perspective of item response theory.
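
The Rasch analysis referenced above rests on a one-parameter item response function. As a point of reference, here is a minimal sketch of that function in Python; this is not code from the study, and the function and variable names are illustrative.

```python
import numpy as np

def rasch_probability(theta, b):
    """Probability of endorsing an item under the Rasch (1PL) model:
    theta is the person's latent trait level, b the item difficulty.
    Items with nearly identical b values (as the Findings note)
    add little discriminating information beyond one another."""
    return 1.0 / (1.0 + np.exp(-(np.asarray(theta) - b)))
```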


2017, Vol. 6 (4), pp. 113
Author(s): Esin Yilmaz Kogar, Hülya Kelecioglu

The purpose of this research is first to estimate the item and ability parameters, and the standard errors of those parameters, obtained from unidimensional item response theory (UIRT), bifactor (BIF), and testlet response theory (TRT) models in tests containing testlets, as the number of testlets, the number of independent items, and the sample size change, and then to compare the results. The PISA 2012 mathematics test was employed as the data collection tool, and 36 items were used to construct six data sets containing different numbers of testlets and independent items. From these data sets, three sample sizes of 250, 500, and 1,000 persons were drawn randomly. The findings showed that the lowest mean error values were generally those obtained from UIRT, and that TRT yielded lower mean error estimates than BIF. Under all conditions, the models that take local dependency into account provided better model-data fit than UIRT; there was generally no meaningful difference between BIF and TRT, and both models can be used for these data sets. When there was a meaningful difference between the two models, BIF generally yielded the better result. In addition, in each sample size and data set, the correlations between the item and ability parameter estimates, and between their standard errors, were generally high.
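
To make the model contrast above concrete, the testlet response model extends a standard 2PL item response function with a person-specific testlet effect that absorbs the local dependence among items in the same testlet. The sketch below shows a common dichotomous form of that model; it is not the study's code, and the names and the sign convention for the testlet effect are assumptions.

```python
import numpy as np

def twopl_probability(theta, a, b):
    """Standard 2PL model, as used by UIRT: no testlet effect."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def trt_probability(theta, gamma, a, b):
    """Testlet response model: gamma is the person-specific effect
    for the testlet containing this item, absorbing the local
    dependence among testlet items that UIRT ignores."""
    return 1.0 / (1.0 + np.exp(-a * (theta + gamma - b)))
```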


2001, Vol. 26 (1), pp. 31-50
Author(s): Haruhiko Ogasawara

The asymptotic standard errors of the estimates of equated scores under several types of item response theory (IRT) true score equating are provided. The first group of equatings does not use IRT equating coefficients; the second group uses the IRT equating coefficients given by the moment or characteristic curve methods. The equating designs considered in this article cover those with internal or external common items, and the methods with separate or simultaneous estimation of the item parameters of the associated tests. To obtain the asymptotic standard errors of the equated true scores, marginal maximum likelihood is employed to estimate the item parameters.
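
For readers unfamiliar with IRT true score equating, the core computation inverts the test characteristic curve of one form and evaluates the other form's curve at the recovered ability. The sketch below illustrates this under an assumed 2PL model with numpy/scipy; it is not Ogasawara's derivation, and the function names and search interval are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def true_score(theta, a, b):
    """Test characteristic curve: expected number-correct score at
    ability theta under a 2PL model with item parameters a, b."""
    return np.sum(1.0 / (1.0 + np.exp(-a * (theta - b))))

def true_score_equate(score_y, a_y, b_y, a_x, b_x):
    """Map a true score on form Y to the form X scale.

    Solves tau_Y(theta) = score_y for theta (score_y must lie
    strictly between tau_Y(-8) and tau_Y(8)), then returns
    tau_X(theta) as the equated score."""
    theta = brentq(lambda t: true_score(t, a_y, b_y) - score_y, -8.0, 8.0)
    return true_score(theta, a_x, b_x)
```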


2001, Vol. 9 (1), pp. 5-22
Author(s): Cheryl T. Beck, Robert K. Gable

The benefits of item response theory (IRT) analysis in obtaining empirical support for construct validity make it an essential step in the instrument development process. IRT analysis can result in finer construct interpretations that lead to more thorough descriptions of low- and high-scoring respondents. A critical function of IRT is its ability to determine how adequately the attitude continuum underlying each dimension is assessed by the respective items in an instrument. Many nurse researchers, however, are not reaping the benefits of IRT in the development of affective instruments. The purpose of this article is to familiarize nurse researchers with this valuable approach through a description of the Facets computer program. Facets uses a one-parameter (i.e., item difficulty) Rasch measurement model. Data from a survey of 525 new mothers that assessed the psychometric properties of the Postpartum Depression Screening Scale are used to illustrate the Facets program. It is hoped that IRT will gain increased prominence in affective instrument development as more nurse researchers become aware of computer programs such as Facets to assist in analysis.
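
The one-parameter Rasch model underlying the Facets program can be stated compactly in logit form; this is the standard textbook expression, not notation taken from the article:

```latex
\ln\frac{P(X_{ni}=1)}{1 - P(X_{ni}=1)} = \theta_n - \delta_i
```

Here theta_n is the trait level of respondent n and delta_i is the difficulty of item i: the log-odds of endorsing an item depend only on the gap between person and item.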


1995, Vol. 20 (4), pp. 337-348
Author(s): Lingjia Zeng, Ronald T. Cope

Large-sample standard errors of linear equating for the counterbalanced design are derived using the general delta method. Computer simulations were conducted to compare the standard errors derived by Lord under the normality assumption with those derived in this article without that assumption. In an example with a large sample size and moderately skewed score distributions, the standard errors derived without the normality assumption were more accurate than those derived with it. In an example with nearly symmetric distributions, the standard errors computed under the normality assumption were at least as accurate as those derived without it.
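
The article derives analytic (delta method) standard errors; as a numeric point of comparison, the standard error of a linear equating function can also be approximated by resampling. The sketch below is a bootstrap stand-in under assumed independent samples, not the authors' derivation and not specific to the counterbalanced design; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_equate(x, scores_x, scores_y):
    """Linear equating: map score x on form X to the form Y scale
    by matching the means and standard deviations of the two forms.
    scores_x and scores_y are numpy arrays of observed scores."""
    mx, sx = scores_x.mean(), scores_x.std(ddof=1)
    my, sy = scores_y.mean(), scores_y.std(ddof=1)
    return my + (sy / sx) * (x - mx)

def bootstrap_see(x, scores_x, scores_y, n_boot=2000):
    """Bootstrap approximation to the standard error of equating
    at score point x; a numeric stand-in for the delta method."""
    est = np.empty(n_boot)
    for i in range(n_boot):
        bx = rng.choice(scores_x, scores_x.size, replace=True)
        by = rng.choice(scores_y, scores_y.size, replace=True)
        est[i] = linear_equate(x, bx, by)
    return est.std(ddof=1)
```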


2015, Vol. 58 (3), pp. 865-877
Author(s): Gerasimos Fergadiotis, Stacey Kellough, William D. Hula

Purpose: In this study, we investigated the fit of the Philadelphia Naming Test (PNT; Roach, Schwartz, Martin, Grewal, & Brecher, 1996) to an item-response-theory measurement model, estimated the precision of the resulting scores and item parameters, and provided a theoretical rationale for the interpretation of PNT overall scores by relating explanatory variables to item difficulty. This article describes the statistical model underlying the computer adaptive PNT presented in a companion article (Hula, Kellough, & Fergadiotis, 2015).

Method: Using archival data, we evaluated the fit of the PNT to 1- and 2-parameter logistic models and examined the precision of the resulting parameter estimates. We regressed the item difficulty estimates on three predictor variables: word length, age of acquisition, and contextual diversity.

Results: The 2-parameter logistic model demonstrated marginally better fit, but the fit of the 1-parameter logistic model was adequate. Precision was excellent for both person ability and item difficulty estimates. Word length, age of acquisition, and contextual diversity all independently contributed to variance in item difficulty.

Conclusions: Item-response-theory methods can be productively used to analyze and quantify anomia severity in aphasia. Regression of item difficulty on lexical variables supported the validity of the PNT and the interpretation of anomia severity scores in the context of current word-finding models.
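
The explanatory analysis described above regresses estimated item difficulties on lexical predictors. A minimal sketch of that step with ordinary least squares follows; it is not the authors' code, and the argument names are illustrative.

```python
import numpy as np

def difficulty_regression(difficulty, length, aoa, diversity):
    """Regress 1PL item difficulty estimates on word length, age of
    acquisition, and contextual diversity via ordinary least squares.
    All arguments are numpy arrays of equal length (one entry per item).
    Returns [intercept, b_length, b_aoa, b_diversity]."""
    X = np.column_stack([np.ones_like(difficulty), length, aoa, diversity])
    coef, *_ = np.linalg.lstsq(X, difficulty, rcond=None)
    return coef
```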

