Conditional Precision of Measurement for Test Scores: Are Conditional Standard Errors Sufficient?

2018
Vol 79 (1)
pp. 5-18
Author(s):  
W. Alan Nicewander

This inquiry is focused on three indicators of the precision of measurement, conditional on fixed values of θ, the latent variable of item response theory (IRT). The indicators that are compared are (1) the traditional conditional standard error of measurement, σ_E(θ) = CSEM; (2) the IRT-based conditional standard error, 1/√I(θ), where I(θ) is the IRT score information function; and (3) a new conditional reliability coefficient, ρ(θ). These indicators of conditional precision are shown to be functionally related to one another. The IRT-based conditional CSEM, 1/√I(θ), and the conditional reliability, ρ(θ), involve an estimate of the conditional true variance, σ²_T(θ), which is shown to be approximately equal to the numerator of the score information function. It is argued, and illustrated with an example, that the traditional conditional standard error, CSEM, is not sufficient for determining conditional score precision when used as the lone indicator of precision; hence, the portions of a score distribution where scores are most and least precise can be misidentified.
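As a concrete illustration of how these quantities relate for a number-correct score under a two-parameter logistic (2PL) model, the sketch below computes the classical conditional error variance, the score information function, and the IRT-based conditional standard error at fixed θ. The conditional reliability shown is only one plausible reading of the abstract's description (conditional true variance taken as the numerator of the score information function), not necessarily the author's exact formulation, and the item parameters are arbitrary illustrative values.

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL probability of a correct response (logistic metric, D = 1)."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Arbitrary illustrative item parameters (not from the article).
a = np.array([0.8, 1.0, 1.2, 1.5, 0.9])
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

def conditional_precision(theta):
    p = p_2pl(theta, a, b)
    q = 1.0 - p
    # Classical conditional error variance of the number-correct score X:
    # Var(X | theta) = sum_i P_i Q_i, so CSEM is its square root.
    err_var = np.sum(p * q)
    csem = np.sqrt(err_var)
    # Slope of the test characteristic curve T(theta) = sum_i P_i;
    # for the 2PL, dP_i/dtheta = a_i P_i Q_i.
    t_slope = np.sum(a * p * q)
    # Score information I_X(theta) = [T'(theta)]^2 / Var(X | theta);
    # the IRT-based conditional standard error is 1 / sqrt(I_X(theta)).
    info = t_slope**2 / err_var
    irt_csem = 1.0 / np.sqrt(info)
    # One plausible conditional reliability (assumption): true variance taken
    # as the numerator of I_X(theta), error variance as sum_i P_i Q_i.
    rho = t_slope**2 / (t_slope**2 + err_var)
    return csem, irt_csem, rho

for theta in (-2.0, 0.0, 2.0):
    csem, irt_csem, rho = conditional_precision(theta)
    print(f"theta={theta:+.1f}  CSEM={csem:.3f}  1/sqrt(I)={irt_csem:.3f}  rho={rho:.3f}")
```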

Author(s):  
Yousef A. Al Mahrouq

This study explored the effect of item difficulty and sample size on the accuracy of equating using item response theory. The study used simulated data, and the equating method was evaluated against a criterion equating using two measures: the standard error of equating (SEE) between the criterion scores and the equated scores, and the root mean square error of equating (RMSE). The results indicated that larger sample sizes reduce the standard error of equating and reduce the residuals. The results also showed that forms with different difficulty levels tend to produce smaller standard errors and RMSE values, while forms with similar difficulty levels tend to produce decreasing standard errors and RMSE values.
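The two evaluation criteria named in the abstract can be computed directly once the equated scores and the criterion equating are in hand. The sketch below shows one common way to do so across simulation replications; the array shapes, variable names, and example values are assumptions for illustration, not the study's actual design or results.

```python
import numpy as np

def equating_error_summaries(equated, criterion):
    """
    equated:   array of shape (n_replications, n_score_points) holding the
               equated score at each raw-score point in each simulated sample.
    criterion: array of shape (n_score_points,) holding the criterion
               (population) equating at the same score points.
    Returns the standard error of equating (SEE) and the root mean square
    error (RMSE) at each score point.
    """
    # SEE: sampling variability of the equated scores around their own mean.
    see = equated.std(axis=0, ddof=1)
    # RMSE: deviation of the equated scores from the criterion equating,
    # which captures both sampling variance and bias.
    rmse = np.sqrt(np.mean((equated - criterion) ** 2, axis=0))
    return see, rmse

# Tiny hypothetical example (3 replications, 4 score points) purely to show usage.
equated = np.array([[10.2, 15.1, 20.3, 25.0],
                    [ 9.8, 14.9, 19.7, 24.6],
                    [10.1, 15.3, 20.1, 24.9]])
criterion = np.array([10.0, 15.0, 20.0, 25.0])
see, rmse = equating_error_summaries(equated, criterion)
print("SEE :", np.round(see, 3))
print("RMSE:", np.round(rmse, 3))
```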


2021
Vol 10 (6)
pp. 22
Author(s):  
Habis Saad Al-zboon
Amjad Farhan Alrekebat
Mahmoud Sulaiman Bani Abdelrahman

This study aims at identifying the effect of the difficulty degree of multiple-choice test items on the reliability coefficient and the standard error of measurement, based on item response theory (IRT). To achieve the objectives of the study, the WinGen3 software was used to generate IRT item parameters (difficulty, discrimination, guessing) for four forms of the test. Each form consisted of 30 items, with average difficulty coefficients of -0.24, 0.24, 0.42, and 0.93. The resulting item parameters were used to generate the abilities and responses of 3,000 examinees based on the three-parameter model. These data were converted into a readable file using SPSS and the BILOG-MG3 software. The reliability coefficients for the four test forms, the item parameters, and the item information functions were then calculated, and the information function values were used to calculate the standard error of measurement for each item. The results of the study showed statistically significant differences at the (α ≤ 0.05) level between the average values of the standard error of measurement attributed to the difference in item difficulty, in favor of the test with the higher difficulty coefficient. The results also showed apparent differences between the test reliability coefficients attributed to the difficulty degree of the test according to the three-parameter model, in favor of the form with the average difficulty degree.
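The step from item information to a standard error of measurement can be made explicit. Under the three-parameter logistic model, the sketch below computes each item's information at a given ability and converts it to a standard error as 1/√I. The parameter values are placeholders, not the WinGen3-generated parameters used in the study.

```python
import numpy as np

D = 1.702  # scaling constant commonly used with the 3PL

def p_3pl(theta, a, b, c):
    """Three-parameter logistic probability of a correct response."""
    return c + (1.0 - c) / (1.0 + np.exp(-D * a * (theta - b)))

def item_information_3pl(theta, a, b, c):
    """Standard 3PL item information function."""
    p = p_3pl(theta, a, b, c)
    q = 1.0 - p
    return (D * a) ** 2 * (q / p) * ((p - c) / (1.0 - c)) ** 2

# Placeholder item parameters (discrimination, difficulty, guessing).
a = np.array([1.1, 0.9, 1.4])
b = np.array([-0.5, 0.3, 1.0])
c = np.array([0.20, 0.25, 0.15])

theta = 0.0
info = item_information_3pl(theta, a, b, c)
sem_item = 1.0 / np.sqrt(info)          # SEM implied by each item alone
sem_test = 1.0 / np.sqrt(info.sum())    # SEM implied by the whole form
print("item information:", np.round(info, 3))
print("item-level SEM  :", np.round(sem_item, 3))
print("test-level SEM  :", np.round(sem_test, 3))
```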


Author(s):  
Brian Wesolowski

This chapter presents an introductory overview of concepts that underscore the general framework of item response theory. "Item response theory" is a broad umbrella term used to describe a family of mathematical measurement models that consider observed test scores to be a function of latent, unobservable constructs. Most musical constructs cannot be directly measured and are therefore unobservable; they can only be inferred from secondary, observable behaviors. Item response theory models these observable behaviors as probabilistic distributions of responses, expressed as a logistic function of person and item parameters, in order to define latent constructs. This chapter describes philosophical, theoretical, and applied perspectives of item response theory in the context of measuring musical behaviors.
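The phrase "a logistic function of person and item parameters" can be made concrete with a minimal example. The two-parameter logistic item below, with illustrative parameter values, shows how the probability of a keyed response rises with person ability relative to item difficulty; it is a generic sketch, not a model fitted to musical data.

```python
import math

def p_correct(theta, a, b):
    """2PL item response function: probability of a keyed response as a
    logistic function of person ability (theta) and the item's
    discrimination (a) and difficulty (b)."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Illustrative item: moderate discrimination, average difficulty.
a, b = 1.2, 0.0
for theta in (-2, -1, 0, 1, 2):
    print(f"ability {theta:+d}: P(response) = {p_correct(theta, a, b):.2f}")
```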


2017
Vol 78 (3)
pp. 517-529
Author(s):  
Yong Luo

Mplus is a powerful latent variable modeling software program that has become an increasingly popular choice for fitting complex item response theory models. In this short note, we demonstrate that the two-parameter logistic testlet model can be estimated as a constrained bifactor model in Mplus with three estimators encompassing limited- and full-information estimation methods.
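The equivalence the note exploits can be written out. In the 2PL testlet model (the Bradlow-Wainer-Wang formulation), items in testlet d(i) share a person-specific testlet effect whose influence is tied to the item discriminations. A sketch of the reparameterization as a bifactor model with constrained loadings is given below, using standard notation that may differ from the article's; the exact Mplus specification is not reproduced here.

```latex
% 2PL testlet model: items in testlet d(i) share a person-by-testlet
% effect gamma_{j,d(i)} with testlet-specific variance sigma_d^2.
\begin{aligned}
\operatorname{logit} P(y_{ij}=1)
  &= a_i\bigl(\theta_j - b_i - \gamma_{j,d(i)}\bigr),
  \qquad \gamma_{jd} \sim N(0,\ \sigma_d^2), \\[4pt]
% Equivalent bifactor form: a general factor theta_j and one standardized
% specific factor u_{jd} per testlet, with each specific loading constrained
% to be proportional to the item's general loading.
\operatorname{logit} P(y_{ij}=1)
  &= a_i\,\theta_j \;-\; \bigl(a_i\,\sigma_{d(i)}\bigr)\,u_{j,d(i)} \;-\; a_i b_i,
  \qquad u_{jd} \sim N(0,\ 1).
\end{aligned}
```

Under this constraint the specific loadings are not free parameters: within each testlet they are tied to the general loadings through a single proportionality constant (the testlet standard deviation), which is what makes the bifactor specification a constrained one.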


2001
Vol 27 (2)
Author(s):  
Pieter Schaap

The objective of this article is to present the results of an investigation into the item and test characteristics of two tests of the Potential Index Batteries (PIB) in terms of differential item functioning (DIF) and its effect on the test scores of different race groups. The English Vocabulary (Index 12) and Spelling (Index 22) tests of the PIB were analysed for white, black and coloured South Africans. Item response theory (IRT) methods were used to identify items which function differentially for the white, black and coloured race groups. Summary: The purpose of this article is to report the results of an investigation into the item and test characteristics of two PIB (Potential Index Batteries) tests in terms of item bias and the influence it has on the test scores of race groups. The English Vocabulary (Index 12) and Spelling (Index 22) tests of the Potential Index Batteries (PIB) were analysed with respect to white, black and coloured South Africans. Item response theory (IRT) was used to identify items that can be regarded as biased (DIF) for the respective race groups.
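As a minimal illustration of the kind of IRT-based DIF screen described (not the study's actual procedure or data), one can estimate an item's difficulty separately in two groups and compare the estimates with a Wald-type statistic. The estimates and standard errors below are hypothetical.

```python
import math

def dif_wald(b_ref, se_ref, b_focal, se_focal):
    """Wald-type comparison of an item's difficulty estimates from separate
    calibrations in a reference and a focal group, assuming the two
    calibrations have already been placed on a common scale."""
    return (b_focal - b_ref) / math.sqrt(se_ref**2 + se_focal**2)

# Hypothetical estimates purely for illustration.
z = dif_wald(b_ref=-0.20, se_ref=0.08, b_focal=0.35, se_focal=0.10)
print(f"z = {z:.2f}; |z| > 1.96 would flag the item for DIF at the 5% level")
```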


2017
Vol 43 (3)
pp. 259-285
Author(s):  
Yang Liu
Ji Seung Yang

The uncertainty arising from item parameter estimation is often not negligible and must be accounted for when calculating latent variable (LV) scores in item response theory (IRT). It is particularly so when the calibration sample size is limited and/or the calibration IRT model is complex. In the current work, we treat two-stage IRT scoring as a predictive inference problem: The target of prediction is a random variable that follows the true posterior of the LV conditional on the response pattern being scored. Various Bayesian, fiducial, and frequentist prediction intervals of LV scores, which can be obtained from a simple yet generic Monte Carlo recipe, are evaluated and contrasted via simulations based on several measures of prediction quality. An empirical data example is also presented to illustrate the use of candidate methods.
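One plausible form of the "simple yet generic Monte Carlo recipe" mentioned in the abstract is sketched below for a 2PL model; the details are an assumption for illustration, not the authors' exact algorithm. The idea: draw item parameters from their estimated calibration distribution, compute the posterior of θ for the response pattern under each draw, sample θ from that posterior, and take percentiles of the pooled draws.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_2pl(theta, a, b):
    return 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))

def prediction_interval(resp, a_hat, b_hat, cov, n_draws=2000, level=0.95):
    """resp: 0/1 response pattern; a_hat, b_hat: calibrated 2PL parameters;
    cov: asymptotic covariance of the stacked (a, b) estimates."""
    grid = np.linspace(-4, 4, 121)           # quadrature grid for theta
    prior = np.exp(-0.5 * grid**2)           # standard normal prior (unnormalized)
    n_items = len(resp)
    mean = np.concatenate([a_hat, b_hat])
    draws = np.empty(n_draws)
    for r in range(n_draws):
        # Step 1: propagate calibration uncertainty by sampling item parameters.
        par = rng.multivariate_normal(mean, cov)
        a, b = par[:n_items], par[n_items:]
        # Step 2: posterior of theta given the response pattern and these parameters.
        p = p_2pl(theta=grid, a=a, b=b)       # shape (n_items, n_grid)
        like = np.prod(np.where(resp[:, None] == 1, p, 1 - p), axis=0)
        post = like * prior
        post /= post.sum()
        # Step 3: sample one theta value from the discretized posterior.
        draws[r] = rng.choice(grid, p=post)
    lo, hi = np.percentile(draws, [(1 - level) / 2 * 100, (1 + level) / 2 * 100])
    return lo, hi

# Hypothetical calibration for a 4-item test, purely to show the call.
a_hat = np.array([1.0, 1.2, 0.8, 1.5])
b_hat = np.array([-0.5, 0.0, 0.5, 1.0])
cov = np.eye(8) * 0.01                        # placeholder covariance matrix
resp = np.array([1, 1, 0, 0])
print(prediction_interval(resp, a_hat, b_hat, cov))
```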


2001
Vol 26 (1)
pp. 31-50
Author(s):  
Haruhiko Ogasawara

The asymptotic standard errors of the estimates of equated scores obtained by several types of item response theory (IRT) true score equating are provided. The first group of equatings does not use IRT equating coefficients; the second group uses the IRT equating coefficients given by the moment or characteristic curve methods. The equating designs considered in this article cover those with internal or external common items, and the methods with separate or simultaneous estimation of the item parameters of the associated tests. For the estimation of the asymptotic standard errors of the equated true scores, marginal maximum likelihood estimation is employed for the item parameters.
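Asymptotic standard errors of this kind typically come from a delta-method argument. A generic expression, not the article's design-specific derivations, is sketched below.

```latex
% Delta-method form: the equated true score at raw score x is a smooth
% function e(x; \lambda) of the estimated item parameters (and, where used,
% equating coefficients) \hat{\lambda}, whose asymptotic covariance matrix
% under marginal maximum likelihood is \Sigma_{\hat{\lambda}}.
\operatorname{Var}\!\left[e\bigl(x;\hat{\lambda}\bigr)\right] \approx
  \left(\frac{\partial e}{\partial \lambda}\right)^{\!\top}
  \Sigma_{\hat{\lambda}}\,
  \left(\frac{\partial e}{\partial \lambda}\right),
\qquad
\operatorname{SE}\!\left[e\bigl(x;\hat{\lambda}\bigr)\right]
  = \sqrt{\operatorname{Var}\!\left[e\bigl(x;\hat{\lambda}\bigr)\right]}.
```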

