Structural equation modeling using ability parameters: Analysis in the situation where item parameters have been estimated by item response theory

2004 ◽  
Vol 75 (5) ◽  
pp. 381-388
Author(s):  
Hiroto Murohashi ◽  
Hideki Toyoda

2021 ◽  
Vol 46 (1) ◽  
pp. 53-67
Author(s):  
James Soland ◽  
Megan Kuhfeld

Researchers in the social sciences often obtain ratings of a construct of interest from multiple raters. While using multiple raters helps avoid the subjectivity of any single person's responses, rater disagreement can be a problem. A variety of models exist to address rater disagreement in both structural equation modeling and item response theory frameworks. Recently, Bauer et al. (2013) developed a model, referred to as the "trifactor model," to provide applied researchers with a straightforward way of estimating scores that are purged of variance that is idiosyncratic by rater. Although the model is intended to be usable and interpretable, little is known about the circumstances under which it performs well and those under which it does not. We conduct simulation studies to examine the performance of the trifactor model under a range of sample sizes and model specifications and then compare model fit, bias, and convergence rates.
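The decomposition the trifactor model targets can be illustrated with a small data-generation sketch. This is not the estimation code from Bauer et al. (2013); it only simulates ratings as the sum of a common factor, rater-specific factors, and item-level noise, with all loadings and variances being assumed illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_raters, n_items = 500, 3, 4

# Trifactor-style decomposition (sketch): each rating loads on a common
# factor and a rater-specific factor, plus item-level residual noise.
common = rng.normal(size=n_persons)                 # construct of interest
rater_fac = rng.normal(size=(n_persons, n_raters))  # rater idiosyncrasies
lam_c, lam_r, sd_e = 0.8, 0.5, 0.4                  # assumed loadings / residual sd

ratings = np.empty((n_persons, n_raters, n_items))
for r in range(n_raters):
    for i in range(n_items):
        ratings[:, r, i] = (lam_c * common
                            + lam_r * rater_fac[:, r]
                            + sd_e * rng.normal(size=n_persons))

# A naive mean score mixes rater-specific variance into the estimate;
# purging that variance is the trifactor model's goal.
mean_score = ratings.mean(axis=(1, 2))
print(np.corrcoef(common, mean_score)[0, 1])
```

With only three raters, the rater factors do not fully average out, which is why the mean score remains an imperfect proxy for the common factor.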


2020 ◽  
Vol 41 (4) ◽  
pp. 207-218
Author(s):  
Mihaela Grigoraș ◽  
Andreea Butucescu ◽  
Amalia Miulescu ◽  
Cristian Opariuc-Dan ◽  
Dragoș Iliescu

Abstract. Given that most dark personality measures are developed from data collected in low-stakes settings, the present study addresses the appropriateness of their use in high-stakes contexts. Specifically, we examined item- and scale-level differential functioning of the Short Dark Triad (SD3; Paulhus & Jones, 2011) measure across testing contexts. The Short Dark Triad was administered to applicant (N = 457) and non-applicant (N = 592) samples. Item- and scale-level invariance were tested using an Item Response Theory (IRT)-based approach and a Structural Equation Modeling (SEM) approach, respectively. Results show that more than half of the SD3 items were flagged for Differential Item Functioning (DIF), and Exploratory Structural Equation Modeling (ESEM) results supported configural, but not metric, invariance. Implications for theory and practice are discussed.
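The abstract's IRT-based DIF analysis is beyond a short snippet, but the basic logic of flagging an item for DIF can be sketched with the classic Mantel-Haenszel procedure: match applicants and non-applicants on a rest score, then test whether the studied item still favors one group. All item parameters and group sizes below are assumed illustrative values, not the SD3 data.

```python
import numpy as np

def mantel_haenszel_or(resp_ref, resp_foc, rest_ref, rest_foc):
    """Mantel-Haenszel common odds ratio for one dichotomous item,
    stratifying on the rest score (total score minus the studied item).
    Values far from 1 suggest DIF."""
    num, den = 0.0, 0.0
    for k in np.union1d(rest_ref, rest_foc):
        r = resp_ref[rest_ref == k]
        f = resp_foc[rest_foc == k]
        n = len(r) + len(f)
        if len(r) == 0 or len(f) == 0:
            continue
        a, b = r.sum(), len(r) - r.sum()   # reference: endorse / not endorse
        c, d = f.sum(), len(f) - f.sum()   # focal: endorse / not endorse
        num += a * d / n
        den += b * c / n
    return num / den if den > 0 else np.nan

# Simulate two groups with the same trait distribution, but make item 0
# one logit harder for the focal group (i.e., uniform DIF).
rng = np.random.default_rng(1)
n = 2000
theta_ref, theta_foc = rng.normal(size=n), rng.normal(size=n)
b = np.linspace(-1.5, 1.5, 10)             # assumed item difficulties
b_foc = b.copy()
b_foc[0] += 1.0
p_ref = 1 / (1 + np.exp(-(theta_ref[:, None] - b)))
p_foc = 1 / (1 + np.exp(-(theta_foc[:, None] - b_foc)))
x_ref = (rng.random((n, 10)) < p_ref).astype(int)
x_foc = (rng.random((n, 10)) < p_foc).astype(int)

rest_ref = x_ref.sum(1) - x_ref[:, 0]
rest_foc = x_foc.sum(1) - x_foc[:, 0]
or_dif = mantel_haenszel_or(x_ref[:, 0], x_foc[:, 0], rest_ref, rest_foc)
print(or_dif)  # well above 1: the reference group is favored on item 0
```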


Author(s):  
Brian Wesolowski

This chapter presents an introductory overview of the concepts that underscore the general framework of item response theory. "Item response theory" is a broad umbrella term for a family of mathematical measurement models that treat observed test scores as a function of latent, unobservable constructs. Most musical constructs cannot be measured directly and are therefore unobservable; they can only be inferred from secondary, observable behaviors. Item response theory models the probability of an observed response as a logistic function of person and item parameters, and uses those probability distributions to define latent constructs. This chapter describes philosophical, theoretical, and applied perspectives on item response theory in the context of measuring musical behaviors.
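The logistic function of person and item parameters mentioned above can be written down concretely. A minimal sketch of the standard two-parameter logistic (2PL) item response function, with assumed parameter values for illustration:

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function:
    P(correct | theta) = 1 / (1 + exp(-a * (theta - b))),
    where theta is person ability, a is item discrimination,
    and b is item difficulty."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Probability of a correct response rises with ability; at theta == b
# the probability is exactly 0.5.
theta = np.linspace(-3, 3, 7)
print(irf_2pl(theta, a=1.2, b=0.0))
```

The discrimination parameter `a` controls how sharply the curve rises around the difficulty `b`; these are the "item parameters," while `theta` is the "person parameter."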


2001 ◽  
Vol 26 (1) ◽  
pp. 31-50 ◽  
Author(s):  
Haruhiko Ogasawara

The asymptotic standard errors of the estimated equated scores are provided for several types of item response theory (IRT) true score equating. The first group of equating methods does not use IRT equating coefficients. The second group uses the IRT equating coefficients given by the moment or characteristic curve methods. The equating designs considered in this article cover internal and external common items, as well as separate and simultaneous estimation of the item parameters of the associated tests. For the estimates of the asymptotic standard errors of the equated true scores, marginal maximum likelihood estimation is employed to estimate the item parameters.
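The standard-error derivations themselves are beyond a snippet, but the IRT true score equating they attach to can be sketched: a form's test characteristic curve (TCC) maps ability to expected score, and equating inverts form X's TCC and evaluates form Y's TCC at the recovered ability. The item parameters below are assumed illustrative values for two 5-item forms.

```python
import numpy as np

def tcc(theta, a, b):
    """Test characteristic curve: expected (true) score at ability theta,
    i.e., the sum of 2PL item response probabilities."""
    return float(np.sum(1.0 / (1.0 + np.exp(-a * (theta - b)))))

def equate_true_score(score_x, ax, bx, ay, by, lo=-6.0, hi=6.0, tol=1e-8):
    """IRT true score equating (sketch): invert form X's TCC by bisection
    to find theta, then read off form Y's TCC at that theta."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if tcc(mid, ax, bx) < score_x:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return tcc((lo + hi) / 2.0, ay, by)

# Hypothetical parameters: form Y is slightly harder than form X.
ax, bx = np.ones(5), np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
ay, by = np.ones(5), np.array([-0.8, -0.3, 0.2, 0.7, 1.2])
eq = equate_true_score(2.5, ax, bx, ay, by)
print(eq)  # below 2.5, since the same ability earns fewer points on Y
```

The sampling variability of the estimated item parameters (here treated as known) propagates through this mapping, which is exactly what the article's asymptotic standard errors quantify.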

