Bartlett Correction of Test Statistics in Structural Equation Modeling

2007 ◽  
Vol 55 (3) ◽  
pp. 382-392 ◽  
Author(s):  
Kensuke Okada ◽  
Takahiro Hoshino ◽  
Kazuo Shigemasu

Author(s):  
Lisa J. Jobst ◽  
Max Auerswald ◽  
Morten Moshagen

Abstract. In structural equation modeling, several corrections to the likelihood-ratio model test statistic have been developed to counter the effects of non-normal data. Previous robustness studies investigating the performance of these corrections typically induced non-normality in the indicator variables. However, non-normality in the indicators can originate from non-normal errors or non-normal latent factors. We conducted a Monte Carlo simulation to analyze the effect of non-normality in factors and errors on six different test statistics based on maximum likelihood estimation by evaluating the effect on empirical rejection rates and derived indices (RMSEA and CFI) for different degrees of non-normality and sample sizes. We considered the uncorrected likelihood-ratio model test statistic and the Satorra–Bentler scaled test statistic with Bartlett correction, as well as the mean and variance adjusted test statistic, a scale-shifted approach, a third moment-adjusted test statistic, and an approach drawing inferences from the relevant asymptotic chi-square mixture distribution. The results indicate that the values of the uncorrected test statistic, compared to values under normality, are associated with a severely inflated type I error rate when latent variables are non-normal, but virtually no differences occur when errors are non-normal. Although no general pattern regarding the source of non-normality for all analyzed measures of fit can be derived, the Satorra–Bentler scaled test statistic with Bartlett correction performed satisfactorily across conditions.
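The abstract does not reproduce the correction formula the authors apply to the Satorra–Bentler statistic. As a rough illustration of the idea, the sketch below uses the classic Bartlett (1951) small-sample multiplier for the ML factor-analysis likelihood-ratio statistic; the function name and the example values are hypothetical, and the exact multiplier used in the paper may differ.

```python
def bartlett_corrected_chi2(f_ml, n, p, k):
    """Illustrative Bartlett-type correction of an ML likelihood-ratio statistic.

    f_ml -- minimized ML discrepancy function value
    n    -- sample size
    p    -- number of observed variables
    k    -- number of latent factors

    The uncorrected statistic is (n - 1) * f_ml.  The classic Bartlett
    multiplier replaces (n - 1) with n - 1 - (2*p + 5)/6 - 2*k/3, shrinking
    the statistic so that it tracks its asymptotic chi-square reference
    distribution more closely in small samples.
    """
    uncorrected = (n - 1) * f_ml
    corrected = (n - 1 - (2 * p + 5) / 6 - 2 * k / 3) * f_ml
    return uncorrected, corrected

# hypothetical example: p = 6 indicators, k = 1 factor, n = 100, F_ML = 0.2
t_u, t_b = bartlett_corrected_chi2(0.2, 100, 6, 1)
print(t_u, t_b)  # the corrected statistic is always the smaller of the two
```

Because the multiplier is strictly smaller than n - 1, the correction always lowers the statistic, which is the mechanism by which it counteracts the small-sample inflation of rejection rates.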


2014 ◽  
Vol 35 (4) ◽  
pp. 201-211 ◽  
Author(s):  
André Beauducel ◽  
Anja Leue

It is shown that a minimal assumption should be added to the assumptions of Classical Test Theory (CTT) in order to have positive inter-item correlations, which are regarded as a basis for the aggregation of items. Moreover, it is shown that the assumption of zero correlations between the error score estimates is substantially violated in the population of individuals when the number of items is small. Instead, a negative correlation between error score estimates occurs. The reason for the negative correlation is that the error score estimates for different items of a scale are based on insufficient true score estimates when the number of items is small. A test of the assumption of uncorrelated error score estimates by means of structural equation modeling (SEM) is proposed that takes this effect into account. The SEM-based procedure is demonstrated by means of empirical examples based on the Edinburgh Handedness Inventory and the Eysenck Personality Questionnaire-Revised.
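The central claim, that error score estimates for different items correlate negatively when the number of items is small, can be checked directly by simulation. Under standard CTT assumptions (tau-equivalent items, equal and uncorrelated error variances) with the item mean as the true-score estimate, the estimated error for item i is e_i minus the mean error, which yields a correlation of -1/(k-1) between any two error score estimates. The numpy sketch below (all names hypothetical) illustrates this for k = 4 items:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 100_000, 4                              # persons, items
true = rng.normal(0.0, 1.0, size=(n, 1))       # latent true scores
err = rng.normal(0.0, 1.0, size=(n, k))        # uncorrelated item errors
x = true + err                                 # observed item scores

t_hat = x.mean(axis=1, keepdims=True)          # true-score estimate: item mean
e_hat = x - t_hat                              # error score estimates per item

# sample correlation between two error score estimates;
# analytically it is -1/(k - 1) = -1/3 for k = 4
r = np.corrcoef(e_hat[:, 0], e_hat[:, 1])[0, 1]
print(r)
```

The -1/(k-1) pattern makes the abstract's point concrete: with few items the negative correlation is substantial (-1/3 at k = 4) and only vanishes as the number of items grows.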


2020 ◽  
Vol 41 (4) ◽  
pp. 207-218 ◽  
Author(s):  
Mihaela Grigoraș ◽  
Andreea Butucescu ◽  
Amalia Miulescu ◽  
Cristian Opariuc-Dan ◽  
Dragoș Iliescu

Abstract. Given the fact that most of the dark personality measures are developed based on data collected in low-stake settings, the present study addresses the appropriateness of their use in high-stake contexts. Specifically, we examined item- and scale-level differential functioning of the Short Dark Triad (SD3; Paulhus & Jones, 2011) measure across testing contexts. The Short Dark Triad was administered to applicant (N = 457) and non-applicant (N = 592) samples. Item- and scale-level invariances were tested using an Item Response Theory (IRT)-based approach and a Structural Equation Modeling (SEM) approach, respectively. Results show that more than half of the SD3 items were flagged for Differential Item Functioning (DIF), and Exploratory Structural Equation Modeling (ESEM) results supported configural, but not metric invariance. Implications for theory and practice are discussed.
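The abstract names an IRT-based DIF approach without giving method details. As a schematic analogue only, the sketch below runs a Mantel–Haenszel DIF check, a classic method for dichotomous items (SD3 items are Likert-type, so this is not the authors' procedure), on simulated Rasch data; all names and parameter values are hypothetical.

```python
import numpy as np

def mh_dif_chi2(resp, group, item):
    """Mantel-Haenszel DIF chi-square (with continuity correction) for one item.

    resp  -- (n_persons, n_items) 0/1 response matrix
    group -- boolean array, True = focal group
    item  -- index of the studied item
    Examinees are matched on total score (the studied item included).
    """
    total = resp.sum(axis=1)
    a_sum = e_sum = v_sum = 0.0
    for s in np.unique(total):
        in_s = total == s
        ref, foc = in_s & ~group, in_s & group
        n_r, n_f = ref.sum(), foc.sum()
        t = float(n_r + n_f)                 # stratum size
        if t < 2:
            continue
        m1 = resp[in_s, item].sum()          # correct answers in stratum
        m0 = t - m1
        a_sum += resp[ref, item].sum()       # correct in reference group
        e_sum += n_r * m1 / t                # expected count under no DIF
        v_sum += n_r * n_f * m1 * m0 / (t * t * (t - 1))
    return (abs(a_sum - e_sum) - 0.5) ** 2 / v_sum

# simulate Rasch responses: equal ability distributions, but item 0 is
# one logit harder for the focal group (uniform DIF)
rng = np.random.default_rng(1)
n = 2000
theta = rng.normal(size=2 * n)
group = np.arange(2 * n) >= n                      # second half = focal group
b = np.array([0.0, -0.5, 0.0, 0.5, 1.0])           # item difficulties
bmat = np.tile(b, (2 * n, 1))
bmat[group, 0] += 1.0                              # inject DIF into item 0
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - bmat)))
resp = (rng.random((2 * n, 5)) < p).astype(int)

chi2_dif = mh_dif_chi2(resp, group, 0)             # should be large
chi2_clean = mh_dif_chi2(resp, group, 2)           # should be far smaller
print(chi2_dif, chi2_clean)
```

Flagging an item then amounts to comparing its statistic against a chi-square(1) critical value, which mirrors the item-level screening step described in the abstract.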

