Supplemental Material for On the Likelihood Ratio Test in Structural Equation Modeling When Parameters Are Subject to Boundary Constraints

2006 ◽  
Vol 11 (4) ◽  
pp. 439-455 ◽  
Author(s):  
Reinoud D. Stoel ◽  
Francisca Galindo Garre ◽  
Conor Dolan ◽  
Godfried van den Wittenboer

2017 ◽  
Vol 33 (3) ◽  
pp. 534-550 ◽  
Author(s):  
Theodore W. Anderson

Consider testing the null hypothesis that a single structural equation has specified coefficients. The alternative hypothesis is that the relevant part of the reduced form matrix has proper rank, that is, that the equation is identified. The usual linear model with normal disturbances is invariant with respect to linear transformations of the endogenous and of the exogenous variables. When the disturbance covariance matrix is known, it can be set to the identity, and the invariance of the endogenous variables is with respect to orthogonal transformations. The likelihood ratio test is invariant with respect to these transformations and is the best invariant test. Furthermore, it is admissible in the class of all tests. Any other test has lower power and/or higher significance level. In particular, this likelihood ratio test dominates a test based on the Two-Stage Least Squares estimator.
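The device of setting a known disturbance covariance to the identity can be illustrated with a toy likelihood ratio test. This is a minimal sketch, assuming an ordinary linear model y = Xβ + e with unit disturbance variance rather than the paper's simultaneous-equations setting; all names and values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: y = X @ beta + e, disturbance variance known and
# set to 1. H0 fixes the coefficient vector at beta0.
n, beta0 = 500, np.array([1.0, -0.5])
X = rng.normal(size=(n, 2))
y = X @ beta0 + rng.normal(size=n)  # data generated under H0

# With unit variance, -2 log LR reduces to the difference in residual
# sums of squares between the restricted and unrestricted fits.
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
rss_unrestricted = np.sum((y - X @ beta_hat) ** 2)
rss_restricted = np.sum((y - X @ beta0) ** 2)
lr_stat = rss_restricted - rss_unrestricted  # ~ chi-square(2) under H0

crit_95 = 5.991  # chi-square(2) 95th percentile
reject = lr_stat > crit_95
```

Under the null the statistic is asymptotically chi-square with as many degrees of freedom as restricted coefficients; the abstract's stronger point is that in the structural-equation setting this test is admissible and dominates the corresponding test built on the Two-Stage Least Squares estimator.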


Biometrika ◽  
2017 ◽  
Vol 105 (1) ◽  
pp. 225-232 ◽  
Author(s):  
Yong Chen ◽  
Jing Huang ◽  
Yang Ning ◽  
Kung-Yee Liang ◽  
Bruce G Lindsay

Author(s):  
Lisa J. Jobst ◽  
Max Auerswald ◽  
Morten Moshagen

Abstract. In structural equation modeling, several corrections to the likelihood-ratio model test statistic have been developed to counter the effects of non-normal data. Previous robustness studies investigating the performance of these corrections typically induced non-normality in the indicator variables. However, non-normality in the indicators can originate from non-normal errors or non-normal latent factors. We conducted a Monte Carlo simulation to analyze the effect of non-normality in factors and errors on six different test statistics based on maximum likelihood estimation by evaluating the effect on empirical rejection rates and derived indices (RMSEA and CFI) for different degrees of non-normality and sample sizes. We considered the uncorrected likelihood-ratio model test statistic and the Satorra–Bentler scaled test statistic with Bartlett correction, as well as the mean and variance adjusted test statistic, a scale-shifted approach, a third-moment-adjusted test statistic, and an approach drawing inferences from the relevant asymptotic chi-square mixture distribution. The results indicate that the values of the uncorrected test statistic, compared to values under normality, are associated with a severely inflated Type I error rate when latent variables are non-normal, but virtually no differences occur when errors are non-normal. Although no general pattern regarding the source of non-normality can be derived for all analyzed measures of fit, the Satorra–Bentler scaled test statistic with Bartlett correction performed satisfactorily across conditions.


1997 ◽  
Vol 61 (4) ◽  
pp. 335-350 ◽  
Author(s):  
A. P. MORRIS ◽  
J. C. WHITTAKER ◽  
R. N. CURNOW

2014 ◽  
Vol 35 (4) ◽  
pp. 201-211 ◽  
Author(s):  
André Beauducel ◽  
Anja Leue

It is shown that a minimal assumption should be added to the assumptions of Classical Test Theory (CTT) in order to have positive inter-item correlations, which are regarded as a basis for the aggregation of items. Moreover, it is shown that the assumption of zero correlations between the error score estimates is substantially violated in the population of individuals when the number of items is small. Instead, a negative correlation between error score estimates occurs. The reason for the negative correlation is that the error score estimates for different items of a scale are based on insufficient true score estimates when the number of items is small. A test of the assumption of uncorrelated error score estimates by means of structural equation modeling (SEM) is proposed that takes this effect into account. The SEM-based procedure is demonstrated by means of empirical examples based on the Edinburgh Handedness Inventory and the Eysenck Personality Questionnaire-Revised.
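The negative correlation between error score estimates for few items can be reproduced in a short simulation. This is an illustrative sketch under a hypothetical tau-equivalent toy model (equal, uncorrelated error variances), not the authors' SEM procedure: estimating the true score by the item mean makes the error estimates for items i and j correlate at exactly -1/(k - 1):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: x_i = T + eps_i for k items, with independent unit-variance
# errors. The error score estimate e_i = x_i - mean(x) relies on an
# insufficient true score estimate when k is small.
k, n = 4, 100_000
true_score = rng.normal(size=(n, 1))
items = true_score + rng.normal(size=(n, k))
errors_hat = items - items.mean(axis=1, keepdims=True)

# The true score cancels in e_i, so the population correlation between
# error estimates for two different items is -1/(k - 1) = -1/3 here.
r = np.corrcoef(errors_hat[:, 0], errors_hat[:, 1])[0, 1]
```

As k grows, -1/(k - 1) shrinks toward zero, which matches the abstract's observation that the violation matters mainly for scales with few items.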


2020 ◽  
Vol 41 (4) ◽  
pp. 207-218 ◽  
Author(s):  
Mihaela Grigoraș ◽  
Andreea Butucescu ◽  
Amalia Miulescu ◽  
Cristian Opariuc-Dan ◽  
Dragoș Iliescu

Abstract. Given that most dark personality measures are developed on data collected in low-stakes settings, the present study addresses the appropriateness of their use in high-stakes contexts. Specifically, we examined item- and scale-level differential functioning of the Short Dark Triad (SD3; Paulhus & Jones, 2011) measure across testing contexts. The Short Dark Triad was administered to applicant (N = 457) and non-applicant (N = 592) samples. Item- and scale-level invariance was tested using an Item Response Theory (IRT)-based approach and a Structural Equation Modeling (SEM) approach, respectively. Results show that more than half of the SD3 items were flagged for Differential Item Functioning (DIF), and Exploratory Structural Equation Modeling (ESEM) results supported configural, but not metric, invariance. Implications for theory and practice are discussed.

