Testing Measurement Invariance Using MIMIC

2011, Vol. 72(3), pp. 469-492
Author(s): Eun Sook Kim, Myeongsun Yoon, Taehun Lee

Multiple-indicators multiple-causes (MIMIC) modeling is often used to test a latent group mean difference while assuming the equivalence of factor loadings and intercepts over groups. However, this study demonstrated that MIMIC was insensitive to the presence of factor loading noninvariance, which implies that factor loading invariance should be tested through other measurement invariance testing techniques. MIMIC modeling is also used for measurement invariance testing by allowing a direct path from a grouping covariate to each observed variable. This simulation study with both continuous and categorical variables investigated the performance of MIMIC in detecting noninvariant variables under various study conditions and showed that the likelihood ratio test of MIMIC with Oort adjustment not only controlled Type I error rates below the nominal level but also maintained high power across study conditions.
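For readers unfamiliar with the test referred to above, the following is a minimal sketch of a likelihood ratio (chi-square difference) test comparing a MIMIC model with a freed direct path against the constrained model, given only their fit statistics. The Oort-style critical-value adjustment shown, which inflates the conventional critical value by the fitted model's chi-square-to-df ratio, is one published formulation and is included here as an assumption, not a restatement of the authors' exact procedure.

```python
from scipy.stats import chi2

def lr_test_oort(chisq_constrained, df_constrained,
                 chisq_free, df_free,
                 chisq_baseline, df_baseline, alpha=0.05):
    """Chi-square difference test for one freed direct path in a MIMIC model,
    with an Oort-style adjusted critical value (assumed formulation)."""
    diff = chisq_constrained - chisq_free          # LR statistic
    ddf = df_constrained - df_free                 # usually 1 per freed path
    crit = chi2.ppf(1 - alpha, ddf)                # conventional critical value
    # Oort-style adjustment: scale the critical value by the overall
    # misfit of the baseline (no-direct-path) model.
    crit_adj = crit * (chisq_baseline / df_baseline)
    return {"LR": diff, "df": ddf,
            "reject_unadjusted": diff > crit,
            "reject_oort": diff > crit_adj}

# Example with hypothetical fit statistics:
print(lr_test_oort(210.4, 53, 201.9, 52, 210.4, 53))
```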

2017, Vol. 78(3), pp. 460-481
Author(s): Margarita Olivera-Aguilar, Samuel H. Rikoon, Oscar Gonzalez, Yasemin Kisbu-Sakarya, David P. MacKinnon

When testing a statistical mediation model, it is assumed that factorial measurement invariance holds for the mediating construct across levels of the independent variable X. The consequences of failing to address the violations of measurement invariance in mediation models are largely unknown. The purpose of the present study was to systematically examine the impact of mediator noninvariance on the Type I error rates, statistical power, and relative bias in parameter estimates of the mediated effect in the single mediator model. The results of a large simulation study indicated that, in general, the mediated effect was robust to violations of invariance in loadings. In contrast, most conditions with violations of intercept invariance exhibited severely positively biased mediated effects, Type I error rates above acceptable levels, and statistical power larger than in the invariant conditions. The implications of these results are discussed and recommendations are offered.
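For reference, the mediated (indirect) effect in the single mediator model is the product of the X-to-M path (a) and the M-to-Y path controlling for X (b). Below is a minimal sketch of its estimation and a Sobel-type test using ordinary least squares on observed (not latent) variables, purely for illustration; the simulated data and path values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)                      # independent variable
m = 0.4 * x + rng.normal(size=n)            # mediator (true a = 0.4)
y = 0.3 * m + 0.1 * x + rng.normal(size=n)  # outcome (true b = 0.3)

def ols(y, X):
    """Return coefficients and standard errors for y ~ X (with intercept)."""
    X = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

(_, a), (_, se_a) = ols(m, x)                          # a path: M on X
(_, b, _), (_, se_b, _) = ols(y, np.column_stack([m, x]))  # b path: Y on M, X

ab = a * b                                        # mediated effect
se_ab = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)  # Sobel standard error
z = ab / se_ab
print(f"ab = {ab:.3f}, z = {z:.2f}, p = {2 * norm.sf(abs(z)):.4f}")
```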


2019, Vol. 44(4), pp. 282-295
Author(s): HyeSun Lee, Weldon Z. Smith

This study examined whether the fit index cutoffs suggested for traditional item formats with maximum likelihood estimation can be used to assess model fit and to test measurement invariance when multiple-group confirmatory factor analysis is applied to the Thurstonian item response theory (IRT) model. The performance of these evaluation criteria was assessed in terms of detection of measurement non-invariance and Type I error rates. The impact of measurement non-invariance on estimated scores in the Thurstonian IRT model was also examined through the accuracy and efficiency of score estimation. The fit indices used to evaluate overall model fit performed well. Among six cutoffs for changes in model fit indices, only ΔCFI > .01 and ΔNCI > .02 detected metric non-invariance when the magnitude of non-invariance was medium, and none of the cutoffs performed well in detecting scalar non-invariance. Based on the generated sampling distributions of fit index differences, this study suggested ΔCFI > .001 and ΔNCI > .004 for scalar non-invariance and ΔCFI > .007 for metric non-invariance. Considering Type I error rate control and detection rates of measurement non-invariance, ΔCFI is recommended for non-invariance testing with forced-choice format data. Challenges in testing measurement non-invariance in the Thurstonian IRT model are discussed, along with directions for future research to enhance the utility of forced-choice formats in test development for cross-cultural and international settings.
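As a point of reference, CFI is computed from the chi-square values of the fitted model and the baseline (independence) model, and invariance decisions in this literature compare the change in CFI between nested models (e.g., configural vs. metric) against a cutoff. A minimal sketch with hypothetical fit statistics, assuming the chi-squares come from whatever estimator was used:

```python
def cfi(chisq_model, df_model, chisq_indep, df_indep):
    """Comparative fit index from model and independence-model chi-squares."""
    d_model = max(chisq_model - df_model, 0.0)
    d_indep = max(chisq_indep - df_indep, 0.0)
    return 1.0 - d_model / max(d_indep, d_model, 1e-12)

# Hypothetical values for configural and metric (loadings-constrained) models,
# sharing one independence model:
chisq_indep, df_indep = 4820.0, 435
cfi_configural = cfi(892.4, 404, chisq_indep, df_indep)
cfi_metric = cfi(931.7, 432, chisq_indep, df_indep)

delta_cfi = cfi_configural - cfi_metric
print(f"ΔCFI = {delta_cfi:.4f}; flag non-invariance if ΔCFI exceeds .01 "
      "(or a stricter cutoff such as .007, as suggested above)")
```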


2019
Author(s): Melissa Angelina Rodgers, James E. Pustejovsky

Selective reporting of results based on their statistical significance threatens the validity of meta-analytic findings. A variety of techniques for detecting selective reporting, publication bias, or small-study effects are available and are routinely used in research syntheses. Most such techniques are univariate, in that they assume that each study contributes a single, independent effect size estimate to the meta-analysis. In practice, however, studies often contribute multiple, statistically dependent effect size estimates, such as for multiple measures of a common outcome construct. Many methods are available for meta-analyzing dependent effect sizes, but methods for investigating selective reporting while also handling effect size dependencies require further investigation. Using Monte Carlo simulations, we evaluate three available univariate tests for small-study effects or selective reporting, including the Trim & Fill test, Egger's regression test, and a likelihood ratio test from a three-parameter selection model (3PSM), when dependence is ignored or handled using ad hoc techniques. We also examine two variants of Egger’s regression test that incorporate robust variance estimation (RVE) or multi-level meta-analysis (MLMA) to handle dependence. Simulation results demonstrate that ignoring dependence inflates Type I error rates for all univariate tests. Variants of Egger's regression maintain Type I error rates when dependent effect sizes are sampled or handled using RVE or MLMA. The 3PSM likelihood ratio test does not fully control Type I error rates. With the exception of the 3PSM, all methods have limited power to detect selection bias except under strong selection for statistically significant effects.
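A minimal sketch of an Egger-type regression test of the kind described above, in the meta-regression form that regresses effect estimates on their standard errors with inverse-variance weights, and with a cluster-robust (RVE-style) covariance matrix clustered by study to accommodate dependent effect sizes. The data frame and its columns are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical meta-analytic data: several effect sizes nested within studies.
rng = np.random.default_rng(1)
n_es = 120
dat = pd.DataFrame({
    "study": rng.integers(0, 40, n_es),   # study identifier (cluster)
    "yi": rng.normal(0.2, 0.3, n_es),     # effect size estimates
    "sei": rng.uniform(0.05, 0.30, n_es), # their standard errors
})

# Egger-type test: regress yi on sei, weighted by inverse sampling variance.
X = sm.add_constant(dat["sei"])
fit = sm.WLS(dat["yi"], X, weights=1.0 / dat["sei"] ** 2).fit(
    cov_type="cluster", cov_kwds={"groups": dat["study"]}
)
# A significant slope on sei indicates small-study effects / funnel asymmetry.
print(fit.params["sei"], fit.pvalues["sei"])
```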


2016, Vol. 42(1), pp. 46-68
Author(s): Sandip Sinharay

An increasing concern of producers of educational assessments is fraudulent behavior during the assessment (van der Linden, 2009). Benefiting from item preknowledge (e.g., Eckerly, 2017; McLeod, Lewis, & Thissen, 2003) is one type of fraudulent behavior. This article suggests two new test statistics for detecting individuals who may have benefited from item preknowledge; the statistics can be used for both nonadaptive and adaptive assessments that may include dichotomous items, polytomous items, or both. Each new statistic has an asymptotic standard normal distribution. Detailed simulation studies demonstrate that the Type I error rates of the new statistics are close to the nominal level and that their power is larger than that of an existing statistic for addressing the same problem.
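The article's statistics themselves are not reproduced here, but the following sketch illustrates the general form of such a detector: a standardized difference between an examinee's observed and model-expected score on a set of possibly compromised 2PL items, which is approximately standard normal when no preknowledge is present. The item parameters, theta estimate, and flagged item set are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

def preknowledge_z(responses, a, b, theta):
    """Standardized observed-minus-expected score on possibly compromised
    2PL items; large positive values suggest benefit from preknowledge."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # 2PL correct-response probs
    expected = p.sum()
    variance = (p * (1.0 - p)).sum()
    z = (responses.sum() - expected) / np.sqrt(variance)
    return z, norm.sf(z)                         # one-sided p-value

# Hypothetical examinee: theta estimated from non-compromised items,
# responses to five items suspected of having been exposed.
a = np.array([1.2, 0.9, 1.5, 1.1, 0.8])          # discriminations
b = np.array([0.5, 1.0, 1.3, 0.2, 0.9])          # difficulties
responses = np.array([1, 1, 1, 1, 1])
print(preknowledge_z(responses, a, b, theta=-0.3))
```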


2021, pp. 001316442199042
Author(s): Joseph A. Rios

Low test-taking effort is a common validity threat when examinees perceive an assessment context to have minimal personal value. Prior research has shown that in such contexts, subgroups may differ in their effort, which raises two concerns when making subgroup mean comparisons. First, it is unclear how differential effort could influence evaluations of scale property equivalence. Second, even when full scalar invariance is attained, the degree to which differential effort can bias subgroup mean comparisons is unknown. To address these issues, a simulation study was conducted to examine the influence of differential noneffortful responding (NER) on evaluations of measurement invariance and latent mean comparisons. Results showed that as differential rates of NER grew, inflated Type I error rates in invariance testing were observed only at the metric invariance level, while no negative effects were apparent for configural or scalar invariance. When full scalar invariance was correctly attained, differential NER led to bias in mean score comparisons as large as 0.18 standard deviations at a differential NER rate of 7%. These findings suggest that test users should evaluate and document potential differential NER before conducting measurement quality analyses and before reporting disaggregated subgroup mean performance.
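A minimal simulation sketch of the design described: two groups with identical true ability distributions, a small share of examinees in one group responding noneffortfully (at random) on multiple-choice items, and the resulting bias in the observed mean difference expressed in standard deviation units. All rates and item properties are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n, items, k_options = 2000, 30, 4
theta_a = rng.normal(0.0, 1.0, n)                # group A abilities
theta_b = rng.normal(0.0, 1.0, n)                # group B: same true mean

def observed_scores(theta, ner_rate):
    """Sum scores; a ner_rate share of examinees answer all items at random."""
    diffs = rng.normal(0, 1, items)              # item difficulties
    p_effortful = 1.0 / (1.0 + np.exp(-(theta[:, None] - diffs)))
    resp = rng.random((len(theta), items)) < p_effortful
    ner = rng.random(len(theta)) < ner_rate      # noneffortful responders
    resp[ner] = rng.random((ner.sum(), items)) < 1.0 / k_options
    return resp.sum(axis=1)

scores_a = observed_scores(theta_a, ner_rate=0.00)   # no NER in group A
scores_b = observed_scores(theta_b, ner_rate=0.07)   # 7% differential NER in B
pooled_sd = np.sqrt((scores_a.var(ddof=1) + scores_b.var(ddof=1)) / 2)
print("observed bias (SD units):", (scores_a.mean() - scores_b.mean()) / pooled_sd)
```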


2019
Author(s): John Kitchener Sakaluk, Robyn Kilshaw, Alexandra Noelle Fisher, Connor Emont Leshner

Comparisons of group means, variances, correlations, and/or regression slopes involving psychological variables rely on an assumption of measurement invariance: that the latent variables under investigation have equivalent meaning and measurement across groups. When measures are noninvariant, replicability suffers, as comparisons are either conceptually meaningless or hindered by inflated Type I error rates. We propose that the failure to account for interdependence among dyad members when testing measurement invariance may be a source of unreplicable findings in relationship research. We develop fully dyadic versions of invariance testing in an Actor-Partner Interdependence Model framework and propose a Registered Report for gauging the extent of dyadic (non)invariance in romantic relationship research.


2020
Author(s): Alyssa Counsell, Rob Cribbie

Measurement invariance (MI) is often concluded from a nonsignificant chi-square difference test, and researchers have also proposed using changes in goodness-of-fit indices (ΔGOFs) instead. Both of these commonly used methods for testing MI have important limitations. To combat these issues, an equivalence test (EQ) was proposed to replace the chi-square difference test commonly used to test MI. Because of concerns about the EQ's power, an adjusted version (EQ-A) was also created, but little evaluation of either procedure has been conducted. The current study evaluated the Type I error and power of both the EQ and the EQ-A and compared their performance with that of the traditional chi-square difference test and ΔGOFs. The EQ was the only procedure that maintained empirical error rates below the nominal alpha level. Results also highlight that the EQ requires larger sample sizes than traditional difference-based approaches, or the use of equivalence bounds based on larger than conventional RMSEA values (e.g., > .05), to ensure adequate power. We do not recommend the proposed adjustment (EQ-A) over the EQ.
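A minimal sketch of the equivalence-testing logic referred to above: instead of requiring a nonsignificant chi-square difference, the EQ rejects a null hypothesis of "meaningful noninvariance" when the observed chi-square difference falls below a critical value from a noncentral chi-square whose noncentrality encodes the equivalence bound. The RMSEA-based noncentrality used here, lambda = (N - 1) * df * epsilon^2, is one common parameterization and is an assumption, not necessarily the exact formulation evaluated in the study.

```python
from scipy.stats import ncx2

def equivalence_test_mi(chisq_diff, df_diff, n_total, epsilon=0.05, alpha=0.05):
    """Equivalence test for a chi-square difference in invariance testing.
    Rejects non-equivalence (i.e., supports invariance) when the observed
    difference is below the alpha quantile of a noncentral chi-square with
    noncentrality based on an RMSEA-type bound epsilon (assumed formulation)."""
    ncp = (n_total - 1) * df_diff * epsilon ** 2
    crit = ncx2.ppf(alpha, df_diff, ncp)
    return {"chisq_diff": chisq_diff, "critical": crit,
            "supports_invariance": chisq_diff < crit}

# Hypothetical metric-invariance comparison (configural vs. loadings-equal):
print(equivalence_test_mi(chisq_diff=18.4, df_diff=12, n_total=600))
```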


Methodology, 2015, Vol. 11(2), pp. 45-54
Author(s): Ingrid Koller, Marco Johannes Maier, Reinhold Hatzinger

Measurement invariance is not only an important requirement of tests but also a central assumption examined under the Rasch model. Ponocny (2001) suggested quasi-exact tests for small samples, which allow test statistics to be formulated on the basis of response matrices generated with Monte Carlo methods. The purpose of the present study was to analyze the Type I error rates and the empirical power of two such test statistics for the assumption of measurement invariance in comparison with Andersen's (1973) likelihood ratio test. Each simulation was based on 10,000 replications and varied sample size (n = 30, 50, 100, 200), test length (k = 5, 9, 17), the number of items exhibiting model violation, the magnitude of violation, and the ability distribution. The results indicate that it is possible to detect large model violations at the item level with samples of n = 50 or n = 100, and even weak violations with n = 200. Additionally, the results showed that very small samples can be investigated where a parametric approach is not feasible, which is one of the most important advantages of quasi-exact tests.
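A minimal sketch of the Monte Carlo idea behind such quasi-exact tests: binary response matrices are sampled that preserve the observed row (person) and column (item) margins, which are sufficient statistics under the Rasch model, here via simple 2x2 "tetrad" swaps, and the observed value of a user-defined statistic is located in the resulting reference distribution. The swap sampler and the illustrative statistic (a high-minus-low scorer difference on one item) are generic, not Ponocny's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def tetrad_swap_sample(X, n_swaps=2000):
    """Return a matrix with the same row and column sums as X, obtained by
    repeatedly swapping 2x2 submatrices [[1,0],[0,1]] <-> [[0,1],[1,0]]."""
    X = X.copy()
    n, k = X.shape
    for _ in range(n_swaps):
        r = rng.choice(n, 2, replace=False)
        c = rng.choice(k, 2, replace=False)
        sub = X[np.ix_(r, c)]
        if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
            X[np.ix_(r, c)] = sub[::-1]          # flip the diagonal pattern
    return X

def stat(X, item):
    """Illustrative statistic: item mean in high scorers minus low scorers."""
    total = X.sum(axis=1)
    high = total >= np.median(total)
    return X[high, item].mean() - X[~high, item].mean()

X_obs = (rng.random((50, 9)) < 0.6).astype(int)  # hypothetical 50 x 9 data
obs = stat(X_obs, item=0)
ref = np.array([stat(tetrad_swap_sample(X_obs), 0) for _ in range(500)])
p_value = (np.sum(ref >= obs) + 1) / (len(ref) + 1)   # one-sided quasi-exact p
print(obs, p_value)
```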

