Why We Need to Abandon Fixed Cutoffs for Goodness-of-Fit Indices: A Comprehensive Simulation and Possible Solutions
To evaluate model fit in confirmatory factor analysis, researchers compare goodness-of-fit indices (GOFs) against fixed cutoff values derived from simulation studies. However, these cutoffs may not be as broadly applicable as researchers typically assume, especially when used in settings not covered by the simulation scenarios from which they were derived. Thus, we aim to evaluate (1) the sensitivity of GOFs to model misspecification and (2) their susceptibility to extraneous data and analysis characteristics (i.e., estimator, number of indicators, number of response options, distribution of response options, loading magnitude, sample size, and factor correlation). Our study includes the most comprehensive simulation on this matter to date. This enables us to uncover several previously unknown, or at least underappreciated, issues with GOFs. All widely used GOFs are far more susceptible to extraneous influences, and in more complex ways, than generally appreciated, and their sensitivity to misspecifications in factor loadings and factor correlations varies substantially across scenarios. For instance, one strong influence on all GOFs was the magnitude of factor loadings (either as a main effect or as a two-way interaction with other characteristics). The strong susceptibility of GOFs to data and analysis characteristics shows that the practice of judging model fit against fixed cutoffs is more problematic than previously assumed. These hitherto unnoticed effects on GOFs imply that no general cutoff rules can be applied to evaluate model fit. We discuss alternatives for assessing model fit and develop a new approach to tailor cutoffs for GOFs to specific research settings.