Effects of sample size, model specification and factor loadings on the GFI in confirmatory factor analysis

1998 ◽  
Vol 25 (1) ◽  
pp. 85-90 ◽  
Author(s):  
M. Shevlin ◽  
J.N.V. Miles


2020 ◽  
Vol 23 ◽  
Author(s):  
Daniel Ondé ◽  
Jesús M. Alvarado

Abstract There is a series of conventions governing how confirmatory factor analysis is applied, from the minimum sample size to the number of items representing each factor to the estimation of factor loadings so that they may be interpreted. In practice, these rules sometimes lead to unjustified decisions because they sideline important questions about a model's practical significance and validity. Through a Monte Carlo simulation study, the present research shows the compensatory effects of sample size, number of items, and strength of factor loadings on the stability of parameter estimation in confirmatory factor analysis. The results point to various scenarios in which bad decisions are easy to make and go undetected by goodness-of-fit evaluation. In light of the findings, the authors alert researchers to the possible consequences of arbitrary rule following when validating factor models. Before applying the rules, we recommend that applied researchers conduct their own simulation studies to determine what conditions would guarantee a stable solution for the particular factor model in question.
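The kind of simulation study the abstract recommends can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration, not the authors' code: it generates data from a standardized one-factor model, estimates loadings from the first principal component of the sample correlation matrix (a rough stand-in for maximum likelihood CFA), and compares the stability of the estimates across sample size, item count, and loading strength.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_loading_sd(n, n_items, loading, n_reps=200):
    """Monte Carlo SD of estimated loadings for a one-factor model.

    Data are generated as x = loading * f + sqrt(1 - loading**2) * e,
    and loadings are estimated from the first eigenvector of the
    sample correlation matrix (a principal-component shortcut, not ML).
    """
    estimates = []
    for _ in range(n_reps):
        f = rng.standard_normal((n, 1))
        e = rng.standard_normal((n, n_items))
        x = loading * f + np.sqrt(1 - loading**2) * e
        r = np.corrcoef(x, rowvar=False)
        vals, vecs = np.linalg.eigh(r)          # ascending eigenvalues
        v = vecs[:, -1] * np.sqrt(vals[-1])     # first-component loadings
        v = v * np.sign(v.sum())                # resolve sign indeterminacy
        estimates.append(v.mean())
    return float(np.std(estimates))

# Larger samples, more items, and stronger loadings give more stable estimates.
sd_small = simulate_loading_sd(n=100, n_items=3, loading=0.4)
sd_large = simulate_loading_sd(n=500, n_items=6, loading=0.8)
print(sd_small, sd_large)
```

Varying `n`, `n_items`, and `loading` over a grid reproduces the compensatory pattern the abstract describes: a weak loading can be offset by a larger sample or more items, and vice versa.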


2021 ◽  
pp. 001316442110089
Author(s):  
Yuanshu Fu ◽  
Zhonglin Wen ◽  
Yang Wang

Composite reliability, or coefficient omega, can be estimated using structural equation modeling. Composite reliability is usually estimated under the basic independent clusters model of confirmatory factor analysis (ICM-CFA). However, due to the existence of cross-loadings, the model fit of the exploratory structural equation model (ESEM) is often found to be substantially better than that of ICM-CFA. The present study first illustrated the method used to estimate composite reliability under ESEM and then compared the difference between ESEM and ICM-CFA in terms of composite reliability estimation under various indicators per factor, target factor loadings, cross-loadings, and sample sizes. The results showed no apparent difference in using ESEM or ICM-CFA for estimating composite reliability, and the rotation type did not affect the composite reliability estimates generated by ESEM. An empirical example was given as further proof of the results of the simulation studies. Based on the present study, we suggest that if the model fit of ESEM (regardless of the utilized rotation criteria) is acceptable but that of ICM-CFA is not, the composite reliability estimates based on the above two models should be similar. If the target factor loadings are relatively small, researchers should increase the number of indicators per factor or increase the sample size.
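Composite reliability (coefficient omega) for a single factor can be computed directly from the estimated loadings. A minimal sketch, assuming standardized indicators so that each residual variance is 1 − λ² (the abstract's models estimate it via SEM; this closed form is shown for illustration):

```python
import numpy as np

def composite_reliability(loadings):
    """Coefficient omega for standardized loadings of one factor.

    omega = (sum(lambda))^2 / ((sum(lambda))^2 + sum(theta)),
    where theta_i = 1 - lambda_i^2 is the residual variance of
    indicator i under a standardized one-factor model.
    """
    lam = np.asarray(loadings, dtype=float)
    theta = 1.0 - lam**2
    num = lam.sum() ** 2
    return float(num / (num + theta.sum()))

# Both remedies suggested in the abstract raise omega:
print(composite_reliability([0.4, 0.4, 0.4]))   # weak loadings, few items
print(composite_reliability([0.4] * 6))         # more indicators per factor
print(composite_reliability([0.8, 0.8, 0.8]))   # stronger target loadings
```

The last two calls illustrate the abstract's closing advice: with small target loadings, adding indicators per factor (or strengthening the loadings) is what moves omega, whichever of ESEM or ICM-CFA produced the estimates.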


Methodology ◽  
2007 ◽  
Vol 3 (2) ◽  
pp. 67-80 ◽  
Author(s):  
Carmen Ximénez

Abstract. Two general issues central to the design of a study are subject sampling and variable sampling. Previous research has examined their effects on factor pattern recovery in the context of exploratory factor analysis. The present paper focuses on recovery of weak factors and reports two simulation studies in the context of confirmatory factor analysis. Conditions investigated include the estimation method (ML vs. ULS), sample size (100, 300, and 500), number of variables per factor (3, 4, or 5), loading size in the weak factor (.25 or .35), and factor correlation (null vs. moderate). Results show that both subject and variable sample size affect the recovery of weak factors, particularly if factors are not correlated. A small but consistent pattern of differences between methods occurs, which favors the use of ULS. Additionally, the frequency of nonconvergent and improper solutions is also affected by the same variables.
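One kind of improper solution the study counts is the Heywood case, where a standardized loading estimated above 1 implies a negative residual variance. A small illustrative check, assuming a standardized one-factor model (not the study's ML/ULS estimation code):

```python
import numpy as np

def residual_variances(loadings):
    """Residual variances implied by standardized loadings.

    Under a standardized one-factor model, theta_i = 1 - lambda_i^2.
    A negative value signals an improper (Heywood) solution.
    """
    lam = np.asarray(loadings, dtype=float)
    return 1.0 - lam**2

def is_improper(loadings):
    """True if any implied residual variance is negative."""
    return bool(np.any(residual_variances(loadings) < 0))

print(is_improper([0.25, 0.35, 0.30]))   # weak but admissible loadings
print(is_improper([1.05, 0.30, 0.20]))   # loading > 1 -> Heywood case
```

In a simulation like the one reported, such a check is run on every replication to tally the frequency of improper solutions across conditions.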


2017 ◽  
Vol 72 (4) ◽  
pp. 429-447 ◽  
Author(s):  
Ady Milman ◽  
Anita Zehrer ◽  
Asli D.A. Tasci

Purpose: Previous mountain tourism research addressed economic, environmental, social and political impacts. Because few studies have evaluated visitors' perception of their experience, this study aims to examine the tangible and intangible visitor experience at a Tyrolean alpine tourist attraction.

Design/methodology/approach: The study adopted Klaus and Maklan's (2012) customer experience model, which suggests that customers base their experience perception on the quality of product experience, outcome focus, moments of truth and peace of mind. The model's impact on overall customer experience quality at the mountain attraction was assessed through a structured survey comprising 207 face-to-face on-site interviews.

Findings: The results of the confirmatory factor analysis did not confirm the four-dimensional structure, probably due to the differences between the mountain tourism experience and the mortgage lending experience in the original study. Instead, principal component analysis suggested a different dimensional structure, with components named the functional, social, comparative and normative aspects of the visitors' experience.

Research limitations/implications: The results are based on a convenience sample collected in a given period of time. While the sample size satisfied the data analysis requirements, the confirmatory factor analysis would benefit from a larger sample.

Practical implications: Consumer experience dimensions at a mountain attraction may not be concrete or objective and consequently may yield different types of attributes that influence behavior.

Social implications: Social exchange theory could explain the relationships between visitors and service providers and their consequences. Attraction managers should increase benefits for visitors and service providers to enhance those relationships, and thus the experience.

Originality/value: The study explored the applicability of an existing experiential consumption model in a mountain attraction context. The findings introduce a revised model that may be applicable to other tourist attractions.


2017 ◽  
Vol 78 (4) ◽  
pp. 537-568 ◽  
Author(s):  
Huub Hoofs ◽  
Rens van de Schoot ◽  
Nicole W. H. Jansen ◽  
IJmert Kant

Bayesian confirmatory factor analysis (CFA) offers an alternative to frequentist CFA based on, for example, maximum likelihood estimation for the assessment of reliability and validity of educational and psychological measures. For increasing sample sizes, however, the applicability of current fit statistics evaluating model fit within Bayesian CFA is limited. We propose, therefore, a Bayesian variant of the root mean square error of approximation (RMSEA), the BRMSEA. A simulation study was performed with variations in model misspecification, factor loading magnitude, number of indicators, number of factors, and sample size. This showed that the 90% posterior probability interval of the BRMSEA is valid for evaluating model fit in large samples (N ≥ 1,000), using cutoff values for the lower (<.05) and upper limit (<.08) as guidelines. An empirical illustration further shows the advantage of the BRMSEA in large sample Bayesian CFA models. In conclusion, it can be stated that the BRMSEA is well suited to evaluate model fit in large sample Bayesian CFA models by taking sample size and model complexity into account.
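For reference, the frequentist RMSEA that the BRMSEA adapts can be computed from the chi-square model test. The sketch below shows the standard formula and applies the cutoff values mentioned in the abstract; the verdict labels are our own illustrative choice, not the authors' procedure.

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the RMSEA from a chi-square model test.

    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    The BRMSEA is a Bayesian analogue computed from posterior samples;
    this frequentist formula is shown only for reference.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def fit_verdict(value, lower=0.05, upper=0.08):
    """Illustrative labeling based on the cutoff values in the abstract."""
    if value < lower:
        return "good fit"
    if value < upper:
        return "acceptable fit"
    return "poor fit"

val = rmsea(chi2=180.0, df=120, n=1000)
print(round(val, 4), fit_verdict(val))
```

Note that a chi-square below its degrees of freedom yields an RMSEA of exactly zero, which is why the formula clamps the numerator at zero.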


2018 ◽  
Vol 4 (15) ◽  
pp. 153
Author(s):  
Dr. Purwanto

Factor analysis is a test of construct validity. The test works by analyzing a large set of items or variables and extracting them into fewer, simpler factors. The extraction unifies items or variables that share significant common variance because they measure the same dimension. In application, factor analysis can be exploratory or confirmatory. Exploratory factor analysis is used to discover the factors that explain a variable; the analysis does not work under a hypothesis. Confirmatory factor analysis, on the other hand, hypothesizes factors from a set of items or variables to guide its work. The analysis runs in several steps: testing the assumptions of the analysis, producing the correlation matrix, performing the extraction, applying a rotation, and labeling the factors. The results are interpreted in several ways. Data can be analyzed only if the assumptions are met: the Kaiser-Meyer-Olkin index must exceed 0.80, and Bartlett's test of sphericity must be significant. Items or variables form the same dimension or factor if their intercorrelations exceed 0.20. A factor is retained if its eigenvalue exceeds 1.00, and an item supports a factor if its factor loading exceeds 0.30. Finally, the retained factors are labeled or named according to the character of their supporting items.
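The Kaiser criterion described above (retain factors whose eigenvalue exceeds 1.00) can be checked directly on a correlation matrix. A minimal sketch with a hypothetical four-item correlation matrix (the values are invented for illustration):

```python
import numpy as np

def kaiser_retained(corr):
    """Number of factors with eigenvalue > 1 (Kaiser criterion)."""
    eigenvalues = np.linalg.eigvalsh(np.asarray(corr, dtype=float))
    return int(np.sum(eigenvalues > 1.0))

# Hypothetical correlation matrix for four items: the first two items
# and the last two items form two correlated pairs, suggesting that
# two factors should be retained.
r = np.array([
    [1.0, 0.6, 0.1, 0.1],
    [0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.6],
    [0.1, 0.1, 0.6, 1.0],
])
print(kaiser_retained(r))
```

In a full analysis, this step would follow the assumption checks (KMO index, Bartlett's test) and precede extraction, rotation, and the inspection of loadings against the 0.30 threshold.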

