Using multi-group confirmatory factor analysis to evaluate cross-cultural research: identifying and understanding non-invariance

2015 ◽  
Vol 40 (1) ◽  
pp. 66-90 ◽  
Author(s):  
Gavin T.L. Brown ◽  
Lois R. Harris ◽  
Chrissie O'Quin ◽  
Kenneth E. Lane


2019 ◽
Vol 21 (4) ◽  
pp. 466-483
Author(s):  
Shinhee Jeong ◽  
Yunsoo Lee

The Problem: Cross-cultural research has received substantial attention from both academia and practice, as it contributes to expanding current theory and to implementing human resource strategies that succeed across cultures. Although the quantity of this type of research has increased, several researchers have raised methodological concerns that the majority of cross-cultural research has simply assumed or ignored measurement invariance.

The Solution: In this article, we first explain the meaning of measurement invariance, discuss why it is important, and then describe stepwise confirmatory factor analysis procedures for testing measurement invariance. We also diagnose current research practice in the field of human resource development (HRD) based on a review of cross-cultural, comparative research published in the major HRD journals. Finally, we demonstrate that group difference results obtained without ensuring measurement invariance can, in fact, be false.

The Stakeholders: This article contributes to the HRD literature and practice in two ways. First, HRD researchers are invited to recognize the importance of rigorous research methodology, such as measurement invariance testing, and to examine item bias across different groups so they can make meaningful and valid comparisons. The same attention is advisable for any practitioner who attempts to identify group differences using multinational/cultural data. Second, this article provides HRD scholars and practitioners with specific multigroup confirmatory factor analysis (MGCFA) procedures to facilitate empirical tests of measurement models across different groups and thus disseminates methodological advances in the field of HRD. It is our hope that the present article raises awareness, circulates relevant knowledge, and encourages more HRD scholars to think critically about measurement.
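The stepwise MGCFA procedure described above fits increasingly constrained models (e.g., configural, then metric with equal loadings, then scalar with equal intercepts) and compares each nested pair, most commonly with a chi-square difference test. A minimal sketch of that comparison step in Python; the function name and fit statistics below are illustrative placeholders, not values from the article:

```python
from scipy.stats import chi2

def chi_square_difference(chisq_restricted, df_restricted,
                          chisq_free, df_free):
    """Chi-square difference (likelihood-ratio) test between two
    nested CFA models. The restricted model adds invariance
    constraints, so it has the larger chi-square and more df."""
    delta_chisq = chisq_restricted - chisq_free
    delta_df = df_restricted - df_free
    p_value = chi2.sf(delta_chisq, delta_df)
    return delta_chisq, delta_df, p_value

# Hypothetical fit statistics: configural (free) model vs.
# metric (equal-loadings) model.
d_chisq, d_df, p = chi_square_difference(
    chisq_restricted=112.4, df_restricted=58,
    chisq_free=98.1, df_free=48)
# A non-significant p suggests the added constraints do not
# significantly worsen fit, supporting metric invariance.
```
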


Author(s):  
Thanh V. Tran ◽  
Tam Nguyen ◽  
Keith Chan

A cross-cultural comparison can be misleading for two reasons: (1) the comparison is made using different attributes, or (2) the comparison is made using different scale units. This chapter illustrates multiple statistical approaches to evaluating the cross-cultural equivalence of research instruments: the data distribution of the instrument's items, the response patterns for each item, the corrected item–total correlation, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and reliability analysis using parallel-test and tau-equivalence tests. Equivalence is the fundamental issue in cross-cultural research and evaluation.
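Of the item-screening statistics listed above, the corrected item–total correlation is simple enough to sketch directly: each item is correlated with the total of the *remaining* items, so the item does not inflate its own correlation. A minimal illustration with NumPy; the data and function name are invented for the example:

```python
import numpy as np

def corrected_item_total(scores):
    """Corrected item-total correlations for a (respondents x items)
    array: correlate each item with the sum of the other items."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    correlations = []
    for j in range(n_items):
        rest = scores.sum(axis=1) - scores[:, j]  # total minus item j
        r = np.corrcoef(scores[:, j], rest)[0, 1]
        correlations.append(r)
    return correlations

# Invented 5-point Likert responses (6 respondents x 3 items).
data = [[4, 5, 4], [2, 2, 3], [5, 4, 5],
        [3, 3, 2], [1, 2, 1], [4, 4, 4]]
r_values = corrected_item_total(data)  # one r per item
```

Items with low or negative corrected correlations are candidates for revision or removal before the factor-analytic steps.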


2010 ◽  
Vol 3 (1) ◽  
pp. 111-130 ◽  
Author(s):  
Taciano L. Milfont ◽  
Ronald Fischer

Researchers often compare groups of individuals on psychological variables. When comparing groups, an assumption is made that the instrument measures the same psychological construct in all groups. If this assumption holds, the comparisons are valid and differences or similarities between groups can be meaningfully interpreted. If it does not hold, comparisons and interpretations are not fully meaningful. Establishing measurement invariance is therefore a prerequisite for meaningful comparisons across groups. This paper first reviews the importance of equivalence in psychological research and then discusses the main theoretical and methodological issues regarding measurement invariance within the framework of confirmatory factor analysis. A step-by-step empirical example of measurement invariance testing is provided, along with syntax examples for fitting such models in LISREL.


2021 ◽  
pp. 003329412110051
Author(s):  
Cecilia Brando-Garrido ◽  
Javier Montes-Hidalgo ◽  
Joaquín T. Limonero ◽  
María J. Gómez-Romero ◽  
Joaquín Tomás-Sábado

A recent line of research concerns bedtime procrastination, its effects on sleep quality and duration, and the associated repercussions for health and wellbeing. The Bedtime Procrastination Scale (BPS) is a brief self-report instrument developed by Kroese et al. to evaluate this behavior and explore its association with insufficient sleep, and hence with health. The aim of the present study was to develop and validate a Spanish version of the Bedtime Procrastination Scale (BPS-Sp) and to examine the relationship between bedtime procrastination and both general procrastination and self-control. The original BPS was translated from English into Spanish in accordance with international guidelines on the cross-cultural adaptation of measurement instruments. The validation sample comprised 177 nursing students who completed a questionnaire requesting demographic data and including the following instruments: the newly developed BPS-Sp, the Tuckman Procrastination Scale, and the Brief Self-Control Scale. Statistical analysis involved tests of normality (Kolmogorov-Smirnov), reliability (Cronbach's alpha, test-retest), construct validity, and confirmatory factor analysis. Scores on the BPS-Sp showed excellent internal consistency (α = .83) and temporal stability (test-retest r = .84), as well as significant correlations with general procrastination (r = .26; p < .01) and self-control (r = −.17; p < .05). Confirmatory factor analysis showed an adequate fit for the single-factor solution proposed by Kroese et al. The results suggest that the BPS-Sp is a valid and reliable instrument for assessing bedtime procrastination in the Spanish-speaking population.
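The internal-consistency figure reported above (α = .83) is Cronbach's alpha, which depends only on the number of items, the item variances, and the variance of the total score. A minimal sketch of the computation with invented data, not the study's responses:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) array:
    alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 5-point Likert responses (6 respondents x 3 items).
responses = [[4, 5, 4], [2, 2, 3], [5, 4, 5],
             [3, 3, 2], [1, 2, 1], [4, 4, 4]]
alpha = cronbach_alpha(responses)  # high when items move together
```
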


2016 ◽  
Vol 50 (0) ◽  
Author(s):  
Mariana Charantola Silva ◽  
Marina Peduzzi ◽  
Carine Teles Sangaleti ◽  
Dirceu da Silva ◽  
Heloise Fernandes Agreli ◽  
...  

ABSTRACT OBJECTIVE To adapt and validate the Team Climate Inventory, a scale measuring teamwork climate, for the Portuguese language in the context of primary health care in Brazil. METHODS Methodological study with a quantitative approach of cross-cultural adaptation (translation, back-translation, synthesis, expert committee, and pretest) and validation with 497 employees from 72 teams of the Family Health Strategy in the city of Campinas, SP, Southeastern Brazil. We verified reliability by Cronbach's alpha, construct validity by confirmatory factor analysis with the SmartPLS software, and correlation with a job satisfaction scale. RESULTS We problematized the overlap of items 9, 11, and 12 of the "participation in the team" factor with the "team goals" factor regarding its definition. The validation showed no overlapping of items, and reliability ranged from 0.92 to 0.93. The confirmatory factor analysis indicated suitability of the proposed model, with distribution of the 38 items across the four factors. The correlation between teamwork climate and job satisfaction was significant. CONCLUSIONS The Brazilian Portuguese version of the scale was validated and can be used in the context of primary health care in the country, constituting an adequate tool for the assessment and diagnosis of teamwork.


1993 ◽  
Vol 76 (3_suppl) ◽  
pp. 1275-1281 ◽  
Author(s):  
Lynette S. McCullough

Ten humorous television advertisements were shown to 44 Finnish and 68 American university students to investigate whether Freud's two-part humor typology (tendentious/nontendentious) adequately represented the perceptions of both nationalities. Confirmatory factor analysis did not confirm the two-type structure for either nationality, and subsequent exploratory factor analysis indicated different humor perceptions for Finns and Americans. Second-order factor analysis yielded an aggressive and a nonsense factor, which suggests that the more reductive two-part structure may exist across cultures.


2021 ◽  
Vol In Press (In Press) ◽  
Author(s):  
Arezoo Paliziyan ◽  
Mehrnaz Mehrabizadeh Honarmand ◽  
Seyed Esmael Hashemi ◽  
Iran Davoudi

Background: Diagnostic questionnaires play an important role in accelerating the diagnosis of mental disorders. Objectives: This study aimed to provide a cross-culturally adapted Persian form of the Self-Report Oppositional Defiant Behavior Inventory (SR-ODBI) and to assess the validity and reliability of this Persian form. Methods: The study was conducted on two samples: 294 students (girls and boys) selected from high schools of Dezful city during the 2019 - 2020 school year by a multi-stage random sampling method, and 320 parents. The validity of the oppositional defiant behavior inventory was assessed by two methods, confirmatory factor analysis and convergent validity, and its reliability was assessed using Cronbach's alpha and split-half methods. Results: Cronbach's alpha was 0.73 (0.87) for the whole self-report scale (parent version), 0.72 (0.74) for the irritability subscale, and 0.81 (0.80) for the stubborn and resentful behavior subscale. The correlation between the SR-ODBI and the Achenbach Youth Mental Health Test was 0.56 (P < 0.01). The results of confirmatory factor analysis (RMSEA = 0.06 and 0.08) also indicated a relatively good fit of the structure of the oppositional defiant behavior inventory. Conclusions: The results indicated that the Persian version of the Oppositional Defiant Behavior Inventory has good reliability and validity in Iran.
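The split-half method mentioned above divides the items into two halves (commonly odd- vs. even-numbered items), correlates the half totals, and then steps the half-length correlation up to full test length with the Spearman-Brown formula r_sb = 2r / (1 + r). A minimal sketch with invented responses; the function name and data are illustrative:

```python
import numpy as np

def split_half_reliability(scores):
    """Split-half reliability for a (respondents x items) array:
    correlate odd- and even-item half totals, then apply the
    Spearman-Brown correction r_sb = 2r / (1 + r)."""
    scores = np.asarray(scores, dtype=float)
    odd_total = scores[:, 0::2].sum(axis=1)   # items 1, 3, 5, ...
    even_total = scores[:, 1::2].sum(axis=1)  # items 2, 4, 6, ...
    r = np.corrcoef(odd_total, even_total)[0, 1]
    return 2 * r / (1 + r)

# Invented 5-point Likert responses (6 respondents x 4 items).
responses = [[4, 5, 4, 5], [2, 2, 3, 2], [5, 4, 5, 4],
             [3, 3, 2, 3], [1, 2, 1, 2], [4, 4, 4, 4]]
reliability = split_half_reliability(responses)
```
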


1999 ◽  
Vol 25 (1) ◽  
pp. 1-27 ◽  
Author(s):  
Gordon W. Cheung ◽  
Roger B. Rensvold

Many cross-cultural researchers are concerned with factorial invariance; that is, with whether or not members of different cultures associate survey items, or similar measures, with similar constructs. Researchers usually test items for factorial invariance using confirmatory factor analysis (CFA). CFA, however, poses certain problems that must be dealt with. Primary among them is standardization, the process that assigns units of measurement to the constructs (latent variables). Two standardization procedures and several minor variants have been reported in the literature, but using these procedures when testing for factorial invariance can lead to inaccurate results. In this paper we review basic theory, and propose an extension of Byrne, Shavelson, and Muthén's (1989) procedure for identifying non-invariant items. The extended procedure solves the standardization problem by performing a systematic comparison of all pairs of factor loadings across groups. A numerical example based upon a large published data set is presented to illustrate the utility of the new procedure, particularly with regard to partial factorial invariance.

