Assessing and Testing Cross-Cultural Measurement Equivalence

Author(s):  
Thanh V. Tran ◽  
Tam Nguyen ◽  
Keith Chan

A cross-cultural comparison can be misleading for two reasons: (1) the comparison is made using different attributes, or (2) the comparison is made using different scale units. This chapter illustrates multiple statistical approaches to evaluating the cross-cultural equivalence of research instruments: the data distribution of the instrument's items, the response patterns of each item, the corrected item–total correlation, exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and reliability analysis using parallel-test and tau-equivalence tests. Equivalence is the fundamental issue in cross-cultural research and evaluation.
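Two of the item-level checks listed above, the corrected item–total correlation and a classical reliability estimate, can be sketched in a few lines. This is a minimal illustration assuming a plain respondents-by-items score matrix; the function names are ours, not the chapter's:

```python
import numpy as np

def corrected_item_total(X):
    """Correlation of each item with the sum of the remaining items.

    X: (n_respondents, n_items) array of item scores.
    Returns an array of length n_items.
    """
    X = np.asarray(X, dtype=float)
    total = X.sum(axis=1)
    r = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        rest = total - X[:, j]  # total score excluding item j
        r[j] = np.corrcoef(X[:, j], rest)[0, 1]
    return r

def cronbach_alpha(X):
    """Classical alpha: (k/(k-1)) * (1 - sum(item variances)/var(total))."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)
```

Running both functions separately for each cultural group and comparing the resulting patterns is one informal way to screen items before moving to EFA/CFA.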

Perception ◽  
1976 ◽  
Vol 5 (3) ◽  
pp. 343-348 ◽  
Author(s):  
Jan B Deregowski

A group of Scottish schoolchildren was tested on a task intended to measure the effect of implicit-shape constancy, and the scores were compared with those obtained from African samples. Both groups were influenced by implicit-shape constancy, although the influence was weaker in the African sample. The relationship of these findings to other published reports of cross-cultural research into pictorial perception and susceptibility to illusions is discussed.


2010 ◽  
Vol 3 (1) ◽  
pp. 111-130 ◽  
Author(s):  
Taciano L. Milfont ◽  
Ronald Fischer

Researchers often compare groups of individuals on psychological variables. Such comparisons assume that the instrument measures the same psychological construct in all groups. If this assumption holds, the comparisons are valid and differences or similarities between groups can be meaningfully interpreted. If it does not hold, comparisons and interpretations are not fully meaningful; establishing measurement invariance is therefore a prerequisite for meaningful comparisons across groups. This paper first reviews the importance of equivalence in psychological research and then discusses the main theoretical and methodological issues regarding measurement invariance within the framework of confirmatory factor analysis. A step-by-step empirical example of measurement invariance testing is provided, along with syntax examples for fitting such models in LISREL.
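The step-by-step testing described here is typically carried out by comparing nested CFA models (configural, metric, scalar) with a chi-square difference (likelihood-ratio) test. A minimal sketch, assuming the χ² and degrees of freedom of each fitted model are already available from whatever SEM program was used; the function name is illustrative:

```python
from scipy.stats import chi2

def chisq_diff_test(chisq_restricted, df_restricted, chisq_free, df_free):
    """Likelihood-ratio test between two nested CFA models.

    The more constrained model (e.g., metric invariance) is 'restricted';
    the less constrained one (e.g., configural) is 'free'. Returns
    (delta_chisq, delta_df, p_value); a non-significant p supports
    retaining the added equality constraints.
    """
    d_chi = chisq_restricted - chisq_free
    d_df = df_restricted - df_free
    p = chi2.sf(d_chi, d_df)
    return d_chi, d_df, p
```

For example, if adding loading constraints raises χ² from 300.0 (df = 140) to 310.4 (df = 150), the resulting Δχ² of about 10.4 on 10 df is non-significant, consistent with metric invariance.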


2019 ◽  
Vol 21 (4) ◽  
pp. 466-483
Author(s):  
Shinhee Jeong ◽  
Yunsoo Lee

The Problem: Cross-cultural research has received substantial attention from both academia and practice, as it helps expand current theory and implement culturally appropriate human resource strategies. Although the quantity of this type of research has increased, several researchers have raised methodological concerns that the majority of cross-cultural studies have simply assumed or ignored measurement invariance.

The Solution: In this article, we first explain what measurement invariance means and why it is important, and then describe stepwise confirmatory factor analysis procedures for testing it. We also diagnose current research practice in the field of human resource development (HRD) based on a review of cross-cultural comparative research published in the major HRD journals. Finally, we demonstrate that group-difference results obtained without ensuring measurement invariance can, in fact, be false.

The Stakeholders: This article contributes to HRD literature and practice in two ways. First, HRD researchers are invited to recognize the importance of sophisticated research methodology such as measurement invariance testing, and to examine item bias across groups so they can make meaningful and valid comparisons. The same attention is advisable for any practitioner who attempts to identify group differences using multinational or multicultural data. Second, this article provides HRD scholars and practitioners with specific multigroup confirmatory factor analysis (MGCFA) procedures to facilitate empirical tests of measurement models across groups, thereby disseminating these methodological advances in the field of HRD. It is our hope that the present article raises awareness, circulates relevant knowledge, and encourages more HRD scholars to think critically about measurement.
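Alongside the Δχ² test, applied invariance testing of this kind often uses the ΔCFI rule of thumb (Cheung & Rensvold, 2002): a CFI drop greater than .01 when equality constraints are added suggests rejecting that level of invariance. A minimal illustrative sketch, not code from the article:

```python
def invariance_decision(cfi_free, cfi_restricted, threshold=0.01):
    """Apply the delta-CFI heuristic to a pair of nested invariance models.

    cfi_free       : CFI of the less constrained model (e.g., configural)
    cfi_restricted : CFI of the more constrained model (e.g., metric)
    Returns (verdict, delta_cfi).
    """
    delta_cfi = cfi_free - cfi_restricted
    verdict = ("retain constraints" if delta_cfi <= threshold
               else "reject constraints")
    return verdict, round(delta_cfi, 4)
```

Because ΔCFI is insensitive to sample size, it is commonly reported next to the Δχ² test rather than instead of it.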


2018 ◽  
Vol 34 (2) ◽  
pp. 87-100 ◽  
Author(s):  
Gino Casale ◽  
Robert J. Volpe ◽  
Brian Daniels ◽  
Thomas Hennemann ◽  
Amy M. Briesch ◽  
...  

Abstract. The current study examines the item and scalar equivalence of an abbreviated school-based universal screener that was cross-culturally translated and adapted from English into German. The instrument was designed to assess student behavior problems that impact classroom learning. Participants were 1,346 students in grades K–6 from the US (n = 390, mean age = 9.23 years, 38.5% female) and Germany (n = 956, mean age = 8.04 years, 40.1% female). Measurement invariance was tested by multigroup confirmatory factor analysis (CFA) across students from the US and Germany. Results support full scalar invariance between students from the US and Germany (df = 266, χ² = 790.141, Δχ² = 6.9, p < .001, CFI = 0.976, ΔCFI = 0.000, RMSEA = 0.052, ΔRMSEA = −0.003), indicating that the factor structure, the factor loadings, and the item thresholds are comparable across samples. This finding implies that a full cross-cultural comparison, including latent factor means and structural coefficients, between the US and German versions of the abbreviated screener is possible. Therefore, the tool can be used in German schools as well as for cross-cultural research purposes between the US and Germany.


Author(s):  
Fons J.R. Van de Vijver ◽  
Jia He

Bias and equivalence provide a framework for the methodological aspects of cross-cultural studies. Bias is a generic term for any systematic error in measurement that endangers the comparability of cross-cultural data; bias results in invalid comparative conclusions. The demonstration of equivalence (i.e., the absence of bias) is a prerequisite for any cross-cultural comparison. Based on the source of incomparability, three types of bias can be distinguished: construct, method, and item bias. Correspondingly, three levels of equivalence can be distinguished: construct, metric, and scalar equivalence. One of the goals of cross-cultural research is to minimize bias and enhance comparability. The definitions and manifestations of these types of bias and equivalence are described, and remedies to minimize bias and enhance equivalence at the design, implementation, and statistical analysis phases of a cross-cultural study are provided. These strategies involve different research features (e.g., decentering and convergence), extensive piloting and pretesting, and various statistical procedures for demonstrating different levels of equivalence and detecting bias (e.g., factor-analysis-based approaches and differential item functioning analysis). The implications of bias and equivalence also extend to instrument adaptation and to combining etic and emic approaches to maximize ecological validity. Instrument choices in cross-cultural research and the categorization of adaptations stemming from considerations of concept, culture, language, and measurement are outlined. Examples from cross-cultural personality research are highlighted to illustrate the importance of combining etic and emic approaches. The professionalization and broadening of the field is expected to increase the validity of conclusions regarding cross-cultural similarities and differences.
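Differential item functioning, mentioned above as one statistical route to detecting item bias, is often screened with the Mantel–Haenszel common odds ratio for a dichotomous item, stratifying respondents on a matching score. A minimal numpy sketch under those assumptions (the function name is illustrative):

```python
import numpy as np

def mantel_haenszel_or(item, group, strata):
    """Mantel-Haenszel common odds ratio for a dichotomous item.

    item   : 0/1 item responses
    group  : 0 = reference group, 1 = focal group
    strata : matching variable (e.g., rest-score level) used to stratify
    Returns the MH odds ratio; values far from 1 suggest DIF.
    """
    item = np.asarray(item)
    group = np.asarray(group)
    strata = np.asarray(strata)
    num = den = 0.0
    for s in np.unique(strata):
        m = strata == s
        a = np.sum((group[m] == 0) & (item[m] == 1))  # reference, correct
        b = np.sum((group[m] == 0) & (item[m] == 0))  # reference, incorrect
        c = np.sum((group[m] == 1) & (item[m] == 1))  # focal, correct
        d = np.sum((group[m] == 1) & (item[m] == 0))  # focal, incorrect
        n = a + b + c + d
        if n == 0:
            continue
        num += a * d / n
        den += b * c / n
    return num / den if den else float("nan")
```

An odds ratio near 1 at every matched ability level indicates the item favors neither group; in practice the statistic is accompanied by a significance test and an effect-size classification.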


2017 ◽  
Vol 6 (3) ◽  
pp. 181
Author(s):  
Munevver Ilgun Dibek ◽  
Hatice C Yavuz ◽  
Ezel Tavsancil ◽  
Seher Yalcin

The purpose of the present study was twofold: first, to adapt the Relationship and Motivation (REMO) scale, which addresses the role of peers and teachers in students' motivation, to Turkish culture; and second, to determine whether girls and boys differed in the scores obtained from this scale. To achieve these aims, the research comprised three consecutive studies. In Study 1, linguistic equivalence was established, and the results of an exploratory factor analysis (EFA) performed on data obtained from 202 students supported the structure of the original scale. In Study 2, a confirmatory factor analysis (CFA) was conducted using data obtained from 496 Turkish students, and the results confirmed those of the EFA. Additional validity evidence was obtained by conducting another EFA with 528 students, and reliability coefficients fell within an acceptable range. In Study 3, which included the same participants as Study 2, t-test results showed that girls had significantly higher mean scores on the subscales of peers and teachers as positive motivators and of teachers as negative motivators, whereas boys had significantly higher mean scores on the subscale of peers as negative motivators. The results of these studies suggest that the Turkish version of REMO is conceptually equivalent to the original REMO and is similarly reliable and valid. Therefore, the adapted scale can be used not only for cross-cultural comparison but also for examining differences in students' relationships with their peers and teachers.
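The gender comparisons described in Study 3 are independent-samples t-tests. A minimal sketch of such a comparison using entirely hypothetical subscale scores (not the study's data):

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical "teachers as positive motivators" subscale scores,
# invented for illustration only.
rng = np.random.default_rng(0)
girls = rng.normal(3.8, 0.6, 250)
boys = rng.normal(3.5, 0.6, 250)

# Welch's t-test (equal_var=False) avoids assuming equal group variances.
t, p = ttest_ind(girls, boys, equal_var=False)
```

A positive t with a small p would correspond to the pattern reported for this subscale (girls scoring higher); note that such comparisons are only interpretable once measurement invariance across the groups has been supported.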


Psihologija ◽  
2010 ◽  
Vol 43 (2) ◽  
pp. 121-136 ◽  
Author(s):  
Milos Kankaras ◽  
Guy Moors

In cross-cultural comparative studies it is essential to establish equivalent measurement of the relevant constructs across cultures. If this equivalence is not confirmed, it is difficult, if not impossible, to make meaningful comparisons of results across countries. This work presents the concept of measurement equivalence, its relationship to other related concepts, the different levels of equivalence, and the causes of inequivalence in cross-cultural research. It also reviews the three main approaches to analyzing measurement equivalence (multigroup confirmatory factor analysis, differential item functioning, and multigroup latent class analysis), with special emphasis on their similarities and differences as well as their comparative advantages.

