Compiling Measurement Invariant Short Scales in Cross-Cultural Personality Assessment Using Ant Colony Optimization

2020 ◽  
Vol 34 (3) ◽  
pp. 470-485 ◽  
Author(s):  
Kristin Jankowsky ◽  
Gabriel Olaru ◽  
Ulrich Schroeders

Examining the influence of culture on personality and its unbiased assessment is the main subject of cross-cultural personality research. Recent large-scale studies exploring personality differences across cultures share substantial methodological and psychometric shortcomings that render it difficult to differentiate between method and trait variance. One prominent example is the implicit assumption of cross-cultural measurement invariance in personality questionnaires. In the rare instances where measurement invariance across cultures was tested, scalar measurement invariance, which is required for unbiased mean-level comparisons of personality traits, did not hold. In this article, we present an item sampling procedure, ant colony optimization, which can be used to select item sets that satisfy multiple psychometric requirements, including model fit, reliability, and measurement invariance. We constructed short scales of the IPIP-NEO-300 for a group of countries that are culturally similar (USA, Australia, Canada, and UK) as well as a group of countries with distinct cultures (USA, India, Singapore, and Sweden). In addition to examining factor mean differences across countries, we provide recommendations for cross-cultural research in general. From a methodological perspective, we demonstrate ant colony optimization's versatility and flexibility as an item sampling procedure to derive measurement invariant scales for cross-cultural research. © 2020 The Authors. European Journal of Personality published by John Wiley & Sons Ltd on behalf of European Association of Personality Psychology.
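
As a rough, generic illustration of the item sampling idea described above (not the authors' implementation), the Python sketch below selects a fixed number of items per run, scores each candidate item set with a user-supplied function, and feeds the best score back into per-item pheromone weights. All function and parameter names here are invented for illustration; in the study's setting, the scoring function would combine model fit, reliability, and measurement invariance criteria obtained from a multigroup CFA fitted to the candidate subset.

```python
import random
from typing import Callable, List, Sequence

def aco_item_selection(n_items: int,
                       subset_size: int,
                       score_fn: Callable[[Sequence[int]], float],
                       n_ants: int = 20,
                       n_iterations: int = 50,
                       evaporation: float = 0.9,
                       seed: int = 1) -> List[int]:
    """Select `subset_size` of `n_items` items with a basic ant colony scheme.

    `score_fn` maps a candidate item subset (a list of item indices) to a single
    non-negative quality value, where higher is better.
    """
    rng = random.Random(seed)
    pheromone = [1.0] * n_items                # selection weight per item
    best_subset, best_score = None, float("-inf")

    for _ in range(n_iterations):
        for _ in range(n_ants):
            # Each "ant" samples a subset with probability proportional to pheromone.
            subset = _weighted_sample(pheromone, subset_size, rng)
            score = score_fn(subset)
            if score > best_score:
                best_subset, best_score = subset, score
        # Evaporate old pheromone, then reinforce the items of the best subset so far.
        pheromone = [p * evaporation for p in pheromone]
        for item in best_subset:
            pheromone[item] += max(best_score, 0.0)

    return sorted(best_subset)

def _weighted_sample(weights: Sequence[float], k: int, rng: random.Random) -> List[int]:
    """Draw k distinct indices with probability proportional to their weights."""
    remaining = list(range(len(weights)))
    chosen: List[int] = []
    for _ in range(k):
        total = sum(weights[i] for i in remaining)
        r = rng.uniform(0, total)
        acc = 0.0
        for i in remaining:
            acc += weights[i]
            if acc >= r:
                chosen.append(i)
                remaining.remove(i)
                break
    return chosen
```

In practice, `score_fn` would fit the measurement model to each candidate subset in every group and return, for example, a weighted combination of model fit, reliability, and the change in fit under invariance constraints; the sketch leaves that part entirely to the caller.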

PLoS ONE ◽  
2019 ◽  
Vol 14 (2) ◽  
pp. e0211819 ◽  
Author(s):  
Gabriel Olaru ◽  
Oliver Wilhelm ◽  
Steven Nordin ◽  
Michael Witthöft ◽  
Ferenc Köteles

2017 ◽  
Vol 33 (4) ◽  
pp. 297-313 ◽  
Author(s):  
T. E. Virtanen ◽  
P. Moreira ◽  
H. Ulvseth ◽  
H. Andersson ◽  
S. Tetler ◽  
...  

The promotion of students’ engagement with school is an internationally acknowledged challenge in education. There is a need to examine the structure of the student engagement construct and to identify best practices for fostering it across societies, which makes cross-cultural invariance testing of student engagement measures essential. The first aim of this study was to identify a reduced set of theoretically valid items representing students’ affective and cognitive engagement, forming the Brief-SEI (a brief version of the Student Engagement Instrument; SEI). The second aim was to test the measurement invariance of the Brief-SEI across three countries (Denmark, Finland, and Portugal). A total of 4,437 seventh-grade students completed the SEI questionnaire in the three countries. The analyses revealed that 15 of the 33 original instrument items yielded a Brief-SEI with acceptable psychometric properties. With these 15 items, cross-national factorial validity was demonstrated, as was invariance across genders and across students with different levels of academic performance (in the Finnish and Portuguese samples). This article discusses the utility of the Brief-SEI in cross-cultural research and its applicability in different national school contexts.


2010 ◽  
Vol 3 (1) ◽  
pp. 111-130 ◽  
Author(s):  
Taciano L. Milfont ◽  
Ronald Fischer

Researchers often compare groups of individuals on psychological variables. When comparing groups, it is assumed that the instrument measures the same psychological construct in all groups. If this assumption holds, the comparisons are valid and differences or similarities between groups can be meaningfully interpreted. If it does not hold, comparisons and interpretations are not fully meaningful. The establishment of measurement invariance is therefore a prerequisite for meaningful comparisons across groups. This paper first reviews the importance of equivalence in psychological research and then discusses the main theoretical and methodological issues regarding measurement invariance within the framework of confirmatory factor analysis. A step-by-step empirical example of measurement invariance testing is provided, along with syntax examples for fitting such models in LISREL.
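
The step-by-step procedure referred to above follows the usual sequence of increasingly constrained multigroup models. The original LISREL syntax is not reproduced here; instead, the following Python sketch shows only the decision logic of that sequence (configural, then metric, then scalar). `fit_multigroup_cfa` is a hypothetical helper standing in for whatever SEM software is used, assumed to return a dictionary of fit indices, and the ΔCFI cutoff of .01 is one commonly used rule of thumb rather than a fixed standard.

```python
from typing import Callable, Dict, Sequence

# A hypothetical fitting routine: takes the name of a constraint level and returns
# fit indices for the corresponding multigroup CFA, e.g. {"cfi": 0.95, "rmsea": 0.05}.
FitFunction = Callable[[str], Dict[str, float]]

def highest_supported_invariance(fit_multigroup_cfa: FitFunction,
                                 levels: Sequence[str] = ("configural", "metric", "scalar"),
                                 delta_cfi_cutoff: float = 0.01) -> str:
    """Return the most restrictive invariance level the data appear to support.

    Each level adds equality constraints across groups: configural (same factor
    structure), metric (equal loadings), scalar (equal loadings and intercepts).
    A drop in CFI larger than the cutoff, relative to the previous level, is
    taken as evidence that the added constraints do not hold.
    """
    supported = "none"
    previous_cfi = None
    for level in levels:
        fit = fit_multigroup_cfa(level)
        if previous_cfi is not None and previous_cfi - fit["cfi"] > delta_cfi_cutoff:
            break  # the stricter model fits substantially worse; stop at the previous level
        supported = level
        previous_cfi = fit["cfi"]
    return supported
```

Only when scalar (or at least partial scalar) invariance is retained can mean differences between groups be interpreted as trait differences, which is the point several of the abstracts in this collection make.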


2019 ◽  
Vol 21 (4) ◽  
pp. 466-483
Author(s):  
Shinhee Jeong ◽  
Yunsoo Lee

The Problem: Cross-cultural research has received substantial attention from both academia and practice because it helps to expand current theory and to implement successful human resource strategies across cultures. Although the quantity of this type of research has increased, several researchers have raised the methodological concern that the majority of cross-cultural research has simply assumed or ignored measurement invariance.

The Solution: In this article, we first explain what measurement invariance means and why it is important, and then describe stepwise confirmatory factor analysis procedures for testing it. We also diagnose current research practice in the field of human resource development (HRD) based on a review of cross-cultural, comparative research published in the major HRD journals. Finally, we demonstrate that group difference test results obtained without ensuring measurement invariance can, in fact, be false.

The Stakeholders: This article contributes to the HRD literature and practice in two ways. First, HRD researchers are invited to recognize the importance of sophisticated research methodology, such as measurement invariance testing, and to examine item bias across different groups so that they can make meaningful and valid comparisons. The same attention is advisable for any practitioner who attempts to identify group differences using multinational or multicultural data. Second, this article provides HRD scholars and practitioners with specific multigroup confirmatory factor analysis (MGCFA) procedures that facilitate empirical tests of measurement models across different groups and thus disseminates methodological advances in the field of HRD. It is our hope that the present article raises awareness, circulates relevant knowledge, and encourages more HRD scholars to think critically about measurement.


2020 ◽  
Vol 54 (4) ◽  
pp. 323-345 ◽  
Author(s):  
Tony Xing Tan ◽  
Zhiyao Yi ◽  
Eunsook Kim ◽  
Zhengjie Li ◽  
Ke Cheng

In this study, we illustrated issues related to measurement invariance in cross-cultural research involving instrument translation between Chinese and English. We translated and back-translated the third edition of the Behavior Assessment System for Children-Self Report of Personality (BASC-3-SRP) and administered it to 1,574 youth in China and 512 youth in the United States. We found that, despite a rigorous approach to achieving linguistic equivalence and statistical evidence of acceptable internal consistency and construct validity, six of the 16 BASC-3-SRP subscales lacked measurement invariance. The constructs underlying the first three of these subscales (i.e., Negative Attitude toward School, Negative Attitude toward Teachers, and Self-Esteem) are known to be conceptualized differently in collectivistic societies, whereas the remaining three subscales (i.e., Atypicality, Sense of Inadequacy, and Hyperactivity) lacked measurement invariance for no known cultural reason. These results highlight instrument development and measurement invariance issues that cross-cultural researchers must grapple with.


2021 ◽  
Author(s):  
David Lacko ◽  
Jiří Čeněk ◽  
Jaroslav Točík ◽  
Andreja Avsec ◽  
Vladimir Đorđević ◽  
...  

Individualism and collectivism are among the most widely applied concepts in cultural and cross-cultural research. Scholars commonly examine potential similarities and differences between samples from various countries by comparing arithmetic means or sum indexes of scale items. For many reasons, cross-cultural research involves numerous methodological and statistical pitfalls. The aim of this article is to summarize some of these pitfalls, particularly the problem of measurement (non)invariance, which stems from differing understandings of questionnaire items, or even a different character of the constructs themselves, across countries. This potential bias (i.e., systematic measurement error) can be reduced by latent mean comparisons performed with multigroup confirmatory factor analysis and measurement invariance testing within a structural equation modeling framework. These procedures have, however, been neglected by many researchers in the field of cross-cultural psychology. In this article we compare the “traditional” approach (comparison of arithmetic means) with the “invariant” approach (comparison of latent means). Both approaches are demonstrated with data gathered with the Independent and Interdependent Self-Scale from 1,386 participants across six countries (Slovenia, Croatia, Bosnia and Herzegovina, Serbia, Macedonia, and Albania). Our results revealed considerable differences between the “invariant” and “traditional” approaches, especially in the post-hoc analyses. Since the “invariant” results can be considered less biased, this finding suggests that the currently prevalent practice of comparing arithmetic means on cross-cultural scales of individualism and collectivism can yield systematically biased results. As we demonstrate, evaluating the measurement invariance properties of scales used in cross-cultural research has the potential to improve the validity of cross-cultural comparisons of individualism and collectivism.
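
The gap between the “traditional” and “invariant” approaches can be made concrete with a small numeric example. The Python sketch below uses invented values, not the study’s data: under a one-factor model, an item’s expected score equals intercept + loading × latent mean, so an intercept difference between countries shifts the arithmetic means even when the latent means are identical.

```python
# Invented example values, not the study's data: two groups with identical latent
# means on a collectivism factor, but one translated item with a biased intercept.

def expected_item_mean(intercept: float, loading: float, latent_mean: float) -> float:
    """Expected item score under a one-factor model: intercept + loading * latent mean."""
    return intercept + loading * latent_mean

latent_mean_a = latent_mean_b = 0.50          # identical standing on the construct

obs_a = expected_item_mean(intercept=3.0, loading=0.8, latent_mean=latent_mean_a)
obs_b = expected_item_mean(intercept=3.4, loading=0.8, latent_mean=latent_mean_b)

print(f"Observed item means: A = {obs_a:.2f}, B = {obs_b:.2f}")        # 3.40 vs 3.80
print(f"Latent mean difference: {latent_mean_b - latent_mean_a:.2f}")  # 0.00
# The 0.40 gap in raw means is pure item bias (scalar non-invariance), not a true
# difference in the construct; latent mean comparisons under (partial) scalar
# invariance are designed to avoid exactly this confound.
```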


2018 ◽  
Vol 49 (5) ◽  
pp. 691-712 ◽  
Author(s):  
Ronald Fischer ◽  
Ype H. Poortinga

We address methodological challenges in cross-cultural and cultural psychology. First, we describe weaknesses in (quasi-)experimental designs, noting that cross-cultural designs typically do not allow conclusive evidence of causality. Second, we argue that loose adherence to the methodological principles of psychology and a focus on differences, while neglecting similarities, are distorting the literature. We highlight the importance of effect sizes and discuss the role of Bayesian statistics and meta-analysis in cross-cultural research. Third, we highlight issues of measurement bias and lack of equivalence, but note that recent large-scale projects involving researchers across many countries from the beginning of a study have much potential for overcoming biases and improving standards of equivalence. Fourth, we address some implications of multilevel models. Cultural processes are multilevel by definition, and recent statistical advances can be used to explore these issues further. We believe this is an area where much theoretical work needs to be done and more rigorous methods applied. Fifth, we argue that the definition of culture, the psychological organization of cross-cultural differences, and the definition of the cultural populations to which research findings are generalized require more attention. Sixth, we address the scope for anchoring cross-cultural research in biological variables and for asking multiple questions simultaneously, as advocated by Tinbergen for classical ethology. Bringing these discussions together, we provide recommendations for enhancing the methodological strength of culture-comparative studies to advance cross-cultural psychology as a scientific discipline.

