Psychometric Validation of the Toronto Mindfulness Scale – Trait Version in Chinese College Students

2014 ◽  
Vol 10 (4) ◽  
pp. 726-739 ◽  
Author(s):  
Pak-Kwong Chung ◽  
Chun-Qing Zhang

The Toronto Mindfulness Scale (TMS; Lau et al., 2006) has been widely used to assess participants' state mindfulness after mindfulness practice. Recently, a trait version of the Toronto Mindfulness Scale was developed and initially validated (TMS-T; Davis et al., 2009). We further examined the psychometric properties of the TMS-T in a sample of 368 Chinese college students (233 females and 135 males) from a public university in Hong Kong. Factor analyses failed to support the two-dimensional structure of the Chinese version of the TMS-T (C-TMS-T). The model fit indices indicated only a marginal fit, and the concurrent and convergent validity of the C-TMS-T was not confirmed. The moderate item-to-subscale fit of the decentering subscale indicated that its structural validity was unsatisfactory. In addition, the internal consistency of the decentering subscale, estimated as composite reliability (ρ = .61), fell below the acceptable level. Based on these results, we conclude that applying the C-TMS-T to the Chinese population is premature. Further validation of the C-TMS-T with another sample of participants, in particular individuals with meditation experience, is recommended.
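Composite reliability of the kind reported for the decentering subscale can be computed directly from standardized factor loadings. A minimal sketch follows; the loadings are invented for illustration and are not the study's actual estimates.

```python
# Composite reliability (rho) from standardized factor loadings.
# Loadings below are hypothetical, not the C-TMS-T estimates.

def composite_reliability(loadings):
    """rho = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances).
    With standardized loadings, each item's error variance is 1 - loading^2."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

# Weak, mixed loadings produce rho below the conventional .70 threshold,
# mirroring the kind of result reported for the decentering subscale.
rho = composite_reliability([0.45, 0.50, 0.40, 0.55])
```

Raising every loading raises rho, which is why poorly loading subscales fail the .70 cutoff.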

2018 ◽  
Vol 15 (4) ◽  
pp. 2407
Author(s):  
Yeşim Bayrakdaroglu ◽  
Dursun Katkat

The purpose of this study was to examine how the marketing activities of international sports organizations are carried out and to develop a scale measuring the effects of image management on the public. Spectators at the Interuniversity World Winter Olympics held in Erzurum in 2011 participated in the research. Exploratory and confirmatory factor analyses and a reliability analysis were performed on the data obtained. All model fit indices of the 25-item, four-factor structure of the perceived quality-image scale for sports organizations were at a good level. In line with the findings from the exploratory and confirmatory factor analyses and the reliability analysis, the scale can be considered a valid and reliable measurement tool for use in field research.


2021 ◽  
Vol 49 (7) ◽  
pp. 1-10
Author(s):  
Jiaxi Peng ◽  
Yongmei Xiao ◽  
Yijun Li ◽  
Wei Liang ◽  
Hao Sun ◽  
...  

Currently, there is no instrument for quickly measuring adult attachment in the Chinese cultural context. In this study, the Experiences in Close Relationships Scale–Short Form (ECR-S) was translated and tested for reliability and validity with Chinese college students. All items of the Chinese-version ECR-S showed high discriminability, and the scale had a two-dimensional structure in both exploratory and confirmatory factor analyses. The internal consistency coefficients of the two subscales of the ECR-S showed excellent reliability, and scores were modestly to highly correlated with criterion measures of state adult attachment, self-esteem, anxiety, stress, depression, and satisfaction with intimate (romantic) relationships. It can be concluded that the Chinese version of the ECR-S has high reliability and validity; it thus meets the requirements for a psychometric tool and can be used to assess Chinese adults' attachment.


2018 ◽  
Vol 18 (3) ◽  
Author(s):  
Pablo Ezequiel Flores-Kanter ◽  
Sergio Dominguez-Lara ◽  
Mario Alberto Trógolo ◽  
Leonardo Adrián Medrano

Bifactor models have gained increasing popularity in the literature concerned with personality, psychopathology, and assessment. Empirical studies using bifactor analysis generally judge the estimated model using SEM model fit indices, which may lead to erroneous interpretations and conclusions. To address this problem, several researchers have proposed multiple criteria to assess bifactor models, such as a) conceptual grounds, b) overall model fit indices, and c) specific bifactor model indicators. In this article, we provide a brief summary of these criteria. An example using data gathered from a recently published research article is also provided to show how taking into account all criteria, rather than solely SEM model fit indices, may prevent researchers from drawing wrong conclusions.
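Two widely used bifactor-specific indicators of the kind the authors mention, explained common variance (ECV) and omega hierarchical, can be computed from a standardized loading matrix. The sketch below uses an invented six-item pattern with one general and two group factors, purely for illustration.

```python
# Bifactor-specific indicators from standardized loadings (hypothetical values).

def ecv(general, groups):
    """Explained common variance: share of common variance due to the general factor."""
    g = sum(l ** 2 for l in general)
    s = sum(l ** 2 for grp in groups for l in grp)
    return g / (g + s)

def omega_h(general, groups, errors):
    """Omega hierarchical: share of total score variance due to the general factor."""
    total = sum(general) ** 2 + sum(sum(grp) ** 2 for grp in groups) + sum(errors)
    return sum(general) ** 2 / total

# Six items: general loadings of .6, two group factors with loadings of .4.
general = [0.6] * 6
groups = [[0.4] * 3, [0.4] * 3]
errors = [1 - 0.6 ** 2 - 0.4 ** 2] * 6  # standardized residual variances
```

High ECV and omega hierarchical support interpreting a total score even when SEM fit indices look merely adequate, which is the point of consulting these indicators alongside overall fit.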


2020 ◽  
pp. 073428292093092 ◽  
Author(s):  
Patrícia Silva Lúcio ◽  
Joachim Vandekerckhove ◽  
Guilherme V. Polanczyk ◽  
Hugo Cogo-Moreira

The present study compares the fit of two- and three-parameter logistic (2PL and 3PL) item response theory models to the performance of preschool children on Raven's Colored Progressive Matrices. Raven's test is widely used for evaluating nonverbal intelligence (factor g). Studies comparing the models with real data are scarce in the literature, and this is the first to compare two- and three-parameter models for Raven's test, evaluating the informational gain of modeling guessing probability. Participants were 582 Brazilian preschool children (Mage = 57 months; SD = 7 months; 46% female) who responded individually to the instrument. The model fit indices suggested that the 2PL model fit the data better. The difficulty and ability parameters were similar between the models, with almost perfect correlations. Differences were observed in discrimination and test information. The principle of parsimony must be invoked when comparing the models.
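The difference between the two models reduces to the lower asymptote c (the guessing parameter), which the 2PL fixes at zero. A minimal sketch of the item characteristic curve; the parameter values are illustrative, not the study's estimates.

```python
import math

# Item characteristic curve shared by the 2PL and 3PL models:
# the 3PL adds a lower asymptote c for guessing; the 2PL sets c = 0.
def irt_prob(theta, a, b, c=0.0):
    """P(correct) = c + (1 - c) / (1 + exp(-a * (theta - b)))."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# For a low-ability child, the 3PL floor keeps P(correct) near c,
# reflecting the chance of guessing a multiple-choice Raven's item.
p2 = irt_prob(theta=-3.0, a=1.2, b=0.5)          # 2PL
p3 = irt_prob(theta=-3.0, a=1.2, b=0.5, c=0.20)  # 3PL with guessing
```

When the 2PL fits as well as the 3PL, as reported here, parsimony favors dropping c.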


2020 ◽  
pp. 001316442094289
Author(s):  
Amanda K. Montoya ◽  
Michael C. Edwards

Model fit indices are being increasingly recommended and used to select the number of factors in an exploratory factor analysis. Growing evidence suggests that the recommended cutoff values for common model fit indices are not appropriate for use in an exploratory factor analysis context. A particularly prominent problem in scale evaluation is the ubiquity of correlated residuals and imperfect model specification. Our research focuses on a scale evaluation context and the performance of four standard model fit indices: root mean square error of approximation (RMSEA), standardized root mean square residual (SRMR), comparative fit index (CFI), and Tucker–Lewis index (TLI), and two equivalence test-based model fit indices: RMSEAt and CFIt. We use Monte Carlo simulation to generate and analyze data based on a substantive example using the Positive and Negative Affect Schedule (N = 1,000). We systematically vary the number and magnitude of correlated residuals, as well as nonspecific misspecification, to evaluate the impact on model fit indices when fitting a two-factor exploratory factor analysis. Our results show that all fit indices except SRMR are overly sensitive to correlated residuals and nonspecific error, resulting in solutions that are overfactored. SRMR performed well, consistently selecting the correct number of factors; however, previous research suggests it does not perform well with categorical data. In general, we do not recommend using model fit indices to select the number of factors in a scale evaluation framework.
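SRMR, the one index that performed well here, is simply the root of the mean squared difference between observed and model-implied correlations. A sketch of the computation; the correlation values are made up and not drawn from the PANAS data.

```python
import math

# SRMR over the lower off-diagonal of the correlation matrix:
# sqrt(mean((observed - implied)^2)). Values below are hypothetical.
def srmr(observed, implied):
    resid = [(o - m) ** 2 for o, m in zip(observed, implied)]
    return math.sqrt(sum(resid) / len(resid))

observed = [0.52, 0.48, 0.31, 0.29, 0.55, 0.12]
implied = [0.50, 0.50, 0.30, 0.30, 0.50, 0.10]
value = srmr(observed, implied)
```

Because it averages raw correlation residuals rather than relying on a chi-square-based discrepancy, SRMR is less inflated by small correlated residuals, consistent with the pattern the authors report.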



2017 ◽  
Vol 48 (1) ◽  
pp. 21-31 ◽  
Author(s):  
Angelina Wilson ◽  
Marié P Wissing ◽  
Lusilda Schutte

Although there has been extensive research on the phenomenon of stress, there is still a lack of assessment tools, especially in the South African context, with strong theoretical underpinnings that tap both the internal depletion of resources and excessive external demands from the environment in the measurement of stress. The aim of this study was to validate the Setswana version of the original 30-item long form of the Stress Overload Scale as well as the 10-item short form (Stress Overload Scale–Short Form), both of which evaluate experienced personal vulnerability and external event load. A sample of N = 376 adults living in a rural community in the Northern Cape Province of South Africa was randomly selected to take part in the study. Model fit indices from confirmatory factor analyses testing the hypothesized two-factor structure of the original Stress Overload Scale were not convincingly good. However, we found a remarkable improvement in model fit for the Stress Overload Scale–Short Form. Concurrent validity of the Stress Overload Scale–Short Form was shown through significant correlations with depression and emotional well-being. We conclude that the Setswana version of the Stress Overload Scale–Short Form is a psychometrically sound instrument for measuring stress in the present context; however, further validation of the original Stress Overload Scale in diverse samples is needed to provide stronger support for the hypothesized two-factor structure.

