Comparing Fit and Reliability Estimates of a Psychological Instrument using Second-Order CFA, Bifactor, and Essentially Tau-Equivalent (Coefficient Alpha) Models via AMOS 22

2014 ◽  
Vol 33 (5) ◽  
pp. 451-472 ◽  
Author(s):  
Ryan A. Black ◽  
Yanyun Yang ◽  
Danette Beitra ◽  
Stacey McCaffrey


2020 ◽  
Vol 3 (4) ◽  
pp. 484-501
Author(s):  
David B. Flora

Measurement quality has recently been highlighted as an important concern for advancing a cumulative psychological science. An implication is that researchers should move beyond mechanistically reporting coefficient alpha toward more carefully assessing the internal structure and reliability of multi-item scales. Yet a researcher may be discouraged upon discovering that a prominent alternative to alpha, namely, coefficient omega, can be calculated in a variety of ways. In this Tutorial, I alleviate this potential confusion by describing alternative forms of omega and providing guidelines for choosing an appropriate omega estimate pertaining to the measurement of a target construct represented with a confirmatory factor analysis model. Several applied examples demonstrate how to compute different forms of omega in R.
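The tutorial summarized above computes omega in R; purely as an illustration of the idea (not the tutorial's own code), a minimal Python sketch of one common form, total omega for a unidimensional congeneric model, computed from hypothetical standardized loadings and residual variances:

```python
# Minimal illustration (the tutorial itself uses R): coefficient omega for a
# unidimensional congeneric model, from made-up standardized factor loadings
# and residual (error) variances.

def omega_total(loadings, error_variances):
    """omega = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    lam = sum(loadings)
    return lam ** 2 / (lam ** 2 + sum(error_variances))

# Hypothetical four-item scale; loadings are invented for illustration.
loadings = [0.7, 0.6, 0.8, 0.5]
errors = [1 - l ** 2 for l in loadings]  # standardized items: error = 1 - loading^2

print(round(omega_total(loadings, errors), 3))  # → 0.749
```

In practice the loadings and error variances would come from a fitted CFA model rather than being set by hand.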


Methodology ◽  
2013 ◽  
Vol 9 (1) ◽  
pp. 30-40 ◽  
Author(s):  
Fei Gu ◽  
Todd D. Little ◽  
Neal M. Kingston

Coefficient alpha (α) has been described as a lower bound for test reliability. However, previous research indicates that when certain assumptions are violated, α can either overestimate or underestimate reliability. Raykov (1997a) has shown how structural equation modeling (SEM) can be used to estimate reliability. This study introduces method factors into Raykov's (1997a) model to avoid a potential limitation of the SEM approach. Monte Carlo simulation shows that when certain assumptions are violated, either method (α or SEM) can show substantial bias, though in the most extreme circumstances the bias of the α estimates is larger than that of the SEM-based reliability estimates. Circumstances that favor one method or the other are described and explored.
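As a concrete reference point for the α side of that comparison, a minimal sketch of the standard coefficient-alpha formula applied to a made-up item score matrix (the SEM-based estimates in the study require a fitted latent variable model and are not reproduced here):

```python
# Illustrative computation of coefficient alpha from a small, made-up score
# matrix (rows = respondents, columns = items). Not the study's simulation code.

def cronbach_alpha(scores):
    k = len(scores[0])                      # number of items
    totals = [sum(row) for row in scores]   # total score per respondent

    def variance(xs):                       # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[j] for row in scores]) for j in range(k)]
    return (k / (k - 1)) * (1 - sum(item_vars) / variance(totals))

# Two perfectly correlated items give alpha = 1.0:
print(cronbach_alpha([[1, 1], [2, 2], [3, 3]]))  # → 1.0
```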


1987 ◽  
Vol 64 (2) ◽  
pp. 628-630 ◽  
Author(s):  
Edwin E. Wagner ◽  
Debra Marsico ◽  
Holiday Adair ◽  
Ralph A. Alexander

Two groups of items were deliberately selected from a standard, psychometrically sound test so that both had the same mean value for all the interitem correlations yet differed in the variance among the interitem correlations. The two groups were then treated as separate tests, and all possible split-half correlations were computed for each “test.” The more heterogeneous group yielded a wider spread of split-half correlations. Furthermore, both groups of correlations showed a significant negative skew. It was concluded that an odd-even split and coefficient alpha significantly underestimate a test's true split-half reliability, as defined by the reliability coefficient generated by using the two most analogous forms of a test.


2017 ◽  
Vol 78 (6) ◽  
pp. 1123-1135
Author(s):  
Tenko Raykov ◽  
Philippe Goldammer ◽  
George A. Marcoulides ◽  
Tatyana Li ◽  
Natalja Menold

A readily applicable procedure is discussed that allows evaluation of the discrepancy between the popular coefficient alpha and the reliability coefficient of a scale with second-order factorial structure, which is frequently of relevance in empirical educational and psychological research. The approach is developed within the framework of the widely used latent variable modeling methodology and permits point and interval estimation of the slippage of alpha from scale reliability in a population under investigation. The method is useful for examining the consistency of complex-structure measuring instruments that assess higher-order latent constructs and, under its assumptions, represents a generally recommendable alternative to coefficient alpha. The outlined procedure is illustrated using data from an authoritarianism study.
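The alpha-versus-reliability discrepancy at issue can be made concrete with a simpler hypothetical case than the authors' second-order procedure: a congeneric model with deliberately unequal loadings, where alpha computed from the model-implied covariance matrix falls below the model-based reliability coefficient. All numbers below are invented for illustration:

```python
# Illustration of alpha "slipping" below true scale reliability when
# tau-equivalence is violated (unequal loadings). Hypothetical values only;
# this is not the authors' interval-estimation procedure.

def model_implied_cov(loadings, error_variances):
    k = len(loadings)
    return [[loadings[i] * loadings[j] + (error_variances[i] if i == j else 0)
             for j in range(k)] for i in range(k)]

def alpha_from_cov(cov):
    k = len(cov)
    total = sum(sum(row) for row in cov)
    trace = sum(cov[i][i] for i in range(k))
    return (k / (k - 1)) * (1 - trace / total)

loadings = [0.9, 0.7, 0.5, 0.3]          # deliberately unequal loadings
errors = [1 - l ** 2 for l in loadings]  # standardized residual variances
cov = model_implied_cov(loadings, errors)

lam = sum(loadings)
reliability = lam ** 2 / (lam ** 2 + sum(errors))
alpha = alpha_from_cov(cov)
print(alpha < reliability)  # → True: alpha underestimates reliability here
```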


2017 ◽  
Vol 21 (3) ◽  
pp. 255-268
Author(s):  
Meghan K. Crouch ◽  
Diane E. Mack ◽  
Philip M. Wilson ◽  
Matthew Y. W. Kwan

Using reliability generalization analysis, the purpose of this study was to characterize the average score reliability and the variability of score reliability estimates, and to explore characteristics (e.g., sample size) that influence the reliability of scores across studies using the Scales of Psychological Wellbeing (PWB; Ryff, 1989, 2014). Published studies were included in this investigation if they appeared in a peer-reviewed journal, used 1 or more PWB subscales, estimated coefficient alpha value(s) for the PWB subscale(s), and were written in English. Of the 924 articles generated by the search strategy, a total of 264 were included in the final sample for meta-analysis. The average value reported for coefficient alpha referencing the composite PWB Scale was 0.858, with mean coefficient alphas ranging from 0.722 for the autonomy subscale to 0.801 for the self-acceptance subscale. The 95% prediction interval for the composite PWB was [.653, .996], and the lower bounds of the prediction intervals for the specific subscales were above .350. Moderator analyses revealed significant differences in score reliability estimates across select sample and test characteristics. Most notably, R2 values linked with test length ranged from 40% to 71%. Concerns were identified with the use of the 3-item versions of the PWB subscales, which reinforces claims advanced by Ryff (2014). Suggestions for researchers using the PWB are advanced that span measurement considerations and standards of reporting. Psychological researchers who calculate score reliability estimates within their own work should recognize the implications of alpha coefficient values for validity, null hypothesis significance testing, and effect sizes.
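The link the abstract draws between test length and score reliability is the mechanism captured by the Spearman-Brown prophecy formula: shortening a subscale by a factor n predictably lowers its reliability. The numbers below are hypothetical, not from the meta-analysis:

```python
# Spearman-Brown prophecy formula: predicted reliability after changing test
# length by factor n. Illustrative values only (not the meta-analysis results).

def spearman_brown(rel, n):
    """Predicted reliability when test length is multiplied by n."""
    return n * rel / (1 + (n - 1) * rel)

# Hypothetical 14-item subscale with reliability .80, cut to 3 items (n = 3/14):
print(round(spearman_brown(0.80, 3 / 14), 3))  # → 0.462
```

This illustrates why the short 3-item subscale versions would be expected to produce the markedly lower alpha values the study flags.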


1989 ◽  
pp. 223-227 ◽  
Author(s):  
Ulrich BOURGUND ◽  
Munehisa FUJITA ◽  
Rüdiger Rackwitz
