Meta-analysis of coefficient alpha.

2006 ◽  
Vol 11 (3) ◽  
pp. 306-322 ◽  
Author(s):  
Michael C. Rodriguez ◽  
Yukiko Maeda

2011 ◽  
Vol 71 (1) ◽  
pp. 231-244 ◽  
Author(s):  
Denna L. Wheeler ◽  
Matt Vassar ◽  
Jody A. Worley ◽  
Laura L. B. Barnes

2013 ◽  
Vol 4 (2) ◽  
pp. 198-207 ◽  
Author(s):  
Michael T. Brannick ◽  
Nanhua Zhang

2003 ◽  
Vol 93 (3) ◽  
pp. 643-647 ◽  
Author(s):  
Richard A. Charter

Formulae for combining reliability coefficients from any number of samples are provided. These formulae produce the exact reliability one would compute if one had the raw data from the samples. The required inputs are the sample means, standard deviations, sample sizes, and reliability coefficients. The formulae work for coefficient alpha, KR-20, retest, alternate-forms, split-half, interrater (intraclass), Gilmer-Feldt, Angoff-Feldt, validity, and other coefficients. They may be particularly useful for meta-analytic and reliability generalization studies.
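As an illustration of the pooling the abstract describes, here is a minimal Python sketch assuming the standard decomposition reliability = 1 − (error variance / total variance): n-weighted per-sample error variances are averaged, and the combined total variance also absorbs differences between sample means. The function name and the weighting convention are illustrative assumptions; the article's exact formulae should be consulted for a faithful implementation.

```python
from typing import Sequence


def combined_reliability(ns: Sequence[int],
                         means: Sequence[float],
                         sds: Sequence[float],
                         rels: Sequence[float]) -> float:
    """Pool per-sample reliability coefficients into a single estimate.

    Uses reliability = 1 - error_variance / total_variance.  Error variances
    (sd^2 * (1 - r)) are averaged across samples with n-weights; the combined
    total variance adds the spread of the sample means around the grand mean.
    """
    N = sum(ns)
    grand_mean = sum(n * m for n, m in zip(ns, means)) / N

    # Combined total variance: within-sample variance plus between-sample mean spread
    total_var = sum(n * (sd ** 2 + (m - grand_mean) ** 2)
                    for n, m, sd in zip(ns, means, sds)) / N

    # Pooled error variance: each sample contributes sd^2 * (1 - reliability)
    error_var = sum(n * sd ** 2 * (1.0 - r)
                    for n, sd, r in zip(ns, sds, rels)) / N

    return 1.0 - error_var / total_var


# Two hypothetical samples with alphas of .80 and .85
print(combined_reliability(ns=[100, 150], means=[50.0, 53.0],
                           sds=[10.0, 9.0], rels=[0.80, 0.85]))
```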


2016 ◽  
Vol 19 (4) ◽  
pp. 244-253 ◽  
Author(s):  
Chin-Pang Lee ◽  
Yu-Wen Chiu ◽  
Chun-Lin Chu ◽  
Yu Chen ◽  
Kun-Hao Jiang ◽  
...  

2021 ◽  
Author(s):  
Qian Zhang

Abstract: A scale used to measure a psychological construct is subject to measurement error. When meta-analyzing correlations obtained from scale scores, many researchers recommend correcting for measurement error. We considered three caveats when conducting meta-analysis of correlations: (1) the distribution of true scores can be non-normal, violating the normality assumption for raw correlations and Fisher's z-transformed correlations; (2) coefficient alpha is often used as the reliability estimate, but correlations corrected for measurement error using alpha can be inaccurate when some of alpha's assumptions (e.g., tau-equivalence) are violated; and (3) item scores are often ordinal, making the disattenuation formula potentially problematic. Via three simulation studies, we examined the performance of two meta-analysis approaches, one using raw correlations and one using z scores. In terms of estimation accuracy and coverage probability of the mean correlation, results showed that (1) the true-score distribution alone had only a slight influence; (2) when the tau-equivalence assumption was violated and coefficient alpha was used to correct for measurement error, the mean correlation estimate could be biased and coverage probability could be low; and (3) discretization of continuous items could result in under-coverage of the mean correlation even when tau-equivalence was satisfied. With more categories and/or more items on a scale, results improved whether or not tau-equivalence was met. Based on these findings, we provide recommendations for conducting meta-analysis of correlations.
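The disattenuation formula and Fisher's z transformation mentioned in the abstract are standard; the short sketch below (with hypothetical reliabilities) shows the correction that the simulations evaluate. As the abstract cautions, this correction can be inaccurate when alpha's assumptions (e.g., tau-equivalence) are violated or when items are coarsely discretized.

```python
import math


def disattenuate(r_xy: float, rel_x: float, rel_y: float) -> float:
    """Correct an observed correlation for measurement error in both scales."""
    return r_xy / math.sqrt(rel_x * rel_y)


def fisher_z(r: float) -> float:
    """Fisher's z transformation of a correlation."""
    return 0.5 * math.log((1 + r) / (1 - r))


# Hypothetical study: observed r = .40, coefficient alphas of .75 and .80
r_corrected = disattenuate(0.40, 0.75, 0.80)
print(r_corrected)            # ~0.516
print(fisher_z(r_corrected))  # z value that might enter the meta-analysis
```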


1994 ◽  
Vol 21 (2) ◽  
pp. 381 ◽  
Author(s):  
Robert A. Peterson

2017 ◽  
Vol 21 (3) ◽  
pp. 255-268 ◽  
Author(s):  
Meghan K. Crouch ◽  
Diane E. Mack ◽  
Philip M. Wilson ◽  
Matthew Y. W. Kwan

The purpose of this study was to use reliability generalization analysis to characterize the average score reliability and the variability of score reliability estimates, and to explore characteristics (e.g., sample size) that influence the reliability of scores across studies using the Scales of Psychological Wellbeing (PWB; Ryff, 1989, 2014). Published studies were included in this investigation if they appeared in a peer-reviewed journal, used 1 or more PWB subscales, estimated coefficient alpha value(s) for the PWB subscale(s), and were written in English. Of the 924 articles generated by the search strategy, a total of 264 were included in the final sample for meta-analysis. The average coefficient alpha reported for the composite PWB scale was 0.858, with mean coefficient alphas ranging from 0.722 for the autonomy subscale to 0.801 for the self-acceptance subscale. The 95% prediction interval for the composite PWB was [.653, .996], and the lower bounds of the prediction intervals for the individual subscales were all above .350. Moderator analyses revealed significant differences in score reliability estimates across select sample and test characteristics; most notably, R2 values linked with test length ranged from 40% to 71%. Concerns were identified with the use of the 3-item-per-subscale version of the PWB, which reinforces claims advanced by Ryff (2014). Suggestions for researchers using the PWB are advanced that span measurement considerations and standards of reporting. Psychological researchers who calculate score reliability estimates in their own work should recognize the implications of coefficient alpha values for validity, null hypothesis significance testing, and effect sizes.
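For readers unfamiliar with reliability generalization summaries, the sketch below shows one common way to obtain a mean alpha and a 95% prediction interval: alphas are placed on the ln(1 − α) scale, pooled with a DerSimonian-Laird random-effects model, and back-transformed. The transformation, the per-study sampling variances (taken as given here), and the normal critical value are illustrative assumptions, not necessarily the procedure used in this study.

```python
import math
from typing import Sequence, Tuple


def reliability_generalization(alphas: Sequence[float],
                               variances: Sequence[float]) -> Tuple[float, Tuple[float, float]]:
    """Random-effects mean alpha and a 95% prediction interval.

    Alphas are analyzed on the ln(1 - alpha) scale; `variances` are the
    per-study sampling variances on that scale (supplied by the user).
    Uses a DerSimonian-Laird tau^2 and a normal critical value.
    """
    def to_alpha(y: float) -> float:
        return 1.0 - math.exp(y)

    ys = [math.log(1.0 - a) for a in alphas]
    ws = [1.0 / v for v in variances]

    # Fixed-effect mean and Q statistic on the transformed scale
    mean_fixed = sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    q = sum(w * (y - mean_fixed) ** 2 for w, y in zip(ws, ys))

    # DerSimonian-Laird between-study variance
    k = len(ys)
    c = sum(ws) - sum(w ** 2 for w in ws) / sum(ws)
    tau2 = max(0.0, (q - (k - 1)) / c)

    # Random-effects mean and 95% prediction interval (transformed scale)
    ws_star = [1.0 / (v + tau2) for v in variances]
    mean_re = sum(w * y for w, y in zip(ws_star, ys)) / sum(ws_star)
    se_mean = math.sqrt(1.0 / sum(ws_star))
    half = 1.96 * math.sqrt(tau2 + se_mean ** 2)

    # Back-transform; bounds swap because alpha decreases in ln(1 - alpha)
    return to_alpha(mean_re), (to_alpha(mean_re + half), to_alpha(mean_re - half))


# Three hypothetical studies
print(reliability_generalization([0.85, 0.78, 0.90], [0.004, 0.006, 0.003]))
```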


PLoS ONE ◽  
2018 ◽  
Vol 13 (12) ◽  
pp. e0208331 ◽  
Author(s):  
Brian K. Miller ◽  
Kay M. Nicols ◽  
Silvia Clark ◽  
Alison Daniels ◽  
Whitney Grant
