Reliability Estimation in Multidimensional Scales: Comparing the Bias of Six Estimators in Measures With a Bifactor Structure

2021 ◽  
Vol 12 ◽  
Author(s):  
Italo Trizano-Hermosilla ◽  
José L. Gálvez-Nieto ◽  
Jesús M. Alvarado ◽  
José L. Saiz ◽  
Sonia Salvo-Garrido

In the context of multidimensional structures, with the presence of a common factor and multiple specific or group factors, estimates of reliability require specific estimators. The use of classical procedures such as the alpha coefficient or omega total, which ignore structural complexity, is not appropriate, since it can lead to strongly biased estimates. Through a simulation study, the bias of six reliability estimators for multidimensional measures was evaluated and compared. The study is complemented by an empirical illustration that exemplifies the procedure. Results showed that the estimators with the lowest bias in estimating the total reliability parameter are omega total, the two versions of the greatest lower bound (GLB), and the alpha coefficient, which in turn are also those that most strongly overestimate the reliability of the general factor. Nevertheless, the most appropriate estimators, in that they produce less biased estimates of the reliability parameter of the general factor, are omega limit and omega hierarchical.
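The distinction the abstract draws between omega total and omega hierarchical can be made concrete from a bifactor loading matrix. A minimal sketch, assuming a hypothetical standardized bifactor solution with nine items, one general factor, and three group factors; all loading values are illustrative and not taken from the study:

```python
import numpy as np

# Hypothetical standardized bifactor solution: 9 items, one general factor,
# three group factors with 3 items each. Loadings are illustrative only.
general = np.array([0.6, 0.5, 0.6, 0.5, 0.6, 0.5, 0.6, 0.5, 0.6])
group = np.zeros((9, 3))
group[0:3, 0] = [0.4, 0.3, 0.4]
group[3:6, 1] = [0.4, 0.3, 0.4]
group[6:9, 2] = [0.4, 0.3, 0.4]

# Unique variances implied by a standardized bifactor model.
uniqueness = 1.0 - general**2 - (group**2).sum(axis=1)

# Model-implied variance of the unweighted total score.
common_general = general.sum() ** 2
common_group = sum(group[:, s].sum() ** 2 for s in range(3))
total_var = common_general + common_group + uniqueness.sum()

omega_total = (common_general + common_group) / total_var  # all common variance
omega_hier = common_general / total_var                    # general factor only

print(round(omega_total, 3), round(omega_hier, 3))  # → 0.852 0.744
```

Because omega hierarchical credits only the general factor with common variance, it can never exceed omega total; treating omega total as the reliability of the general factor is exactly the overestimation the abstract describes.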

2019 ◽  
Author(s):  
Ashley Edwards ◽  
Keanan Joyner ◽  
Chris Schatschneider

The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and the greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors, with varying sample sizes, numbers of items, population reliabilities, and factor loadings. Under these conditions, alpha and omega yielded the most accurate estimates of the simulated population reliability. Alpha consistently underestimated population reliability, supporting its status as a lower bound. Greater underestimation by alpha was observed when tau equivalence was not met; however, the underestimation was small, and alpha still provided more accurate estimates than all of the estimators except omega. Estimates of reliability were shown to be affected by sample size, degree of violation of tau equivalence, population reliability, and number of items in a scale. Under the conditions simulated here, estimates quantified by alpha and omega yielded the most accurate reflection of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was more strongly affected by sample size and number of items, especially when population reliability was low.
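Alpha's behavior as a lower bound under tau equivalence can be checked with a small simulation. A minimal sketch, assuming a tau-equivalent population model (one common factor, equal loadings, uncorrelated errors); the function name and parameter values are illustrative, not the study's actual design:

```python
import numpy as np

def cronbach_alpha(X):
    """Sample Cronbach's alpha for an n-by-k matrix of item scores."""
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1)
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
n, k, loading = 500, 6, 0.7

# Tau-equivalent data: one common factor, equal loadings, uncorrelated errors.
factor = rng.standard_normal((n, 1))
X = loading * factor + np.sqrt(1 - loading**2) * rng.standard_normal((n, k))

# Population reliability under this model:
# (k*loading)^2 / ((k*loading)^2 + k*(1 - loading^2))
pop_rel = (k * loading) ** 2 / ((k * loading) ** 2 + k * (1 - loading**2))
alpha_hat = cronbach_alpha(X)
print(round(pop_rel, 3), round(alpha_hat, 3))
```

With tau equivalence satisfied, the sample alpha lands close to the population value; introducing unequal loadings into the data-generating step would produce the systematic underestimation the abstract reports.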


2021 ◽  
pp. 001316442199418
Author(s):  
Ashley A. Edwards ◽  
Keanan J. Joyner ◽  
Christopher Schatschneider

The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and the greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors, with varying sample sizes, numbers of items, population reliabilities, and factor loadings. Estimators that have been proposed to replace alpha were compared with the performance of alpha as well as with each other. Estimates of reliability were shown to be affected by sample size, degree of violation of tau equivalence, population reliability, and number of items in a scale. Under the conditions simulated here, estimates quantified by alpha and omega yielded the most accurate reflection of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was more strongly affected by sample size and number of items, especially when population reliability was low.


Methodology ◽  
2012 ◽  
Vol 8 (2) ◽  
pp. 71-80 ◽  
Author(s):  
Juan Botella ◽  
Manuel Suero

In Reliability Generalization (RG) meta-analyses, the importance of bearing in mind the problems of range restriction or biased sampling and their influence on reliability estimation has often been highlighted. Nevertheless, the presence of heterogeneous variances in the included studies has been diagnosed in a subjective way and has not been taken into account in later analyses. Procedures to detect the presence of a variety of sampling schemes and to manage them in the analyses are proposed. The procedures are further explained with an example, by applying them to 25 estimates of Cronbach’s alpha coefficient in the Hamilton Scale for Depression.
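The range-restriction problem that motivates these procedures is easy to reproduce: selecting respondents on the total score shrinks true-score variance and deflates coefficient alpha relative to the unrestricted sample. A minimal sketch under an assumed one-factor model; all names and parameter values are illustrative:

```python
import numpy as np

def cronbach_alpha(X):
    """Sample Cronbach's alpha for an n-by-k matrix of item scores."""
    k = X.shape[1]
    return k / (k - 1) * (1 - X.var(axis=0, ddof=1).sum()
                          / X.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(42)
n, k, loading = 2000, 10, 0.6
factor = rng.standard_normal((n, 1))
X = loading * factor + np.sqrt(1 - loading**2) * rng.standard_normal((n, k))

alpha_full = cronbach_alpha(X)

# Restrict the range: keep only respondents above the median total score.
totals = X.sum(axis=1)
X_restricted = X[totals > np.median(totals)]
alpha_restricted = cronbach_alpha(X_restricted)

print(round(alpha_full, 3), round(alpha_restricted, 3))
```

An RG meta-analysis that pools alpha values from samples with such different variances is mixing estimates of different quantities, which is why detecting heterogeneous sampling schemes matters before later analyses.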


Methodology ◽  
2016 ◽  
Vol 12 (1) ◽  
pp. 11-20 ◽  
Author(s):  
Gregor Sočan

Abstract. When principal component solutions are compared across two groups, a question arises whether the extracted components have the same interpretation in both populations. The problem can be approached by testing null hypotheses stating that the congruence coefficients between pairs of vectors of component loadings are equal to 1. Chan, Leung, Chan, Ho, and Yung (1999) proposed a bootstrap procedure for testing the hypothesis of perfect congruence between vectors of common factor loadings. We demonstrate that the procedure by Chan et al. is both theoretically and empirically inadequate for application to principal components. We propose a modification of their procedure, which constructs the resampling space according to the characteristics of the principal component model. The results of a simulation study show satisfactory empirical properties of the modified procedure.
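The quantity being tested here is Tucker's congruence coefficient, the cosine between two loading vectors, which equals 1 exactly when the vectors are proportional. A minimal sketch; the example loading vectors are illustrative:

```python
import numpy as np

def congruence(x, y):
    """Tucker's congruence coefficient between two loading vectors."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return x @ y / np.sqrt((x @ x) * (y @ y))

# Illustrative component loadings for the same four items in two groups.
a = np.array([0.7, 0.6, 0.8, 0.5])
b = np.array([0.6, 0.7, 0.7, 0.6])

print(round(congruence(a, b), 3))  # → 0.988
```

The bootstrap test asks whether a sample value this close to 1 is compatible with perfect congruence in the population; the modification described above changes how the resampling space is built, not the coefficient itself.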


2009 ◽  
Vol 25 (3) ◽  
pp. 873-890 ◽  
Author(s):  
Kazuhiko Hayakawa

In this paper, we show that for panel AR(p) models, an instrumental variable (IV) estimator with instruments deviated from past means has the same asymptotic distribution as the infeasible optimal IV estimator when both N and T, the dimensions of the cross section and time series, are large. If we assume that the errors are normally distributed, the asymptotic variance of the proposed IV estimator is shown to attain the lower bound when both N and T are large. A simulation study is conducted to assess the estimator.


2008 ◽  
Vol 68 (6) ◽  
pp. 923-939 ◽  
Author(s):  
Ilse Stuive ◽  
Henk A. L. Kiers ◽  
Marieke E. Timmerman ◽  
Jos M. F. ten Berge

This study compares two confirmatory factor analysis methods on their ability to verify whether correct assignments of items to subtests are supported by the data. The confirmatory common factor (CCF) method is used most often and defines nonzero loadings so that they correspond to the assignment of items to subtests. Another method is the oblique multiple group (OMG) method, which defines subtests as unweighted sums of the scores on all items assigned to the subtest, and (corrected) correlations are used to verify the assignment. A simulation study compares both methods, accounting for the influence of model error and the amount of unique variance. The CCF and OMG methods show similar behavior with relatively small amounts of unique variance and low interfactor correlations. However, at high amounts of unique variance and high interfactor correlations, the CCF detected correct assignments more often, whereas the OMG was better at detecting incorrect assignments.


2017 ◽  
Vol 27 (6) ◽  
pp. 759-773 ◽  
Author(s):  
Riet van Bork ◽  
Sacha Epskamp ◽  
Mijke Rhemtulla ◽  
Denny Borsboom ◽  
Han L. J. van der Maas

Recent research has suggested that a range of psychological disorders may stem from a single underlying common factor, which has been dubbed the p-factor. This finding may spur a line of research in psychopathology very similar to the history of factor modeling in intelligence and, more recently, personality research, in which similar general factors have been proposed. We point out some of the risks of modeling and interpreting general factors, derived from the fields of intelligence and personality research. We argue that: (a) factor-analytic resolution, i.e., convergence of the literature on a particular factor structure, should not be expected in the presence of multiple highly similar models; and (b) the true underlying model may not be a factor model at all, because alternative explanations can account for the correlational structure of psychopathology.


1997 ◽  
Vol 07 (04) ◽  
pp. 831-836 ◽  
Author(s):  
M. O. Kim ◽  
Hoyun Lee ◽  
Chil-Min Kim ◽  
Hyun-Soo Pang ◽  
Eok-Kyun Lee ◽  
...  

We obtained new characteristic relations in Type-II and Type-III intermittencies according to the reinjection probability distribution. When the reinjection probability distribution is fixed at the lower bound of reinjection, the critical exponents are -1, as is well known. However, when the reinjection probability distribution is uniform, the critical exponent is -1/2, and when it is of the form [Formula: see text], it is -3/4. On the other hand, if the square root of Δ, which represents the lower bound of reinjection, is much smaller than the control parameter ε, i.e., ε ≫ Δ^(1/2), the critical exponent is always -1, independent of the reinjection probability distribution. These critical exponents are confirmed by a numerical simulation study.


1998 ◽  
Vol 48 (1-2) ◽  
pp. 61-72
Author(s):  
Joydeep Bhanja

In this paper we consider an example where, for each i, i = 1, 2, ..., n, the observations X_ij, j = 1, 2, ..., k, are i.i.d. Binomial(n_i, θ). Based on a theory developed by us earlier, we propose estimates of θ which are asymptotically efficient under the assumption that k ≥ 2, the n_i's come from a finite set {1, 2, ..., q}, and some mild regularity conditions on the sequence {n_i} and θ hold. We present the results of a simulation which indicate, among other things, that the asymptotic lower bound to the variance is lower than or approximately equal to the simulated variances and that a simple moment estimate of θ does as well as the asymptotically efficient estimates.
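The simple moment estimate of θ mentioned at the end pools successes over pooled trials across all i and j. A minimal sketch under the stated design; the values of θ, k, and the range of the n_i are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
theta, k, n_units = 0.3, 4, 200

# n_i drawn from a finite set {1, ..., 5}, as the setup requires.
n = rng.integers(1, 6, size=n_units)

# X_ij ~ Binomial(n_i, theta), i.i.d. over j = 1, ..., k for each i.
X = rng.binomial(n[:, None], theta, size=(n_units, k))

# Moment estimator: total successes over total trials.
theta_hat = X.sum() / (k * n.sum())
print(round(theta_hat, 3))
```

This is the natural pooled-proportion estimator; the abstract's point is that, in this setting, it performs about as well in simulation as the asymptotically efficient estimates derived from the authors' theory.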


Author(s):  
Daniel Ondé ◽  
Jesús M. Alvarado ◽  
Santiago Sastre ◽  
Carolina M. Azañedo

(1) Background: Recent studies have shown that the internal structure of the TMMS-24 can be conceptualized as a bifactor model. However, these studies, based exclusively on the evaluation of model fit, fail to show the existence of a strong general factor of emotional intelligence and have neglected the evaluation of the specific factors of attention, clarity, and repair. The main goal of this work is to evaluate the degree of determination and reliability of the specific factors of the TMMS-24 using a bifactor S-1 model. (2) Methods: We administered the TMMS-24 to a sample of 384 middle- and high-school students (58.1% girls; mean age = 15.5; SD = 1.8). (3) Results: The specific TMMS-24 factors are better determined and show higher internal consistency than the general factor. Furthermore, the bifactor S-1 model shows a hierarchical relationship between the attention factor and the clarity and repair factors. The bifactor S-1 model is also the only one shown to be invariant across the sex of the participants. (4) Conclusions: The bifactor S-1 model has proven to be a promising tool for capturing the structural complexity of the TMMS-24. Its application indicates that it is not advisable to use the sum score of the items, since that score would be contaminated by the attention factor. In addition, the sum score would not be invariant; that is, comparisons by sex would be invalid.

