A Simulation Study on the Performance of Different Reliability Estimation Methods

Author(s):  
Ashley Edwards ◽  
Keanan Joyner ◽  
Chris Schatschneider

The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors, with varying sample sizes, numbers of items, population reliabilities, and factor loadings. Under these conditions, alpha and omega yielded the most accurate estimates of the simulated population reliability. Alpha consistently underestimated population reliability, supporting its interpretation as a lower bound. Greater underestimation by alpha was observed when tau equivalence was not met; however, the underestimation was small, and alpha still provided more accurate estimates than all of the other estimators except omega. Estimates of reliability were shown to be affected by sample size, degree of violation of tau equivalence, population reliability, and number of items in a scale. Under the conditions simulated here, estimates quantified by alpha and omega yielded the most accurate reflection of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was more strongly affected by sample size and number of items, especially when population reliability was low.
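
As a minimal sketch of one such condition (not the authors' code; the loadings, sample size, and number of items below are illustrative assumptions), the following generates unidimensional continuous data with unequal loadings, so that tau equivalence is violated, and compares sample alpha with the population reliability implied by the loadings.

```python
# Minimal sketch: one congeneric, unidimensional condition with uncorrelated errors.
import numpy as np

rng = np.random.default_rng(1)
loadings = np.array([0.3, 0.4, 0.5, 0.6, 0.7, 0.8])   # unequal -> tau equivalence violated
n = 500                                                # illustrative sample size

# Population reliability (omega) for standardized items with uncorrelated errors
omega_pop = loadings.sum() ** 2 / (loadings.sum() ** 2 + (1 - loadings ** 2).sum())

# Generate continuous item responses: common factor plus unique error
eta = rng.standard_normal(n)
errors = rng.standard_normal((n, loadings.size)) * np.sqrt(1 - loadings ** 2)
X = eta[:, None] * loadings + errors

# Cronbach's alpha from the sample covariance matrix
S = np.cov(X, rowvar=False)
k = S.shape[0]
alpha = k / (k - 1) * (1 - np.trace(S) / S.sum())

print(f"population reliability = {omega_pop:.3f}, sample alpha = {alpha:.3f}")
```

With unequal loadings, alpha will typically fall slightly below the population value, which is the lower-bound behavior the abstract describes.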

2021 ◽  
pp. 001316442199418
Author(s):  
Ashley A. Edwards ◽  
Keanan J. Joyner ◽  
Christopher Schatschneider

The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach's alpha, omega, omega hierarchical, Revelle's omega, and greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors, with varying sample sizes, numbers of items, population reliabilities, and factor loadings. Estimators that have been proposed to replace alpha were compared with alpha as well as with each other. Estimates of reliability were shown to be affected by sample size, degree of violation of tau equivalence, population reliability, and number of items in a scale. Under the conditions simulated here, estimates quantified by alpha and omega yielded the most accurate reflection of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was more strongly affected by sample size and number of items, especially when population reliability was low.
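
A minimal sketch of what such a follow-up regression over crossed simulation conditions might look like (the factor levels, the loading-spread parametrization, and the focus on alpha alone are illustrative assumptions, not the published design):

```python
# Sketch: crossed simulation conditions, then regress absolute estimation error
# on the design factors (sample size, number of items, spread of loadings).
import itertools
import numpy as np

rng = np.random.default_rng(7)

def abs_error_alpha(n, k, mean_loading, spread):
    """|sample alpha - population reliability| for one simulated condition."""
    lam = np.clip(np.linspace(mean_loading - spread, mean_loading + spread, k), 0.05, 0.95)
    pop_rel = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
    eta = rng.standard_normal(n)
    X = eta[:, None] * lam + rng.standard_normal((n, k)) * np.sqrt(1 - lam ** 2)
    S = np.cov(X, rowvar=False)
    alpha = k / (k - 1) * (1 - np.trace(S) / S.sum())
    return abs(alpha - pop_rel)

# Crossed design: sample size x number of items x loading spread (0 = tau equivalent)
conditions = list(itertools.product([50, 100, 250, 500, 1000], [5, 10, 20], [0.0, 0.1, 0.2]))
errors = [abs_error_alpha(n, k, 0.6, s) for n, k, s in conditions]

# Follow-up regression of the absolute error on the design factors
X = np.column_stack([np.ones(len(conditions)), np.array(conditions)])
coef, *_ = np.linalg.lstsq(X, np.array(errors), rcond=None)
print(dict(zip(["intercept", "sample size", "items", "loading spread"], coef)))
```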


2021 ◽  
Author(s):  
Abdolvahab Khademi

One desirable property of a measurement process or instrument is maximum invariance of the results across subpopulations with similar distributions of the trait. Determining measurement invariance (MI) is a statistical procedure in which different methods are used depending on factors such as the nature of the data (e.g., continuous or discrete, complete or incomplete), sample size, measurement framework (e.g., observed scores, latent variable modeling), and other context-specific factors. To evaluate the statistical results, numerical criteria derived from theory, simulation, or practice are often used. One statistical method for evaluating MI is multiple-group confirmatory factor analysis (MG-CFA), in which the change in fit indices between nested models, such as the comparative fit index (CFI), the Tucker-Lewis index (TLI), and the root mean squared error of approximation (RMSEA), is used to determine whether the lack of invariance is non-trivial. Currently, in the MG-CFA framework for establishing MI, the recommended effect size is a change of less than 0.01 in the CFI/TLI measures (Cheung & Rensvold, 2002). However, this recommended cutoff is a very general index and may not be appropriate under some conditions, such as dichotomous indicators, different estimation methods, different sample sizes, and model complexity. In addition, the consequences of a lack of invariance have been ignored when determining the cutoff value in current research. To address these gaps, the present research evaluates the appropriateness of the current effect size of a change in CFI or TLI of less than 0.01 in educational measurement settings, where the items are dichotomous, the item response functions follow an item response theory (IRT) model, the estimation method is robust weighted least squares, and the focal and reference groups differ from each other on the IRT scale by 0.5 units (equivalent to ±1 raw score). A simulation study was performed with five (crossed) factors: percentage of differentially functioning items, IRT model, IRT a and b parameters, and sample size. The results of the simulation study showed that a cutoff of a change in CFI/TLI of less than 0.01 for establishing MI is not appropriate for educational settings under the foregoing conditions.
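
A minimal sketch of the data-generating side of such a design (the item parameters, the share of DIF items, and the choice to express the 0.5-unit group difference as a shift in the difficulty of the DIF items are all illustrative assumptions, not the study's exact specification); fitting the nested MG-CFA models with robust weighted least squares and comparing CFI/TLI would then be done in an SEM package:

```python
# Sketch: dichotomous 2PL responses for a reference and a focal group, where a
# subset of items functions differently (difficulty shifted by 0.5 for the focal group).
import numpy as np

rng = np.random.default_rng(0)
n_per_group, n_items = 1000, 20                    # illustrative values
a = rng.uniform(0.8, 2.0, n_items)                 # discrimination parameters
b = rng.normal(0.0, 1.0, n_items)                  # difficulty parameters

dif_share = 0.20                                   # assumed share of DIF items
n_dif = int(dif_share * n_items)
b_focal = b.copy()
b_focal[:n_dif] += 0.5                             # DIF items harder by 0.5 for the focal group

def simulate_2pl(theta, difficulty):
    p = 1 / (1 + np.exp(-a * (theta[:, None] - difficulty)))   # 2PL response probabilities
    return (rng.uniform(size=p.shape) < p).astype(int)

reference = simulate_2pl(rng.standard_normal(n_per_group), b)
focal = simulate_2pl(rng.standard_normal(n_per_group), b_focal)
responses = np.vstack([reference, focal])
group = np.repeat(["reference", "focal"], n_per_group)
```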


2021 ◽  
Vol 12 ◽  
Author(s):  
Italo Trizano-Hermosilla ◽  
José L. Gálvez-Nieto ◽  
Jesús M. Alvarado ◽  
José L. Saiz ◽  
Sonia Salvo-Garrido

In the context of multidimensional structures, with the presence of a common factor and multiple specific or group factors, estimating reliability requires specific estimators. The use of classical procedures such as the alpha coefficient or omega total, which ignore structural complexity, is not appropriate, since they can lead to strongly biased estimates. Through a simulation study, the bias of six estimators of reliability in multidimensional measures was evaluated and compared. The study is complemented by an empirical illustration that exemplifies the procedure. Results showed that the estimators with the lowest bias in the estimation of the total reliability parameter are omega total, the two versions of the greatest lower bound (GLB), and the alpha coefficient, which in turn are also those that most strongly overestimate the reliability of the general factor. Nevertheless, the most appropriate estimators, in that they produce less biased estimates of the reliability parameter of the general factor, are omega limit and omega hierarchical.
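
For reference, the distinction between omega total and omega hierarchical that this bias comparison turns on can be written out directly from a bifactor loading structure; the following sketch uses illustrative loadings (not the authors' data), standardized items, and uncorrelated errors.

```python
# Sketch: omega total vs. omega hierarchical for a bifactor structure with one
# general factor and two group factors.
import numpy as np

gen = np.array([0.6, 0.6, 0.6, 0.6, 0.6, 0.6])            # general-factor loadings
grp = np.array([[0.4, 0.4, 0.4, 0.0, 0.0, 0.0],            # group factor 1
                [0.0, 0.0, 0.0, 0.4, 0.4, 0.4]])            # group factor 2

unique = 1 - gen ** 2 - (grp ** 2).sum(axis=0)              # error variances
total_var = gen.sum() ** 2 + (grp.sum(axis=1) ** 2).sum() + unique.sum()

omega_total = (gen.sum() ** 2 + (grp.sum(axis=1) ** 2).sum()) / total_var
omega_hier = gen.sum() ** 2 / total_var                      # general factor only

print(f"omega total = {omega_total:.3f}, omega hierarchical = {omega_hier:.3f}")
```

Omega total credits all common variance (general plus group factors) to reliability, whereas omega hierarchical credits only the general factor, which is why the former overestimates and the latter better targets the reliability of the general factor.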


2016 ◽  
Vol 27 (5) ◽  
pp. 1476-1497 ◽  
Author(s):  
Simon R White ◽  
Graciela Muniz-Terrera ◽  
Fiona E Matthews

Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
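
A minimal sketch of this data-generating model (the baseline, slopes, change-point location, class proportion, and dropout rule below are illustrative assumptions, not the authors' values); the Bayesian MCMC fitting step is not shown.

```python
# Sketch: broken-stick trajectories with a "change" and a "no-change" class,
# observation error, and a simple dropout step standing in for the missingness model.
import numpy as np

rng = np.random.default_rng(3)
n, times = 500, np.arange(0, 10)                  # n individuals, 10 visits
changepoint, slope1, slope2 = 6.0, -0.2, -1.5     # decline accelerates after the change-point
change_class = rng.uniform(size=n) < 0.5          # latent class labels

def trajectory(t, changes):
    after = np.clip(t - changepoint, 0, None)     # time past the change-point
    return 25 + slope1 * t + np.where(changes, slope2 * after, 0.0)

y = trajectory(times[None, :], change_class[:, None]) + rng.normal(0, 1.0, (n, times.size))

# Simple illustrative dropout: later visits are more likely to be missing
dropout = rng.uniform(size=(n, times.size)) < (times / 30)
y[dropout] = np.nan
```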


BMJ Open ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. e033510 ◽  
Author(s):  
Ayako Okuyama ◽  
Matthew Barclay ◽  
Cong Chen ◽  
Takahiro Higashi

Objectives: The accuracy of the ascertainment of vital status affects the validity of cancer survival estimates. This study assesses the potential impact of loss to follow-up on survival in Japan, both nationally and in the samples seen at individual hospitals. Design: Simulation study. Setting and participants: Data of patients diagnosed in 2007, provided by the Hospital-Based Cancer Registries of 177 hospitals throughout Japan. Primary and secondary outcome measures: We performed simulations for each cancer site, for sample sizes of 100, 1000 and 8000 patients, and for loss to follow-up ranging from 1% to 5%. We estimated the average bias and the variation in bias in survival due to loss to follow-up. Results: The expected bias was not associated with the sample size (with 5% loss to follow-up, about 2.1% for the cohort including all cancers), but a smaller sample size led to more variable bias. Sample sizes of around 100 patients, as may be seen at individual hospitals, had very variable bias: with 5% loss to follow-up for all cancers, 25% of samples had a bias of less than 1.02% and 25% of samples had a bias of more than 3.06%. Conclusion: Survival should be interpreted with caution when loss to follow-up is a concern, especially for poor-prognosis cancers and for small-area estimates.
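
A minimal sketch of the bias mechanism being quantified (the survival level, loss-to-follow-up rate, sample size, and the assumption that lost patients are counted as alive are all illustrative, not the registry data):

```python
# Sketch: patients lost to follow-up are treated as alive at five years, and the
# resulting bias in five-year survival is summarized over many replicate samples.
import numpy as np

rng = np.random.default_rng(5)
true_survival, loss_rate = 0.60, 0.05            # five-year survival, 5% lost to follow-up
n, n_reps = 100, 2000                            # hospital-sized sample, replicate samples

biases = []
for _ in range(n_reps):
    dead = rng.uniform(size=n) < (1 - true_survival)
    lost = rng.uniform(size=n) < loss_rate
    observed_dead = dead & ~lost                 # deaths among lost patients go unrecorded
    naive_survival = 1 - observed_dead.mean()
    biases.append(naive_survival - (1 - dead.mean()))

biases = np.array(biases)
print(f"mean bias = {biases.mean():.2%}, IQR = "
      f"{np.percentile(biases, 25):.2%} to {np.percentile(biases, 75):.2%}")
```

The mean bias depends mainly on the loss rate and the true mortality, not on the sample size, while the spread of the bias grows as the sample shrinks, which is the pattern the abstract reports.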


2015 ◽  
Vol 9 (2) ◽  
pp. 1822-1833
Author(s):  
Murat Doğan

In this study, Monte Carlo simulation is used to evaluate the characteristics of CFA fit indices under different conditions (such as sample size, estimation method, and distributional conditions). The simulation study was performed using seven sample sizes (50, 100, 200, 400, 800, 1600, and 4000), four estimation methods (maximum likelihood, generalized least squares, least squares, and weighted least squares), and three distributional conditions (normal, slightly non-normal, and moderately non-normal). The simulation was conducted with the EQS software to examine the effect of these conditions on the eleven fit indices most commonly studied in CFA and SEM. As a result of this study, all of the factors studied were shown to influence the fit indices.
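
A minimal sketch of the data-generation side of such a crossed design (the one-factor model, loadings, and the crude non-normality transformation are illustrative assumptions; the original study generated and fit the data in EQS, and computing the eleven fit indices under each estimation method would require an SEM package):

```python
# Sketch: generate data from a one-factor CFA model for each crossed condition of
# sample size and distribution shape; fitting and fit indices are not shown here.
import itertools
import numpy as np

rng = np.random.default_rng(11)
loadings = np.full(8, 0.7)                         # assumed one-factor model, 8 indicators
sigma = np.outer(loadings, loadings)               # model-implied covariance matrix
np.fill_diagonal(sigma, 1.0)

sample_sizes = [50, 100, 200, 400, 800, 1600, 4000]
distributions = ["normal", "slightly non-normal", "moderately non-normal"]

datasets = {}
for n, dist in itertools.product(sample_sizes, distributions):
    x = rng.multivariate_normal(np.zeros(8), sigma, size=n)
    if dist != "normal":
        power = 1.3 if dist == "slightly non-normal" else 1.6
        x = np.sign(x) * np.abs(x) ** power        # crude stand-in for inducing non-normality
    datasets[(n, dist)] = x
```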


Author(s):  
Oumaima Bounou ◽  
Abdellah El Barkany ◽  
Ahmed El Biyaali

Maintenance management is an orderly procedure to address the planning, organization, monitoring, and evaluation of maintenance activities and their associated costs. It provides an effective tool for managing preventive and corrective activities, optimizing the production system, and tracking costs and performance. A good maintenance management system can help prevent problems and damage to the operating and storage environment, extend the life of assets, and reduce operating costs. In this paper, we first present our model for the joint management of spare parts and maintenance. We then carry out a simulation study of this model, which is presented in the first section of the paper. The results of this study are reported in the second section, showing the influence of certain model parameters on the operation of the system under consideration. The study was carried out using the Matlab graphical interface, one of the available performance evaluation techniques; it makes it possible to visualize the variations and anomalies that can arise in the system when machine repairs are overtaken by unforeseen breakdowns.
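
A very rough sketch of the kind of joint spare-parts and maintenance simulation described (every parameter and the simple replenishment rule below are illustrative assumptions, not the authors' Matlab model):

```python
# Sketch: random machine breakdowns, repairs limited by spare-part stock, and a
# simple reorder rule; downtime accumulates while machines wait for spares.
import numpy as np

rng = np.random.default_rng(2)
horizon_days, n_machines = 365, 10
failure_prob = 0.05                                   # per machine per day
stock, reorder_point, order_qty = 5, 2, 5             # spare-parts inventory rule
backlog, downtime_days = 0, 0                         # machines waiting for a spare

for day in range(horizon_days):
    operating = max(n_machines - backlog, 0)
    failures = rng.binomial(operating, failure_prob)  # unforeseen breakdowns today
    backlog += failures
    repaired = min(backlog, stock)                    # repairs limited by spares on hand
    backlog -= repaired
    stock -= repaired
    downtime_days += backlog                          # machine-days lost waiting for spares
    if stock <= reorder_point:
        stock += order_qty                            # instantaneous replenishment (simplification)

print(f"machine-days of downtime over the year: {downtime_days}")
```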

