Sensitivity analysis of reliability estimation methods for attribute data to sample size and sampling points of time

2011 ◽  
Vol 12 (2) ◽  
pp. 581-587 ◽  
Author(s):  
Young-Kap Son ◽  
Jang-Hee Ryu


2019 ◽
Author(s):  
Ashley Edwards ◽  
Keanan Joyner ◽  
Chris Schatschneider

The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach’s alpha, omega, omega hierarchical, Revelle’s omega, and the greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors and varying sample sizes, numbers of items, population reliabilities, and factor loadings. Under these conditions, alpha and omega yielded the most accurate estimates of the simulated population reliability. Alpha consistently underestimated population reliability, supporting its interpretation as a lower bound. The underestimation was larger when tau equivalence was not met; however, it remained small, and alpha still provided more accurate estimates than all of the estimators except omega. Estimates of reliability were affected by sample size, degree of violation of tau equivalence, population reliability, and the number of items in a scale. Under the conditions simulated here, alpha and omega yielded the most accurate reflection of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was affected more strongly by sample size and the number of items, especially when population reliability was low.


2021 ◽  
pp. 001316442199418
Author(s):  
Ashley A. Edwards ◽  
Keanan J. Joyner ◽  
Christopher Schatschneider

The accuracy of certain internal consistency estimators has been questioned in recent years. The present study tests the accuracy of six reliability estimators (Cronbach’s alpha, omega, omega hierarchical, Revelle’s omega, and the greatest lower bound) in 140 simulated conditions of unidimensional continuous data with uncorrelated errors and varying sample sizes, numbers of items, population reliabilities, and factor loadings. Estimators that have been proposed to replace alpha were compared with alpha and with each other. Estimates of reliability were affected by sample size, degree of violation of tau equivalence, population reliability, and the number of items in a scale. Under the conditions simulated here, alpha and omega yielded the most accurate reflection of population reliability values. A follow-up regression comparing alpha and omega revealed alpha to be more sensitive to the degree of violation of tau equivalence, whereas omega was affected more strongly by sample size and the number of items, especially when population reliability was low.
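
A minimal sketch of one simulation cell of this kind of study, assuming unidimensional congeneric items with uncorrelated errors: item scores are generated from unequal factor loadings (so tau equivalence is violated), coefficient alpha is computed from the sample covariance matrix, and an omega-total estimate comes from a one-factor fit. The loadings, sample size, and use of scikit-learn's FactorAnalysis are illustrative assumptions, not the authors' design values or code.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

# illustrative cell: 6 items, unequal loadings (tau equivalence violated), n = 250
loadings = np.array([0.7, 0.6, 0.5, 0.8, 0.6, 0.7])
uniq_sd = np.sqrt(1.0 - loadings**2)                  # items scaled to unit variance
pop_reliability = loadings.sum()**2 / (loadings.sum()**2 + (uniq_sd**2).sum())

n = 250
eta = rng.standard_normal(n)                          # common factor scores
X = eta[:, None] * loadings + rng.standard_normal((n, loadings.size)) * uniq_sd

# coefficient alpha from the sample covariance matrix
S = np.cov(X, rowvar=False)
k = S.shape[0]
alpha = k / (k - 1) * (1.0 - np.trace(S) / S.sum())

# omega total from a one-factor maximum-likelihood fit
fa = FactorAnalysis(n_components=1).fit(X)
lam = np.abs(fa.components_[0])
omega = lam.sum()**2 / (lam.sum()**2 + fa.noise_variance_.sum())

print(f"population reliability {pop_reliability:.3f}  alpha {alpha:.3f}  omega {omega:.3f}")
```

With unequal loadings, alpha typically lands slightly below the population value while omega tracks it more closely, mirroring the lower-bound behavior described above.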


2021 ◽  
pp. 109442812199908
Author(s):  
Yin Lin

Forced-choice (FC) assessments of noncognitive psychological constructs (e.g., personality, behavioral tendencies) are popular in high-stakes organizational testing scenarios (e.g., informing hiring decisions) due to their enhanced resistance to response distortions (e.g., faking good, impression management). The measurement precision of FC assessment scores used to inform personnel decisions is of paramount importance in practice. Different types of reliability estimates are reported for FC assessment scores in current publications, but consensus on best practices appears to be lacking. To provide understanding and structure around the reporting of FC reliability, this study systematically examined different types of reliability estimation methods for Thurstonian IRT-based FC assessment scores: their theoretical differences were discussed, and their numerical differences were illustrated through a series of simulations and empirical studies. In doing so, this study provides a practical guide for appraising different reliability estimation methods for IRT-based FC assessment scores.
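
One reliability estimate commonly reported for IRT-scored assessments, including Thurstonian IRT FC scores, is easy to illustrate: "empirical reliability," the ratio of the variance of the trait estimates to that variance plus the mean squared posterior standard error. The sketch below uses toy arrays standing in for EAP scores and SEs from an FC scoring engine; it is not the paper's comparison, and the numbers are illustrative.

```python
import numpy as np

def empirical_reliability(theta_hat: np.ndarray, se: np.ndarray) -> float:
    """Var(trait estimates) / (Var(trait estimates) + mean squared SE)."""
    signal = np.var(theta_hat, ddof=1)
    noise = np.mean(se**2)
    return signal / (signal + noise)

# toy stand-ins for EAP trait estimates and posterior SEs from an FC model
rng = np.random.default_rng(0)
theta_hat = rng.normal(0.0, 0.8, size=1000)   # shrunken trait estimates
se = rng.uniform(0.4, 0.6, size=1000)         # per-respondent posterior SEs

print(f"empirical reliability ~ {empirical_reliability(theta_hat, se):.3f}")
```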


2019 ◽  
Vol 11 (3) ◽  
pp. 168781401983684 ◽  
Author(s):  
Leilei Cao ◽  
Lulu Cao ◽  
Lei Guo ◽  
Kui Liu ◽  
Xin Ding

Due to high cost, it is difficult to obtain enough samples to implement a full-scale life test on the loader drive axle, and the resulting extremely small sample size can hardly meet the statistical requirements of traditional reliability analysis methods. In this work, a method combining virtual sample expansion with the Bootstrap is proposed to evaluate the fatigue reliability of the loader drive axle from an extremely small sample. First, the sample size is expanded by a virtual augmentation method to meet the requirements of the Bootstrap method. Then, a modified Bootstrap method is used to evaluate the fatigue reliability of the expanded sample. Finally, the feasibility and reliability of the method are verified by comparing the results with those of a semi-empirical estimation method. Moreover, from a practical perspective, the promising results of this study indicate that the proposed method is more efficient than the semi-empirical method. The proposed method provides a new way to evaluate the reliability of costly and complex structures.
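
A minimal sketch of the overall idea, under assumed details: a handful of observed fatigue lives is expanded with "virtual" samples (here a simple lognormal jitter, standing in for the paper's virtual augmentation method), and the expanded sample is bootstrapped to interval-estimate the reliability R(t0) = P(life > t0) from a fitted lognormal life model. The lives, required life t0, and expansion settings are illustrative, and the paper's modified Bootstrap is replaced by a plain percentile bootstrap.

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(1)

lives = np.array([1.82e5, 2.10e5, 2.45e5, 2.77e5, 3.05e5])  # illustrative cycles to failure
t0 = 1.5e5                                                   # required life (cycles)

def expand(sample, factor=20, cv=0.08):
    """Virtual expansion: replicate each observed life with a small lognormal jitter."""
    base = np.repeat(sample, factor)
    return base * rng.lognormal(mean=0.0, sigma=cv, size=base.size)

def reliability(sample, t):
    """R(t) = P(life > t) under a lognormal fit to the sample."""
    mu = np.mean(np.log(sample))
    sigma = np.std(np.log(sample), ddof=1)
    z = (log(t) - mu) / sigma
    return 1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0)))

expanded = expand(lives)
boot = np.array([
    reliability(rng.choice(expanded, size=expanded.size, replace=True), t0)
    for _ in range(2000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"R({t0:.0f}) ~ {reliability(expanded, t0):.4f}, 95% bootstrap CI [{lo:.4f}, {hi:.4f}]")
```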


Author(s):  
Rodric Mérimé Nonki ◽  
André Lenouo ◽  
Christopher J. Lennard ◽  
Raphael M. Tshimanga ◽  
Clément Tchawoua

Potential evapotranspiration (PET) plays a crucial role in water management, including irrigation system design and management, and it is an essential input to hydrological models. Because direct measurement of PET is difficult, time-consuming, and costly, a number of different methods are used to compute this variable. This study compares the two sensitivity analysis approaches generally used to assess the impact of PET on hydrological model performance. We conducted the study in the Upper Benue River Basin (UBRB) in northern Cameroon using two lumped conceptual rainfall-runoff models and nineteen PET estimation methods. A Monte Carlo procedure was implemented to calibrate the hydrological models for each PET input while considering the same objective functions. Although there were notable differences between PET estimation methods, model performance was satisfactory for each PET input in both the calibration and validation periods. The optimized model parameters were significantly affected by the PET input, especially the parameter responsible for transforming PET into actual ET. Model performance was insensitive to the PET input under a dynamic sensitivity approach, whereas it was significantly affected under a static sensitivity approach. This means that over- or under-estimation of PET is compensated for by the model parameters during recalibration. Model performance was insensitive to rescaled PET inputs under both the dynamic and static sensitivity approaches. These results demonstrate that the effect of the PET input on model performance depends on the sensitivity analysis approach used and suggest that the dynamic approach is more suitable from a hydrological modeling perspective.
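
The distinction between the two approaches can be sketched with a deliberately simple placeholder model (not the study's rainfall-runoff models): static sensitivity keeps the parameters calibrated against a reference PET series and swaps in an alternative PET input, whereas dynamic sensitivity recalibrates the parameters for each PET input. The toy model, synthetic forcing, and crude Monte Carlo calibration below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def toy_model(pet, params):
    """Placeholder lumped model: effective rain = max(rain - a*PET, 0), routed with decay b."""
    a, b = params
    effective = np.maximum(RAIN - a * pet, 0.0)
    return np.convolve(effective, np.exp(-np.arange(30) / b), mode="full")[: RAIN.size]

def nse(sim, obs):
    """Nash-Sutcliffe efficiency."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def calibrate(pet, obs, n_draws=2000):
    """Crude Monte Carlo search over (a, b), echoing the study's calibration procedure."""
    best_score, best_params = -np.inf, None
    for _ in range(n_draws):
        params = (rng.uniform(0.1, 2.0), rng.uniform(1.0, 15.0))
        score = nse(toy_model(pet, params), obs)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# synthetic forcing: a "reference" PET series and a biased alternative method
T = 3 * 365
RAIN = rng.gamma(0.8, 4.0, size=T)
pet_ref = 2.0 + 1.5 * np.sin(np.arange(T) * 2 * np.pi / 365)
pet_alt = 1.3 * pet_ref                                   # a method that overestimates PET
obs = toy_model(pet_ref, (0.6, 6.0)) + rng.normal(0.0, 0.3, size=T)

params_ref, _ = calibrate(pet_ref, obs)
static_nse = nse(toy_model(pet_alt, params_ref), obs)     # parameters held fixed
_, dynamic_nse = calibrate(pet_alt, obs)                  # parameters recalibrated

print(f"static sensitivity NSE: {static_nse:.3f}   dynamic sensitivity NSE: {dynamic_nse:.3f}")
```

Recalibration typically recovers most of the lost performance because the evapotranspiration-related parameter absorbs the PET bias, which is the compensation effect described above.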


2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Olivier Delaneau ◽  
Jean-François Zagury ◽  
Matthew R. Robinson ◽  
Jonathan L. Marchini ◽  
Emmanouil T. Dermitzakis

The number of human genomes being genotyped or sequenced is increasing exponentially, and efficient haplotype estimation methods able to handle this amount of data are now required. Here we present a method, SHAPEIT4, which substantially improves upon other methods in processing large genotype and high-coverage sequencing datasets. It notably exhibits sub-linear running times with sample size, provides highly accurate haplotypes, and allows the integration of external phasing information such as large reference panels of haplotypes, collections of pre-phased variants, and long sequencing reads. We provide SHAPEIT4 in an open-source format and demonstrate its performance in terms of accuracy and running time on two gold-standard datasets: the UK Biobank data and Genome in a Bottle.


2019 ◽  
Vol 16 (6) ◽  
pp. 673-681 ◽  
Author(s):  
Edward L Korn ◽  
Robert J Gray ◽  
Boris Freidlin

Background: Nonadherence to treatment assignment in a noninferiority randomized trial is especially problematic because it attenuates observed differences between the treatment arms, possibly leading one to conclude erroneously that a truly inferior experimental therapy is noninferior to a standard therapy (inflated type 1 error probability). The Lachin–Foulkes adjustment is an increase in the sample size to account for random nonadherence in the design of a superiority trial with a time-to-event outcome; it has not been explored in the noninferiority trial setting or with nonrandom nonadherence. Noninferiority trials in which patients have knowledge of a personal prognostic risk score may lead to nonrandom nonadherence: patients with a relatively high risk score may be more likely not to adhere to random assignment to the (reduced) experimental therapy, and patients with a relatively low risk score may be more likely not to adhere to random assignment to the (more aggressive) standard therapy. Methods: We investigated via simulations the properties of the Lachin–Foulkes adjustment in the noninferiority setting. We considered nonrandom in addition to random nonadherence to the treatment assignment. For nonrandom nonadherence, we used the scenario where a risk score, potentially associated with the between-arm treatment difference, influences patients’ nonadherence. A sensitivity analysis is proposed for addressing nonrandom nonadherence in this scenario. The noninferiority TAILORx adjuvant breast cancer trial, where eligibility was based on a genomic risk score, is used as an example throughout. Results: The Lachin–Foulkes adjustment to the sample size improves the operating characteristics of noninferiority trials with random nonadherence. However, to maintain the type 1 error probability, it is critical to adjust the noninferiority margin as well as the sample size. With nonrandom nonadherence that is associated with a prognostic risk score, the type 1 error probability of the Lachin–Foulkes adjustment can be inflated (e.g., doubled) when the nonadherence is larger in the experimental arm than in the standard arm. The proposed sensitivity analysis lessens the inflation in this situation. Conclusion: The Lachin–Foulkes adjustment to the sample size and noninferiority margin is a useful and simple technique for attenuating the effects of random nonadherence in the noninferiority setting. With nonrandom nonadherence associated with a risk score known to the patients, the type 1 error probability can be inflated in certain situations. The proposed sensitivity analysis for these situations can attenuate the inflation.
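
A hedged, highly simplified simulation sketch of the mechanism described above (exponential event times, no censoring, a one-sided Wald test on the log hazard ratio): random nonadherence in the experimental arm pulls the observed hazard ratio toward 1 and inflates the type 1 error of the noninferiority test. This is not the Lachin–Foulkes design calculation itself, and the event rate, margin, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2024)

def trial_concludes_noninferior(n_per_arm, hr_margin=1.3, nonadhere_exp=0.0):
    """One trial under the NI null: true experimental hazard = margin * control hazard.
    Nonadherent experimental-arm patients receive the standard therapy instead."""
    lam_c = 0.10
    lam_e_true = hr_margin * lam_c

    t_c = rng.exponential(1.0 / lam_c, n_per_arm)              # control arm
    crossed = rng.random(n_per_arm) < nonadhere_exp            # crossovers to standard therapy
    lam_e = np.where(crossed, lam_c, lam_e_true)
    t_e = rng.exponential(1.0 / lam_e)                         # experimental arm, analyzed as ITT

    # exponential hazard estimates: events / total follow-up time (all events observed)
    log_hr = np.log((n_per_arm / t_e.sum()) / (n_per_arm / t_c.sum()))
    se = np.sqrt(1.0 / n_per_arm + 1.0 / n_per_arm)
    upper_ci = np.exp(log_hr + 1.96 * se)
    return upper_ci < hr_margin                                # noninferiority declared

def type1_error(nonadhere_exp, n_per_arm=400, reps=2000):
    return np.mean([trial_concludes_noninferior(n_per_arm, nonadhere_exp=nonadhere_exp)
                    for _ in range(reps)])

for p in (0.0, 0.1, 0.2):
    print(f"experimental-arm nonadherence {p:.0%}: type 1 error ~ {type1_error(p):.3f}")
```

With no nonadherence the error rate sits near the nominal 2.5%; as crossover increases it climbs, which is the dilution problem that motivates adjusting the design.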


Author(s):  
Arefeh Nasri ◽  
Lei Zhang ◽  
Junchuan Fan ◽  
Kathleen Stewart ◽  
Hannah Younes ◽  
...  

It is of interest to federal and state agencies to develop an advanced, uniform method for estimating vehicle miles traveled (VMT) on local roads that can be used as a guideline for agencies nationwide. The purpose of this study is to propose advanced, innovative approaches for estimating VMT on local roads and to analyze the feasibility of applying these methods. The paper presents a methodology and procedure for estimating local-road VMT using GPS vehicle trajectory data and an all-street road network, and it expands these methodologies and results to determine the minimum required GPS sample size. The Federal Highway Administration and other transportation agencies may consider using these methodologies as a future guide for updating VMT estimates with minimal additional cost. The key finding of the research is that it is feasible to use new GPS vehicle trajectory data to estimate VMT on non-Federal Aid System roadways, and the sample size of these data allows the application of this new method across the nation. The accuracy of the method was tested for the State of Maryland; once statewide GPS data are obtained by a given state, the methodology can readily be applied there as well.
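
A minimal sketch, under assumed inputs, of the core expansion step for probe-based local-road VMT: sum the map-matched miles that fall on local (non-Federal Aid System) links and scale by the probe penetration rate, together with the usual normal-approximation formula for the minimum number of sampled trips at a target precision. The trip records, penetration rate, and coefficient of variation are hypothetical, not values from the Maryland test.

```python
import math

# (trip_id, functional_class, matched_miles) after map matching to an all-street road network
matched_trips = [
    ("t1", "local", 2.4),
    ("t2", "arterial", 5.1),
    ("t3", "local", 0.9),
    ("t4", "local", 3.2),
]

penetration_rate = 0.04   # assumed share of all vehicle trips captured by the GPS sample

sample_local_miles = sum(miles for _, f_class, miles in matched_trips if f_class == "local")
estimated_local_vmt = sample_local_miles / penetration_rate
print(f"expanded local-road VMT estimate: {estimated_local_vmt:,.0f} miles")

def min_sample_size(cv: float, rel_error: float, z: float = 1.96) -> int:
    """Trips needed so the mean trip length is within rel_error at confidence z: n >= (z*cv/e)^2."""
    return math.ceil((z * cv / rel_error) ** 2)

print("trips needed for +/-5% at 95% confidence with CV = 0.8:", min_sample_size(0.8, 0.05))
```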

