estimate reliability
Recently Published Documents

TOTAL DOCUMENTS: 80 (five years: 26)
H-INDEX: 12 (five years: 1)

2021, pp. 4892-4902
Author(s): Sarah A. Jabr, Nada S. Karam

In this paper, the reliability of the stress-strength model is derived as the probability P(Y&lt;X) that a component with strength X withstands a single independent stress Y, when X and Y follow the Gompertz Fréchet distribution with unknown shape parameters and the remaining parameters known. Several methods were used to estimate the reliability R and the Gompertz Fréchet distribution parameters: maximum likelihood, least squares, weighted least squares, regression, and ranked set sampling. These estimators were compared in a simulation study based on the mean square error (MSE) criterion, which confirmed that the maximum likelihood estimator performs better than the others.
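As a concrete illustration of the quantity being estimated, the sketch below approximates R = P(Y &lt; X) by Monte Carlo simulation. The plain Fréchet distribution (SciPy's invweibull) stands in for the Gompertz Fréchet distribution used in the paper, which SciPy does not provide, and the shape parameters are hypothetical.

```python
# Monte Carlo sketch of the stress-strength reliability R = P(Y < X).
# The plain Frechet distribution (scipy's invweibull) stands in for the
# Gompertz Frechet distribution; the shape parameters are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)
n = 100_000

x = stats.invweibull.rvs(c=3.0, size=n, random_state=rng)   # strength X
y = stats.invweibull.rvs(c=2.0, size=n, random_state=rng)   # stress Y

r_hat = np.mean(y < x)              # empirical estimate of P(Y < X)
print(f"estimated R = {r_hat:.4f}")
```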


2021, pp. 1085-1095
Author(s): Jonathan Bieler, Christian Pozzorini, Jessica Garcia, Alex C. Tuck, Morgane Macheret, ...

PURPOSE: The ability of next-generation sequencing (NGS) assays to interrogate thousands of genomic loci has revolutionized genetic testing. However, translation to the clinic is impeded by false-negative results that pose a risk to patients. In response, regulatory bodies are calling for reliability measures to be reported alongside NGS results. Existing methods to estimate reliability do not account for sample- and position-specific variability, which can be significant. Here, we report an approach that computes reliability metrics for every genomic position and sample interrogated by an NGS assay.

METHODS: Our approach predicts the limit of detection (LOD), the lowest reliably detectable variant fraction, by taking technical factors into account. We initially explored how LOD is affected by input material amount, library conversion rate, sequencing coverage, and sequencing error rate. This revealed that LOD depends heavily on genomic context and sample properties. Using these insights, we developed a computational approach to predict LOD on the basis of a biophysical model of the NGS workflow. We focused on targeted assays for cell-free DNA, but, in principle, this approach applies to any NGS assay.

RESULTS: We validated our approach by showing that it accurately predicts LOD and distinguishes reliable from unreliable results when screening 580 lung cancer samples for actionable mutations. Compared with a standard variant calling workflow, our approach avoided most false negatives and improved interassay concordance from 94% to 99%.

CONCLUSION: Our approach, which we name LAVA (LOD-aware variant analysis), reports the LOD for every position and sample interrogated by an NGS assay. This enables reliable results to be identified and improves the transparency and safety of genetic tests.
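To make the LOD concept concrete, here is a deliberately simplified sketch based on a plain binomial noise model: it finds the smallest variant fraction whose supporting reads clear a noise threshold with a given power. This is not the LAVA biophysical model, which additionally accounts for input material amount and library conversion rate; the coverage, error rate, alpha, and power values below are hypothetical.

```python
# Simplified limit-of-detection (LOD) sketch for one genomic position,
# assuming a plain binomial noise model. NOT the LAVA model from the
# paper; all parameter values are hypothetical.
from scipy.stats import binom

def lod(coverage: int, error_rate: float,
        alpha: float = 0.01, power: float = 0.95) -> float:
    """Lowest variant allele fraction detectable above sequencing noise."""
    # Smallest read count that noise alone rarely reaches (FPR <= alpha).
    k = int(binom.isf(alpha, coverage, error_rate)) + 1
    # Smallest variant fraction whose reads reach k with the required power.
    f = 0.0001
    while binom.sf(k - 1, coverage, f) < power and f < 1.0:
        f += 0.0001
    return f

print(lod(coverage=2000, error_rate=0.001))   # deeper coverage lowers the LOD
```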


Electronics, 2021, Vol 10 (18), pp. 2286
Author(s): Yohan Ko

From early design phases to final release, the reliability of modern embedded systems against soft errors must be carefully considered. Several schemes have been proposed to protect embedded systems against soft errors, but they are not always functional or robust, even though they incur expensive overhead in terms of hardware area, performance, and power consumption. System designers therefore need to estimate reliability quantitatively in order to apply appropriate protection techniques to resource-constrained embedded systems. Vulnerability modeling based on lifetime analysis is one of the most efficient ways to quantify system reliability against soft errors. However, lifetime analysis can be inaccurate, mainly because it fails to comprehensively capture several system-level masking effects. This study analyzes and characterizes microarchitecture-level and software-level masking effects with an automated framework that performs exhaustive fault injections (i.e., soft errors) on the cycle-accurate gem5 simulator. We injected faults into the register file because errors there can easily propagate to other components in a processor. We found that, on average across benchmarks (mainly from the MiBench suite), only 5% of injected faults cause system failures. Further analysis showed that 71% of soft errors are overwritten by write operations before being used, and another 20% are never used by the CPU after injection. The remainder are masked by several software-level masking effects, such as dynamically dead instructions, compare and logical instructions that do not change the result, and incorrect control flows that do not affect program outputs.
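The masking categories above can be illustrated with a toy classifier: the first access to the faulty register after injection decides whether the flipped bit is overwritten (masked), never used (masked), or read (potentially propagated). The trace format and values below are hypothetical; the actual study drives the cycle-accurate gem5 simulator.

```python
# Toy sketch of classifying an injected register-file bit flip by the
# masking categories described above. A hypothetical register access
# trace of (cycle, op, reg) events stands in for a real simulation.
trace = [
    (10, "write", "r1"), (12, "read", "r1"),
    (15, "write", "r2"), (20, "write", "r2"),  # r2 overwritten, never read
]

def classify_fault(trace, reg, inject_cycle):
    """First access to `reg` after injection decides the outcome."""
    for cycle, op, r in sorted(trace):
        if cycle <= inject_cycle or r != reg:
            continue
        if op == "write":
            return "masked: overwritten before use"
        return "propagated: corrupted value read"
    return "masked: register never used again"

print(classify_fault(trace, "r2", inject_cycle=16))  # masked: overwritten
print(classify_fault(trace, "r1", inject_cycle=11))  # propagated
```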


2021, Vol 21 (1)
Author(s): Leili Salehi, Zoherh Mohmoodi, Fatemeh Rajati, Victor Pop

Abstract
Background: Pregnancy distress is a combination of anxiety, stress, and depression during pregnancy. The first step in preventing pregnancy distress is to identify women at risk. The present study assessed the adaptation and psychometric adequacy of the Persian adapted version of the Tilburg Pregnancy Distress Scale (P-TPDS).

Methods: Following Brislin's translation guidelines, the TPDS was translated into Persian. The face validity of the P-TPDS was then determined, and construct validity was evaluated using exploratory and confirmatory factor analyses. Cronbach's alpha coefficients and the intra-class correlation coefficient (ICC) were used to estimate reliability.

Results: The final 16-item scale loaded on four distinct constructs jointly accounting for 59.62% of the variance. The factors were labelled delivery-related worries, partner involvement, pregnancy-related worries, and social-related worries. The alpha coefficients for the P-TPDS subscales ranged from 0.85 to 0.91, and the ICC ranged from 0.70 to 0.77. All comparative indices of the model, including the CFI, IFI, NFI, and NNFI, were above 0.9, indicating good fit to the data, with an RMSEA of 0.04 (lower bound: 0.038).

Conclusions: The Persian adapted version of the TPDS (P-TPDS) is a reliable and valid scale for assessing pregnancy distress among pregnant women in Iran.
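As a pointer to how the reliability figures above are computed, here is a minimal sketch of Cronbach's alpha on made-up item responses; the real analysis used the 16 P-TPDS items and also reported the ICC.

```python
# Minimal sketch of Cronbach's alpha, one of the two reliability
# statistics used in the study, on made-up Likert item responses.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents-by-items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

# Four hypothetical respondents answering four Likert items:
scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 5, 4, 4],
                   [1, 2, 1, 2]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```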


Dependability, 2021, Vol 21 (1), pp. 23-33
Author(s): Kapil Naithani, Rajesh Dangwal

Aim. The healthcare field involves several types of uncertainty due to medical errors generated by humans and by technology. Crisp values generally entail a loss of precision and inaccurate results, so the available data are insufficient to assess clinical processes to the desired degree of accuracy. Fuzzy set theory therefore plays an important and advanced role in improving the accuracy of results for healthcare-related problems.

Methods. To improve the accuracy of results, this paper uses functional fuzzy numbers. The study applies a new fuzzy fault tree analysis to patient safety risk modelling in healthcare. We use level (λ, ρ) interval-valued triangular fuzzy numbers, their functionals, t-norm operations, and the centre-of-gravity defuzzification method to evaluate fuzzy failure probability and estimate system reliability. The effectiveness of these methods is illustrated with an example of a healthcare problem, and the results are compared with those of other existing techniques. Tanaka et al.'s approach is used to rank the basic events of the considered problems. We also use functionals of fuzzy numbers to analyse changes in the fuzzy failure probability.

Results. The paper examines the application of the fault tree, t-norms, and functional fuzzy numbers in the context of interval-valued triangular fuzzy numbers. Two types of healthcare-specific problems and the corresponding defuzzification techniques were examined for the purpose of reliability analysis using existing methods. The authors concluded that the t-norm does not lead to significant accumulation of fuzziness and identified how a functional fuzzy number affects reliability. Similarly, using the V index method, the least critical events were found for each system.
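The centre-of-gravity step can be sketched for the simplest case, an ordinary triangular fuzzy number, whose centroid reduces to (a + b + c) / 3; an approximate product t-norm for an AND gate is included. The paper's level (λ, ρ) interval-valued numbers generalize this basic idea, and the failure probabilities below are hypothetical.

```python
# Minimal sketch of centre-of-gravity (COG) defuzzification for an
# ordinary triangular fuzzy number (a, b, c), plus an approximate
# product t-norm for an AND gate. The paper's level (lambda, rho)
# interval-valued triangular numbers generalize this.

def cog_triangular(a: float, b: float, c: float) -> float:
    """Centroid of a triangular membership function is (a + b + c) / 3."""
    return (a + b + c) / 3.0

def and_gate(p, q):
    """Approximate product t-norm of two triangular numbers, elementwise."""
    return tuple(x * y for x, y in zip(p, q))

# Hypothetical fuzzy failure probabilities of two basic events:
e1 = (0.02, 0.05, 0.09)
e2 = (0.01, 0.03, 0.06)
top = and_gate(e1, e2)          # fuzzy failure probability of the AND gate
print(cog_triangular(*top))     # crisp (defuzzified) failure probability
```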


Author(s): Alexandru Cernat, Peter Lugtig, Nicole Watson, S.C. Noah Uhrig

The quasi-simplex model (QSM) uses at least three repeated measures of the same variable to estimate reliability. The model has rather strict assumptions, and ignoring them may bias reliability estimates. While some previous studies have outlined how several of these assumptions can be relaxed, they have not been exhaustive or systematic. It is thus unclear what all the assumptions are and how to test and relax them in practice. This chapter addresses this situation by presenting the main assumptions of the quasi-simplex model and the ways in which users can relax them with relative ease when more than three waves are available. Using data from the British Household Panel Survey, we show how this is done in practice and highlight the potential biases that arise when violations of the assumptions are ignored. We conclude that relaxing the assumptions should be routine when more than three waves of data are available.
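For the three-wave case, the QSM reliability estimate has a well-known closed form (Heise, 1969): given correlations r12, r23, r13 between the repeated measures, the reliability of the wave-2 measure is r12 * r23 / r13. The sketch below checks this on simulated data with a perfectly stable true score; it does not use the BHPS data analysed in the chapter.

```python
# Sketch of the classic three-wave quasi-simplex reliability estimate
# (Heise, 1969). Simulated data with a perfectly stable true score,
# not the BHPS data used in the chapter.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
true_score = rng.normal(size=n)                        # stable trait
waves = [true_score + rng.normal(scale=0.8, size=n) for _ in range(3)]

r12 = np.corrcoef(waves[0], waves[1])[0, 1]
r23 = np.corrcoef(waves[1], waves[2])[0, 1]
r13 = np.corrcoef(waves[0], waves[2])[0, 1]

print(f"estimated wave-2 reliability: {r12 * r23 / r13:.3f}")
print(f"theoretical reliability: {1 / (1 + 0.8 ** 2):.3f}")   # ~0.61
```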


2021, Vol 1795 (1), pp. 012020
Author(s): Nabeel A Hussein, Ahmed H Hussain, Sameer A Abbas, Abbas M Salman

2020, Vol 1701, pp. 012002
Author(s): Abdul Awal Rana, DS Samokhin, A K M Shahabuddin, Zihad Ul Haque, Sadbi Ahmad Sanam

Author(s): Bader S Alanazi

In this paper, we compare a two-stage sequential sampling scheme with a fully sequential sampling scheme for software testing and reliability estimation. In the two-stage scheme, test cases are allocated among partitions in two phases. The goal is to obtain near-optimal allocations of test cases among sub-domains by minimizing the variance of the overall software reliability estimator. The two-stage scheme is expected to be more convenient than a fully sequential scheme because it requires fewer computations. It is also expected to outperform a balanced sampling scheme by virtue of the lower variance of the overall software reliability estimate.
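One way to picture the two-stage idea is a Neyman-style allocation: a uniform pilot phase estimates per-partition failure rates, and the remaining budget is allocated in proportion to the estimated standard deviations to reduce the variance of the overall reliability estimate. The sketch below follows this logic; the paper's exact optimality criterion may differ, and the failure rates, operational profile, and budget are hypothetical.

```python
# Hedged sketch of two-stage test-case allocation across sub-domains:
# stage 1 samples uniformly to estimate failure rates, stage 2 spends
# the rest Neyman-style, proportional to sqrt(p * (1 - p)). All numbers
# are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
true_p = np.array([0.02, 0.10, 0.05])   # unknown per-partition failure rates
weights = np.array([0.5, 0.2, 0.3])     # operational profile of partitions
budget, pilot_each = 3000, 200

# Stage 1: uniform pilot sampling.
n1 = np.full(len(true_p), pilot_each)
fails1 = rng.binomial(n1, true_p)

# Stage 2: Neyman-style allocation of the remaining budget.
p_hat = (fails1 + 0.5) / (n1 + 1)                 # smoothed pilot estimates
sd = weights * np.sqrt(p_hat * (1 - p_hat))
n2 = np.round((budget - n1.sum()) * sd / sd.sum()).astype(int)
fails2 = rng.binomial(n2, true_p)

p_final = (fails1 + fails2) / (n1 + n2)
reliability = 1 - np.dot(weights, p_final)        # overall reliability
print(f"estimated reliability: {reliability:.4f}")
```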

