TEST PLAN ALLOCATION TO MINIMIZE SYSTEM RELIABILITY ESTIMATION VARIABILITY

Author(s): Jose E. Ramirez-Marquez, David W. Coit, Tongdan Jin

A new methodology is presented to allocate testing units to the different components within a system when the system configuration is fixed and there are budgetary constraints limiting the amount of testing. The objective is to allocate additional testing units so that the variance of the system reliability estimate, at the conclusion of testing, will be minimized. Testing at the component-level decreases the variance of the component reliability estimate, which then decreases the system reliability estimate variance. The difficulty is to decide which components to test given the system-level implications of component reliability estimation. The results are enlightening because the components that most directly affect the system reliability estimation variance are often not those components with the highest initial uncertainty. The approach presented here can be applied to any system structure that can be decomposed into a series-parallel or parallel-series system with independent component reliability estimates. It is demonstrated using a series-parallel system as an example. The planned testing is to be allocated and conducted iteratively in distinct sequential testing runs so that the component and system reliability estimates improve as the overall testing progresses. For each run, a nonlinear programming problem must be solved based on the results of all previous runs. The testing allocation process is demonstrated on two examples.
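
As an illustration of the variance propagation that drives such an allocation, the short Python sketch below computes the mean and variance of a series-parallel system reliability estimate from independent component estimates; the function names, system structure, and numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def parallel_block_unreliability_moments(means, variances):
    """Mean and variance of a parallel block's unreliability estimate.

    The block unreliability estimator is the product of independent component
    unreliability estimators (1 - R_i), so its moments factor into products
    of component moments.
    """
    q_mean = 1.0 - np.asarray(means)
    q_var = np.asarray(variances)
    mean_q = np.prod(q_mean)
    second_moment_q = np.prod(q_var + q_mean**2)   # E[Q^2] = prod E[(1 - R_i)^2]
    return mean_q, second_moment_q - mean_q**2

def system_reliability_moments(blocks):
    """Mean and variance of the reliability estimate of parallel blocks in series.

    `blocks` is a list of (means, variances) pairs, one per parallel block,
    with all component estimates assumed independent.
    """
    block_means, block_second_moments = [], []
    for means, variances in blocks:
        mq, vq = parallel_block_unreliability_moments(means, variances)
        block_means.append(1.0 - mq)
        block_second_moments.append(vq + (1.0 - mq)**2)
    mean_r = np.prod(block_means)
    var_r = np.prod(block_second_moments) - mean_r**2
    return mean_r, var_r

# Two parallel blocks in series; component means/variances are illustrative.
blocks = [([0.90, 0.85], [0.004, 0.006]),
          ([0.95], [0.002])]
mean_r, var_r = system_reliability_moments(blocks)
print(f"system reliability estimate ~ {mean_r:.4f}, variance ~ {var_r:.6f}")
```

Re-running the propagation after lowering one component's variance entry shows how much each candidate test allocation would shrink the system-level variance, which is the quantity the paper's optimization minimizes.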

2018, Vol. 140 (10)
Author(s): Zhen Hu, Zissimos P. Mourelatos

Testing components at higher-than-nominal stress levels provides an effective way of reducing the testing effort required for system reliability assessment. For various reasons, however, not all components are directly testable in practice. The missing information about untestable components poses significant challenges to the accurate evaluation of system reliability. This paper proposes a sequential accelerated life testing (SALT) design framework for reliability assessment of systems with untestable components. In the proposed framework, system-level tests are employed in conjunction with component-level tests to effectively reduce the uncertainty in the system reliability evaluation. To minimize the number of system-level tests, which are much more expensive than component-level tests, the accelerated life testing (ALT) design is performed sequentially. In each design cycle, testing resources are allocated to component-level or system-level tests according to the uncertainty analysis from the system reliability evaluation. The component-level and system-level testing information obtained from the optimized testing plans is then aggregated using Bayesian methods to obtain the overall system reliability estimate. This aggregation of component-level and system-level testing information allows for effective uncertainty reduction in the system reliability evaluation. Results of two numerical examples demonstrate the effectiveness of the proposed method.
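
The Bayesian aggregation step can be illustrated with a generic Beta-Binomial sketch for a series system: component pass/fail tests define Beta posteriors, and system-level pass/fail results re-weight the resulting system reliability draws. This is only a hedged illustration of the information-fusion idea; the paper's SALT framework, accelerated test models, and data are not reproduced here, and all test counts are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed pass/fail test data (successes, failures) per component of a series system
component_tests = [(18, 2), (27, 3), (45, 1)]
system_tests = (9, 1)          # assumed system-level demonstrations: 9 passes, 1 failure

# Draw component reliabilities from Beta posteriors (uniform Beta(1, 1) priors)
n_draws = 200_000
draws = np.column_stack([rng.beta(1 + s, 1 + f, n_draws) for s, f in component_tests])
r_sys = draws.prod(axis=1)     # series-system reliability for each posterior draw

# Re-weight draws by the binomial likelihood of the system-level test outcomes
s_sys, f_sys = system_tests
weights = r_sys**s_sys * (1.0 - r_sys)**f_sys
weights /= weights.sum()

posterior_mean = np.sum(weights * r_sys)
posterior_std = np.sqrt(np.sum(weights * (r_sys - posterior_mean) ** 2))
print(f"fused system reliability: mean {posterior_mean:.4f}, std {posterior_std:.4f}")
```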


Author(s): Zhengwei Hu, Xiaoping Du

System reliability is usually predicted with the assumption that all component states are independent. This assumption may not be accurate for systems with outsourced components, since their states are strongly dependent and component details may be unknown. The purpose of this study is to develop an accurate system reliability method that can produce the complete joint probability density function (PDF) of all the component states, thereby leading to accurate system reliability predictions. The proposed method works for systems whose failures are caused by excessive loading. In addition to the component reliability, system designers also request partial safety factors for shared loadings from component suppliers. This information is then sufficient for building a system-level joint PDF. Algorithms are designed for a component supplier to generate partial safety factors. The method enables accurate system reliability predictions without requiring proprietary information from component suppliers.
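
The dependence induced by shared loading, which motivates the partial safety factors above, can be seen in a small Monte Carlo sketch: two components of a series system share one random load, and the independence assumption gives a noticeably different system reliability. The distributions and numbers below are assumptions for illustration only, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# One shared load and two independent component capacities (assumed distributions)
load = rng.normal(100.0, 15.0, n)
cap1 = rng.normal(140.0, 10.0, n)
cap2 = rng.normal(150.0, 12.0, n)

# Series system survives only if both components withstand the same load realization
r_sys_shared = np.mean((cap1 > load) & (cap2 > load))

# Independence assumption: multiply the marginal component reliabilities
r_sys_indep = np.mean(cap1 > load) * np.mean(cap2 > load)

print(f"shared-load system reliability:   {r_sys_shared:.4f}")
print(f"independence-assumption estimate: {r_sys_indep:.4f}")
```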


Author(s): M. Xie, T. N. Goh

In this paper, the problem of system-level reliability growth estimation using component-level failure data is studied. It is suggested that system failure data be broken down into component, or subsystem, failure data when such problems occur during the system testing phase. The proposed approach is especially useful when the system does not remain unchanged over time, when some subsystems are improved more than others, or when testing has been concentrated on different components at different times. These situations commonly arise in practice, and the approach can be useful even when system-level failure data are available. Two sets of component failure data are used to illustrate the approach: in one, all subsystems are available for testing at the same time; in the other, the starting times differ across subsystems.
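
The abstract does not name a specific growth model, so the sketch below assumes a power-law NHPP (Crow-AMSAA) fitted to each subsystem's failure times, with subsystem intensities summed to the system level as in a series configuration; all failure times and test exposures are made up for illustration.

```python
import numpy as np

def crow_amsaa_mle(failure_times, T):
    """Time-truncated power-law NHPP (Crow-AMSAA) MLEs for one subsystem."""
    t = np.asarray(failure_times, dtype=float)
    n = len(t)
    beta = n / np.sum(np.log(T / t))
    lam = n / T**beta
    return lam, beta

def intensity(lam, beta, t):
    """Instantaneous failure intensity of the fitted power-law process."""
    return lam * beta * t**(beta - 1)

# Assumed subsystem failure times (hours) and test exposures; B's exposure differs
subsystems = {
    "A": ([40.0, 160.0, 410.0, 700.0], 1000.0),
    "B": ([25.0, 90.0, 310.0], 800.0),
}

t_eval = 800.0
system_intensity = 0.0
for name, (times, T) in subsystems.items():
    lam, beta = crow_amsaa_mle(times, T)
    u = intensity(lam, beta, t_eval)
    system_intensity += u  # series configuration: subsystem intensities add
    print(f"subsystem {name}: beta = {beta:.2f}, intensity at {t_eval} h = {u:.4g}")

print(f"system-level intensity at {t_eval} h = {system_intensity:.4g} failures/h")
```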


2008, Vol. 130 (2)
Author(s): M. McDonald, S. Mahadevan

Reliability-based design optimization (RBDO) of mechanical systems is computationally intensive due to the presence of two types of iterative procedures—design optimization and reliability estimation. Single-loop RBDO algorithms offer tremendous savings in computational effort, but they have so far only been able to consider individual component reliability constraints. This paper presents a single-loop RBDO formulation and an equivalent formulation that can also include system-level reliability constraints. The formulations allow the allocation of optimal reliability levels to individual component limit states in order to satisfy both system-level and component-level reliability requirements. Four solution algorithms to implement the second, more efficient formulation are developed. A key feature of these algorithms is to remove the most probable points from the decision space, thus avoiding the need to calculate Hessians or gradients of limit state gradients. It is shown that with the proposed methods, system-level RBDO can be accomplished with computational expense equivalent to several cycles of computationally inexpensive single-loop RBDO based on second-moment methods. Examples of this new approach applied to series, parallel, and combined systems are provided.
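
As a hedged, much-simplified illustration of what a system-level reliability constraint can look like alongside component constraints (not the single-loop formulation of the paper), the snippet below bounds the failure probability of a series system from first-order component reliability indices.

```python
import numpy as np
from scipy.stats import norm

# Assumed component reliability indices, e.g., from first-order approximations
betas = np.array([3.0, 3.2, 2.8])
p_f = norm.cdf(-betas)           # first-order component failure probabilities

# Elementary bounds on the series-system failure probability
lower = p_f.max()                # attained under perfectly dependent failure events
upper = min(1.0, p_f.sum())      # union (Boole) bound, attained for disjoint events

p_f_target = 1e-3                # assumed system-level requirement
print(f"component P_f: {np.round(p_f, 5)}")
print(f"series-system P_f bounded by [{lower:.3e}, {upper:.3e}]")
print("conservative system constraint satisfied:", upper <= p_f_target)
```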


1996, Vol. 33 (02), pp. 548-556
Author(s): Fan C. Meng

More applications of the principle for interchanging components due to Boland et al. (1989) in reliability theory are presented. In the context of active redundancy improvement we show that if two nodes are permutation equivalent then allocating a redundancy component to the weaker position always results in a larger increase in system reliability, which generalizes a previous result due to Boland et al. (1992). In the case of standby redundancy enhancement, we prove that a series (parallel) system is the only system for which standby redundancy at the component level is always more (less) effective than at the system level. Finally, the principle for interchanging components is extended from binary systems to the more complicated multistate systems.
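
The active-redundancy result builds on the classical principle that, for a series system with independent components, redundancy at the component level is at least as effective as redundancy at the system level. The short numerical check below illustrates that inequality; the reliability values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

def component_level(p, q):
    """Series system in which each component i is paralleled with a spare of reliability q[i]."""
    return np.prod(1.0 - (1.0 - p) * (1.0 - q))

def system_level(p, q):
    """Original series system paralleled, as a whole, with a spare series system."""
    return 1.0 - (1.0 - np.prod(p)) * (1.0 - np.prod(q))

# Randomized check: component-level active redundancy never does worse
for _ in range(10_000):
    p = rng.uniform(0.1, 0.99, size=3)
    q = rng.uniform(0.1, 0.99, size=3)
    assert component_level(p, q) >= system_level(p, q) - 1e-12

p, q = np.array([0.9, 0.8, 0.95]), np.array([0.7, 0.75, 0.6])
print(f"component-level redundancy: {component_level(p, q):.4f}")
print(f"system-level redundancy:    {system_level(p, q):.4f}")
```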


2021, Vol. 2021, pp. 1-12
Author(s): Amer Ibrahim Al-Omari, Amal S. Hassan, Naif Alotaibi, Mansour Shrahili, Heba F. Nagy

In survival analysis, the two-parameter inverse Lomax distribution is an important lifetime distribution. In this study, the estimation of R = P(Y < X) is investigated when the stress and strength random variables follow independent inverse Lomax distributions. Using the maximum likelihood approach, we obtain the estimator of R under simple random sampling (SRS), ranked set sampling (RSS), and extreme ranked set sampling (ERSS). Four different estimators are developed under the ERSS framework: two when the strength and stress populations have the same set size, and two when they have dissimilar set sizes. Through a simulation experiment, the suggested estimates are compared with the corresponding estimates under SRS, and the reliability estimates under the ERSS method are compared with those under the RSS scheme. The reliability estimates based on the RSS and ERSS schemes are found to be more efficient than their SRS counterparts based on the same number of measured units. The RSS-based estimates are preferable in most situations; however, for small even set sizes, and in a few other cases, the ERSS-based estimates are more efficient or more accurate than those under RSS and SRS.
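
The role of the sampling scheme can be illustrated with a simulation sketch that draws inverse Lomax variates (as reciprocals of Lomax variates) under SRS and under RSS with perfect ranking, and then uses a simple nonparametric pairwise estimate of R = P(Y < X). This is not the paper's maximum likelihood estimator, and the parameters, set size, and number of cycles are assumed for illustration.

```python
import numpy as np
from scipy.stats import lomax

rng = np.random.default_rng(3)

def inv_lomax(alpha, beta, size):
    """Inverse Lomax draws as reciprocals of Lomax(shape=alpha, scale=beta) draws."""
    return 1.0 / lomax.rvs(c=alpha, scale=beta, size=size, random_state=rng)

def rss_sample(draw, m, cycles):
    """Ranked set sample with perfect ranking: the i-th order statistic of the i-th set."""
    out = []
    for _ in range(cycles):
        for i in range(m):
            out.append(np.sort(draw(m))[i])
    return np.array(out)

def r_hat(x, y):
    """Nonparametric estimate of R = P(Y < X) from all (x, y) pairs."""
    return np.mean(y[None, :] < x[:, None])

alpha_x, alpha_y, beta = 3.0, 1.5, 1.0    # assumed strength/stress shape parameters
m, cycles = 4, 5                          # 20 measured units per variable

x_srs = inv_lomax(alpha_x, beta, m * cycles)
y_srs = inv_lomax(alpha_y, beta, m * cycles)
x_rss = rss_sample(lambda n: inv_lomax(alpha_x, beta, n), m, cycles)
y_rss = rss_sample(lambda n: inv_lomax(alpha_y, beta, n), m, cycles)

print(f"R estimate (SRS): {r_hat(x_srs, y_srs):.3f}")
print(f"R estimate (RSS): {r_hat(x_rss, y_rss):.3f}")
```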


Author(s): Yunhui Hou

In this article, a method is proposed to conduct a global sensitivity analysis of epistemic uncertainty on both the system input and the system structure, a situation that is common in the early stages of system development, using Dempster-Shafer theory (DST). In system reliability assessment, the input corresponds to component reliability, and the system structure is given by the system reliability function, cut sets, or a truth table. A method to propagate real-valued mass functions through set-valued mappings is introduced and applied to system reliability calculation. Secondly, we propose a method to model an uncertain system with multiple possible structures and to obtain the mass function of the system-level reliability. Finally, we propose an indicator for global sensitivity analysis. Our method is illustrated, and its efficacy demonstrated, by numerical application to two case studies.
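
A hedged sketch of the propagation step for the simplest case follows: each component reliability carries a Dempster-Shafer mass function whose focal elements are intervals, and because a series structure function is monotone, each box of focal intervals maps to an interval of system reliabilities. The focal elements, masses, and threshold are assumed for illustration and are not taken from the article.

```python
from itertools import product

# Assumed mass functions on component reliability: focal intervals with real-valued masses
component_bbas = [
    {(0.90, 0.98): 0.7, (0.70, 0.90): 0.3},   # component 1
    {(0.85, 0.95): 0.6, (0.60, 0.80): 0.4},   # component 2
]

def series_reliability(r):
    """Series structure function; monotone, so interval end-points map to end-points."""
    out = 1.0
    for ri in r:
        out *= ri
    return out

# Propagate each combination of focal intervals; independence gives product masses
system_bba = {}
for combo in product(*[bba.items() for bba in component_bbas]):
    intervals, masses = zip(*combo)
    lo = series_reliability([iv[0] for iv in intervals])
    hi = series_reliability([iv[1] for iv in intervals])
    mass = 1.0
    for m in masses:
        mass *= m
    system_bba[(lo, hi)] = system_bba.get((lo, hi), 0.0) + mass

# Belief and plausibility that the system reliability exceeds a threshold
threshold = 0.75
belief = sum(m for (lo, hi), m in system_bba.items() if lo >= threshold)
plausibility = sum(m for (lo, hi), m in system_bba.items() if hi >= threshold)
print(f"Bel(R_sys >= {threshold}) = {belief:.2f}, Pl(R_sys >= {threshold}) = {plausibility:.2f}")
```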


2012, Vol. 548, pp. 489-494
Author(s): Zhao Jun Yang, Wei Wang, Fei Chen, Kai Wang, Xiao Bing Li, ...

Using information entropy theory, a solution for the small-sample Weibull prior distribution of system reliability is proposed, aimed at the reliability estimation of high-end CNC machine tools. First, the prior information is converted from the subsystem level to the system level based on entropy theory. Then, the prior distribution is solved with a constrained maximum entropy method. Finally, multiple sources of information are fused based on entropy weights. A case example shows that this method can effectively obtain the prior distribution under Weibull small-sample conditions.
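
The constrained maximum entropy step can be illustrated generically (this is not the paper's Weibull-specific construction): on a discretized reliability support, the maximum-entropy density subject to a mean constraint has an exponential-family form, and its Lagrange multiplier can be found by one-dimensional root finding. The support grid and target mean below are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

# Discretized support for the system reliability prior
r = np.linspace(0.001, 0.999, 999)

# Mean constraint assumed to be carried over from subsystem-level information
target_mean = 0.92

def mean_given_lambda(lmbda):
    """Mean of the max-entropy density p(r) proportional to exp(lmbda * r)."""
    w = np.exp(lmbda * (r - r.mean()))   # centering only for numerical stability
    w /= w.sum()
    return np.sum(w * r)

# Solve for the Lagrange multiplier that satisfies the mean constraint
lmbda = brentq(lambda l: mean_given_lambda(l) - target_mean, -200.0, 200.0)
prior = np.exp(lmbda * (r - r.mean()))
prior /= prior.sum()

print(f"lambda = {lmbda:.2f}, prior mean = {np.sum(prior * r):.4f}")
```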


2019, Vol. 142 (3)
Author(s): Kassem Moustafa, Zhen Hu, Zissimos P. Mourelatos, Igor Baseski, Monica Majcher

Accelerated life testing (ALT) has been widely used to accelerate the product reliability assessment process by testing a product at higher-than-nominal stress conditions. For a system with multiple components, tests can be performed at the component level or the system level. The data at these two levels require different amounts of resources to collect and carry different values of information for system reliability assessment. Even though component-level tests are cheap to perform, they cannot account for the correlations between the failure time distributions of different components. While system-level tests naturally account for the complicated dependence between component failure time distributions, the required testing effort is much higher than that of component-level tests. This research proposes a novel resource allocation framework for ALT-based system reliability assessment. A physics-informed load model is first employed to bridge the gap between component-level tests and system-level tests. An optimization framework is then developed to effectively allocate testing resources to the different types of tests. The fusion of component-level and system-level test information allows the system reliability to be estimated accurately with a minimized requirement on testing resources. Results of two numerical examples demonstrate the effectiveness of the proposed framework.
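
The abstract does not spell out the physics-informed load model, but the basic ALT arithmetic it relies on can be shown with the standard Arrhenius relation for translating elevated-temperature failure times to the use condition. The activation energy, temperatures, and failure times below are assumed values for illustration.

```python
import numpy as np

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(e_a, t_use_c, t_stress_c):
    """Arrhenius acceleration factor between a stress temperature and a use temperature."""
    t_use, t_stress = t_use_c + 273.15, t_stress_c + 273.15
    return np.exp(e_a / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

# Assumed ALT data: failure times (hours) observed at 125 degC
failure_times_stress = np.array([310.0, 480.0, 650.0, 900.0])

af = arrhenius_af(e_a=0.7, t_use_c=40.0, t_stress_c=125.0)
failure_times_use = failure_times_stress * af   # equivalent times at the use condition

print(f"acceleration factor: {af:.1f}")
print("equivalent use-condition failure times (h):", np.round(failure_times_use, 0))
```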

