High-Dimensional Reliability Analysis of Engineered Systems Involving Computationally Expensive Black-Box Simulations


2018 ◽  
Vol 140 (7) ◽  
Author(s):  
Mohammad Kazem Sadoughi ◽  
Meng Li ◽  
Chao Hu ◽  
Cameron A. MacKenzie ◽  
Soobum Lee ◽  
...  

Reliability analysis involving high-dimensional, computationally expensive, highly nonlinear performance functions is a notoriously challenging problem in simulation-based design under uncertainty. In this paper, we tackle this problem by proposing a new method, high-dimensional reliability analysis (HDRA), in which a surrogate model is built to approximate a performance function that is high dimensional, computationally expensive, implicit, and unknown to the user. HDRA first employs the adaptive univariate dimension reduction (AUDR) method to construct a global surrogate model by adaptively tracking the important dimensions or regions. Then, the sequential exploration–exploitation with dynamic trade-off (SEEDT) method is utilized to locally refine the surrogate model by identifying additional sample points that are close to the critical region (i.e., the limit-state function (LSF)) with high prediction uncertainty. The HDRA method has three advantages: (i) alleviating the curse of dimensionality and adaptively detecting important dimensions; (ii) capturing the interactive effects among variables on the performance function; and (iii) flexibility in choosing the locations of sample points. The performance of the proposed method is tested through three mathematical examples and a real-world problem, the results of which suggest that the method can achieve an accurate and computationally efficient estimation of reliability even when the performance function exhibits high dimensionality, high nonlinearity, and strong interactions among variables.
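
The univariate dimension reduction at the heart of AUDR approximates a d-dimensional function by a sum of one-dimensional cuts through a reference point. A minimal sketch of the plain (non-adaptive) UDR decomposition, with the test functions and reference point chosen purely for illustration:

```python
def udr_approx(g, mu, x):
    """Univariate dimension reduction:
    g(x) ~ sum_i g(mu_1, ..., x_i, ..., mu_d) - (d - 1) * g(mu)."""
    d = len(mu)
    base = g(mu)
    total = 0.0
    for i in range(d):
        cut = list(mu)
        cut[i] = x[i]          # vary one coordinate at a time
        total += g(cut)
    return total - (d - 1) * base

# Exact for additive functions ...
g_add = lambda v: sum(vi * vi for vi in v)
print(udr_approx(g_add, [0.0, 0.0, 0.0], [1.0, 2.0, 3.0]))  # 14.0, matches g exactly

# ... but blind to interaction terms, which is one reason HDRA adds the
# SEEDT local-refinement stage on top of the global UDR surrogate
g_int = lambda v: v[0] * v[1]
print(udr_approx(g_int, [0.0, 0.0], [1.0, 1.0]))  # 0.0, true value is 1.0
```

The second call shows the limitation the abstract alludes to: a purely univariate decomposition misses interactive effects among variables.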


2016 ◽  
Vol 138 (12) ◽  
Author(s):  
Zhifu Zhu ◽  
Xiaoping Du

Reliability analysis is time consuming, and high efficiency can be maintained by integrating the Kriging method with Monte Carlo simulation (MCS). This Kriging-based MCS reduces the computational cost by building a surrogate model that replaces the original limit-state function during MCS. The objective of this research is to further improve the efficiency of reliability analysis with a new strategy for building the surrogate model. The major approach used in this research is to refine (update) the surrogate model by accounting for the full information available from the Kriging method, whereas the existing Kriging-based MCS uses only partial information. Higher efficiency is achieved through the following strategies: (1) a new formulation defined by the expectation of the probability of failure at all the MCS sample points; (2) the use of a new learning function to choose training points (TPs), which accounts for dependencies between Kriging predictions at all the MCS samples, thereby resulting in more effective TPs; and (3) the employment of a new convergence criterion. The new method is suitable for highly nonlinear limit-state functions for which the traditional first- and second-order reliability methods (FORM and SORM) are not accurate. Its performance is compared with that of the existing Kriging-based MCS method through five examples.
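
The paper's new learning function accounts for dependencies between Kriging predictions; as context, the classical AK-MCS-style U function that such methods build on can be sketched as follows (the names and candidate values are illustrative, not from the paper):

```python
def u_function(mu, sigma):
    # U(x) = |mu(x)| / sigma(x): a small U means the Kriging prediction is
    # both close to the limit state (g = 0) and still uncertain
    return [abs(m) / s for m, s in zip(mu, sigma)]

def pick_training_point(mu, sigma):
    # next training point = MCS candidate with the smallest U value
    u = u_function(mu, sigma)
    return min(range(len(u)), key=u.__getitem__)

# Kriging means / standard deviations at four MCS candidates (illustrative numbers)
mu = [3.0, 0.2, -2.5, 0.05]
sigma = [0.1, 0.4, 0.2, 0.5]
print(pick_training_point(mu, sigma))     # 3: nearly on the limit state, uncertain
pf = sum(m <= 0.0 for m in mu) / len(mu)  # surrogate-based MCS estimate: 0.25
```

The loop in such methods alternates between evaluating the true limit-state function at the chosen candidate, retraining the Kriging model, and stopping once a convergence criterion on the U values (or, here, the paper's new criterion) is met.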


2021 ◽  
Author(s):  
Silvia J. Sarmiento Nova ◽  
Jaime Gonzalez-Libreros ◽  
Gabriel Sas ◽  
Rafael A. Sanabria Díaz ◽  
Maria C. A. Texeira da Silva ◽  
...  

The Response Surface Method (RSM) has become an essential tool for solving structural reliability problems due to its accuracy, efficiency, and ease of coupling with Nonlinear Finite Element Analysis (NLFEA). In this paper, some strategies to improve the efficiency of the RSM without compromising its accuracy are tested. Initially, each strategy is implemented to assess the safety level of a highly nonlinear explicit limit state function. The strategy with the best results is then identified and used to carry out a reliability analysis of a prestressed concrete bridge, considering the nonlinear material behavior through NLFEA simulation. The calculated value of β is compared with the target value established in the Eurocode for the ultimate limit state (ULS). The results show how the RSM can be a practical methodology and how the improvements presented can reduce the computational cost of a traditional RSM, making it a good alternative to simulation methods such as Monte Carlo.
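
The basic RSM workflow (fit a cheap polynomial surface to a few evaluations of the expensive limit state, then estimate β from Monte Carlo on the surface) can be sketched in one dimension; the limit state, design points, and sample size below are illustrative, and the fit is exact here only because the toy g is linear:

```python
import random
from statistics import NormalDist

def quadratic_rs(pts, vals):
    # fit g(x) ~ a + b*x + c*x^2 through three design points (interpolation);
    # the Lagrange form keeps the sketch short
    (x0, x1, x2), (y0, y1, y2) = pts, vals
    def rs(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return rs

g = lambda x: 3.0 - x                  # toy limit state with beta = 3 exactly
rs = quadratic_rs((-1.0, 0.0, 1.0), (g(-1.0), g(0.0), g(1.0)))

rng = random.Random(0)
n = 200_000
pf = sum(rs(rng.gauss(0.0, 1.0)) <= 0.0 for _ in range(n)) / n
beta = -NormalDist().inv_cdf(pf)       # reliability index from the MCS estimate
print(round(beta, 2))                  # close to 3
```

In the paper the surface is refitted iteratively around the design point and the expensive g comes from NLFEA; this sketch only shows the surrogate-plus-simulation structure.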


Author(s):  
Zequn Wang ◽  
Mingyang Li

Abstract Conventional uncertainty quantification methods usually lack the capability of dealing with high-dimensional problems due to the curse of dimensionality. This paper presents a semi-supervised learning framework for dimension reduction and reliability analysis. An autoencoder is first adopted for mapping the high-dimensional space into a low-dimensional latent space, which contains a distinguishable failure surface. Then a deep feedforward neural network (DFN) is utilized to learn the mapping relationship and reconstruct the latent space, while the Gaussian process (GP) modeling technique is used to build the surrogate model of the transformed limit state function. During the training process of the DFN, the discrepancy between the actual and reconstructed latent spaces is minimized through semi-supervised learning to ensure accuracy. Both labeled and unlabeled samples are utilized to define the loss function of the DFN. An evolutionary algorithm is adopted to train the DFN, and the Monte Carlo simulation method is then used for uncertainty quantification and reliability analysis based on the proposed framework. The effectiveness is demonstrated through a mathematical example.
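
The combined use of labeled and unlabeled samples can be sketched as a two-term loss: a supervised misfit on the labeled samples plus an unsupervised discrepancy between the autoencoder latent space and its DFN reconstruction. The weighting and names below are illustrative assumptions, not the authors' exact formulation:

```python
def semi_supervised_loss(pred, y, z_latent, z_recon, lam=1.0):
    # supervised term: squared prediction error on labeled samples
    sup = sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)
    # unsupervised term: discrepancy between the autoencoder latent vectors
    # and the DFN-reconstructed latent vectors, over all (unlabeled) samples
    unsup = sum(sum((a - b) ** 2 for a, b in zip(za, zb))
                for za, zb in zip(z_latent, z_recon)) / len(z_latent)
    return sup + lam * unsup

# perfect predictions and perfect latent reconstruction give zero loss
print(semi_supervised_loss([1.0, -0.5], [1.0, -0.5], [[0.2, 0.1]], [[0.2, 0.1]]))  # 0.0
```

A gradient-free optimizer such as the evolutionary algorithm mentioned in the abstract would minimize this scalar over the DFN weights.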


Author(s):  
Debiao Meng ◽  
Hong-Zhong Huang ◽  
Huanwei Xu ◽  
Xiaoling Zhang ◽  
Yan-Feng Li

In Reliability-Based Multidisciplinary Design Optimization (RBMDO), saddlepoint approximation has been utilized to improve reliability evaluation accuracy while sustaining high efficiency. However, it requires not only that the involved random variables be tractable, but also that a saddlepoint can be obtained easily by solving the so-called saddlepoint equation. In practical engineering, a random variable may be intractable, or it may be difficult to solve a highly nonlinear saddlepoint equation with a complicated Cumulant Generating Function (CGF). To deal with these challenges, an efficient RBMDO method using Third-Moment Saddlepoint Approximation (TMSA) is proposed in this study. TMSA can easily construct a concise CGF using the first three statistical moments of a limit state function and then approximate the probability density function and cumulative distribution function of the limit state function using this concise CGF. To further improve the efficiency of RBMDO, a Sequential Optimization and Reliability Assessment (SORA) strategy is also utilized, and a formulation of RBMDO using TMSA within the SORA framework is proposed. Two examples are given to show the effectiveness of the proposed method.
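
A three-cumulant CGF is simple enough that the saddlepoint equation becomes a quadratic. The sketch below builds K(t) = κ₁t + κ₂t²/2 + κ₃t³/6 from the first three cumulants and converts the saddlepoint into a CDF value via the standard Lugannani–Rice tail formula, which is used here as a generic saddlepoint-to-CDF step and may differ in detail from the authors' exact expressions:

```python
import math
from statistics import NormalDist

def tmsa_cdf(y, k1, k2, k3):
    """P(Y <= y) from the first three cumulants via K(t) = k1 t + k2 t^2/2 + k3 t^3/6."""
    nd = NormalDist()
    if abs(k3) < 1e-12:                        # normal case: K'(t) = y is linear
        t = (y - k1) / k2
    else:                                      # solve the quadratic K'(t) = y,
        r = math.sqrt(k2 * k2 - 2.0 * k3 * (k1 - y))
        t = (-k2 + r) / k3                     # taking the root with K''(t) = k2 + k3 t > 0
    if abs(t) < 1e-9:
        return 0.5                             # y at the mean: the saddlepoint degenerates
    K = k1 * t + k2 * t * t / 2.0 + k3 * t ** 3 / 6.0
    w = math.copysign(math.sqrt(2.0 * (t * y - K)), t)
    v = t * math.sqrt(k2 + k3 * t)
    return nd.cdf(w) + nd.pdf(w) * (1.0 / w - 1.0 / v)  # Lugannani-Rice

# with zero third cumulant this reproduces the exact normal CDF
print(round(tmsa_cdf(0.0, 1.0, 1.0, 0.0), 4))  # 0.1587, i.e. Phi(-1)
```

Evaluating this CDF at y = 0 gives the failure probability of the limit state function directly from its three moments.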


Author(s):  
Tong Zou ◽  
Zissimos P. Mourelatos ◽  
Sankaran Mahadevan ◽  
Jian Tu

Reliability analysis methods are commonly used in engineering design, in order to meet reliability and quality measures. An accurate and efficient computational method is presented for reliability analysis of engineering systems at both the component and system levels. The method can easily handle implicit, highly nonlinear limit-state functions, with correlated or non-correlated random variables, which are described by any probabilistic distribution. It is based on a constructed response surface of an indicator function, which determines the “failure” and “safe” regions, according to the performance function. A Monte Carlo simulation (MCS) calculates the probability of failure based on a response surface of the indicator function, instead of the computationally expensive limit-state function. The Cross-Validated Moving Least Squares (CVMLS) method is used to construct the response surface of the indicator function, based on an Optimum Symmetric Latin Hypercube (OSLH) sampling technique. A number of numerical examples highlight the superior accuracy and efficiency of the proposed method over commonly used reliability methods.
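
As a stand-in for the CVMLS response surface of the indicator function (and for the OSLH design), the sketch below uses a nearest-neighbour rule over a plain grid; it only illustrates the core idea of running MCS against a cheap 0/1 classifier instead of the expensive limit-state function:

```python
def nn_indicator(x, design, labels):
    # 1-nearest-neighbour surrogate of the indicator: 0 = "safe", 1 = "failure"
    j = min(range(len(design)),
            key=lambda i: sum((a - b) ** 2 for a, b in zip(design[i], x)))
    return labels[j]

def mcs_pf(samples, design, labels):
    # MCS evaluates the cheap indicator surrogate, never the true g
    return sum(nn_indicator(x, design, labels) for x in samples) / len(samples)

# toy 1-D limit state g(x) = 1.5 - x: failure (label 1) where x >= 1.5
design = [[0.0], [1.0], [2.0], [3.0]]
labels = [0, 0, 1, 1]
print(mcs_pf([[0.2], [2.8], [1.0], [2.0]], design, labels))  # 0.5
```

The accuracy then hinges on how well the classifier's decision boundary tracks the true limit state, which is what the CVMLS fit and OSLH sampling in the paper are designed to control.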


Author(s):  
Hyeongjin Song ◽  
K. K. Choi ◽  
Ikjin Lee ◽  
Liang Zhao ◽  
David Lamb

In this study, an efficient classification methodology is developed for reliability analysis while maintaining an accuracy level similar to or better than that of existing response surface methods. Sampling-based reliability analysis requires only the classification information (a success or a failure), but the response surface methods provide real function values as their output, which requires more computational effort. The problem becomes even more challenging for high-dimensional problems due to the curse of dimensionality. In the newly proposed virtual support vector machine (VSVM), virtual samples are generated near the limit state function by using linear or Kriging-based approximations. The exact function values are used for approximations of virtual samples to improve the accuracy of the resulting VSVM decision function. By introducing the virtual samples, VSVM can overcome the deficiency of existing classification methods, where only classified function values are used as their input. The universal Kriging method is used to obtain virtual samples to improve the accuracy of the decision function for highly nonlinear problems. A sequential sampling strategy that chooses a new sample near the true limit state function is integrated with VSVM to maximize the accuracy. Examples show that the proposed adaptive VSVM yields better efficiency in terms of modeling time and the number of required samples while maintaining a similar or better level of accuracy, especially for high-dimensional problems.
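
The idea of placing virtual samples on the limit state can be illustrated with a simple bisection between one safe and one failed sample; the paper instead builds virtual samples from linear or Kriging-based approximations, so this is only a simplified stand-in:

```python
def virtual_sample(g, x_safe, x_fail, tol=1e-4):
    # bisect the segment between a safe point (g > 0) and a failed point (g <= 0)
    # until the midpoint sits on the limit state g = 0 within tol
    a, b = 0.0, 1.0
    while b - a > tol:
        m = 0.5 * (a + b)
        xm = [xs + m * (xf - xs) for xs, xf in zip(x_safe, x_fail)]
        if g(xm) <= 0.0:
            b = m        # midpoint failed: the limit state is closer to x_safe
        else:
            a = m
    m = 0.5 * (a + b)
    return [xs + m * (xf - xs) for xs, xf in zip(x_safe, x_fail)]

g = lambda x: 2.0 - x[0]                    # limit state at x = 2
print(virtual_sample(g, [0.0], [4.0]))      # approximately [2.0]
```

Feeding such near-boundary points to the classifier sharpens the decision function exactly where the classification matters, which is the deficiency of plain classified-data training that VSVM addresses.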


2015 ◽  
Vol 2015 ◽  
pp. 1-14 ◽  
Author(s):  
Yu Wang ◽  
Xiongqing Yu ◽  
Xiaoping Du

A new reliability-based design optimization (RBDO) method based on support vector machines (SVM) and the Most Probable Point (MPP) is proposed in this work. SVM is used to create a surrogate model of the limit-state function at the MPP with the gradient information in the reliability analysis. This guarantees that the surrogate model not only passes through the MPP but also is tangent to the limit-state function at the MPP. Then, importance sampling (IS) is used to calculate the probability of failure based on the surrogate model. This treatment significantly improves the accuracy of reliability analysis. For RBDO, the Sequential Optimization and Reliability Assessment (SORA) method is employed as well, which decouples deterministic optimization from the reliability analysis. The improved SVM-based reliability analysis is used to correct the error from the linear approximation of the limit-state function in SORA. A mathematical example and a simplified aircraft wing design demonstrate that the improved SVM-based reliability analysis is more accurate than FORM and requires fewer training points than the Monte Carlo simulation, and that the proposed optimization strategy is efficient.
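
Importance sampling centred at the MPP in standard normal space can be sketched as below (the limit state, MPP, and sample size are illustrative): each failed sample is weighted by the likelihood ratio between the original density and the density shifted to the MPP.

```python
import math
import random

def importance_sampling_pf(g, u_star, n=50_000, seed=1):
    # sample from N(u*, I) and reweight by phi(u) / phi(u - u*)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = [us + rng.gauss(0.0, 1.0) for us in u_star]
        if g(u) <= 0.0:
            log_w = sum(-0.5 * ui * ui + 0.5 * (ui - us) ** 2
                        for ui, us in zip(u, u_star))
            total += math.exp(log_w)
    return total / n

g = lambda u: 3.0 - u[0]                 # linear limit state; exact pf = Phi(-3)
pf = importance_sampling_pf(g, [3.0])    # the MPP of this g is u* = (3,)
print(pf)                                # about 1.35e-3
```

Because roughly half of the shifted samples land in the failure region, the estimator's variance is far smaller than that of crude MCS at the same sample size, which is why IS on the SVM surrogate is affordable inside an RBDO loop.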


Author(s):  
Linxiong Hong ◽  
Huacong Li ◽  
Kai Peng ◽  
Hongliang Xiao

To address the implicit and highly nonlinear limit state functions encountered in the reliability analysis of mechanical products, a reliability analysis method for mechanical structures based on the Kriging model and an improved EGO active learning strategy is proposed. Because the traditional EGO method cannot effectively select points in the limit state surface region, an improved EGO method is proposed. By taking the absolute values of the model's predicted responses at the sample points and assuming that the distribution of the response values remains unchanged, the active learning point selection is refocused on points with larger prediction variance or points close to the limit state surface. Three examples show that, compared with the classical active learning method, the proposed method has good global and local search ability and can estimate the failure probability accurately with fewer evaluations of the limit state function.
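
One way to read the absolute-value treatment is to apply the standard EGO expected-improvement formula to |ĝ| while keeping the original prediction variance; the sketch below follows that reading and is an assumption about the criterion, not the authors' exact formulation:

```python
from statistics import NormalDist

def ei_on_abs(best_abs, mu, sigma):
    # expected improvement toward minimising |g_hat|, with sigma left unchanged;
    # best_abs is the smallest |prediction| observed so far
    nd = NormalDist()
    z = (best_abs - abs(mu)) / sigma
    return (best_abs - abs(mu)) * nd.cdf(z) + sigma * nd.pdf(z)

# a candidate near the limit state with large variance beats a far,
# confidently-predicted one
near = ei_on_abs(1.0, 0.1, 0.5)
far = ei_on_abs(1.0, 2.0, 0.1)
print(near > far)   # True
```

Under this criterion the maximiser is always a point that is either close to |ĝ| = 0 or highly uncertain, which matches the selection behaviour the abstract describes.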


2021 ◽  
Vol 144 (3) ◽  
Author(s):  
Dequan Zhang ◽  
Yunfei Liang ◽  
Lixiong Cao ◽  
Jie Liu ◽  
Xu Han

Abstract It is generally understood that the intractable computational burden of repeatedly calling the performance function when evaluating the contribution of joint focal elements hinders the application of evidence theory in practical engineering. To promote the practicability of evidence theory for the reliability evaluation of engineering structures, an efficient reliability analysis method based on an active learning Kriging model is proposed in this study. To start with, a basic variable is selected according to the basic probability assignment (BPA) of the evidence variables to divide the evidence space into sub-evidence spaces. Intersection points between the performance function and the sub-evidence spaces are then determined by solving a univariate root-finding problem. Additional sample points are randomly identified to enhance the accuracy of the subsequently established surrogate model. An initial Kriging model with high approximation accuracy is then established from these intersection points and the additional sample points generated by Latin hypercube sampling. An active learning function is employed to sequentially refine the Kriging model with minimal sample points. As a result, the belief (Bel) and plausibility (Pl) measures are derived efficiently via the surrogate model in the evidence-theory-based reliability analysis. The proposed method is exemplified with three numerical examples to demonstrate its efficiency and is then applied to the reliability analysis of positioning accuracy for an industrial robot.
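
Once a cheap surrogate of the performance function is available, Bel and Pl of the failure event are simple sums over focal elements. The sketch below checks each interval-box focal element at its vertices, which bounds the response exactly only for monotone functions (an assumption made for brevity); the function and BPA structure are illustrative:

```python
from itertools import product

def bel_pl_of_failure(g, focal_elements):
    # focal_elements: list of (box, mass), box = [(lo, hi), ...] per variable
    bel = pl = 0.0
    for box, mass in focal_elements:
        vals = [g(v) for v in product(*box)]   # vertex enumeration
        if max(vals) <= 0.0:
            bel += mass    # the whole focal element lies in the failure region
        if min(vals) <= 0.0:
            pl += mass     # at least part of it does
    return bel, pl

g = lambda v: v[0] + v[1] - 3.0             # failure when x1 + x2 <= 3
focal = [([(0.0, 1.0), (0.0, 1.0)], 0.5),   # entirely failed
         ([(0.0, 4.0), (0.0, 4.0)], 0.25),  # partially failed
         ([(3.0, 4.0), (3.0, 4.0)], 0.25)]  # entirely safe
print(bel_pl_of_failure(g, focal))          # (0.5, 0.75)
```

The gap Pl − Bel reflects the epistemic uncertainty in the BPA; the expensive part in practice is the repeated evaluation of g over every focal element, which is exactly what the paper's Kriging surrogate replaces.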

