A reliability analysis method based on analytical expressions of the first four moments of the surrogate model of the performance function

2018 ◽  
Vol 111 ◽  
pp. 47-67 ◽  
Author(s):  
Yan Shi ◽  
Zhenzhou Lu ◽  
Siyu Chen ◽  
Liyang Xu


Author(s):
Yanjie Xiao ◽  
Xun'an Zhang ◽  
Ronggang Xue

The seismic reliability calculation of complex building structures requires extensive simulation analysis, so its computational cost is high. Fitting the performance function with a surrogate model can improve computational efficiency, but ensuring calculation accuracy while improving the efficiency of reliability analysis for engineering structures remains a problem worthy of study. This paper proposes a Kriging-based reliability analysis method, which establishes the Kriging surrogate model with fewer evaluations of the performance function, improves the accuracy of the surrogate model through infill sampling, and obtains the approximate failure probability in combination with Monte Carlo simulation. Two numerical examples are analyzed; the results show that the method is efficient and accurate. The method is then applied to the seismic reliability calculation of a mega-sub controlled structural system, in which the randomness of both the structure and the seismic action is considered. The application results show that it is an effective method for the reliability analysis of complex building structures.
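The final step the abstract describes, estimating the failure probability by Monte Carlo simulation on a cheap surrogate, can be sketched as follows. The Kriging predictor itself is stood in for by a hypothetical closed-form surrogate `g_hat`; all names and the two-variable limit state are illustrative, not taken from the paper.

```python
import math
import random

def mcs_failure_probability(g_hat, sample_x, n=200_000, seed=0):
    """Estimate P[g(X) <= 0] by Monte Carlo sampling on a cheap surrogate."""
    rng = random.Random(seed)
    failures = sum(1 for _ in range(n) if g_hat(sample_x(rng)) <= 0)
    return failures / n

# Hypothetical surrogate standing in for the Kriging predictor.
def g_hat(x):
    return 3.0 - x[0] - x[1]

def sample_x(rng):
    # Two independent standard normal inputs (an assumption of this sketch).
    return (rng.gauss(0, 1), rng.gauss(0, 1))

pf = mcs_failure_probability(g_hat, sample_x)
```

Because the surrogate is cheap to evaluate, the sample size `n` can be made large enough for the Monte Carlo estimate to stabilize, which is the efficiency gain the abstract refers to.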


2021 ◽  
Vol 144 (3) ◽  
Author(s):  
Dequan Zhang ◽  
Yunfei Liang ◽  
Lixiong Cao ◽  
Jie Liu ◽  
Xu Han

Abstract It is generally understood that the intractable computational cost of repeatedly calling the performance function when evaluating the contribution of joint focal elements hinders the application of evidence theory in practical engineering. To promote the practicability of evidence theory for the reliability evaluation of engineering structures, an efficient reliability analysis method based on an active learning Kriging model is proposed in this study. To start with, a basic variable is selected according to the basic probability assignment (BPA) of the evidence variables to divide the evidence space into sub-evidence spaces. Intersection points between the performance function and the sub-evidence spaces are then determined by solving a univariate root-finding problem. Additional sample points are randomly identified to enhance the accuracy of the subsequently established surrogate model. An initial Kriging model with high approximation accuracy is then established from these intersection points and the additional sample points generated by Latin hypercube sampling. An active learning function is employed to sequentially refine the Kriging model with minimal sample points. As a result, the belief (Bel) and plausibility (Pl) measures are derived efficiently via the surrogate model in the evidence-theory-based reliability analysis. The proposed method is exemplified with three numerical examples to demonstrate its efficiency and is applied to the reliability analysis of positioning accuracy for an industrial robot.
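The Bel and Pl measures over joint focal elements can be sketched as below. The Kriging surrogate, the sub-evidence-space partitioning, and the active learning loop are all omitted; the assumption that the extrema of the performance function over each focal box occur at its corners is ours (reasonable only for monotone functions), and the focal elements shown are hypothetical.

```python
from itertools import product

def bel_pl_failure(g, focal_elements):
    """Bel and Pl of the failure event {g <= 0} for interval focal elements.

    focal_elements: list of (box, mass), box = [(lo, hi), ...] per variable.
    Extrema of g over each box are bounded by corner evaluation, which
    presumes monotonicity -- an assumption of this sketch.
    """
    bel = pl = 0.0
    for box, mass in focal_elements:
        corner_vals = [g(c) for c in product(*box)]
        if max(corner_vals) <= 0:   # box lies entirely in the failure region
            bel += mass
        if min(corner_vals) <= 0:   # box intersects the failure region
            pl += mass
    return bel, pl

# Hypothetical example: g(x1, x2) = x1 + x2 - 4 with two evidence variables.
g = lambda x: x[0] + x[1] - 4
fe = [([(0, 1), (0, 1)], 0.5),   # g < 0 over the whole box
      ([(1, 3), (1, 3)], 0.3),   # g changes sign inside the box
      ([(3, 4), (3, 4)], 0.2)]   # g > 0 over the whole box
bel, pl = bel_pl_failure(g, fe)
```

The gap between Bel and Pl (here 0.5 vs 0.8) comes from focal elements that straddle the limit state, which is exactly where the paper's active learning concentrates its refinement.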


2012 ◽  
Vol 249-250 ◽  
pp. 589-595
Author(s):  
Feng Yi Lu ◽  
Jin Jin Gao ◽  
Rui Gang Yang ◽  
Ge Ning Xu

The telescopic boom of a truck crane has load-bearing and luffing functions. It requires not only high carrying capacity but also high reliability under various working conditions. To scientifically evaluate the reliability of the telescopic boom structure, the stochastic finite element method is used to calculate the failure probability of the structural performance function. Taking the telescopic boom structure of a 50 t truck crane as a practical engineering example, the feasibility and practicability of the method are verified.
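As one minimal illustration of how a probability is obtained from a structural performance function, the mean-value first-order second-moment (MVFOSM) reliability index, a common companion to stochastic finite element analysis, can be sketched as follows. The paper's actual stochastic FEM formulation may differ, and the limit state and numbers below are hypothetical.

```python
import math

def mvfosm_beta(g, mu, sigma, h=1e-6):
    """Mean-value first-order second-moment reliability index.

    Linearizes g at the mean point via finite-difference gradients and
    propagates the input standard deviations; beta = mu_g / sigma_g.
    """
    g0 = g(mu)
    grads = []
    for i in range(len(mu)):
        x = list(mu)
        x[i] += h
        grads.append((g(x) - g0) / h)
    var_g = sum((dg * s) ** 2 for dg, s in zip(grads, sigma))
    return g0 / math.sqrt(var_g)

# Hypothetical limit state: material strength minus amplified load effect.
g = lambda x: x[0] - 1.5 * x[1]       # x0: strength, x1: load effect
beta = mvfosm_beta(g, mu=[300.0, 120.0], sigma=[24.0, 18.0])
```

The failure probability then follows from the standard normal CDF as `Phi(-beta)`; a larger index means a more reliable boom configuration.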


Author(s):  
Ungki Lee ◽  
Ikjin Lee

Abstract Reliability analysis that evaluates a probabilistic constraint is an important part of reliability-based design optimization (RBDO). Inverse reliability analysis evaluates the percentile value of the performance function that satisfies the reliability. To compute the percentile value, analytical methods, surrogate model based methods, and sampling-based methods are commonly used. When the dimension or nonlinearity of the performance function is high, sampling-based methods such as Monte Carlo simulation, Latin hypercube sampling, and importance sampling can be directly used for reliability analysis, since no analytical formulation or surrogate model is required in these methods. The sampling-based methods have high accuracy but require a large number of samples, which can be very time-consuming. Therefore, this paper proposes methods that can improve the accuracy of reliability analysis when the number of samples is insufficient yet sampling-based methods remain the better candidates. This study starts with the idea of training the relationship between the realization of the performance function at a small sample size and the corresponding true percentile value of the performance function. A deep feedforward neural network (DFNN), a promising artificial neural network model that approximates high-dimensional models using deep layered structures, is trained using the realization of various performance functions at a small sample size and the corresponding true percentile values as input and target training data, respectively. In this study, various polynomial functions and random variables are used to create training data sets consisting of various realizations and corresponding true percentile values.
A method is also presented that approximates the realization of the performance function through kernel density estimation and trains the DFNN with discrete points representing the shape of the kernel density, thereby reducing the dimension of the training input data. Along with the proposed reliability analysis methods, a strategy that reuses samples of the previous design point to enhance the efficiency of the percentile value estimation is explained. The results show that the reliability analysis using the DFNN is more accurate than the method using only samples. In addition, compared to the method that trains the DFNN using the raw realization of the performance function, the method that trains the DFNN with the discrete points representing the shape of the kernel distribution improves the accuracy of reliability analysis and reduces the training time. It is also verified that the proposed sample reuse strategy reduces the burden of function evaluation at the new design point by reusing samples of the previous design point when the design point changes during RBDO.
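The kernel-density step described above, turning a small set of performance-function realizations into a fixed-length vector of discrete density values suitable as network input, might look roughly like this. The grid size, the Silverman bandwidth rule, and all names are assumptions of this sketch, not details from the paper.

```python
import math
import random

def kde_features(samples, grid, bandwidth=None):
    """Evaluate a Gaussian kernel density estimate of the samples on a fixed
    grid, producing a fixed-length feature vector for a neural network."""
    n = len(samples)
    if bandwidth is None:
        mean = sum(samples) / n
        std = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
        bandwidth = 1.06 * std * n ** -0.2    # Silverman's rule of thumb
    c = 1.0 / (n * bandwidth * math.sqrt(2 * math.pi))
    return [c * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                    for s in samples) for x in grid]

# Hypothetical small-sample realization of a performance function.
rng = random.Random(1)
samples = [rng.gauss(0, 1) for _ in range(200)]
grid = [-3 + 6 * i / 49 for i in range(50)]   # 50 fixed points on [-3, 3]
features = kde_features(samples, grid)
```

Whatever the raw sample size, the network always sees the same 50 inputs, which is the dimension-reduction benefit the abstract claims for this variant.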


Author(s):  
Qian Wang ◽  
Jun Ji

Metamodeling methods provide useful tools to replace expensive numerical simulations in engineering reliability analysis and design optimization. Radial basis functions (RBFs) and augmented RBFs can be used to create accurate metamodels; they can therefore be integrated with a reliability analysis method such as Monte Carlo simulation (MCS). However, the accuracy of an RBF model depends on the sample size and generally increases as the sample size increases. Since the optimal sample size for creating RBF metamodels is not known before the models are created, a sequential RBF metamodeling method was studied. In each iteration of the reliability analysis, augmented RBFs were used to generate metamodels of a limit-state or performance function, and the failure probability was calculated using MCS. Additional samples were generated in subsequent iterations to improve the metamodel accuracy. Numerical examples from the literature were solved, and the failure probabilities based on the RBF metamodels were found to have good accuracy. In addition, only a small number of iterations was required for the reliability analysis to converge. The proposed method based on sequential RBF metamodels is useful for the probabilistic analysis of practical engineering systems.
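A minimal one-dimensional version of the augmented RBF fit described above might look as follows, using a multiquadric kernel with a linear polynomial tail. The sequential infill loop and the MCS step are omitted, and the shape parameter, node locations, and stand-in performance function are arbitrary choices of this sketch.

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting, for the small dense
    systems that arise when fitting an RBF metamodel."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_augmented_rbf(xs, ys, c=1.0):
    """Multiquadric RBF interpolant augmented with a linear polynomial tail."""
    phi = lambda r: math.sqrt(r * r + c * c)
    n = len(xs)
    A = [[phi(abs(xs[i] - xs[j])) for j in range(n)] + [1.0, xs[i]]
         for i in range(n)]
    A += [[1.0] * n + [0.0, 0.0], list(xs) + [0.0, 0.0]]  # poly constraints
    coef = solve(A, list(ys) + [0.0, 0.0])
    w, p0, p1 = coef[:n], coef[n], coef[n + 1]
    return lambda x: (sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))
                      + p0 + p1 * x)

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
g = lambda x: math.sin(x)                 # stand-in performance function
model = fit_augmented_rbf(xs, [g(x) for x in xs])
```

In the sequential scheme, new nodes would be appended to `xs` and the fit repeated until the MCS failure probability stops changing between iterations.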


2018 ◽  
Vol 140 (7) ◽  
Author(s):  
Mohammad Kazem Sadoughi ◽  
Meng Li ◽  
Chao Hu ◽  
Cameron A. MacKenzie ◽  
Soobum Lee ◽  
...  

Reliability analysis involving high-dimensional, computationally expensive, highly nonlinear performance functions is a notoriously challenging problem in simulation-based design under uncertainty. In this paper, we tackle this problem by proposing a new method, high-dimensional reliability analysis (HDRA), in which a surrogate model is built to approximate a performance function that is high dimensional, computationally expensive, implicit, and unknown to the user. HDRA first employs the adaptive univariate dimension reduction (AUDR) method to construct a global surrogate model by adaptively tracking the important dimensions or regions. Then, the sequential exploration–exploitation with dynamic trade-off (SEEDT) method is utilized to locally refine the surrogate model by identifying additional sample points that are close to the critical region (i.e., the limit-state function (LSF)) with high prediction uncertainty. The HDRA method has three advantages: (i) alleviating the curse of dimensionality and adaptively detecting important dimensions; (ii) capturing the interactive effects among variables on the performance function; and (iii) flexibility in choosing the locations of sample points. The performance of the proposed method is tested through three mathematical examples and a real-world problem, the results of which suggest that the method can achieve an accurate and computationally efficient estimation of reliability even when the performance function exhibits high dimensionality, high nonlinearity, and strong interactions among variables.
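The univariate dimension reduction idea underlying AUDR can be illustrated in a few lines: the performance function is approximated by a sum of univariate cuts through the mean point. The adaptive sampling of AUDR and the SEEDT refinement are beyond this sketch, and the test function is hypothetical.

```python
def udr_surrogate(g, mu):
    """Univariate dimension reduction surrogate:
    g(x) ~ sum_i g(mu with x_i substituted) - (n - 1) * g(mu)."""
    n = len(mu)
    g_mu = g(mu)
    def g_hat(x):
        total = 0.0
        for i in range(n):
            xi = list(mu)
            xi[i] = x[i]            # vary one coordinate, fix others at mean
            total += g(xi)
        return total - (n - 1) * g_mu
    return g_hat

# Hypothetical additive function with a small interaction term.
g = lambda x: x[0] ** 2 + 3 * x[1] + 0.1 * x[0] * x[1]
g_hat = udr_surrogate(g, mu=[0.0, 0.0])
```

Such a surrogate reproduces the additive terms exactly (here `g_hat([1, 2])` equals 7.0 while the true value is 7.2) but misses the interaction term, which is precisely why HDRA follows the global AUDR stage with local SEEDT refinement near the limit state.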


2021 ◽  
Vol 2021 ◽  
pp. 1-16
Author(s):  
Yixuan Dong ◽  
Shijie Wang

Structural reliability analysis is usually based on a multivariate performance function that depicts the failure mechanisms of a structural system. The intensive computational cost of brute-force Monte Carlo simulation motivates the Gegenbauer polynomial-based surrogate model proposed in this paper for effective structural reliability analysis. By first using the orthogonal matching pursuit algorithm to detect significant explanatory variables, a small number of samples suffices to determine a reliable approximation of the structural performance function. Several numerical examples from the literature are presented to demonstrate potential applications of the Gegenbauer polynomial-based sparse surrogate model. Accurate results justify the effectiveness of the proposed approach in dealing with various structural reliability problems.
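A minimal sketch of a Gegenbauer expansion is given below: the polynomials are evaluated by their three-term recurrence, and coefficients are obtained by discrete projection under the Gegenbauer weight. The orthogonal matching pursuit step that makes the paper's surrogate sparse, and the multivariate tensorization, are omitted; the test function and parameter choices are ours.

```python
def gegenbauer(n, alpha, x):
    """Evaluate the Gegenbauer polynomial C_n^(alpha)(x) via the recurrence
    k*C_k = 2(k+alpha-1)*x*C_{k-1} - (k+2alpha-2)*C_{k-2}."""
    if n == 0:
        return 1.0
    c_prev, c = 1.0, 2.0 * alpha * x
    for k in range(2, n + 1):
        c_prev, c = c, (2 * (k + alpha - 1) * x * c
                        - (k + 2 * alpha - 2) * c_prev) / k
    return c

def fit_gegenbauer(f, alpha, degree, m=2000):
    """Project f onto C_0..C_degree with weight w(x) = (1-x^2)^(alpha-1/2),
    using a midpoint rule on (-1, 1) for the inner products."""
    xs = [-1 + (2 * i + 1) / m for i in range(m)]
    w = [(1 - x * x) ** (alpha - 0.5) for x in xs]
    coeffs = []
    for deg in range(degree + 1):
        basis = [gegenbauer(deg, alpha, x) for x in xs]
        num = sum(wi * bi * f(x) for wi, bi, x in zip(w, basis, xs))
        den = sum(wi * bi * bi for wi, bi in zip(w, basis))
        coeffs.append(num / den)
    return coeffs

# f(x) = 2x is exactly C_1^(1)(x), so only the degree-1 coefficient survives.
coeffs = fit_gegenbauer(lambda x: 2 * x, alpha=1.0, degree=2)
```

In the sparse setting of the paper, orthogonal matching pursuit would retain only the basis terms with significant coefficients instead of projecting onto every degree.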


2011 ◽  
Vol 90-93 ◽  
pp. 133-136
Author(s):  
Chong Jiang ◽  
Xi Bing Li ◽  
Ke Ping Zhou ◽  
Shan Wei Wang

There is uncertainty in analyzing the stability of a karst roof under a pile tip. First, interval numbers are used to express the calculation parameters. Secondly, a limit equilibrium analysis model of the karst roof under the pile tip is presented based on existing studies. Thirdly, a performance function is suggested to evaluate the reliability of the stability of the karst roof under the pile tip. A non-probabilistic reliability analysis method for the stability of the karst roof under the pile tip is thereby established. The method is shown to be rational and feasible through an engineering case analysis.
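The interval-based, non-probabilistic reliability index that such a method typically computes can be sketched as follows. The corner-evaluation bound assumes a monotone performance function, and the capacity/load intervals are illustrative numbers, not values from the paper.

```python
from itertools import product

def interval_reliability_index(g, intervals):
    """Non-probabilistic reliability index eta = g_center / g_radius.

    The range of g over the interval inputs is bounded by corner evaluation
    (adequate for the monotone performance functions assumed in this sketch);
    eta > 1 means the limit state g = 0 cannot be reached.
    """
    vals = [g(c) for c in product(*intervals)]
    lo, hi = min(vals), max(vals)
    center, radius = (hi + lo) / 2.0, (hi - lo) / 2.0
    return center / radius

# Hypothetical roof-stability margin: bearing capacity minus pile-tip load.
g = lambda x: x[0] - x[1]
eta = interval_reliability_index(g, [(10.0, 14.0), (4.0, 6.0)])
```

Here the margin ranges over [4, 10], giving eta = 7/3 > 1, so the roof remains stable for every parameter combination within the stated intervals.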


Author(s):  
Mohammad Kazem Sadoughi ◽  
Meng Li ◽  
Chao Hu ◽  
Cameron A. Mackenzie

Reliability analysis involving high-dimensional, computationally expensive, highly nonlinear performance functions is a notoriously challenging problem. In this paper, we tackle this problem by proposing a new method, high-dimensional reliability analysis (HDRA), in which a surrogate model is built to approximate a performance function that is high dimensional, computationally expensive, implicit and unknown to the user. HDRA first employs the adaptive univariate dimension reduction (AUDR) method to build a global surrogate model by adaptively tracking the important dimensions or regions. Then, the sequential exploration-exploitation with dynamic trade-off (SEEDT) method is utilized to locally refine the surrogate model by identifying additional sample points that are close to the critical region (i.e., the limit-state function) with high prediction uncertainty. The HDRA method has three advantages: (i) alleviating the curse of dimensionality and adaptively detecting important dimensions; (ii) capturing the interactive effects among variables on the performance function; and (iii) flexibility in choosing the locations of sample points. The performance of the proposed method is tested through two mathematical examples, the results of which suggest that the method can achieve accurate and computationally efficient estimation of reliability even when the performance function exhibits high dimensionality, high nonlinearity, and strong interactions among variables.

