Advanced time-dependent reliability analysis based on adaptive sampling region with Kriging model

Author(s):  
Yan Shi ◽  
Zhenzhou Lu ◽  
Ruyang He

Aiming at accurately and efficiently estimating the time-dependent failure probability, a novel time-dependent reliability analysis method based on an active learning Kriging model is proposed. Although active surrogate model methods have been used to estimate the time-dependent failure probability, doing so with less computational time remains an issue, because iteratively screening all the candidate samples with the active surrogate model is time-consuming. This article addresses this issue by establishing an optimization strategy to search for the new training samples that update the surrogate model. The optimization strategy is performed in an adaptive sampling region, which is proposed here for the first time. The adaptive sampling region is adjusted by the current surrogate model so that it provides a proper candidate sample region for the input variables. The proposed method employs the optimization strategy to select the optimal sample as the new training sample point in each iteration, and it does not need to predict the values of all the candidate samples at every time instant in each iterative step. Several examples illustrate the accuracy and efficiency of the proposed method for estimating the time-dependent failure probability, considering computational cost and precision simultaneously.
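The active-learning idea these Kriging methods share — fit a cheap surrogate, then add the training point where the sign of the limit state is most uncertain — can be sketched in a few lines. The following is a generic one-dimensional illustration with a simple Kriging (Gaussian correlation) model and the widely used U learning function, not the authors' adaptive-sampling-region algorithm; `theta`, the nugget, and the toy limit state are assumptions for the demo.

```python
import numpy as np

def kriging_fit(X, y, theta=10.0, nugget=1e-10):
    """Fit a simple (zero-mean, unit-variance) Kriging model in 1D."""
    d2 = (X[:, None] - X[None, :]) ** 2
    K = np.exp(-theta * d2) + nugget * np.eye(len(X))  # Gaussian correlation
    alpha = np.linalg.solve(K, y)
    return X, alpha, K, theta

def kriging_predict(model, x):
    """Predictive mean and standard deviation at points x."""
    X, alpha, K, theta = model
    k = np.exp(-theta * (x[:, None] - X[None, :]) ** 2)
    mu = k @ alpha
    # K is re-inverted on each call for brevity; cache it in real code
    var = 1.0 - np.einsum('ij,jk,ik->i', k, np.linalg.inv(K), k)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def select_next(model, candidates):
    """U learning function: smallest |mu|/sigma = most uncertain sign."""
    mu, sigma = kriging_predict(model, candidates)
    return candidates[np.argmin(np.abs(mu) / sigma)]
```

In an AK-MCS-style loop, the selected point would be evaluated on the true limit-state function, appended to the training set, and the model refit until the minimum U value exceeds a threshold (commonly 2).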

Author(s):  
Zhenliang Yu ◽  
Zhili Sun ◽  
Runan Cao ◽  
Jian Wang ◽  
Yutao Yan

To improve the efficiency and accuracy of reliability assessment for structures with small failure probability and time-consuming simulation, a new structural reliability analysis method (RCA-PCK) is proposed, which combines a PC-Kriging model with a radial centralized adaptive sampling strategy. First, the PC-Kriging model is constructed by enriching the basis functions of the Kriging model with sparse polynomials. Then, the sampling region that has the greatest impact on the failure probability is constructed by combining radial concentration with importance sampling. Subsequently, k-means++ clustering and the learning function LIF are adopted to select new training samples from each subdomain in each iteration. To avoid the sampling distance within one subdomain, or the distance between new training samples in two subdomains, being too small, a screening mechanism is constructed to ensure that the selected new training samples are evenly distributed along the limit state. In addition, a new convergence criterion is derived based on the relative error estimate of the failure probability. Four benchmark examples illustrate the convergence process, accuracy, and stability of the proposed method. Finally, a transmission error reliability analysis of thermal-elastic coupled gears demonstrates the applicability of RCA-PCK to structures with strong nonlinearity and time-consuming simulation.
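Of the building blocks named in the abstract, the k-means++ seeding step is easy to show in isolation. A minimal sketch of generic D²-weighted seeding (not the RCA-PCK pipeline; the demo cluster locations below are made up, and the points are assumed distinct):

```python
import numpy as np

def kmeanspp_init(points, k, rng=None):
    """k-means++ seeding: spread initial centers via D^2 weighting."""
    rng = np.random.default_rng(rng)
    centers = [points[rng.integers(len(points))]]  # first center uniform
    for _ in range(k - 1):
        c = np.array(centers)
        # squared distance from each point to its nearest chosen center
        d2 = np.min(((points[:, None, :] - c[None, :, :]) ** 2).sum(-1), axis=1)
        # next center drawn with probability proportional to that distance
        centers.append(points[rng.choice(len(points), p=d2 / d2.sum())])
    return np.array(centers)
```

Because far-away points get proportionally higher selection probability, the seeds land in distinct clusters with high probability, which is exactly the spreading behavior the screening mechanism above also aims for.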


2021 ◽  
Author(s):  
Carlo Cristiano Stabile ◽  
Marco Barbiero ◽  
Giorgio Fighera ◽  
Laura Dovera

Abstract: Optimizing well locations for a green field is critical to mitigate development risks. Performing such workflows with reservoir simulations is very challenging due to the huge computational cost. Proxy models can instead provide accurate estimates at a fraction of the computing time. This study presents an application of new-generation functional proxies to optimize the well locations in a real oil field with respect to the actualized oil production over all the different geological realizations. Proxies are built with Universal Trace Kriging and are functional in time, allowing oil flows to be actualized over the asset lifetime. Proxies are trained on reservoir simulations using randomly sampled well locations. Two proxies are created, for a pessimistic model (P10) and a mid-case model (P50), to capture the geological uncertainties. The optimization step uses the Non-dominated Sorting Genetic Algorithm, with the discounted oil productions of the two proxies as objective functions. An adaptive approach was employed: optimized points found from a first optimization were used to re-train the proxy models, and a second run of optimization was performed. The methodology was applied to a real oil reservoir to optimize the location of four vertical production wells and compared against reference locations. 111 geological realizations were available, in which one relevant uncertainty is the presence of possible compartments. The decision space, represented by the horizontal translation vectors for each well, was sampled using Plackett-Burman and Latin-Hypercube designs. A first application produced a proxy with poor predictive quality. Redrawing the areas to avoid overlaps and to confine the decision space of each well to one compartment improved the quality. This suggests that the proxy's predictive ability deteriorates in the presence of highly non-linear responses caused by sealing faults or by wells interchanging positions.
We then followed a two-step adaptive approach: a first optimization was performed and the resulting Pareto front was validated with reservoir simulations; to further improve the proxy quality in this region of the decision space, the validated Pareto front points were added to the initial dataset to retrain the proxy and rerun the optimization. The final well locations were validated on all 111 realizations with reservoir simulations and resulted in an overall increase of the discounted production of about 5% compared to the reference development strategy. The adaptive approach, combined with functional proxies, proved successful in improving the workflow by purposefully enriching the training set with data points able to enhance the optimization step's effectiveness. Each optimization run relies on about 1 million proxy evaluations, which required negligible computational time. The same workflow carried out with standard reservoir simulations would have been practically unfeasible.
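The "actualized" (discounted) oil production used as the objective function is an ordinary discounted sum over the production-rate time series. A minimal sketch, assuming a constant annual discount rate (the 8% default is an assumption for illustration, not a figure from the study):

```python
import numpy as np

def discounted_production(rates, dt_years, annual_rate=0.08):
    """Actualize a production-rate series: sum of q_t * dt / (1 + r)^t."""
    t = np.arange(len(rates)) * dt_years  # time of each step, in years
    return float(np.sum(rates * dt_years / (1.0 + annual_rate) ** t))
```

In the workflow above, a functional-in-time proxy predicts `rates` for a candidate set of well locations, and NSGA-II maximizes this discounted sum for the P10 and P50 proxies as two objectives.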


2019 ◽  
Vol 44 (21) ◽  
pp. 11033-11046 ◽  
Author(s):  
Yu-Cai Zhang ◽  
Min-Jie Lu ◽  
Wenchun Jiang ◽  
Shan-Tung Tu ◽  
Xian-Cheng Zhang

Author(s):  
Zhen Hu ◽  
Sankaran Mahadevan ◽  
Xiaoping Du

Limited data of stochastic load processes and system random variables result in uncertainty in the results of time-dependent reliability analysis. An uncertainty quantification (UQ) framework is developed in this paper for time-dependent reliability analysis in the presence of data uncertainty. The Bayesian approach is employed to model the epistemic uncertainty sources in random variables and stochastic processes. A straightforward formulation of UQ in time-dependent reliability analysis results in a double-loop implementation procedure, which is computationally expensive. This paper proposes an efficient method for the UQ of time-dependent reliability analysis by integrating the fast integration method and surrogate model method with time-dependent reliability analysis. A surrogate model is built first for the time-instantaneous conditional reliability index as a function of variables with imprecise parameters. For different realizations of the epistemic uncertainty, the associated time-instantaneous most probable points (MPPs) are then identified using the fast integration method based on the conditional reliability index surrogate without evaluating the original limit-state function. With the obtained time-instantaneous MPPs, uncertainty in the time-dependent reliability analysis is quantified. The effectiveness of the proposed method is demonstrated using a mathematical example and an engineering application example.
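The time-instantaneous conditional reliability index that the surrogate is built on relates to failure probability through the standard normal CDF, P_f = Φ(−β). A small helper pair using only the Python standard library (this is the generic conversion, not the paper's fast integration method):

```python
from statistics import NormalDist

_PHI = NormalDist()  # standard normal distribution

def pf_from_beta(beta):
    """Failure probability from reliability index: P_f = Phi(-beta)."""
    return _PHI.cdf(-beta)

def beta_from_pf(pf):
    """Reliability index from failure probability: beta = -Phi^{-1}(P_f)."""
    return -_PHI.inv_cdf(pf)
```

In the decoupled scheme above, the surrogate predicts β conditional on a realization of the imprecise distribution parameters, and the epistemic uncertainty in P_f follows by pushing those realizations through this conversion.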


2015 ◽  
Vol 137 (5) ◽  
Author(s):  
Zhen Hu ◽  
Xiaoping Du

Time-dependent reliability analysis requires the use of the extreme value of a response. The extreme value function is usually highly nonlinear, and traditional reliability methods, such as the first order reliability method (FORM), may produce large errors. The solution to this problem is to use a surrogate model of the extreme response. The objective of this work is to improve the efficiency of building such a surrogate model. A mixed efficient global optimization (m-EGO) method is proposed. Different from the current EGO method, which draws samples of the random variables and of time independently, the m-EGO method draws the two types of samples simultaneously. The m-EGO method employs adaptive Kriging–Monte Carlo simulation (AK–MCS) so that high accuracy is also achieved. Then, Monte Carlo simulation (MCS) is applied to calculate the time-dependent reliability based on the surrogate model. Good accuracy and efficiency of the m-EGO method are demonstrated by three examples.
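The extreme-value formulation underlying m-EGO — the structure fails on [0, T] if and only if the minimum of the response over time goes negative — can be checked with crude Monte Carlo on a toy problem. A sketch (the limit state g(X, t) = X − 0.1t and the distribution of X are assumptions for illustration; brute-force MCS at this sample size on the true model is exactly what the surrogate is meant to avoid):

```python
import numpy as np

def time_dependent_pf(g, sample_x, t_grid, n=100_000, seed=0):
    """Crude MCS estimate of P_f(0, T) = P( min over t of g(X, t) < 0 )."""
    rng = np.random.default_rng(seed)
    X = sample_x(rng, n)
    # extreme (minimum) response over the time grid, per sample
    g_min = np.min([g(X, t) for t in t_grid], axis=0)
    return float(np.mean(g_min < 0))

# toy example: monotone degradation, so the minimum occurs at t = T = 10
pf = time_dependent_pf(lambda X, t: X - 0.1 * t,
                       lambda rng, n: rng.normal(2.0, 1.0, n),
                       np.linspace(0.0, 10.0, 21))
```

For this toy case the answer is known analytically, P(X < 1) = Φ(−1) ≈ 0.159, which makes it a convenient sanity check for any surrogate of the extreme response.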


2007 ◽  
Vol 353-358 ◽  
pp. 1001-1004 ◽  
Author(s):  
Shu Fang Song ◽  
Zhen Zhou Lu

For reliability analysis with an implicit limit state function, an improved line sampling method is presented on the basis of sample simulation in the failure region. In the presented method, a Markov chain is employed to simulate samples located in the failure region, and the important direction for line sampling is obtained from these simulated samples. Simultaneously, the simulated samples can be reused as the line sampling samples to evaluate the failure probability. Since the Markov chain samples are recycled both to determine the important direction and to calculate the failure probability, the computational cost of line sampling is greatly reduced. A practical application to the reliability analysis of the low cycle fatigue life of an aeronautical engine turbine disc structure under a 0-takeoff-0 cycle load shows that the presented method is rational and feasible.
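Once the important direction is known, the line sampling estimator itself is compact: for each random line, find the distance c at which the limit state is crossed and average Φ(−c). A sketch in standard normal space with a bisection root search (the fixed bracket [0, 10], the toy linear limit state, and the line count are assumptions; the Markov chain step that supplies the important direction is not shown):

```python
import numpy as np
from statistics import NormalDist

def line_sampling_pf(g, alpha, dim, n_lines=20, seed=0):
    """Line sampling in standard normal space along unit direction alpha."""
    rng = np.random.default_rng(seed)
    Phi = NormalDist().cdf
    alpha = alpha / np.linalg.norm(alpha)
    pfs = []
    for _ in range(n_lines):
        z = rng.standard_normal(dim)
        z_perp = z - (z @ alpha) * alpha  # component orthogonal to alpha
        # bisection for the root c of g(z_perp + c*alpha) = 0 on [0, 10],
        # assuming g > 0 (safe) at c = 0 and g < 0 (failed) at c = 10
        lo, hi = 0.0, 10.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if g(z_perp + mid * alpha) > 0:
                lo = mid
            else:
                hi = mid
        pfs.append(Phi(-0.5 * (lo + hi)))  # per-line probability
    return float(np.mean(pfs))
```

For a linear limit state aligned with the important direction, every line crosses at the same distance, so the estimator reproduces Φ(−β) exactly; the variance reduction over plain MCS comes from replacing an indicator with these smooth per-line probabilities.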

