Reliability Analysis in the Presence of Aleatory and Epistemic Uncertainties, Application to the Prediction of a Launch Vehicle Fallout Zone

2016
Vol 138 (11)
Author(s):
Loïc Brevault
Sylvain Lacaze
Mathieu Balesdent
Samy Missoum

The design of complex systems often requires reliability assessments involving a large number of uncertainties and the estimation of low failure probabilities (on the order of 10⁻⁴). Estimating such rare event probabilities with crude Monte Carlo (CMC) is computationally intractable. Specific numerical methods, such as importance sampling or subset simulation, have been developed to reduce the computational cost and the variance of the estimate. However, these methods assume that the uncertainties are defined within the probability formalism. For epistemic uncertainties, the interval formalism is particularly well suited when only their definition domain is known. In this paper, a method is derived to assess the reliability of a system with uncertainties described by both probability and interval frameworks. It allows one to determine the bounds of the failure probability and involves a sequential approach using subset simulation, kriging, and an optimization process. To reduce the simulation cost, a refinement strategy for the surrogate model is proposed that takes into account the presence of both aleatory and epistemic uncertainties. The method is compared to existing approaches on an analytical example as well as on a launch vehicle fallout zone estimation problem.
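
To make the bounding idea concrete, here is a minimal Python sketch (not the authors' implementation): crude Monte Carlo on a made-up limit state g stands in for the paper's subset simulation and kriging surrogate, and a coarse grid search over the epistemic interval stands in for the optimization process.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((200_000, 2))   # fixed aleatory sample (common random numbers)

def g(x_aleatory, e):
    # Hypothetical limit state: failure when g < 0; e is the epistemic variable.
    return 7.0 + e - x_aleatory[:, 0] ** 2 - x_aleatory[:, 1]

def failure_probability(e):
    # Crude Monte Carlo on the shared sample; the paper uses subset simulation
    # with a kriging surrogate instead.
    return np.mean(g(x, e) < 0.0)

# The epistemic variable is only known to lie in an interval, so the failure
# probability is bounded by searching over that interval (a grid search here,
# an optimization process in the paper).
e_lo, e_hi = -1.0, 1.0
pf = np.array([failure_probability(e) for e in np.linspace(e_lo, e_hi, 21)])
print(f"failure probability bounds: [{pf.min():.2e}, {pf.max():.2e}]")
```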

Processes
2019
Vol 7 (4)
pp. 185
Author(s):
Patrick Piprek
Sébastien Gros
Florian Holzapfel

This study develops a chance-constrained optimal control (CCOC) framework capable of handling rare event probabilities. To this end, the framework uses the generalized polynomial chaos (gPC) method to calculate the probability of fulfilling rare event constraints under uncertainties. Here, the resulting chance-constraint evaluation is based on the efficient sampling provided by the gPC expansion. The subset simulation (SubSim) method is used to estimate the actual probability of the rare event. Additionally, the discontinuous chance constraint is approximated by a differentiable function that is iteratively sharpened using a homotopy strategy. Furthermore, the SubSim problem is also iteratively adapted using another homotopy strategy to improve the convergence of the Newton-type optimization algorithm. The applicability of the framework is shown in case studies on battery charging and discharging. The results show that the proposed method is indeed capable of incorporating very general chance constraints within an optimal control problem at a low computational cost, and of computing optimal results subject to rare failure probability constraints.
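
The homotopy idea for the chance constraint can be illustrated in isolation. The following sketch (not the paper's framework) replaces the discontinuous indicator 1[g > 0] with a sigmoid that is progressively sharpened; the Gaussian constraint samples merely stand in for values that would be obtained from a gPC expansion of the dynamics.

```python
import numpy as np
from scipy.special import expit

def smooth_violation_probability(g_samples, tau):
    # Differentiable surrogate for P[g > 0]; as tau -> 0 the sigmoid
    # approaches the discontinuous indicator 1[g > 0].
    return float(np.mean(expit(g_samples / tau)))

rng = np.random.default_rng(1)
# Stand-in for constraint values sampled via a gPC expansion of the dynamics.
g_samples = rng.normal(loc=-2.0, scale=1.0, size=100_000)

for tau in (1.0, 0.3, 0.1, 0.03):            # homotopy: progressively sharpen
    p = smooth_violation_probability(g_samples, tau)
    print(f"tau={tau:<4} smoothed P[g > 0] ~ {p:.4f}")
print(f"sample indicator estimate    {np.mean(g_samples > 0.0):.4f}")
```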


2021
Vol 15 (5)
pp. 1-52
Author(s):
Lorenzo De Stefani
Erisa Terolli
Eli Upfal

We introduce Tiered Sampling, a novel technique for estimating the count of sparse motifs in massive graphs whose edges are observed in a stream. Our technique requires only a single pass over the data and uses a memory of fixed size M, which can be orders of magnitude smaller than the number of edges. Our methods address the challenging task of counting sparse motifs—sub-graph patterns—that have a low probability of appearing in a sample of M edges, which is the maximum amount of data available to the algorithms at each step. To obtain an unbiased and low-variance estimate of the count, we partition the available memory into tiers (layers) of reservoir samples. While the base layer is a standard reservoir sample of edges, the other layers are reservoir samples of sub-structures of the desired motif. By storing more frequent sub-structures of the motif, we increase the probability of detecting an occurrence of the sparse motif we are counting, thus decreasing the variance and error of the estimate. While we focus on the design and analysis of algorithms for counting 4-cliques, we present a method that generalizes Tiered Sampling to obtain high-quality estimates of the number of occurrences of any sub-graph of interest, while reducing the analysis effort tied to specific properties of the pattern of interest. We present a complete analytical analysis and an extensive experimental evaluation of our proposed method using both synthetic and real-world data. Our results demonstrate the advantage of our method in obtaining high-quality approximations of the number of 4- and 5-cliques in large graphs using a very limited amount of memory, significantly outperforming the single edge sample approach for counting sparse motifs in large-scale graphs.
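
A heavily simplified sketch of the two-tier idea follows, assuming a simple (loop-free) edge stream; it only records raw 4-clique detections and omits the reweighting that makes the paper's estimator unbiased, as well as the 5-clique generalization.

```python
import random

class TieredSampler:
    """Simplified two-tier reservoir sketch: raw 4-clique detections only,
    without the reweighting that makes the paper's estimator unbiased."""

    def __init__(self, edge_cap, tri_cap, seed=0):
        self.edge_res, self.edge_cap = [], edge_cap    # tier 1: sampled edges
        self.tri_res, self.tri_cap = [], tri_cap       # tier 2: sampled triangles
        self.t_edges = self.t_tris = 0                 # items offered to each tier
        self.detections = 0                            # raw 4-clique detections
        self.rng = random.Random(seed)

    def _reservoir_insert(self, store, cap, count, item):
        # standard reservoir sampling over a stream of `count` items so far
        if len(store) < cap:
            store.append(item)
        elif self.rng.random() < cap / count:
            store[self.rng.randrange(cap)] = item

    def add_edge(self, u, v):
        self.t_edges += 1
        stored = {frozenset(e) for e in self.edge_res}

        # Detection: a 4-clique {u, v, w, x} is seen if the arriving edge (u, v)
        # meets a stored triangle on {v, w, x} (or {u, w, x}) in tier 2 and the
        # two remaining edges are present in the tier-1 reservoir.
        for tri in self.tri_res:
            if v in tri and u not in tri:
                w, x = tri - {v}
                if {frozenset({u, w}), frozenset({u, x})} <= stored:
                    self.detections += 1
            elif u in tri and v not in tri:
                w, x = tri - {u}
                if {frozenset({v, w}), frozenset({v, x})} <= stored:
                    self.detections += 1

        # Triangles closed by (u, v) and two tier-1 edges feed tier 2.
        nbr_u = {next(iter(e - {u})) for e in stored if u in e}
        nbr_v = {next(iter(e - {v})) for e in stored if v in e}
        for w in (nbr_u & nbr_v) - {u, v}:
            self.t_tris += 1
            self._reservoir_insert(self.tri_res, self.tri_cap, self.t_tris,
                                   frozenset({u, v, w}))

        self._reservoir_insert(self.edge_res, self.edge_cap, self.t_edges,
                               frozenset({u, v}))
```

On a real stream one would call add_edge once per arriving edge and rescale each detection by the inverse probability that its constituent sub-structures were retained; that correction is the part omitted here.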


2019
Vol 9 (10)
pp. 1972
Author(s):  
Elzbieta Gawronska

Progress in computational methods has been stimulated by the widespread availability of cheap computational power, leading to improved precision and efficiency of simulation software. Simulation tools have become indispensable for engineers who want to attack increasingly large problems or to search a larger phase space of process and system variables to find the optimal design. In this paper, we introduce a new computational approach that involves a mixed time-stepping scheme and allows the computational cost to be decreased. Implementation of our algorithm does not require a parallel computing environment. Our strategy splits the domains of a dynamically changing physical phenomenon and allows the numerical model to be adjusted to the various sub-domains. We are the first (to the best of our knowledge) to show that it is possible to use a mixed time partitioning method with various combinations of schemes during binary alloy solidification. In particular, we use a fixed time step in one domain and look for much larger time steps in other domains, while maintaining high accuracy. Our method is independent of the number of domains considered, in contrast to traditional methods where only two domains are considered. Mixed time partitioning methods are of high importance here because of the natural separation of domain types. Typically, all important physical phenomena occur in the casting and are of high computational cost, while in the mold domains less dynamic processes are observed and consequently a larger time step can be chosen. Finally, we performed a series of numerical experiments and demonstrate that our approach reduces computational time by more than a factor of three without losing significant precision and without parallel computing.
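
As a toy illustration of mixed (multi-rate) time stepping on sub-domains, and not of the author's solidification solver, the sketch below advances an explicit 1D heat equation with a fine step in a "casting" half and a k-times larger step in a less diffusive "mold" half, exchanging interface values once per coarse step; all material parameters are invented.

```python
import numpy as np

nx, dx = 101, 1.0e-3
alpha_cast, alpha_mold = 1.0e-5, 2.0e-6   # casting more diffusive than the mold
dt = 0.4 * dx**2 / alpha_cast             # fine, stability-limited casting step
k = 5                                     # mold advances with a k-times larger step
i = nx // 2                               # interface node
T = np.full(nx, 300.0)
T[:i] = 1000.0                            # hot casting, cool mold

def explicit_step(u, alpha, dt):
    # one explicit update of u_t = alpha * u_xx; end values act as frozen boundaries
    u_new = u.copy()
    u_new[1:-1] = u[1:-1] + alpha * dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return u_new

for _ in range(2000):                     # each pass is one coarse (mold) step
    cast = T[: i + 2].copy()              # casting nodes 0..i plus one mold ghost node
    mold = T[i:].copy()                   # mold nodes i..nx-1 with casting ghost node i
    mold = explicit_step(mold, alpha_mold, k * dt)
    for _ in range(k):                    # k fine sub-steps inside the casting
        cast = explicit_step(cast, alpha_cast, dt)
    T[1 : i + 1] = cast[1 : i + 1]        # write back casting interior incl. interface
    T[i + 1 : -1] = mold[1:-1]            # write back mold interior; ends stay fixed

print(f"interface temperature after {2000 * k * dt:.1f} s: {T[i]:.1f} K")
```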


Author(s):  
Tong Zou
Sankaran Mahadevan
Akhil Sopory

A novel reliability-based design optimization (RBDO) method using simulation-based techniques for reliability assessment and an efficient optimization approach is presented in this paper. In RBDO, model-based reliability analysis needs to be performed to calculate the probability of not satisfying a reliability constraint and the gradient of this probability with respect to each design variable. Among model-based methods, the most widely used in RBDO is the first-order reliability method (FORM). However, FORM can be inaccurate for nonlinear problems and is not applicable to system reliability problems. This paper develops an efficient optimization methodology to perform RBDO using simulation-based techniques. By combining analytical and simulation-based reliability methods, accurate probability of failure and sensitivity information is obtained. The use of simulation also enables both component-level and system-level reliabilities to be included in the RBDO formulation. Instead of using a traditional RBDO formulation in which optimization and reliability computations are nested, a sequential approach is developed to greatly reduce the computational cost. The efficiency of the proposed RBDO approach is further enhanced by using a multi-modal adaptive importance sampling technique for the simulation-based reliability assessment and by properly treating the inactive reliability constraints in the optimization. A vehicle side impact problem is used to demonstrate the capabilities of the proposed method.
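
A generic decoupled loop of this kind (not the authors' exact formulation) can be sketched as follows: each cycle runs a deterministic optimization with a margin on the limit state and then a reliability check. Crude Monte Carlo stands in for the multi-modal adaptive importance sampling, and the cost function, limit state, and margin update are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
p_target, sigma = 1.0e-3, 0.1

def g(d, x):                       # limit state: safe when g >= 0 (toy example)
    return d[0] + d[1] - 3.0 + x[:, 0] - 0.5 * x[:, 1] ** 2

def prob_failure(d, n=200_000):    # reliability analysis step (crude Monte Carlo)
    x = rng.normal(scale=sigma, size=(n, 2))
    return np.mean(g(d, x) < 0.0)

margin, d = 0.0, np.array([1.0, 1.0])
for cycle in range(10):
    # deterministic optimization with the current margin on the constraint
    res = minimize(lambda d: d[0] ** 2 + d[1] ** 2, d,
                   constraints={"type": "ineq",
                                "fun": lambda d: g(d, np.zeros((1, 2)))[0] - margin})
    d, pf = res.x, prob_failure(res.x)
    print(f"cycle {cycle}: d={np.round(d, 3)}, Pf={pf:.2e}, margin={margin:.3f}")
    if pf <= p_target:
        break
    margin += 0.05                 # tighten the margin and repeat
```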


Author(s):  
Yanwen Xu
Pingfeng Wang

Accurately analyzing rare failure events at an affordable computational cost is often challenging in many engineering applications, especially for problems with high-dimensional system inputs. The extremely low probabilities of occurrence of such rare events often lead to large probability estimation errors and low computational efficiency. It is therefore vital to develop advanced probability analysis methods capable of providing robust estimates of rare event probabilities with narrow confidence bounds. In general, confidence intervals of an estimator can be established based on the central limit theorem, but one of the critical obstacles is the low computational efficiency, since the widely used Monte Carlo method often requires a large number of simulation samples to derive a reasonably narrow confidence interval. This paper develops a new probability analysis approach that efficiently derives estimates of rare event probabilities together with narrow estimation bounds for high-dimensional problems. The asymptotic behavior of the developed estimator is proved theoretically without imposing strong assumptions, and an asymptotic confidence interval is established for the estimator. The presented study offers important insights into the robust estimation of the probability of occurrence of rare events. The accuracy and computational efficiency of the developed technique are assessed with numerical and engineering case studies. The case study results demonstrate that narrow bounds can be built efficiently using the developed approach and that the true values always lie within the estimation bounds, indicating good estimation accuracy along with significantly improved efficiency.
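
For context, the baseline that the paper improves upon can be sketched directly: a crude Monte Carlo estimate of a rare event probability with its CLT-based confidence interval, showing why very large sample sizes are needed before the interval becomes narrow. The event and sample size below are arbitrary stand-ins.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
threshold = 3.7                      # rare event: a standard normal exceeds 3.7
n = 2_000_000
hits = rng.standard_normal(n) > threshold

p_hat = hits.mean()                               # crude Monte Carlo estimate
se = np.sqrt(p_hat * (1.0 - p_hat) / n)           # CLT standard error
z = norm.ppf(0.975)
print(f"estimate {p_hat:.3e}, 95% CI [{p_hat - z*se:.3e}, {p_hat + z*se:.3e}]")
print(f"exact    {norm.sf(threshold):.3e}")
```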


2019
Vol 5 (8)
pp. 1684-1697
Author(s):  
Hawraa Qasim Jebur
Salah Rohaima Al-Zaidee

In recent years, more research on structural reliability theory and methods has been carried out. In this study, a portal steel frame is considered. The reliability analysis for the frame is expressed through the probability of failure, P_f, and the reliability index, β, which can be predicted based on the failure of the girders and columns. The probability of failure can be estimated from the probability density functions of two random variables, namely the capacity R and the demand Q. The Monte Carlo simulation approach has been employed to account for the uncertainty in the parameters of R and Q, and Matlab functions have been adopted to generate pseudo-random numbers for the considered parameters. Although the Monte Carlo method is effective and widely used in reliability research, it has the disadvantage of requiring large sample sizes to estimate small probabilities of failure, which leads to high computational cost and time. Therefore, an approximated Monte Carlo simulation method has been adopted to address this issue. In this study, four performance criteria have been considered: the serviceability deflection limit state, the ultimate limit state for the girder, the ultimate limit state for the columns, and elastic stability. As the portal frame is a statically indeterminate structure, bending moments and axial forces cannot be determined from statics alone, so a finite element parametric model has been prepared in Abaqus to deal with this aspect. The statistical analysis of the result samples shows that all response data follow a lognormal distribution except the elastic critical buckling load, which follows a normal distribution.
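
A minimal sketch of the capacity/demand computation described above is given below; the lognormal parameters are invented (the study obtains its response samples from the Abaqus parametric model), and the reliability index is recovered from the inverse standard normal CDF.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 1_000_000
R = rng.lognormal(mean=np.log(500.0), sigma=0.10, size=n)   # capacity (illustrative units)
Q = rng.lognormal(mean=np.log(300.0), sigma=0.15, size=n)   # demand (same units)

p_f = np.mean(R < Q)          # Monte Carlo estimate of P(R < Q)
beta = -norm.ppf(p_f)         # reliability index from the inverse normal CDF
print(f"Pf ~ {p_f:.2e}, reliability index beta ~ {beta:.2f}")
```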


Author(s):  
Pei Cao
Zhaoyan Fan
Robert Gao
Jiong Tang

In engineering design, the volume and weight of systems consisting of valves and plumbing lines often need to be minimized. In current practice, this is done empirically through trial and error, which is time-consuming and may not yield the optimal result. The problem is intrinsically difficult because of the challenge of formulating an optimization problem that remains computationally tractable. In this research, we take a sequential approach to the design optimization: we first optimize the placement of valves under prescribed constraints to minimize the occupied volume, and then identify the shortest paths of plumbing lines to connect the valves. In the first part, the constraints are described by analytical expressions, and two approaches to valve placement optimization are reported, namely a two-phase method and a simulated annealing-based method. In the second part, a three-dimensional routing algorithm is explored to connect the valves. Our case study indicates that the design can indeed be automated and that design optimization can be achieved at reasonable computational cost. The outcome of this research can benefit both existing manufacturing practice and future additive manufacturing.
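
A bare-bones simulated annealing skeleton for a placement-style problem might look as follows; the grid, valve count, bounding-box cost, and neighborhood move are invented stand-ins, and the authors' analytical constraints, two-phase method, and routing step are not reproduced.

```python
import math
import random

rng = random.Random(5)
n_valves, grid = 6, 10                    # hypothetical valve count and grid size

def cost(placement):
    # surrogate objective: volume of the axis-aligned bounding box (grid cells)
    xs, ys, zs = zip(*placement)
    return ((max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
            * (max(zs) - min(zs) + 1))

def neighbor(placement):
    # perturb one valve by one grid cell per axis; reject overlapping layouts
    cand = list(placement)
    i = rng.randrange(len(cand))
    cand[i] = tuple(min(grid - 1, max(0, c + rng.choice((-1, 1)))) for c in cand[i])
    return cand if len(set(cand)) == len(cand) else placement

state = [tuple(rng.randrange(grid) for _ in range(3)) for _ in range(n_valves)]
while len(set(state)) < n_valves:                       # ensure a distinct start
    state = [tuple(rng.randrange(grid) for _ in range(3)) for _ in range(n_valves)]

best, best_cost, temp = state, cost(state), 10.0
for _ in range(20_000):
    cand = neighbor(state)
    delta = cost(cand) - cost(state)
    if delta <= 0 or rng.random() < math.exp(-delta / temp):
        state = cand
        if cost(state) < best_cost:
            best, best_cost = state, cost(state)
    temp *= 0.9995                                      # geometric cooling
print(f"best bounding-box volume found: {best_cost} grid cells")
```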


Author(s):  
Daryl Bandstra
Alex M. Fraser

One of the leading threats to the integrity of oil and gas transmission pipeline systems is metal-loss corrosion. This threat is commonly managed by evaluating measurements obtained with in-line inspection tools, which locate and size individual metal-loss defects in order to plan maintenance and repair activities. Both deterministic and probabilistic methods are used in the pipeline industry to evaluate the severity of these defects. Probabilistic evaluations typically rely on structural reliability, an approach to designing and assessing structures that focuses on calculating and predicting the probability that a structure may fail. In the structural reliability approach, the probability of failure is obtained from a multidimensional integral, whose solution is typically estimated numerically using Direct Monte Carlo (DMC) simulation, as DMC is relatively simple and robust. The downside is that DMC requires a significant amount of computational effort to estimate small probabilities. The objective of this paper is to explore the use of a more efficient approach, called Subset Simulation (SS), to estimate the probability of burst failure for a pipeline metal-loss defect. We present comparisons between the probability of failure estimates generated for a sample defect by Direct Monte Carlo simulation and by Subset Simulation for differing numbers of simulations. These cases illustrate the reduced computational effort required by Subset Simulation to produce stable probability of failure estimates, particularly for small probabilities. For defects with a burst probability in the range of 10⁻⁴ to 10⁻⁷, SS is shown to reduce the computational effort (time or cost) by a factor of 10 to 1,000. By significantly reducing the computational effort required to obtain stable estimates of small failure probabilities, this methodology reduces one of the major barriers to the use of reliability methods for system-wide pipeline reliability assessment.
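
A compact subset simulation sketch in standard normal space is shown below; a linear limit state with a known failure probability stands in for the pipeline burst model, and the level probability p0, chain correlation rho, and sample size n are arbitrary choices.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)

def g(u):
    # Linear limit state in standard normal space; exact Pf = Phi(-4) ~ 3.2e-5.
    return 4.0 - (u[:, 0] + u[:, 1]) / np.sqrt(2.0)

def subset_simulation(g, dim, n=2000, p0=0.1, rho=0.8, max_levels=10):
    u = rng.standard_normal((n, dim))
    y = g(u)
    prob = 1.0
    for _ in range(max_levels):
        thresh = np.quantile(y, p0)                # intermediate threshold
        if thresh <= 0.0:                          # failure domain reached
            break
        prob *= p0
        u, y = u[y <= thresh], y[y <= thresh]      # seeds for the next level
        while len(u) < n:                          # conditional-sampling MCMC
            cand = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(u.shape)
            y_cand = g(cand)
            accept = y_cand <= thresh              # stay in the conditioned region
            u = np.vstack([u, np.where(accept[:, None], cand, u)])
            y = np.concatenate([y, np.where(accept, y_cand, y)])
        u, y = u[:n], y[:n]
    return prob * np.mean(y <= 0.0)

print(f"subset simulation estimate: {subset_simulation(g, dim=2):.2e}")
print(f"exact failure probability:  {norm.sf(4.0):.2e}")
```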

