Adaptive Importance Latin Hypercube Sampling

Author(s):  
Brian K. Beachkofski ◽  
Ramana V. Grandhi

Probabilistic methods currently require many function evaluations or do not provide a mathematically robust confidence interval. The proposed method searches for the Most Probable Point (MPP) using the Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm and then estimates reliability with Latin Hypercube Sampling (LHS), evaluating only those points outside of the MPP. Repeated samples provide several estimates of the reliability, which are aggregated into a single reliability estimate with a confidence interval. The computational efficiency is much better than that of standard LHS and improves as the failure probability decreases. The method is applied to two example problems, each showing a statistically significant reduction in the confidence interval.
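As a concrete illustration of the search step, a minimal sketch of the HLRF fixed-point iteration in standard normal space is given below. This is not the authors' implementation; the limit state function g and its gradient are hypothetical stand-ins.

```python
import numpy as np

def hlrf_mpp(g, grad_g, n_dim, tol=1e-6, max_iter=100):
    """Search for the Most Probable Point of the limit state g(u) = 0
    in standard normal space via the HLRF fixed-point iteration."""
    u = np.zeros(n_dim)                      # start at the mean point
    for _ in range(max_iter):
        gv, gg = g(u), grad_g(u)
        # Project onto the limit state linearized at the current point
        u_new = (gg @ u - gv) / (gg @ gg) * gg
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return u, np.linalg.norm(u)              # MPP and reliability index beta

# Toy linear limit state g(u) = 3 - u1 - u2, for which beta = 3/sqrt(2)
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
mpp, beta = hlrf_mpp(g, grad_g, n_dim=2)
```

For this linear limit state the iteration converges in one step; the sampling stage would then place the LHS points relative to the MPP found here.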

2019 ◽  
Vol 11 (3) ◽  
pp. 168781401982641 ◽  
Author(s):  
Wei Zhao ◽  
YangYang Chen ◽  
Jike Liu

In this article, the combined use of Latin hypercube sampling and axis-orthogonal importance sampling is explored as an efficient tool, applicable with a limited number of samples, for estimating the sensitivity of the failure probability with respect to the distribution parameters of the basic random variables. The problem is solved equivalently through reliability sensitivity analysis of a series of hyperplanes passing through each sampling point, parallel to the tangent hyperplane of the limit state surface at the design point. The analytical expressions of these hyperplanes are given, and the formulas for the reliability sensitivity estimators and their variances are derived according to first-order reliability theory and the difference method for the cases where non-normal random variables are involved and not involved, respectively. A procedure is established for the reliability sensitivity analysis in two versions: (1) axis-orthogonal Latin hypercube importance sampling and (2) axis-orthogonal quasi-random importance sampling with the Halton sequence. Four numerical examples are presented. The results demonstrate that the proposed procedure is more efficient than one based on Latin hypercube sampling or the direct Monte Carlo technique, with acceptable accuracy in the sensitivity estimation of the failure probability.
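A simplified sketch of version (2) is given below: importance sampling of the failure probability centred at the design point, with draws taken from a scrambled Halton sequence. This compresses the axis-orthogonal scheme into its quasi-random importance sampling core; the limit state and design point are hypothetical.

```python
import numpy as np
from scipy.stats import norm, qmc

def halton_importance_pf(g, u_star, n=2048):
    """Failure probability estimate by importance sampling centred at
    the design point u_star, with quasi-random Halton draws."""
    d = len(u_star)
    z = norm.ppf(qmc.Halton(d=d, scramble=True, seed=0).random(n))
    u = z + u_star                            # shift the sampling density
    # Importance weights phi(u) / phi(u - u_star) for standard normals
    w = np.exp(0.5 * (np.sum(z**2, axis=1) - np.sum(u**2, axis=1)))
    fail = np.array([g(ui) for ui in u]) <= 0.0
    return float(np.mean(fail * w))

g = lambda u: 3.0 - u[0] - u[1]               # toy limit state
pf = halton_importance_pf(g, u_star=np.array([1.5, 1.5]))
# exact value for comparison: norm.cdf(-3 / np.sqrt(2)) ~ 0.017
```

The sensitivity estimators of the paper would additionally differentiate the weighted indicator with respect to the distribution parameters; only the plain probability estimate is shown here.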


1990 ◽  
Vol 29 (03) ◽  
pp. 167-181 ◽  
Author(s):  
G. Hripcsak

Abstract A connectionist model for decision support was constructed out of several back-propagation modules. Manifestations serve as input to the model; they may be real-valued, and the confidence in their measurement may be specified. The model produces as its output the posterior probability of disease. The model was trained on 1,000 cases taken from a simulated underlying population with three conditionally independent manifestations. The first manifestation had a linear relationship between value and posterior probability of disease, the second had a stepped relationship, and the third was normally distributed. An independent test set of 30,000 cases showed that the model was better able to estimate the posterior probability of disease (the standard deviation of residuals was 0.046, with a 95% confidence interval of 0.046-0.047) than a model constructed using logistic regression (with a standard deviation of residuals of 0.062, with a 95% confidence interval of 0.062-0.063). The model fitted the normal and stepped manifestations better than the linear one. It accommodated intermediate levels of confidence well.
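A minimal sketch of a single back-propagation module of this kind is shown below, trained on simulated cases with one manifestation whose value is linearly related to the posterior probability of disease. The architecture, data, and hyperparameters are assumptions for illustration, not the multi-module model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cases: one real-valued manifestation, with the posterior
# probability of disease equal to the manifestation value (assumption)
n = 1000
x = rng.uniform(0.0, 1.0, size=(n, 1))
y = (rng.uniform(size=n) < x.ravel()).astype(float)

# One back-propagation module: 1 input -> 4 tanh units -> sigmoid output
W1, b1 = rng.normal(0.0, 0.5, (1, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 0.5, (4, 1)), np.zeros(1)
lr = 0.5

for _ in range(2000):
    h = np.tanh(x @ W1 + b1)                             # hidden layer
    p = (1.0 / (1.0 + np.exp(-(h @ W2 + b2)))).ravel()   # posterior estimate
    d_out = (p - y)[:, None] / n        # cross-entropy gradient at output
    d_h = d_out @ W2.T * (1.0 - h**2)   # back-propagate through tanh
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (x.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

resid = p - x.ravel()                   # residuals against the true posterior
print("SD of residuals:", resid.std())
```

The final line mirrors the paper's evaluation metric, the standard deviation of residuals between the estimated and true posterior probabilities.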


Energies ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 512
Author(s):  
Younhee Choi ◽  
Doosam Song ◽  
Sungmin Yoon ◽  
Junemo Koo

Interest in research analyzing and predicting building energy loads and consumption in the early design stages using meta-models has constantly increased in recent years. Building meta-models generally requires many simulated or measured results, and the amount of data significantly affects their accuracy. In this study, Latin Hypercube Sampling (LHS) is proposed as an alternative to Fractional Factorial Design (FFD), since it can improve accuracy while capturing the nonlinear effects of the design parameters with a smaller data set. Building energy loads of an office floor with ten design parameters were selected as the meta-models' objectives, and meta-models were developed using the two sampling methods. The accuracy of the meta-models in predicting the heating/cooling loads of alternative floor designs was compared. Over the considered parameter ranges, window insulation (WDI) and the Solar Heat Gain Coefficient (SHGC) were found to affect the cooling and heating loads nonlinearly. LHS showed better prediction accuracy than FFD because it captures these nonlinear effects for a given number of treatments. Since the presence of nonlinearity is not known in advance, LHS is preferable to FFD for a given number of treatments.
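For illustration, an LHS design of the kind used here can be generated as below. The three parameter ranges are hypothetical stand-ins for the study's ten design parameters.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical bounds for three design parameters, e.g. window
# insulation U-value (W/m2K), SHGC (-), and wall insulation (m2K/W)
lower = [0.8, 0.2, 2.0]
upper = [3.0, 0.8, 5.0]

sampler = qmc.LatinHypercube(d=3, seed=42)
unit = sampler.random(n=50)             # 50 treatments in the unit cube
design = qmc.scale(unit, lower, upper)  # scale to the parameter ranges

# Each row of `design` is one simulation case for the energy model;
# each parameter range is stratified into 50 non-overlapping bins.
```

The stratification is what lets LHS trace nonlinear responses such as those of WDI and SHGC: every parameter is sampled across its whole range rather than only at the factorial levels of an FFD.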


2011 ◽  
Vol 71-78 ◽  
pp. 1360-1365
Author(s):  
Jian Quan Ma ◽  
Guang Jie Li ◽  
Shi Bo Li ◽  
Pei Hua Xu

Taking a typical cross-section of a rockfill embankment slope on the Yaan-Luku highway as the research object, reliability is analyzed at water table levels of 840.85 m and 851.50 m, under both the natural loading condition and a horizontal seismic acceleration of 0.2 g. The Kolmogorov-Smirnov (K-S) test is applied to the raw data to determine the distribution types of the varying parameters. The parameters are then sampled with both the Latin hypercube sampling (LHS) method and the Monte Carlo (MC) method to evaluate the state function and determine safety factors and reliability indices. The LHS method requires fewer simulations than the Monte Carlo method, and its failure probability estimate converges better. Under the earthquake condition the safety factor is greater than one, yet the failure probability reaches 35.45%, indicating that instability of the rockfill embankment slope is still possible.
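The LHS-versus-MC comparison can be sketched on a stand-in state function as below; the cohesion/friction-angle model and its distribution parameters are hypothetical, not the slope data of the study.

```python
import numpy as np
from scipy.stats import norm, qmc

def safety_margin(c, phi):
    """Hypothetical state function: resisting minus driving forces,
    standing in for the embankment stability model."""
    return c + 40.0 * np.tan(phi) - 45.0

n = 5000
rng = np.random.default_rng(1)

# Monte Carlo: independent draws of cohesion c and friction angle phi
pf_mc = np.mean(
    safety_margin(rng.normal(30.0, 5.0, n), rng.normal(0.5, 0.05, n)) <= 0.0
)

# Latin hypercube: stratified unit-cube points mapped through the
# same normal marginals, covering the parameter space more evenly
u = qmc.LatinHypercube(d=2, seed=1).random(n)
pf_lhs = np.mean(
    safety_margin(norm.ppf(u[:, 0], loc=30.0, scale=5.0),
                  norm.ppf(u[:, 1], loc=0.5, scale=0.05)) <= 0.0
)
```

Across repeated runs, the LHS estimate of the failure probability typically stabilizes with fewer samples than the MC estimate, which is the behaviour the study reports.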


Author(s):  
Efstratios Nikolaidis ◽  
Harley Cudney ◽  
Sophie Chen ◽  
Raphael T. Haftka ◽  
Raluca Rosca

Abstract This paper compares probabilistic and possibility-based methods for design against catastrophic failure under uncertainty. It studies the effect of the amount of information on the effectiveness of each method. The study is confined to problems where the boundary between survival and failure is sharp. First, the paper examines the theoretical foundations of probability and possibility. It also compares the two methods when they are used to assess the risk of a system. Finally, it compares the two methods on two design problems. A major difference between probability and possibility is in the axioms about the union of events. Because of this difference, probability and possibility calculi are fundamentally different and one cannot simulate possibility calculus using probabilistic models. It is shown that possibility-based methods can be less conservative than probability-based methods in systems with many failure modes. On the other hand, possibility-based methods tend to be more conservative than probability-based methods in systems that fail only if many unfavorable events occur simultaneously. Probabilistic methods are better than possibility-based methods if sufficient information is available. However, the latter can be better if little information is available. A principal reason is that it is easier to identify the most conservative possibilistic model than the most conservative probabilistic model that is consistent with the available information.
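The consequence of the differing union axioms can be made concrete with a toy series system; the numeric degrees below are illustrative assumptions only.

```python
import math

# Ten independent failure modes of a series system
p_modes = [0.01] * 10     # probability of each mode
pos_modes = [0.01] * 10   # possibility of each mode

# Probability calculus: the union of independent events accumulates,
# P(system) = 1 - prod(1 - p_i), roughly the sum for small p_i
p_sys = 1.0 - math.prod(1.0 - p for p in p_modes)   # ~0.096

# Possibility calculus: Pos(A or B) = max(Pos(A), Pos(B)), so the
# system possibility never exceeds the worst single mode
pos_sys = max(pos_modes)                            # 0.01
```

With many modes the probabilistic system value keeps growing while the possibilistic one does not, which is why possibility-based methods can be less conservative for systems with many failure modes and more conservative for systems that fail only when many unfavorable events coincide.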


2019 ◽  
pp. 29-44
Author(s):  
Guojun Gan ◽  
Emiliano A. Valdez
