Computationally Efficient Imprecise Uncertainty Propagation

2013 ◽  
Vol 135 (5) ◽  
Author(s):  
Dipanjan D. Ghosh ◽  
Andrew Olewnik

Modeling uncertainty through probabilistic representation in engineering design is common and important to decision making that considers risk. However, representations of uncertainty often ignore elements of “imprecision” that may limit the robustness of decisions. Furthermore, current approaches that incorporate imprecision suffer from computational expense and relatively high solution error. This work presents a method that allows imprecision to be incorporated into design scenarios while providing computational efficiency and low solution error for uncertainty propagation. The work draws on an existing method for representing imprecision and integrates methods for sparse grid numerical integration, resulting in the computationally efficient imprecise uncertainty propagation (CEIUP) method. This paper presents details of the method and demonstrates its effectiveness on both numerical case studies and a thermocouple performance problem from the literature. In most cases, results for the numerical case studies demonstrate improvements in both computational efficiency and solution accuracy for varying problem dimension and variable interaction when compared to optimized parameter sampling (OPS). For the thermocouple problem, similar behavior is observed when compared to OPS. The paper concludes with an overview of design problem scenarios in which CEIUP is the preferred method and offers opportunities for extending the method.
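The computational core of CEIUP is sparse grid numerical integration. As a rough illustration of that building block only (not the authors' implementation), the sketch below estimates the expectation of a model output under independent standard normal inputs using a Smolyak combination of one-dimensional Gauss–Hermite rules; the function name `smolyak_expectation` and its `level` parameter are invented for this example.

```python
import itertools
import math
import numpy as np

def gauss_hermite_rule(n):
    """n-point Gauss-Hermite rule, normalised for the standard normal density."""
    x, w = np.polynomial.hermite_e.hermegauss(n)
    return x, w / math.sqrt(2.0 * math.pi)  # weights now sum to 1

def smolyak_expectation(f, dim, level):
    """Estimate E[f(X)], X ~ N(0, I_dim), via the Smolyak combination formula:
    only tensor rules whose index sum lies in [level - dim + 1, level] contribute,
    so the point count grows far slower than a full tensor grid's n**dim."""
    total = 0.0
    for idx in itertools.product(range(1, level + 1), repeat=dim):
        s = sum(idx)
        if s < level - dim + 1 or s > level:
            continue
        coeff = (-1) ** (level - s) * math.comb(dim - 1, level - s)
        rules = [list(zip(*gauss_hermite_rule(i))) for i in idx]  # (node, weight) pairs per dim
        for combo in itertools.product(*rules):
            point = np.array([nw[0] for nw in combo])
            weight = math.prod(nw[1] for nw in combo)
            total += coeff * weight * f(point)
    return total

# Sanity check: E[x1^2 + x2^2] = 2 for independent standard normals.
print(smolyak_expectation(lambda x: x[0] ** 2 + x[1] ** 2, dim=2, level=3))  # ~2.0
```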

Author(s):  
Dipanjan D. Ghosh ◽  
Andrew Olewnik

Modeling uncertainty through probabilistic representation in engineering design is common and important to decision making that considers risk. However, representations of uncertainty often ignore elements of “imprecision” that may limit the robustness of decisions. Further, current approaches that incorporate imprecision suffer from computational expense and relatively high solution error. This work presents the Computationally Efficient Imprecise Uncertainty Propagation (CEIUP) method, which draws on existing approaches for propagation of imprecision and integrates sparse grid numerical integration to provide computational efficiency and low solution error for uncertainty propagation. The first part of the paper details the methodology and demonstrates improvements in both computational efficiency and solution accuracy as compared to the Optimized Parameter Sampling (OPS) approach for a set of numerical case studies. The second half of the paper focuses on estimation of non-dominated design parameter spaces using the decision policies of Interval Dominance and the Maximality Criterion in the context of set-based sequential design-decision making. A gearbox design problem is presented, and CEIUP is compared with OPS, demonstrating that CEIUP provides improved estimates of the non-dominated parameter range for satisfactory performance with faster solution times. Parameter estimates obtained for different risk attitudes are presented and analyzed from the perspective of Choice Theory, leading to questions for future research. The paper concludes with an overview of design problem scenarios in which CEIUP is the preferred method and offers opportunities for extending the method.
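For readers unfamiliar with the two decision policies, the hedged sketch below shows how they prune candidate designs once uncertainty propagation has produced interval-valued expected utilities. The `Design` class and the `lower_diff` table (lower bounds on pairwise expected-utility differences, which would come from the propagation step) are illustrative assumptions, not the paper's code.

```python
from dataclasses import dataclass

@dataclass
class Design:
    name: str
    lo: float  # lower bound on expected utility
    hi: float  # upper bound on expected utility

def interval_dominance(designs):
    """Discard a design when another design's lower bound beats its upper bound."""
    best_lo = max(d.lo for d in designs)
    return [d for d in designs if d.hi >= best_lo]

def maximality(designs, lower_diff):
    """Keep design a unless some b satisfies lower_diff[b][a] > 0, i.e. b is
    preferred to a under every admissible probability distribution."""
    return [a for a in designs
            if not any(lower_diff[b.name][a.name] > 0
                       for b in designs if b.name != a.name)]

candidates = [Design("A", 2.0, 5.0), Design("B", 3.0, 4.0), Design("C", 0.5, 1.0)]
print([d.name for d in interval_dominance(candidates)])  # C is dominated -> ['A', 'B']
```

Interval dominance always retains a superset of the maximal designs, so the two policies bracket the non-dominated parameter range at different levels of conservatism.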


2021 ◽  
Vol 14 (10) ◽  
pp. 6177-6195
Author(s):  
Paul R. Halloran ◽  
Jennifer K. McWhorter ◽  
Beatriz Arellano Nava ◽  
Robert Marsh ◽  
William Skirving

Abstract. The marine impacts of climate change on our societies will be largely felt through coastal waters and shelf seas. These impacts involve sectors as diverse as tourism, fisheries and energy production. Projections of future marine climate change come from global models. Modelling at the global scale is required to capture the feedbacks and large-scale transport of physical properties such as heat, which occur within the climate system, but global models currently cannot provide detail in the shelf seas. Version 2 of the regional implementation of the Shelf Sea Physics and Primary Production (S2P3-R v2.0) model bridges the gap between global projections and local shelf-sea impacts. S2P3-R v2.0 is a highly simplified coastal shelf model, computationally efficient enough to be run across the shelf seas of the whole globe. Despite the simplified nature of the model, it can display regional skill comparable to state-of-the-art models, and at the scale of the global (excluding high latitudes) shelf seas it can explain >50 % of the interannual sea surface temperature (SST) variability in ∼60 % of grid cells and >80 % of interannual variability in ∼20 % of grid cells. The model can be run at any resolution for which the input data can be supplied, without expert technical knowledge, and using a modest off-the-shelf computer. The accessibility of S2P3-R v2.0 places it within reach of an array of coastal managers and policy makers, allowing it to be run routinely once set up and evaluated for a region under expert guidance. The computational efficiency and relative scientific simplicity of the tool make it ideally suited to educational applications. S2P3-R v2.0 is set up to be driven directly with output from reanalysis products or daily atmospheric output from climate models such as those which contribute to the sixth phase of the Coupled Model Intercomparison Project (CMIP6), making it a valuable tool for semi-dynamical downscaling of climate projections. The updates introduced in version 2.0 of this model focus primarily on the ability to geographically relocate the model, on usability, and on speed, but also include scientific improvements. The value of this model comes from its computational efficiency, which necessitates simplicity. This simplicity leads to several limitations, which are discussed in the context of evaluation at regional and global scales.
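As a hedged illustration of the kind of skill statistic quoted above (not necessarily the exact metric used in the paper), the snippet below computes the fraction of interannual SST variance a model explains in each grid cell as the squared correlation with observations, then counts the cells clearing the 50 % and 80 % thresholds; the arrays are synthetic stand-ins.

```python
import numpy as np

def explained_variance_map(model_sst, obs_sst):
    """Inputs of shape (years, ny, nx): annual-mean SST per grid cell.
    Returns r^2 per cell, the fraction of interannual variance explained."""
    m = model_sst - model_sst.mean(axis=0)
    o = obs_sst - obs_sst.mean(axis=0)
    cov = (m * o).mean(axis=0)
    return cov ** 2 / (m.var(axis=0) * o.var(axis=0))

rng = np.random.default_rng(0)
obs = rng.normal(size=(30, 8, 8))                      # 30 years on a tiny 8x8 grid
model = 0.8 * obs + 0.3 * rng.normal(size=obs.shape)   # an imperfect "model"
r2 = explained_variance_map(model, obs)
print("cells explaining >50% of variability:", (r2 > 0.5).mean())
print("cells explaining >80% of variability:", (r2 > 0.8).mean())
```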


2019 ◽  
Vol 26 (5) ◽  
pp. 1808-1814 ◽  
Author(s):  
Shengxiang Wang ◽  
Jianhong Liu ◽  
Yinghao Li ◽  
Jian Chen ◽  
Yong Guan ◽  
...  

Transmission X-ray microscopes (TXMs) have become one of the most powerful tools for imaging 3D structures of nano-scale samples using the computed tomography (CT) principle. As a major error source, sample jitter caused by mechanical instability of the rotation stage produces shifted 2D projections, from which reconstructed images contain severe motion artifacts. In this paper, a jitter correction algorithm is proposed that provides high accuracy and computational efficiency for TXM experiments with or without nano-particle markers. Geometric moments (GMs) are measured on segmented projections for each angle and fitted to sinusoidal curves in the angular direction. Sample jitter is estimated from the difference between the measured and the fitted GMs for image correction. On a digital phantom, the proposed method removes jitter errors at different noise levels. Physical experiments on chlorella cells show that the proposed GM method achieves better spatial resolution and higher computational efficiency than the re-projection method, a state-of-the-art algorithm using iterative correction. It even outperforms the approach of manual alignment, the current gold standard, in faithfully maintaining fine structures in the CT images. Our method is practically attractive in that it is computationally efficient and lowers experimental costs in current TXM studies by avoiding expensive nano-particle markers.
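A minimal sketch of the geometric-moment idea, using only the first moment (the centroid): for a rigid sample on a rotation stage, the projected centroid must trace a sinusoid in the rotation angle, so the residual between measured and fitted centroids estimates the per-angle jitter. This is a toy under that assumption, not the authors' full algorithm (which segments the projections and can use additional moments).

```python
import numpy as np

def estimate_jitter(centroids, angles):
    """Fit measured per-angle centroids to c(theta) = A*cos(theta) + B*sin(theta) + C
    by least squares; the residual is the horizontal jitter of each projection."""
    X = np.column_stack([np.cos(angles), np.sin(angles), np.ones_like(angles)])
    coeffs, *_ = np.linalg.lstsq(X, centroids, rcond=None)
    return centroids - X @ coeffs   # shift to subtract from each projection

# Synthetic check: recover injected jitter around an ideal sinusoidal trace.
angles = np.linspace(0.0, np.pi, 181)
ideal = 5.0 * np.cos(angles + 0.3) + 64.0        # centroid of a jitter-free scan
jitter = np.random.default_rng(1).normal(scale=1.5, size=angles.size)
est = estimate_jitter(ideal + jitter, angles)
print("RMS error of jitter estimate:", np.sqrt(np.mean((est - jitter) ** 2)))
```

The RMS error is small but nonzero because the least-squares fit absorbs a little of the jitter into the sinusoid coefficients.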


Robotica ◽  
1994 ◽  
Vol 12 (4) ◽  
pp. 287-297 ◽  

SUMMARY
Position estimation is a key issue for an ALV (Autonomous Land Vehicle) navigating a mountainous area. The unevenness of the terrain makes mechanical velocity sensors inaccurate (due to wheel slippage), and the lack of appropriate landmarks complicates the problem. In this paper, we present a solution method using features of the skyline. The skyline from the vision system is assumed given and is compared with a computer map, called the CAD-MAP. The algorithm is composed of: a) identification of the peak points in the camera skyline, b) computation of the ALV position from the identified peak points, and c) search for the corresponding peak points in the CAD-MAP. Heuristics for computational efficiency and solution accuracy are also included in the algorithm. To test the validity and effectiveness of the algorithm, numerous simulations were performed and analyzed.
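Step b) can be illustrated with a simplified two-dimensional resection: once two skyline peaks have been identified in the map, the vehicle position follows from the absolute bearings to them (heading assumed known). This toy ignores terrain elevation and the search heuristics of the full algorithm.

```python
import numpy as np

def resect(p1, p2, bearing1, bearing2):
    """Vehicle position X from absolute bearings to two identified peaks:
    X + r1*d1 = p1 and X + r2*d2 = p2, so r1*d1 - r2*d2 = p1 - p2 is a
    2x2 linear system in the unknown ranges r1, r2."""
    d1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    d2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    r = np.linalg.solve(np.column_stack([d1, -d2]), np.subtract(p1, p2))
    return np.subtract(p1, r[0] * d1)

# A peak due east and a peak due north of the (0, 0) origin:
print(resect((10, 0), (0, 10), 0.0, np.pi / 2))  # -> [0. 0.]
```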


2011 ◽  
Vol 133 (2) ◽  
Author(s):  
Kais Zaman ◽  
Mark McDonald ◽  
Sankaran Mahadevan

This paper develops and illustrates a probabilistic approach for uncertainty representation and propagation in system analysis, when the information on the uncertain input variables and/or their distribution parameters may be available as either probability distributions or simply intervals (single or multiple). A unique aggregation technique is used to combine multiple interval data and to compute rigorous bounds on the system response cumulative distribution function. The uncertainty described by interval data is represented through a flexible family of probability distributions. Conversion of interval data to a probabilistic format enables the use of computationally efficient methods for probabilistic uncertainty propagation. Two methods are explored for the implementation of the proposed approach, based on (1) sampling and (2) optimization. The sampling-based strategy is more expensive and tends to underestimate the output bounds. The optimization-based methodology improves both aspects. The proposed methods are used to develop new solutions to challenge problems posed by the Sandia epistemic uncertainty workshop (Oberkampf et al., 2004, “Challenge Problems: Uncertainty in System Response Given Uncertain Parameters,” Reliab. Eng. Syst. Saf., 85, pp. 11–19). Results for the challenge problems are compared with earlier solutions.
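The sampling-based strategy can be sketched as follows for a single input whose mean and standard deviation are known only as intervals: draw parameter combinations, build an empirical response CDF for each, and envelope them pointwise. The model and interval values below are invented for illustration. Note how a finite number of parameter draws can only produce an envelope inside the true one, which is exactly why sampling tends to underestimate the bounds and why the optimization-based variant tightens them.

```python
import numpy as np

def cdf_bounds(model, mu_iv, sigma_iv, y_grid, n_param=50, n_mc=20_000, seed=0):
    """Pointwise envelope (a p-box) of response CDFs over sampled parameter
    combinations, for an input X ~ Normal(mu, sigma) with interval-valued
    mu and sigma."""
    rng = np.random.default_rng(seed)
    lower, upper = np.ones_like(y_grid), np.zeros_like(y_grid)
    for _ in range(n_param):
        mu = rng.uniform(*mu_iv)
        sigma = rng.uniform(*sigma_iv)
        y = np.sort(model(rng.normal(mu, sigma, n_mc)))
        cdf = np.searchsorted(y, y_grid, side="right") / n_mc  # empirical CDF
        lower, upper = np.minimum(lower, cdf), np.maximum(upper, cdf)
    return lower, upper

y_grid = np.linspace(0.0, 6.0, 61)
lo, hi = cdf_bounds(lambda x: x ** 2, (-0.5, 0.5), (0.8, 1.2), y_grid)
print("CDF bounds at y = 1:", lo[10], hi[10])
```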


2020 ◽  
Vol 10 (14) ◽  
pp. 4698
Author(s):  
Xiang Peng ◽  
Qilong Gao ◽  
Jiquan Li ◽  
Zhenyu Liu ◽  
Bing Yi ◽  
...  

Many non-probabilistic approaches have been widely regarded as mathematical tools for the representation of epistemic uncertainties. However, their heavy computational burden and low efficiency hinder their application in practical engineering problems. In this article, a unified probabilistic representation approach for multiple types of epistemic uncertainties is proposed based on the cubic normal transformation method. The epistemic uncertainties can be represented using an interval approach, a triangular fuzzy approach, or evidence theory. The uncertain intervals of the four statistical moments (mean, variance, skewness, and kurtosis) are calculated using a sampling analysis method. Subsequently, probabilistic cubic normal distribution functions are constructed at sampled points of the four statistical moments of the epistemic uncertainties. Finally, a calculation procedure for the construction of probabilistic representation functions is proposed, and these epistemic uncertainties are represented with belief and plausibility continuous probabilistic measure functions. Two numerical examples and one engineering example demonstrate that the proposed approach provides an accurate probabilistic representation with high computational efficiency.
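The cubic normal (third-order polynomial normal) transformation maps a standard normal variable Z to Y = a + bZ + cZ² + dZ³ so that Y matches four target moments. The sketch below uses Fleishman's classic power-method moment equations, a close relative of the transformation named in the abstract; treat the equations, names, and target values as illustrative rather than the paper's formulation.

```python
import numpy as np
from scipy.optimize import fsolve

def cubic_normal_coeffs(skew, ekurt):
    """Coefficients (a, b, c, d), with a = -c, so that Y = a + b*Z + c*Z**2 + d*Z**3
    has zero mean, unit variance, the target skewness, and the target excess
    kurtosis (Fleishman's power-method equations)."""
    def eqs(p):
        b, c, d = p
        return [b**2 + 6*b*d + 2*c**2 + 15*d**2 - 1,
                2*c*(b**2 + 24*b*d + 105*d**2 + 2) - skew,
                24*(b*d + c**2*(1 + b**2 + 28*b*d)
                    + d**2*(12 + 48*b*d + 141*c**2 + 225*d**2)) - ekurt]
    b, c, d = fsolve(eqs, x0=[1.0, 0.1, 0.0])
    return -c, b, c, d

a, b, c, d = cubic_normal_coeffs(skew=0.5, ekurt=0.6)
z = np.random.default_rng(0).normal(size=200_000)
y = a + b*z + c*z**2 + d*z**3    # sample matching the four target moments
print(y.mean(), y.var(), "should be ~0 and ~1")
```

Repeating such a fit at sampled bounds of the four moment intervals is what yields the belief and plausibility envelopes the abstract describes.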


Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4226 ◽  
Author(s):  
Rang Liu ◽  
Hongqi Fan ◽  
Tiancheng Li ◽  
Huaitie Xiao

A forward–backward labeled multi-Bernoulli (LMB) smoother is proposed for multi-target tracking. The proposed smoother consists of two components corresponding to forward LMB filtering and backward LMB smoothing, respectively. The former is the standard LMB filter, and the latter is proved to be closed under an LMB prior. It is also shown that the proposed LMB smoother can improve both the cardinality estimation and the state estimation, and that its dominant computational complexity is linear in the number of targets. An implementation based on the sequential Monte Carlo method, tested in a representative scenario, demonstrates the effectiveness and computational efficiency of the proposed smoother in comparison to existing approaches.
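The forward-backward structure can be illustrated on a deliberately simplified single-target toy: a 1-D random walk tracked by a bootstrap particle filter, then reweighted by a standard forward-filtering backward-smoothing pass. The LMB smoother generalises this pattern to labelled multi-Bernoulli densities over a varying number of targets; all model parameters here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, R = 40, 300, 0.5                        # steps, particles, measurement noise

truth = np.cumsum(rng.normal(size=T))         # 1-D random-walk target
meas = truth + R * rng.normal(size=T)         # noisy position measurements

# Forward pass: bootstrap particle filter, storing pre-resampling weights.
parts, wts = np.empty((T, N)), np.empty((T, N))
x = rng.normal(size=N)
for t in range(T):
    x = x + rng.normal(size=N)                       # propagate through dynamics
    w = np.exp(-0.5 * ((meas[t] - x) / R) ** 2)      # likelihood weighting
    parts[t], wts[t] = x, w / w.sum()
    x = x[rng.choice(N, size=N, p=wts[t])]           # resample

# Backward pass: smoothing reweighting, O(N^2) per step.
def trans_pdf(x_next, x):
    return np.exp(-0.5 * (x_next - x) ** 2) / np.sqrt(2 * np.pi)

sw = wts[-1].copy()
smoothed = [parts[-1] @ sw]
for t in range(T - 2, -1, -1):
    f = trans_pdf(parts[t + 1][None, :], parts[t][:, None])  # f[i, j] = p(x_{t+1}^j | x_t^i)
    sw = wts[t] * (f @ (sw / (wts[t] @ f)))
    sw /= sw.sum()
    smoothed.append(parts[t] @ sw)
smoothed.reverse()                            # smoothed posterior means per step
print("filter RMSE:", np.sqrt(np.mean((np.sum(parts * wts, axis=1) - truth) ** 2)))
print("smoother RMSE:", np.sqrt(np.mean((np.array(smoothed) - truth) ** 2)))
```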


2020 ◽  
Vol 55 ◽  
pp. S57-S68 ◽  
Author(s):  
I. Korsakissok ◽  
R. Périllat ◽  
S. Andronopoulos ◽  
P. Bedwell ◽  
E. Berge ◽  
...  

In the framework of the European project CONFIDENCE, Work Package 1 (WP1) focused on the uncertainties in the pre- and early phase of a radiological emergency, when environmental observations are not available and the assessment of the environmental and health impact of the accident largely relies on atmospheric dispersion modelling. The latter is subject to large uncertainties coming from, in particular, meteorological and release data. In WP1, several case studies were identified, including hypothetical accident scenarios in Europe and the Fukushima accident, for which participants propagated input uncertainties through their atmospheric dispersion and subsequent dose models. This resulted in several ensembles of results (consisting of tens to hundreds of simulations) that were compared to each other and to radiological observations (in the Fukushima case). These ensembles were analysed in order to answer questions such as: among meteorology, source term and model-related uncertainties, which are the predominant ones? Are uncertainty assessments very different between the participants and can this inter-ensemble variability be explained? What are the optimal ways of characterizing and presenting the uncertainties? Is the ensemble modelling sufficient to encompass the observations, or are there sources of uncertainty not (sufficiently) taken into account? This paper describes the case studies of WP1 and presents some illustrations of the results, with a summary of the main findings.
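One standard way to probe the question of whether an ensemble "encompasses the observations" (offered here only as a generic illustration, not as the diagnostic used by the WP1 teams) is a coverage check and a rank histogram: a well-characterised ensemble should contain the observation at a rate consistent with its size, and the observation's rank among the members should be roughly uniform.

```python
import numpy as np

def coverage(ensemble, obs):
    """Fraction of observations inside the ensemble min-max envelope.
    ensemble: (n_members, n_obs); obs: (n_obs,)."""
    return np.mean((obs >= ensemble.min(axis=0)) & (obs <= ensemble.max(axis=0)))

def rank_histogram(ensemble, obs):
    """Counts of each observation's rank among the members; a flat histogram
    suggests the ensemble spread is statistically consistent with the data."""
    ranks = (ensemble < obs[None, :]).sum(axis=0)
    return np.bincount(ranks, minlength=ensemble.shape[0] + 1)

rng = np.random.default_rng(0)
ens = rng.normal(size=(50, 200))            # 50 members, 200 hypothetical observations
obs = rng.normal(size=200)
print("coverage:", coverage(ens, obs))      # near 1 - 2/51 for a reliable ensemble
print(rank_histogram(ens, obs))
```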

