Robust Optimization With Mixed Interval and Probabilistic Parameter Uncertainties, Model Uncertainty, and Metamodeling Uncertainty

Author(s):  
Yanjun Zhang ◽  
Tingting Xia ◽  
Mian Li

Abstract Various types of uncertainties, such as parameter uncertainty, model uncertainty, and metamodeling uncertainty, may lead to low robustness. Parameter uncertainty in physical systems can be either epistemic or aleatory; these two kinds have been widely represented by intervals and probability distributions, respectively. Model uncertainty is formally defined as the difference between the true value of the real-world process and the code output of the simulation model at the same values of the inputs. Additionally, metamodeling uncertainty is introduced by the use of metamodels. To reduce the effects of uncertainties, robust optimization (RO) algorithms have been developed to obtain solutions that are not only optimal but also less sensitive to uncertainties. Based on how parameter uncertainty is modeled, there are two categories of RO approaches: interval-based and probability-based. In real-world engineering problems, interval and probabilistic parameter uncertainties are likely to exist simultaneously in a single problem. However, few works have considered mixed interval and probabilistic parameter uncertainties together with other types of uncertainties. In this work, a general RO framework is proposed to deal with mixed interval and probabilistic parameter uncertainties, model uncertainty, and metamodeling uncertainty simultaneously in design optimization problems using the intervals-of-statistics approaches. Considering multiple types of uncertainties improves the robustness of optimal designs and reduces the risk of inappropriate decision-making, low robustness, and low reliability in engineering design. Two test examples are utilized to demonstrate the applicability and effectiveness of the proposed RO approach.
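For concreteness, a generic robust formulation under mixed uncertainties can be sketched as follows; the nested worst-case/moment structure and the robustness factor k are illustrative and are not the paper's exact intervals-of-statistics formulation.

```latex
% Illustrative only: a generic robust formulation under mixed uncertainties.
% x: design variables; p: probabilistic parameters; q: interval parameters in [q^L, q^U];
% \mu_p, \sigma_p: mean and standard deviation taken over p; k: robustness factor.
\begin{aligned}
\min_{x}\quad & \max_{q \in [q^{L},\, q^{U}]}
\Big[ \mu_{p}\!\left(f(x,p,q)\right) + k\,\sigma_{p}\!\left(f(x,p,q)\right) \Big] \\
\text{s.t.}\quad & \mu_{p}\!\left(g_{j}(x,p,q)\right) + k\,\sigma_{p}\!\left(g_{j}(x,p,q)\right) \le 0
\quad \forall\, q \in [q^{L},\, q^{U}],\ j = 1,\dots,m.
\end{aligned}
```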

Author(s):  
Yanjun Zhang ◽  
Mian Li

Uncertainty is inevitable in engineering design. The existence of uncertainty may change the optimality and/or the feasibility of the obtained optimal solutions. In simulation-based engineering design, uncertainty can have various sources, such as parameter uncertainty, model uncertainty, and other random errors. To deal with uncertainty, robust optimization (RO) algorithms are developed to find solutions that are not only optimal but also robust with respect to uncertainty. Parameter uncertainty has been handled by various RO approaches, while model uncertainty has been ignored in the majority of existing RO algorithms under the hypothesis that the simulation model represents the real physical system perfectly. In the authors’ earlier work, an RO framework was proposed to consider both parameter and model uncertainties using the Bayesian approach with Gaussian processes (GP), where the metamodeling uncertainty introduced by GP modeling is ignored by assuming that the constructed GP model is accurate enough given sufficient training samples. However, infinite samples are impossible in real applications due to prohibitive time and/or computational cost. In this work, a new RO framework is proposed to deal with both parameter and model uncertainties using GP models built from only a limited number of samples. The compound effect of parameter, model, and metamodeling uncertainties is derived in the form of a compound mean and variance to formulate the proposed RO approach. The proposed approach reduces the risk that robust optimal designs obtained under parameter and model uncertainties become non-optimal and/or infeasible because of an insufficient number of samples for GP modeling. Two test examples with different degrees of complexity are utilized to demonstrate the applicability and effectiveness of the proposed approach.
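A minimal sketch of the idea, assuming a Monte Carlo treatment of the probabilistic parameter and pooling the parameter-induced and GP predictive variances into one robust objective; the function simulation, the k = 2 robustness factor, and the omission of an explicit model-discrepancy term are simplifying assumptions, not the authors' derivation.

```python
# Minimal sketch (not the authors' exact derivation): pool parameter-induced
# variance and GP predictive (metamodeling) variance into a robust objective
# mu + k * sqrt(total variance). An explicit model-discrepancy term is omitted.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

def simulation(x, p):
    """Cheap stand-in for an expensive simulation: design x, uncertain parameter p."""
    return (x - 2.0) ** 2 + 0.5 * p * x

# Train a GP on a limited number of samples, so metamodeling uncertainty remains.
X_train = np.column_stack([rng.uniform(0.0, 4.0, 12),   # design variable x
                           rng.normal(0.0, 1.0, 12)])   # uncertain parameter p
y_train = np.array([simulation(x, p) for x, p in X_train])
gp = GaussianProcessRegressor(ConstantKernel() * RBF(), normalize_y=True).fit(X_train, y_train)

p_samples = rng.normal(0.0, 1.0, size=500)               # MC draws for p ~ N(0, 1)

def robust_objective(x, k=2.0):
    """mu + k * sigma, where sigma pools parameter-induced and GP predictive variance."""
    X_query = np.column_stack([np.full(p_samples.size, x), p_samples])
    mean, std = gp.predict(X_query, return_std=True)
    mu = mean.mean()
    var_param = mean.var()                                # variance from parameter uncertainty
    var_gp = np.mean(std ** 2)                            # average GP (metamodeling) variance
    return mu + k * np.sqrt(var_param + var_gp)

# Crude search over candidate designs.
candidates = np.linspace(0.0, 4.0, 41)
x_robust = min(candidates, key=robust_objective)
print("robust design:", x_robust)
```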


2017 ◽  
Vol 48 (2) ◽  
pp. 400-430 ◽  
Author(s):  
Adam Slez

Young and Holsteen (YH) introduce a number of tools for evaluating model uncertainty. In so doing, they are careful to differentiate their method from existing forms of model averaging. The fundamental difference lies in the way in which the underlying estimates are weighted. Whereas standard approaches to model averaging assign higher weight to better fitting models, the YH method weights all models equally. As I show, this is a nontrivial distinction, in that the two sets of procedures tend to produce radically different results. Drawing on both simulation and real-world examples, I demonstrate that in failing to distinguish between numerical variation and statistical uncertainty, the procedure proposed by YH will tend to overstate the amount of uncertainty resulting from variation across models. In standard circumstances, the quality of estimates produced using this method will tend to be objectively worse than that of conventional alternatives.
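A toy contrast between the two weighting schemes, assuming a simple OLS setting with a grid of control-variable specifications; the AIC weights stand in for generic fit-based model averaging and are not YH's estimator or Slez's replication.

```python
# Toy illustration of the weighting distinction discussed above: equal weights
# across all model specifications versus fit-based (AIC) weights.
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 3))
y = 1.0 + 0.5 * X[:, 0] + rng.normal(scale=1.0, size=n)    # only x0 truly matters

focal = 0                                                   # coefficient of interest: x0
estimates, aics = [], []
for controls in itertools.chain.from_iterable(
        itertools.combinations([1, 2], r) for r in range(3)):
    cols = [focal, *controls]
    design = sm.add_constant(X[:, cols])
    fit = sm.OLS(y, design).fit()
    estimates.append(fit.params[1])                         # coefficient on x0
    aics.append(fit.aic)

estimates, aics = np.array(estimates), np.array(aics)
equal_w = np.full(len(estimates), 1.0 / len(estimates))     # all models count equally
akaike_w = np.exp(-0.5 * (aics - aics.min()))
akaike_w /= akaike_w.sum()                                  # better-fitting models count more

print("equal-weight average:", estimates @ equal_w)
print("AIC-weighted average:", estimates @ akaike_w)
```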


Author(s):  
Abiola M. Ajetunmobi ◽  
Cameron J. Turner ◽  
Richard H. Crawford

Engineering systems are generally susceptible to parameter uncertainties that influence real-time system performance and long-term system reliability. However, designers and engineers must deliver system solutions that are both optimal and dependable. Robust design techniques, and robust optimization methods in particular, have emerged as promising methodologies for dealing with parameter uncertainties. This research advances a robust optimization approach that exploits gradient information embedded in the proximate NURBS control point clusters inherent in NURBS metamodel representations of the design space. The proximate control point clusters embody the target sensitivity profile and therefore include robust optimal solutions, thus enabling selective optimization within the regions associated with the clusters. This robust optimization framework has been implemented and is demonstrated on unconstrained robust optimization problems from two test functions and on a constrained robust optimization problem drawn from a practical engineering design application.
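A loose sketch of selective optimization within low-sensitivity regions, with the NURBS control-point clustering replaced by a gridded surrogate, finite-difference gradients, and an assumed sensitivity threshold.

```python
# Loose illustration of selective optimization in low-sensitivity regions.
# The NURBS control-point clustering of the paper is replaced by a plain
# gridded surrogate with finite-difference gradients; the threshold is assumed.
import numpy as np

x = np.linspace(-2.0, 2.0, 81)
y = np.linspace(-2.0, 2.0, 81)
XX, YY = np.meshgrid(x, y, indexing="ij")
F = (XX ** 2 + YY - 1.0) ** 2 + 0.1 * np.sin(5.0 * XX)      # surrogate of the objective

dFdx, dFdy = np.gradient(F, x, y)
sensitivity = np.hypot(dFdx, dFdy)                           # local gradient magnitude

target = 0.5                                                 # target sensitivity profile (assumed)
robust_region = sensitivity <= target                        # candidate "robust" cells

# Optimize only within the low-sensitivity region.
F_masked = np.where(robust_region, F, np.inf)
i, j = np.unravel_index(np.argmin(F_masked), F.shape)
print("robust optimum near:", (x[i], y[j]), "objective:", F[i, j])
```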


Author(s):  
Xiangzhong Xie ◽  
René Schenkendorf ◽  
Ulrike Krewer

Model-based design has received considerable attention in the biological and chemical industries over the last two decades. However, the parameter uncertainties of first-principles models are critical in model-based design and have led to the development of robustification concepts. Various strategies have been introduced to solve the robust optimization problem. Most approaches suffer from either unreasonable computational expense or low approximation accuracy. Moreover, they are not rigorous and do not consider robust optimization problems in which parameter correlations and equality constraints exist. In this work, we propose a highly efficient framework for solving robust optimization problems with the so-called point estimation method (PEM). The PEM offers a fair trade-off between computational expense and approximation accuracy and can be easily extended to problems with correlated parameters. From a statistical point of view, moment-based methods are used to approximate robust inequality and equality constraints. We also suggest employing information from global sensitivity analysis to further simplify robust optimization problems with a large number of uncertain parameters. We demonstrate the performance of the proposed framework with two case studies: one designs a heating/cooling profile for the essential part of a continuous production process, and the other optimizes the feeding profile for a fed-batch reactor in the penicillin fermentation process. The results reveal that the proposed approach can be used successfully for complex (bio)chemical problems in model-based design.
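A sketch in the spirit of the PEM, using a sigma-point (unscented-transform style) rule to propagate correlated parameter uncertainty through a constraint and robustify it as mean + k*std <= 0; the weights, the scaling constant, and the example constraint are generic choices, not the paper's exact scheme.

```python
# Sigma-point style moment approximation in the spirit of the point estimation
# method: propagate correlated parameter uncertainty through a constraint g and
# robustify it as mean + k * std <= 0. Weights and kappa are generic choices.
import numpy as np

def sigma_point_moments(func, mean, cov, kappa=1.0):
    """Approximate mean and variance of func(theta) for theta ~ (mean, cov)."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)
    points = [mean] + [mean + L[:, i] for i in range(n)] + [mean - L[:, i] for i in range(n)]
    w0 = kappa / (n + kappa)
    wi = 1.0 / (2.0 * (n + kappa))
    weights = np.array([w0] + [wi] * (2 * n))
    values = np.array([func(p) for p in points])
    m = weights @ values
    v = weights @ (values - m) ** 2
    return m, v

# Example: uncertain, correlated parameters entering a constraint g(x, theta) <= 0.
theta_mean = np.array([1.0, 0.5])
theta_cov = np.array([[0.04, 0.01],
                      [0.01, 0.09]])

def robust_constraint(x, k=2.0):
    g = lambda th: th[0] * x ** 2 + th[1] * x - 2.0
    m, v = sigma_point_moments(g, theta_mean, theta_cov)
    return m + k * np.sqrt(v)            # require robust_constraint(x) <= 0

print(robust_constraint(0.8))
```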


2018 ◽  
Vol 2018 ◽  
pp. 1-14 ◽  
Author(s):  
Ru-Ru Jia ◽  
Xue-Jie Bai

Robust optimization is a powerful and relatively novel methodology for coping with optimization problems in the presence of uncertainty. A key advantage of the robust optimization approach is its computational tractability, which has attracted more and more attention. In this paper, we focus on an ambiguous P-model in which the probability distributions are only partially known. We discuss the robust counterpart (RC) of uncertain linear constraints under two refined uncertainty sets using the robust approach and further derive safe tractable approximations of the chance constraints in the ambiguous P-model. Because of the probability constraints embedded in it, the ambiguous P-model is computationally intractable. The advantage of our approach lies in treating the stochastic uncertainty models implicitly instead of solving them directly. This enables the transformation of the proposed P-model into a tractable deterministic one under the refined uncertainty sets. A numerical example on portfolio selection demonstrates that the ambiguous P-model can help the decision maker determine the optimal investment proportions of various stocks. Sensitivity analyses explore the trade-off between optimization and robustness by adjusting parameter values. A comparison study is conducted to validate the benefit of our ambiguous P-model.
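For reference, the standard robust counterparts of an uncertain linear constraint a^T x <= b under generic box and ellipsoidal uncertainty sets take the following form; the paper's refined sets differ in detail.

```latex
% Standard robust counterparts of the uncertain linear constraint a^T x <= b
% (generic box and ellipsoidal sets; the paper's refined sets differ in detail).
\text{Box: } a_j = a_j^0 + \delta_j \zeta_j,\ |\zeta_j| \le 1
\;\Longrightarrow\;
\sum_j a_j^0 x_j + \sum_j \delta_j |x_j| \le b,
\\[4pt]
\text{Ellipsoid: } a = a^0 + P\zeta,\ \|\zeta\|_2 \le \Omega
\;\Longrightarrow\;
(a^0)^{\top} x + \Omega\,\|P^{\top} x\|_2 \le b.
```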


Author(s):  
L. Eça ◽  
K. Dowding ◽  
P. J. Roache

Abstract The goal of this paper is to summarize and clarify the scope and interpretation of the validation procedure presented in the V&V20-2009 ASME Standard. In V&V20-2009, validation is an assessment of the model error, without regard to whether the assessment satisfies validation requirements; therefore, validation is not considered a pass/fail exercise. The purpose of the validation procedure is the estimation of the accuracy of a mathematical model for specified validation variables (also known as quantities of interest, system responses, or figures of merit) at a specified validation point for cases in which the conditions of the actual experiment are simulated. The proposed procedure can be applied to variables defined by a scalar. For the sake of clarity, the paper reiterates the development and assumptions behind the V&V20-2009 procedure, which requires the experimental values D and simulation values S at the set point and an estimate of the experimental, numerical, and parameter uncertainties. The difference E between S and D is the centre of the interval that should contain the model error (with a certain degree of confidence), and the width of the interval is obtained from the validation uncertainty, which results from the combination of the experimental, numerical, and parameter uncertainties. The paper presents the alternatives for addressing parameter uncertainty and expands upon the interpretation of the final result. The paper also includes two examples demonstrating the application of the V&V20-2009 validation procedure, including one problem from V&V10.1-2012 on solid mechanics.
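The central relations summarized above can be written compactly as follows (with u_num, u_input, and u_D denoting the numerical, parameter, and experimental uncertainties, respectively).

```latex
% Core relations of the V&V20-2009 procedure as summarized above.
E = S - D, \qquad
u_{\mathrm{val}} = \sqrt{u_{\mathrm{num}}^{2} + u_{\mathrm{input}}^{2} + u_{D}^{2}},
\qquad
\delta_{\mathrm{model}} \in \left[\, E - u_{\mathrm{val}},\; E + u_{\mathrm{val}} \,\right]
\ \text{(to the stated degree of confidence).}
```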


2021 ◽  
Vol 52 (1) ◽  
pp. 12-15
Author(s):  
S.V. Nagaraj

This book is on algorithms for network flows. Network flow problems are optimization problems in which, given a flow network, the aim is to construct a flow that respects the capacity constraints of the edges of the network, so that incoming flow equals outgoing flow at every vertex of the network except the designated vertices known as the source and the sink. Network flow algorithms solve many real-world problems. This book is intended to serve graduate students and to act as a reference. The book is also available in eBook (ISBN 9781316952894/US$32.00) and hardback (ISBN 9781107185890/US$99.99) formats. The book has a companion web site, www.networkflowalgs.com, where a pre-publication version of the book can be downloaded gratis.
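For reference, the canonical maximum-flow formulation that underlies the problems described above can be written as follows, with capacities c, source s, and sink t.

```latex
% Canonical maximum-flow formulation: flow f on a network G = (V, E)
% with edge capacities c, source s, and sink t.
\begin{aligned}
\max_{f}\quad & \sum_{(s,v) \in E} f_{sv} - \sum_{(v,s) \in E} f_{vs} \\
\text{s.t.}\quad & 0 \le f_{uv} \le c_{uv} && \forall (u,v) \in E, \\
& \sum_{(u,v) \in E} f_{uv} = \sum_{(v,w) \in E} f_{vw} && \forall v \in V \setminus \{s, t\}.
\end{aligned}
```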


2021 ◽  
pp. 193229682110075
Author(s):  
Rebecca A. Harvey Towers ◽  
Xiaohe Zhang ◽  
Rasoul Yousefi ◽  
Ghazaleh Esmaili ◽  
Liang Wang ◽  
...  

The algorithm for the Dexcom G6 CGM System was enhanced to retain accuracy while reducing the frequency and duration of sensor error. The new algorithm was evaluated by post-processing raw signals collected from the G6 pivotal trials (NCT02880267) and by assessing the difference in data availability after a limited, real-world launch. Accuracy was comparable with the new algorithm: the overall %20/20 was 91.7% before and 91.8% after the algorithm modification, and MARD was unchanged. The mean data gap due to sensor error nearly halved, and total time spent in sensor error decreased by 59%. A limited field launch showed similar results, with a 43% decrease in total time spent in sensor error. Increased data availability may improve the patient experience and the integration of CGM data into insulin delivery systems.
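As a reminder of how the two accuracy metrics above are computed, a small sketch follows; the 80 mg/dL cut-off separating the absolute (within 20 mg/dL) and relative (within 20%) criteria is the commonly used convention and an assumption here, not a detail taken from the study.

```python
# Sketch of the two accuracy metrics named above: MARD and %20/20 agreement.
# The 80 mg/dL cut-off is the commonly used convention, assumed here.
import numpy as np

def mard(cgm, reference):
    """Mean absolute relative difference (%) between CGM and reference values."""
    cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
    return 100.0 * np.mean(np.abs(cgm - reference) / reference)

def pct_20_20(cgm, reference, cutoff=80.0):
    """Percentage of readings within 20 mg/dL (reference <= cutoff) or within 20%."""
    cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
    within = np.where(reference <= cutoff,
                      np.abs(cgm - reference) <= 20.0,
                      np.abs(cgm - reference) <= 0.20 * reference)
    return 100.0 * within.mean()

cgm = [95, 150, 210, 70]
ysi = [100, 160, 190, 65]
print(f"MARD = {mard(cgm, ysi):.1f}%, %20/20 = {pct_20_20(cgm, ysi):.0f}%")
```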


Author(s):  
Stergios Athanasoglou ◽  
Valentina Bosetti ◽  
Laurent Drouet

Abstract We propose a novel framework for the economic assessment of environmental policy. Our main point of departure from existing work is the adoption of a satisficing, as opposed to optimizing, modeling approach. Along these lines, we place primary emphasis on the extent to which different policies meet a set of goals at a specific future date rather than on their performance vis-à-vis some intertemporal objective function. Consistent with the nature of environmental policymaking, our model takes explicit account of model uncertainty. To this end, the decision criterion we propose is an analog of the well-known success-probability criterion adapted to settings characterized by model uncertainty. We apply our criterion to the climate-change context and to the probability distributions constructed by Drouet et al. (2015) linking carbon budgets to future consumption. Insights from computational geometry facilitate computations considerably and allow for the efficient application of the model in high-dimensional settings.
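A stylized sketch of a success-probability style, satisficing evaluation under model uncertainty; the hypothetical models, goals, and worst-case aggregation across models are assumptions for illustration, not the authors' criterion or the Drouet et al. (2015) distributions.

```python
# Stylized sketch: each candidate model supplies a distribution over future
# outcomes, and a policy (carbon budget) is scored by the probability that all
# goals are met. Models, goals, and the worst-case aggregation are assumptions.
import numpy as np

rng = np.random.default_rng(2)

def outcome_samples(policy, model, n=10_000):
    """Hypothetical model: sample (consumption, warming) outcomes for a carbon budget."""
    mean_c, mean_t = model["consumption"](policy), model["warming"](policy)
    return np.column_stack([rng.normal(mean_c, 5.0, n), rng.normal(mean_t, 0.3, n)])

def success_probability(policy, model, goals):
    s = outcome_samples(policy, model)
    return np.mean((s[:, 0] >= goals["min_consumption"]) & (s[:, 1] <= goals["max_warming"]))

models = [  # two hypothetical models linking a carbon budget (GtCO2) to outcomes
    {"consumption": lambda b: 60 + 0.02 * b, "warming": lambda b: 1.2 + 0.0015 * b},
    {"consumption": lambda b: 55 + 0.03 * b, "warming": lambda b: 1.0 + 0.0020 * b},
]
goals = {"min_consumption": 62.0, "max_warming": 2.0}

for budget in (200.0, 400.0, 600.0):
    worst = min(success_probability(budget, m, goals) for m in models)
    print(f"budget {budget:5.0f} GtCO2 -> worst-case success probability {worst:.2f}")
```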

