Convex Relaxations for Global Optimization Under Uncertainty Described by Continuous Random Variables

2017
Author(s):
Yuanxun Shao
Joseph Kirk Scott

This article considers nonconvex global optimization problems subject to uncertainties described by continuous random variables. Such problems arise in chemical process design, renewable energy systems, stochastic model predictive control, and other applications. Here, we restrict our attention to problems with expected-value objectives and no recourse decisions. In principle, such problems can be solved globally using spatial branch-and-bound (B&B). However, B&B requires the ability to bound the optimal objective value on subintervals of the search space, and existing techniques are not generally applicable because expected-value objectives often cannot be written in closed form. To address this, the article presents a new method for computing convex and concave relaxations of nonconvex expected-value functions, which can be used to obtain rigorous bounds for use in B&B. Furthermore, these relaxations obey a second-order pointwise convergence property, which is sufficient for finite termination of B&B under standard assumptions. Empirical results are shown for three simple examples.
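
A minimal sketch of the relax-then-average idea behind such bounds, not the paper's algorithm: if a convex relaxation of the integrand is valid pointwise in the random variable, its expectation is a convex underestimator of the expected-value objective. The integrand, its secant relaxation, and the Monte Carlo quadrature below are all illustrative assumptions.

```python
import numpy as np

def f(x, xi):                  # illustrative nonconvex (concave) integrand
    return -x**2 + xi * x

def f_cv(x, xi, a, b):         # convex relaxation on [a, b]: secant of -x^2
    return -(a + b) * x + a * b + xi * x

rng = np.random.default_rng(0)
xi = rng.normal(0.0, 1.0, 100_000)          # continuous random variable

a, b = 0.5, 2.0                             # current B&B subinterval
xs = np.linspace(a, b, 201)
E_f  = np.array([f(x, xi).mean() for x in xs])        # expected-value objective
E_cv = np.array([f_cv(x, xi, a, b).mean() for x in xs])
assert np.all(E_cv <= E_f + 1e-9)           # relaxation survives the expectation
lower_bound = E_cv.min()                    # bound for this node of the B&B tree
```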

2015
Vol 24 (05)
pp. 1550017
Author(s):
Aderemi Oluyinka Adewumi
Akugbe Martins Arasomwan

This paper presents an improved particle swarm optimization (PSO) technique for global optimization. Many variants of the technique have been proposed in the literature. However, many of these variants share two limiting features, namely static search-space and velocity limits, which restrict their flexibility in obtaining optimal solutions for many optimization problems. Furthermore, the problem of premature convergence persists in many variants despite the introduction of additional parameters, such as inertia weight, and the extra computation they require. This paper proposes an improved PSO algorithm without an inertia weight. In each iteration, the proposed algorithm dynamically adjusts the search space and velocity limits for the swarm: it picks the highest and lowest values across all dimensions of the particles, takes their absolute values, and uses the larger of the two to define a new search range and velocity limits for the next iteration. The efficiency and performance of the proposed algorithm were demonstrated on popular benchmark global optimization problems of low and high dimensions. The results show better convergence speed and precision, stability, robustness, and global search ability when compared with six recent variants of the original algorithm.
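
A compact sketch of our reading of that dynamic range rule, embedded in an inertia-weight-free PSO update. The swarm size, acceleration coefficients, and test function are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def sphere(x):
    return np.sum(x**2, axis=1)

rng = np.random.default_rng(1)
n, d, c1, c2 = 30, 10, 2.0, 2.0
x = rng.uniform(-100.0, 100.0, (n, d))
pbest, pval = x.copy(), sphere(x)
gbest = pbest[pval.argmin()]

for _ in range(200):
    # Dynamic rule (our reading): the larger of |max| and |min| over all
    # particles and dimensions defines a symmetric range and velocity limit.
    r = max(abs(x.max()), abs(x.min()))
    v = (c1 * rng.random((n, d)) * (pbest - x)      # no inertia-weight term
         + c2 * rng.random((n, d)) * (gbest - x))
    v = np.clip(v, -r, r)
    x = np.clip(x + v, -r, r)
    fx = sphere(x)
    improved = fx < pval
    pbest[improved], pval[improved] = x[improved], fx[improved]
    gbest = pbest[pval.argmin()]
```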


2013
Vol 2013
pp. 1-12
Author(s):
Martins Akugbe Arasomwan
Aderemi Oluyinka Adewumi

The linear decreasing inertia weight (LDIW) strategy was introduced to improve the performance of the original particle swarm optimization (PSO). However, the LDIW-PSO algorithm is known to suffer from premature convergence on complex (multipeak) optimization problems, because particles lack enough momentum for exploitation as the algorithm approaches its terminal point. Researchers have tried to address this shortcoming by modifying LDIW-PSO or proposing new PSO variants, some of which have been claimed to outperform LDIW-PSO. The major goal of this paper is to establish experimentally that LDIW-PSO is very efficient if its parameters are properly set. First, an experiment was conducted to obtain a percentage of the search-space limits from which to compute the particle velocity limits in LDIW-PSO, based on commonly used benchmark global optimization problems. Second, using the experimentally obtained values, five well-known benchmark optimization problems were used to show the outstanding performance of LDIW-PSO over some of its competitors that have claimed superiority over it. Two other recent PSO variants with different inertia weight strategies were also compared with LDIW-PSO, with the latter outperforming both in the simulation experiments conducted.
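
For reference, the standard LDIW schedule and a velocity limit set as a fraction of the search range. The 0.9/0.4 endpoints are the values commonly used in the PSO literature; the fraction gamma stands in for the percentage the paper determines experimentally.

```python
def ldiw(t, t_max, w_start=0.9, w_end=0.4):
    """Linear decreasing inertia weight: w_start at t = 0, w_end at t = t_max."""
    return w_start - (w_start - w_end) * t / t_max

def velocity_limit(x_min, x_max, gamma):
    """Velocity clamp as a fraction gamma of the search-space range."""
    return gamma * (x_max - x_min)
```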


Author(s):  
Liqun Wang
Songqing Shan
G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. It is based on a novel mode-pursuing sampling (MPS) method that systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can be used as a standalone global optimization method for inexpensive problems as well. Limitations of the method are also identified and discussed.
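
A simplified caricature of one sampling step in the spirit of mode pursuing. The real method draws from a cheap approximation of the expensive black-box function rather than calling it directly, and its weighting scheme differs; the inverse-value weighting here is an illustrative assumption.

```python
import numpy as np

def mps_step(f_cheap, n_new, lo, hi, dim, rng):
    """Draw many uniform candidates, then resample n_new of them with
    probability decreasing in the (approximate) objective value, so new
    points cluster near the current mode yet still cover the whole box."""
    cand = rng.uniform(lo, hi, (50 * n_new, dim))
    g = f_cheap(cand)
    w = (g.max() - g) + 1e-12          # smaller objective => larger weight
    idx = rng.choice(len(cand), size=n_new, replace=False, p=w / w.sum())
    return cand[idx]
```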


2021
Vol 12 (1)
pp. 157-184
Author(s):
Waqas Haider Bangyal
Jamil Ahmad
Hafiz Tayyab Rauf

The bat algorithm (BA) is a population-based stochastic search technique that has been widely used to solve diverse kinds of optimization problems. Population initialization is an ongoing research problem in evolutionary computing: an appropriate initialization helps the algorithm explore the search space effectively. BA suffers from premature convergence, which can prevent it from finding the true global optimum. Low-discrepancy sequences are less random than pseudo-random ones, but they cover the search space more evenly, which makes them powerful for computational approaches. In this work, new population initialization approaches based on the Halton (BA-HA), Sobol (BA-SO), and Torus (BA-TO) sequences are proposed, which help the bats avoid premature convergence. The proposed approaches are examined on standard benchmark functions, and the simulation results are compared with those of the standard BA initialized with a uniform distribution. The results show that a substantial enhancement in the performance of standard BA can be attained by switching the random number sequences to low-discrepancy sequences.
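
A sketch of low-discrepancy initialization using SciPy's quasi-Monte Carlo module. The bounds, dimension, and population size are illustrative; the Torus sequence is not available in SciPy, so only Halton and Sobol are shown.

```python
import numpy as np
from scipy.stats import qmc

n_bats, dim = 32, 10          # a power of two keeps the Sobol set balanced
lb, ub = -10.0, 10.0

# standard BA: i.i.d. uniform initialization
uniform_pop = np.random.default_rng(0).uniform(lb, ub, (n_bats, dim))

# BA-HA / BA-SO style initialization (sketch)
halton_pop = qmc.scale(qmc.Halton(d=dim, seed=0).random(n_bats), lb, ub)
sobol_pop = qmc.scale(qmc.Sobol(d=dim, seed=0).random(n_bats), lb, ub)
```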


2016
Vol 138 (11)
Author(s):
Piyush Pandita
Ilias Bilionis
Jitesh Panchal

Design optimization under uncertainty is notoriously difficult when the objective function is expensive to evaluate. State-of-the-art techniques, e.g., stochastic optimization or sample average approximation, fail to learn exploitable patterns from collected data and require many objective function evaluations. There is a need for techniques that alleviate the high cost of information acquisition and select sequential simulations optimally. In the field of deterministic single-objective unconstrained global optimization, the Bayesian global optimization (BGO) approach has been relatively successful in addressing the information acquisition problem. BGO builds a probabilistic surrogate of the expensive objective function and uses it to define an information acquisition function (IAF) that quantifies the merit of making new objective evaluations. In this work, we reformulate the expected improvement (EI) IAF to filter out parametric and measurement uncertainties. We bypass the curse of dimensionality, since the method does not require learning the response surface as a function of the stochastic parameters. To increase the method's robustness, we employ a fully Bayesian interpretation of Gaussian processes (GPs) by constructing a particle approximation of the posterior of their hyperparameters using adaptive Markov chain Monte Carlo (MCMC). Our approach also quantifies the epistemic uncertainty on the location of the optimum and the optimal value, as induced by the limited number of objective evaluations used in obtaining them. We verify and validate our approach by solving two synthetic optimization problems under uncertainty and demonstrate it by solving the oil-well placement problem (OWPP) with uncertainties in the permeability field and the oil price time series.
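
For context, the deterministic EI baseline that such reformulations start from, for minimization under a GP posterior. This is the textbook formula, not the paper's uncertainty-filtered version.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, y_best):
    """EI(x) = (y_best - mu) * Phi(z) + sigma * phi(z), z = (y_best - mu) / sigma,
    given the GP posterior mean mu and standard deviation sigma at candidates."""
    sigma = np.maximum(sigma, 1e-12)   # guard against zero predictive variance
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```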


Geophysics
2008
Vol 73 (5)
pp. R71-R82
Author(s):
Somanath Misra
Mauricio D. Sacchi

Linearized-inversion methods often have the disadvantage of dependence on the initial model. When the initial model is far from the global minimum, optimization is likely to converge to a local minimum. Optimization problems involving nonlinear relationships between data and model are likely to have more than one local minimum. Such problems are solved effectively by using global-optimization methods, which are exhaustive search techniques and hence are computationally expensive. As model dimensionality increases, the search space becomes large, making the algorithm very slow in convergence. We propose a new approach to the global-optimization scheme that incorporates a priori knowledge in the algorithm by preconditioning the model space using edge-preserving smoothing operators. Such nonlinear operators acting on the model space favorably precondition or bias the model space for blocky solutions. This approach not only speeds convergence but also retrieves blocky solutions. We apply the algorithm to estimate the layer parameters from the amplitude-variation-with-offset data. The results indicate that global optimization with model-space-preconditioning operators provides faster convergence and yields a more accurate blocky-model solution that is consistent with a priori information.
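
A 1-D sketch of the kind of edge-preserving smoothing operator such preconditioning relies on. The window length and the smallest-variance rule follow the standard construction; the details of the paper's operator may differ.

```python
import numpy as np

def eps_1d(m, w=5):
    """1-D edge-preserving smoothing: for each sample, consider every
    length-w window containing it and replace the sample with the mean of
    the window having the smallest variance. This smooths within layers
    without smearing sharp boundaries, biasing toward blocky solutions."""
    n, out = len(m), np.empty(len(m))
    for i in range(n):
        best_var, best_mean = np.inf, m[i]
        for s in range(max(0, i - w + 1), min(i, n - w) + 1):
            win = m[s:s + w]
            if win.var() < best_var:
                best_var, best_mean = win.var(), win.mean()
        out[i] = best_mean
    return out

# usage sketch: a noisy blocky model stays blocky after smoothing
m = np.repeat([2.0, 4.5, 3.0], 30) + 0.1 * np.random.default_rng(3).standard_normal(90)
m_smooth = eps_1d(m, w=7)
```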


2021
Vol 18 (6)
pp. 7076-7109
Author(s):
Shuang Wang
Heming Jia
Qingxin Liu
Rong Zheng
...  

This paper introduces an improved hybrid Aquila Optimizer (AO) and Harris Hawks Optimization (HHO) algorithm, namely IHAOHHO, to enhance searching performance on global optimization problems. In IHAOHHO, the valuable exploration and exploitation capabilities of AO and HHO are first retained; representative-based hunting (RH) and opposition-based learning (OBL) strategies are then added in the exploration and exploitation phases to improve the diversity of the search space and the local-optima avoidance capability of the algorithm, respectively. To verify its optimization performance and practicality, the proposed algorithm is comprehensively analyzed on standard and CEC2017 benchmark functions and three engineering design problems. The experimental results show that the proposed IHAOHHO has superior global search performance and faster convergence than the basic AO and HHO and selected state-of-the-art metaheuristic algorithms.
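
A minimal sketch of the opposition-based learning step. The box bounds and the greedy keep-the-fitter rule are the usual OBL construction; the RH strategy is specific to the paper and not reproduced here.

```python
import numpy as np

def obl_step(pop, lb, ub, f):
    """Opposition-based learning: reflect each candidate through the box
    centre and keep whichever of the pair has the better fitness."""
    opposite = lb + ub - pop
    keep = f(opposite) < f(pop)
    pop[keep] = opposite[keep]
    return pop

# usage sketch on a vectorized objective
f = lambda x: np.sum(x**2, axis=1)
rng = np.random.default_rng(2)
pop = rng.uniform(-5.0, 5.0, (30, 10))
pop = obl_step(pop, -5.0, 5.0, f)
```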


Author(s):  
Piyush Pandita
Ilias Bilionis
Jitesh Panchal

Design optimization under uncertainty is notoriously difficult when the objective function is expensive to evaluate. State-of-the-art techniques, e.g., stochastic optimization or sample average approximation, fail to learn exploitable patterns from collected data and, as a result, tend to require an excessive number of objective function evaluations. There is a need for techniques that alleviate the high cost of information acquisition and select sequential simulations in an optimal way. In the field of deterministic single-objective unconstrained global optimization, the Bayesian global optimization (BGO) approach has been relatively successful in addressing the information acquisition problem. BGO builds a probabilistic surrogate of the expensive objective function and uses it to define an information acquisition function (IAF) whose role is to quantify the merit of making new objective evaluations. Specifically, BGO iterates between making the observations with the largest expected IAF and rebuilding the probabilistic surrogate, until a convergence criterion is met. In this work, we extend the expected improvement (EI) IAF to the case of design optimization under uncertainty. This involves a reformulation of the EI policy that is able to filter out parametric and measurement uncertainties. We bypass the curse of dimensionality, since the method does not require learning the response surface as a function of the stochastic parameters. To increase the robustness of our approach in the low-sample regime, we employ a fully Bayesian interpretation of Gaussian processes by constructing a particle approximation of the posterior of their hyperparameters using adaptive Markov chain Monte Carlo. An added benefit of our approach is that it can quantify the epistemic uncertainty on the location of the optimum and the optimal value, as induced by the limited number of objective evaluations used in obtaining them. We verify and validate our approach by solving two synthetic optimization problems under uncertainty. We demonstrate it by solving a challenging engineering problem: the oil-well placement problem with uncertainties in the permeability field and the oil price time series.
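
A sketch of the fully Bayesian treatment of the acquisition: averaging EI over hyperparameter particles. The particle list stands in for draws produced by adaptive MCMC; the posterior mean/std callables per particle are assumptions for illustration.

```python
import numpy as np
from scipy.stats import norm

def ei(mu, sigma, y_best):
    sigma = np.maximum(sigma, 1e-12)
    z = (y_best - mu) / sigma
    return (y_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def fully_bayesian_ei(particles, x, y_best):
    """Average EI over GP hyperparameter particles; each particle supplies
    posterior mean/std callables conditioned on its hyperparameters."""
    return np.mean([ei(mu(x), sd(x), y_best) for mu, sd in particles], axis=0)
```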


2019
Vol 2019
pp. 1-17
Author(s):
Meiji Cui
Li Li
Miaojing Shi

Biogeography-based optimization (BBO), a recently proposed metaheuristic algorithm, has been successfully applied to many optimization problems due to its simplicity and efficiency. However, BBO is sensitive to the curse of dimensionality; its performance degrades rapidly as the dimensionality of the search space increases. In this paper, a selective migration operator is proposed to scale up the performance of BBO, and we name the result selective BBO (SBBO). The differential migration operator is selected heuristically to explore the global area as far as possible, whilst the normally distributed migration operator is chosen to exploit the local area. By means of this heuristic selection, an appropriate migration operator can be used to search for the global optimum efficiently. Moreover, the strategy of cooperative coevolution (CC) is adopted to solve large-scale global optimization problems (LSOPs). To deal with the imbalanced contributions of subgroups to the whole solution in the context of CC, a more efficient computing-resource allocation is proposed. Extensive experiments are conducted on the CEC 2010 benchmark suite for large-scale global optimization, and the results show the effectiveness and efficiency of SBBO compared with BBO variants and other representative algorithms for LSOPs. The results also confirm that the proposed computing-resource allocation is vital to large-scale optimization within a limited computation budget.
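
A stand-in for the selective migration operator. The paper selects between the two moves heuristically; the probability-based switch, step scale, and coefficients below are illustrative assumptions only.

```python
import numpy as np

def selective_migration(x_i, x_j, x_k, rng, p_explore=0.5, sigma=0.1):
    """Choose between a differential-style migration (global exploration)
    and a normally perturbed copy of the emigrating solution (exploitation)."""
    if rng.random() < p_explore:
        return x_i + rng.random() * (x_j - x_k)           # differential migration
    return x_j + sigma * rng.standard_normal(x_j.shape)   # normal migration
```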

