Global optimization based on active preference learning with radial basis functions

2020
Author(s): Alberto Bemporad, Dario Piga

Abstract This paper proposes a method for solving optimization problems in which the decision maker cannot evaluate the objective function, but rather can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial basis function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on the existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate, based on one of two criteria: minimize a combination of the surrogate and an inverse distance weighting function, to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive: within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to solving a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
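
As an illustration of the surrogate-fitting step, the sketch below fits a Gaussian RBF surrogate to pairwise preferences via linear programming with SciPy. This is a minimal sketch under assumed kernel, margin, and regularization choices, not the paper's exact formulation.

```python
# Minimal sketch: fit an RBF surrogate to pairwise preferences by LP.
# gamma (kernel width), sigma (preference margin), and reg (L1 weight on
# the coefficients) are illustrative choices, not the paper's.
import numpy as np
from scipy.optimize import linprog

def fit_preference_rbf(X, prefs, gamma=1.0, sigma=0.1, reg=1e-3):
    """X: (N, d) sampled decision vectors.
    prefs: list of (i, j) pairs meaning x_i was preferred to x_j."""
    N = len(X)
    # Gaussian RBF kernel matrix Phi[k, l] = exp(-gamma * ||x_k - x_l||^2)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-gamma * d2)
    M = len(prefs)
    # Variables: beta+ (N), beta- (N), slacks eps (M), all nonnegative.
    c = np.concatenate([reg * np.ones(2 * N), np.ones(M)])
    A_ub = np.zeros((M, 2 * N + M))
    b_ub = np.zeros(M)
    for k, (i, j) in enumerate(prefs):
        row = Phi[i] - Phi[j]            # f_hat(x_i) - f_hat(x_j)
        A_ub[k, :N] = row                # beta+ part
        A_ub[k, N:2 * N] = -row          # beta- part
        A_ub[k, 2 * N + k] = -1.0        # minus slack
        b_ub[k] = -sigma                 # enforce margin: f_hat(x_i) <= f_hat(x_j) - sigma + eps_k
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * N + M))
    beta = res.x[:N] - res.x[N:2 * N]
    # Return the fitted surrogate as a callable.
    return lambda x: np.exp(-gamma * ((X - x) ** 2).sum(-1)) @ beta
```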

2020
Vol 77 (2)
pp. 571-595
Author(s): Alberto Bemporad

Abstract Global optimization problems whose objective function is expensive to evaluate can be solved effectively by recursively fitting a surrogate function to function samples and minimizing an acquisition function to generate new samples. The acquisition step trades off between seeking a new optimization vector where the surrogate is minimal (exploitation of the surrogate) and looking for regions of the feasible space that have not yet been visited and that may potentially contain better values of the objective function (exploration of the feasible space). This paper proposes a new global optimization algorithm that uses inverse distance weighting (IDW) and radial basis functions (RBFs) to construct the acquisition function. Arbitrary constraints that are simple to evaluate can also easily be taken into account. Compared to Bayesian optimization, the proposed algorithm, which we call GLIS (GLobal minimum using Inverse distance weighting and Surrogate radial basis functions), is competitive and computationally lighter, as we show on a set of benchmark global optimization and hyperparameter tuning problems. MATLAB and Python implementations of GLIS are available at http://cse.lab.imtlucca.it/~bemporad/glis.
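
A minimal sketch of such an acquisition function, combining an RBF surrogate with an IDW exploration bonus: the exploration term below follows the inverse-distance-weighting idea, and delta is an illustrative exploration weight rather than the paper's calibrated one.

```python
# Sketch: acquisition = surrogate (exploitation) - delta * IDW term (exploration).
import numpy as np

def idw_exploration(x, X):
    """IDW exploration bonus: zero at visited samples X, large far from them."""
    d2 = ((X - x) ** 2).sum(-1)
    if np.any(d2 == 0):
        return 0.0                       # already-sampled points get no bonus
    w = np.exp(-d2) / d2                 # inverse-distance weights
    return (2 / np.pi) * np.arctan(1.0 / w.sum())

def acquisition(x, X, rbf_surrogate, delta=1.0):
    """Minimize this to pick the next sample: trade off surrogate value
    against exploration of regions far from the visited samples X."""
    return rbf_surrogate(x) - delta * idw_exploration(x, X)
```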


2018
Vol 8 (10)
pp. 1945
Author(s): Tarik Eltaeib, Ausif Mahmood

Differential evolution (DE) has been used extensively in optimization studies since its development in 1995 because of its reputation as an effective global optimizer. DE is a population-based metaheuristic technique that evolves a population of numerical vectors to solve optimization problems. DE strategies have a significant impact on DE performance and play a vital role in achieving stochastic global optimization. However, DE is highly dependent on its control parameters, and in practice the fine-tuning of these parameters is not always easy. Here, we discuss the improvements and developments that have been made to DE algorithms. In particular, we present a state-of-the-art survey of the literature on DE and its recent advances, such as the development of adaptive, self-adaptive, and hybrid techniques.
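
As a reference point for the strategies the survey discusses, a textbook DE/rand/1/bin loop looks as follows; F (differential weight) and CR (crossover rate) are the control parameters whose tuning the text highlights. This is a generic sketch, not any specific variant from the survey.

```python
# Canonical DE/rand/1/bin: rand-base mutation, binomial crossover, greedy selection.
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    d = len(lo)
    pop = lo + rng.random((pop_size, d)) * (hi - lo)
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            # pick three distinct individuals, all different from i
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)     # DE/rand/1 mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True                 # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])       # binomial crossover
            ft = f(trial)
            if ft < fit[i]:                               # greedy selection
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]
```

For example, `differential_evolution(lambda x: (x ** 2).sum(), [(-5, 5)] * 3)` drives the population toward the origin.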


2011
Vol 08 (03)
pp. 535-544
Author(s): Djalil Boudjehem, Badreddine Boudjehem, Abdenour Boukaache

In this paper, we propose an idea that makes global optimization an easier and lower-cost task. The main idea is to reduce the dimension of the optimization problem at hand to a one-dimensional one using variable coding. At this level, the algorithm looks for the global optimum of a one-dimensional cost function. The new algorithm is able to avoid local optima, reduces the number of evaluations, and improves the speed of convergence. The method is suitable for functions that have many extrema. Our algorithm can determine a narrow region around the global optimum in a very restricted time, based on stochastic tests and an adaptive partition of the search space. Illustrative examples are presented to show the efficiency of the proposed idea. The algorithm was able to locate the global optimum even when the objective function had a large number of optima.
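
The abstract does not detail the variable-coding scheme, so the following is a purely hypothetical illustration of how an n-dimensional search can be collapsed into a one-dimensional one: the sketch de-interleaves the digits of a single integer across the coordinates, so that a 1-D optimizer can search over that integer alone.

```python
# Hypothetical digit-interleaving coding (not the paper's scheme): a bijection
# between integers t in [0, base**(dim*digits)) and a grid in [0, 1)**dim.
import numpy as np

def decode_1d(t, dim, digits=8, base=10):
    """Map the single integer t to a point in [0, 1)**dim by distributing
    its base-`base` digits across the dim coordinates."""
    coords = np.zeros(dim)
    for _ in range(digits):
        for k in range(dim):
            coords[k] = coords[k] * base + (t % base)  # give next digit to axis k
            t //= base
    return coords / float(base) ** digits  # each coordinate now holds `digits` digits
```

A one-dimensional optimizer can then minimize `lambda t: f(decode_1d(int(t), dim))`, at the cost of making the 1-D landscape highly multimodal.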


Author(s): J. Gu, G. Y. Li, Z. Dong

Metamodeling techniques are increasingly used to solve computation-intensive design optimization problems today. In this work, the issue of automatically identifying appropriate metamodeling techniques in global optimization is addressed. A generic new hybrid metamodel-based global optimization method, particularly suitable for design problems involving computation-intensive, black-box analyses and simulations, is introduced. The method employs three representative metamodels concurrently in the search process and selects sample data points adaptively, according to the values calculated using the three metamodels, to improve modeling accuracy. The global optimum is identified when the metamodels become reasonably accurate. The new method is tested on various benchmark global optimization problems and applied to a real industrial design optimization problem involving vehicle crash simulation, demonstrating the superior performance of the new algorithm over existing search methods. Present limitations of the proposed method are also discussed.
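
The abstract does not name the three metamodels; the sketch below illustrates the concurrent-metamodel idea with three common choices assumed here for illustration (RBF interpolation, a Gaussian process as a Kriging stand-in, and quadratic polynomial regression), each nominating its own predicted minimizer as the next sample.

```python
# Sketch: fit three metamodels to the same data and let each propose a point.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def propose_points(X, y, candidates):
    """X: (n, d) evaluated designs, y: (n,) responses, candidates: (m, d).
    Returns one proposed design per metamodel."""
    models = [
        RBFInterpolator(X, y),                                    # RBF metamodel
        GaussianProcessRegressor().fit(X, y),                     # Kriging-like GP
        make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, y),
    ]
    proposals = []
    for m in models:
        pred = m(candidates) if isinstance(m, RBFInterpolator) else m.predict(candidates)
        proposals.append(candidates[np.argmin(pred)])             # each model's minimizer
    return np.array(proposals)  # evaluate the expensive simulation at these next
```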


Author(s): Liqun Wang, Songqing Shan, G. Gary Wang

The presence of black-box functions in engineering design, which are usually computation-intensive, demands efficient global optimization methods. This work proposes a new global optimization method for black-box functions. The method is based on a novel mode-pursuing sampling (MPS) scheme that systematically generates more sample points in the neighborhood of the function mode while statistically covering the entire search space. Quadratic regression is performed to detect the region containing the global optimum. The sampling and detection process iterates until the global optimum is obtained. Through intensive testing, this method is found to be effective, efficient, robust, and applicable to both continuous and discontinuous functions. It supports simultaneous computation and applies to both unconstrained and constrained optimization problems. Because it does not call any existing global optimization tool, it can also be used as a standalone global optimization method for inexpensive problems. Limitations of the method are also identified and discussed.
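
A rough sketch of the mode-pursuing sampling idea follows; the guidance function g below is an illustrative choice, not the paper's exact construction.

```python
# Sketch: draw candidates over the whole domain, then resample them with
# weights favoring low (cheap or predicted) function values, so new points
# cluster near the current mode while still covering the search space.
import numpy as np

def mode_pursuing_sample(f_hat, bounds, n_candidates=1000, n_draw=10, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    cand = lo + rng.random((n_candidates, len(lo))) * (hi - lo)   # space-filling pool
    vals = np.array([f_hat(x) for x in cand])
    g = vals.max() - vals                    # guidance: highest weight at the mode
    p = g / g.sum() if g.sum() > 0 else np.full(n_candidates, 1 / n_candidates)
    idx = rng.choice(n_candidates, size=n_draw, replace=False, p=p)
    return cand[idx]                         # points to evaluate / regress on next
```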


2016
Vol 138 (11)
Author(s): Piyush Pandita, Ilias Bilionis, Jitesh Panchal

Design optimization under uncertainty is notoriously difficult when the objective function is expensive to evaluate. State-of-the-art techniques, e.g., stochastic optimization or sample average approximation, fail to learn exploitable patterns from collected data and require many objective function evaluations. There is a need for techniques that alleviate the high cost of information acquisition and select sequential simulations optimally. In the field of deterministic single-objective unconstrained global optimization, the Bayesian global optimization (BGO) approach has been relatively successful in addressing the information acquisition problem. BGO builds a probabilistic surrogate of the expensive objective function and uses it to define an information acquisition function (IAF) that quantifies the merit of making new objective evaluations. In this work, we reformulate the expected improvement (EI) IAF to filter out parametric and measurement uncertainties. We bypass the curse of dimensionality, since the method does not require learning the response surface as a function of the stochastic parameters. To increase the method's robustness, we employ a fully Bayesian interpretation of Gaussian processes (GPs), constructing a particle approximation of the posterior of their hyperparameters using adaptive Markov chain Monte Carlo (MCMC). Our approach also quantifies the epistemic uncertainty in the location of the optimum and the optimal value, as induced by the limited number of objective evaluations used in obtaining it. We verify and validate our approach by solving two synthetic optimization problems under uncertainty, and we demonstrate it by solving the oil-well placement problem (OWPP) with uncertainties in the permeability field and the oil price time series.
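
For context, the standard deterministic EI criterion that the paper's reformulation starts from is, for a GP with posterior mean \(\mu(x)\), posterior standard deviation \(\sigma(x)\), and current best observed value \(f^*\),

\[
\mathrm{EI}(x) \;=\; \mathbb{E}\big[\max\{f^* - f(x),\, 0\}\big]
\;=\; \big(f^* - \mu(x)\big)\,\Phi(z) \;+\; \sigma(x)\,\phi(z),
\qquad z = \frac{f^* - \mu(x)}{\sigma(x)},
\]

where \(\Phi\) and \(\phi\) denote the standard normal CDF and PDF. The paper's contribution is to reformulate this quantity so that parametric and measurement uncertainties are filtered out.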


2018
Vol 2018
pp. 1-11
Author(s): Wei Shao, Guangbao Guo

Simulated annealing is a widely used algorithm for global optimization problems in computational chemistry and industrial engineering. However, without a logarithmic cooling schedule, simulated annealing cannot always reach the global optimum. In this study, we propose a new stochastic optimization algorithm, simulated annealing based on the multiple-try Metropolis method, which combines simulated annealing with the multiple-try Metropolis algorithm. The proposed algorithm works with a rapidly decreasing cooling schedule while still guaranteeing global optimum values. Experiments on simulated and real data, including a normal mixture model and a nonlinear Bayesian model, indicate that the proposed algorithm can significantly outperform other approximate algorithms, including simulated annealing and the quasi-Newton method.
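
A compact sketch of one multiple-try Metropolis step at temperature T, as it could be embedded in a simulated-annealing loop, is shown below; the symmetric Gaussian proposal and the weight function w(x) = exp(-f(x)/T) are illustrative assumptions, not the paper's exact choices.

```python
# Sketch of a multiple-try Metropolis step with symmetric proposals.
import numpy as np

def mtm_step(f, x, T, k=5, step=0.5, rng=np.random.default_rng()):
    w = lambda z: np.exp(-f(z) / T)                     # Boltzmann weight at temperature T
    # Draw k trial points around x and select one by importance weight.
    trials = x + step * rng.standard_normal((k, x.size))
    wy = np.array([w(t) for t in trials])
    if wy.sum() == 0:
        return x                                        # all trials numerically rejected
    y = trials[rng.choice(k, p=wy / wy.sum())]
    # Draw k-1 reference points around y (plus x itself) for the balance term.
    refs = y + step * rng.standard_normal((k - 1, x.size))
    wx = np.array([w(r) for r in refs]).sum() + w(x)
    accept = min(1.0, wy.sum() / wx)                    # generalized MH acceptance
    return y if rng.random() < accept else x
```

In the annealing loop, T is decreased between calls; using several trials per step is what lets the chain keep moving under a fast cooling schedule.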


2019
Vol 23 (1)
pp. 351-369
Author(s): Guillaume Pirot, Tipaluck Krityakierne, David Ginsbourger, Philippe Renard

Abstract. Contaminant source localization problems require efficient and robust methods that can account for geological heterogeneities and accommodate relatively small data sets of noisy observations. As realism demands high-fidelity simulations, computation costs call for global optimization algorithms under parsimonious evaluation budgets. Bayesian optimization approaches are well adapted to such settings, as they allow the exploration of parameter spaces in a principled way so as to iteratively locate the global optimum while maintaining an approximation of the objective function with an instrumental quantification of prediction uncertainty. Here, we adapt a Bayesian optimization approach to localize a contaminant source in a discretized spatial domain. We thus demonstrate the potential of such a method for hydrogeological applications and also provide test cases for the optimization community. The localization problem is illustrated for cases where the geology is assumed to be perfectly known. Two 2-D synthetic cases that display sharp hydraulic conductivity contrasts and specific connectivity patterns are investigated. These cases generate highly nonlinear objective functions with multiple local minima. A derivative-free global optimization algorithm relying on a Gaussian process model and on the expected improvement criterion is used to efficiently locate the minimum of the objective functions, which corresponds to the contaminant source location. Even though the concentration measurements contain a significant level of proportional noise, the algorithm efficiently localizes the contaminant source. The variations of the objective function are essentially driven by the geology, followed by the design of the monitoring well network. The data and scripts used to generate the objective functions are shared to favor reproducible research. This contribution is important because the functions present multiple local minima and are inspired by a practical field application. Sharing these complex objective functions provides a source of test cases for global optimization benchmarks and should help with designing new and efficient methods to solve this type of problem.
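
A generic sketch of one iteration of such a GP/expected-improvement loop is given below, using scikit-learn's GP for illustration; the paper's specific kernel choices and experimental setup are not reproduced here.

```python
# Sketch: one Bayesian optimization step with a GP surrogate and EI acquisition,
# maximized over a finite candidate set.
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def bo_step(objective, X, y, candidates):
    """X: (n, d) evaluated points, y: (n,) noisy objective values,
    candidates: (m, d) locations where EI is scored."""
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sd = gp.predict(candidates, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sd, 1e-12)
    ei = (best - mu) * norm.cdf(z) + sd * norm.pdf(z)   # expected improvement
    x_next = candidates[np.argmax(ei)]                  # most promising location
    return np.vstack([X, x_next]), np.append(y, objective(x_next))
```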


2012
Vol 2012
pp. 1-36
Author(s): Jui-Yu Wu

This work presents a hybrid of a real-coded genetic algorithm with particle swarm optimization (RGA-PSO) and a hybrid of an artificial immune algorithm with PSO (AIA-PSO) for solving 13 constrained global optimization (CGO) problems, including six nonlinear programming and seven generalized polynomial programming problems. The external RGA and AIA are used to optimize the constriction coefficient, cognitive parameter, social parameter, penalty parameter, and mutation probability of an internal PSO algorithm; the CGO problems are then solved using that internal PSO algorithm. The performances of the proposed RGA-PSO and AIA-PSO algorithms are evaluated on the 13 CGO problems, and the numerical results are compared with those obtained using published individual GA and AIA approaches. Experimental results indicate that the proposed RGA-PSO and AIA-PSO algorithms converge to the global optimum of a CGO problem, and that the optimal parameter settings of the internal PSO algorithm can be obtained using the external RGA and AIA approaches. The proposed RGA-PSO and AIA-PSO algorithms also outperform several published individual GA and AIA approaches, making them highly promising stochastic global optimization methods for CGO problems.
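
For context, the inner PSO update into which these tuned parameters enter is the standard constriction-coefficient form; the sketch below uses the usual Clerc-Kennedy values (chi = 0.729, c1 = c2 = 2.05) as illustrative defaults, not the values found by the external RGA/AIA layers.

```python
# Sketch: one PSO velocity/position update with constriction coefficient chi
# and cognitive/social parameters c1, c2.
import numpy as np

def pso_update(x, v, pbest, gbest, chi=0.729, c1=2.05, c2=2.05,
               rng=np.random.default_rng()):
    """x, v: particle position and velocity; pbest, gbest: personal and
    swarm-best positions. Returns the updated (x, v)."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
    return x + v_new, v_new
```

The constriction coefficient damps the velocity so the swarm contracts onto promising regions instead of diverging, which is why it is one of the parameters worth tuning externally.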


2010
Vol 132 (6)
Author(s): Yen-Chih Huang, Kuei-Yuan Chan

Design optimization problems under random uncertainties are commonly formulated with constraints in probabilistic form. This formulation, also referred to as reliability-based design optimization (RBDO), has gained extensive attention in recent years. Most researchers assume that reliability levels are given, based on past experience or other design considerations, without exploring the constrained space. Inappropriate target reliability levels might therefore be assigned, resulting in either an empty probabilistic feasible space or underestimated performance. In this research, we investigate the maximal reliability within a probabilistic constrained space using a modified efficient global optimization (EGO) algorithm. By constructing and improving Kriging models iteratively, EGO can obtain the global optimum of a possibly disconnected feasible space at high reliability levels. An infill sampling criterion (ISC) is proposed that places added samples on the constraint boundaries, improving the accuracy of probabilistic constraint evaluations via Monte Carlo simulation. This limit-state ISC is combined with the existing ISC to form a heuristic approach that efficiently improves the Kriging models. For optimization problems with expensive functions and a disconnected feasible space, such as the maximal-reliability problems in RBDO, the proposed approach finds the optimum more efficiently than existing gradient-based and direct search methods. Several examples are used to demonstrate the proposed methodology.
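
A hedged sketch of a boundary-targeting infill score in the spirit of the proposed limit-state ISC is shown below; it implements the well-known expected feasibility function (favoring candidates whose Kriging prediction of the limit state g is near zero and uncertain), not necessarily the paper's exact criterion.

```python
# Sketch: expected feasibility function (EFF) scores from the Kriging
# mean mu and standard deviation sd of the limit state g at candidate points.
import numpy as np
from scipy.stats import norm

def boundary_infill(mu, sd, eps_factor=2.0):
    """mu, sd: arrays of Kriging mean/std of g; higher EFF = closer to the
    g = 0 boundary with more uncertainty, hence a better infill sample."""
    sd = np.maximum(sd, 1e-12)
    eps = eps_factor * sd                       # half-width of the band around g = 0
    zp, zm, z0 = (eps - mu) / sd, (-eps - mu) / sd, -mu / sd
    eff = (mu * (2 * norm.cdf(z0) - norm.cdf(zm) - norm.cdf(zp))
           - sd * (2 * norm.pdf(z0) - norm.pdf(zm) - norm.pdf(zp))
           + eps * (norm.cdf(zp) - norm.cdf(zm)))
    return eff                                  # pick np.argmax(eff) as the next sample
```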

