Internal modelling of objective functions for global optimization

1986 ◽  
Vol 51 (2) ◽  
pp. 345-353 ◽  
Author(s):  
I. P. Schagen


2018 ◽  
Vol 26 (4) ◽  
pp. 569-596 ◽  
Author(s):  
Yuping Wang ◽  
Haiyan Liu ◽  
Fei Wei ◽  
Tingting Zong ◽  
Xiaodong Li

For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy: the problem is decomposed into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has shown promise in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., that an analytical model of the objective function is unavailable), and they attempt to learn a variable grouping that allows for a better decomposition of the problem. In such cases, these methods make no direct use of the formula of the objective function. However, many real-world problems are white-box problems; that is, the formulas of their objective functions are known a priori. These formulas provide rich information that can be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). FBG classifies these operations into two classes: those resulting in nonseparable variables and those resulting in separable variables. Variables can thus be automatically grouped into a suitable number of non-interacting subcomponents, with the variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. 
Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing them separately. To further enhance the efficiency of CCF, a new local search scheme is designed to improve solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, CEC'2013, and a real-world problem. Our results suggest that the performance of CCF is highly competitive with that of state-of-the-art LSGO algorithms.
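The separable/nonseparable classification at the heart of a formula-based grouping can be illustrated with a short sketch. The expression encoding below is hypothetical (not the authors' implementation): variables joined only by "+" or "−" remain separable, while variables meeting under "×", "÷", or inside a composite elementary function are merged into one interacting group via union-find.

```python
# Minimal sketch of formula-based variable grouping in the spirit of FBG.
# An expression is a variable name (str), a constant (number), or a tuple
# (op, operand, ...) where op is '+', '-', '*', '/', or a unary function name.

def variables(expr):
    """Set of variable names appearing in an expression."""
    if isinstance(expr, str):
        return {expr}
    if isinstance(expr, tuple):
        return set().union(*(variables(a) for a in expr[1:]))
    return set()  # numeric constant

class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def group(expr, uf):
    """Walk the formula; merge variables linked by nonseparable operations."""
    if not isinstance(expr, tuple):
        return
    op, *args = expr
    if op in ('+', '-'):
        for a in args:            # sums/differences keep operands separable
            group(a, uf)
        return
    varargs = [a for a in args if variables(a)]
    if op in ('*', '/') and len(varargs) == 1:
        group(varargs[0], uf)     # scaling by a constant stays separable
        return
    vs = list(variables(expr))    # nonseparable: merge every variable involved
    for v in vs[1:]:
        uf.union(vs[0], v)

def fbg(expr):
    """Group variables into interacting subcomponents."""
    uf = UnionFind()
    group(expr, uf)
    comps = {}
    for v in variables(expr):
        comps.setdefault(uf.find(v), set()).add(v)
    return sorted(sorted(c) for c in comps.values())
```

For f = x1*x2 + sin(x3) + x4 − x5/x6, the sketch yields the four subcomponents {x1, x2}, {x3}, {x4}, {x5, x6}, each of which a cooperative coevolution framework could optimize separately.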


2013 ◽  
Vol 55 (2) ◽  
pp. 109-128 ◽  
Author(s):  
B. L. ROBERTSON ◽  
C. J. PRICE ◽  
M. REALE

Abstract. A stochastic algorithm for bound-constrained global optimization is described. The method can be applied to objective functions that are nonsmooth or even discontinuous. The algorithm forms a partition on the search region using classification and regression trees (CART), which defines a region where the objective function is relatively low. Further points are drawn directly from the low region before a new partition is formed. Alternating between partition and sampling phases provides an effective method for nonsmooth global optimization. The sequence of iterates generated by the algorithm is shown to converge to an essential global minimizer with probability one under mild conditions. Nonprobabilistic results are also given when random sampling is replaced with points taken from the Halton sequence. Numerical results are presented for both smooth and nonsmooth problems and show that the method is effective and competitive in practice.
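The partition-and-sample alternation can be sketched as follows. For brevity, the CART partition is approximated here by the bounding box of the current low-valued points (a deliberate one-leaf simplification, not the authors' algorithm), which still conveys the idea of drawing further points from a region where the objective is relatively low.

```python
# Sketch of alternating partition/sampling phases for bound-constrained
# global optimization of a possibly nonsmooth objective f.

import random

def partition_sample_minimize(f, bounds, n_init=30, batch=15, iters=20,
                              frac=0.3, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pts = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_init)]
    vals = [f(p) for p in pts]
    for _ in range(iters):
        # Partition phase: label the lowest-valued fraction of points "low"
        # and bound them with an axis-aligned box (stand-in for the CART tree).
        order = sorted(range(len(pts)), key=lambda i: vals[i])
        low = [pts[i] for i in order[:max(2, int(frac * len(pts)))]]
        box = [(min(p[d] for p in low), max(p[d] for p in low))
               for d in range(dim)]
        # Sampling phase: draw further points directly from the low region.
        new = [[rng.uniform(lo, hi) for lo, hi in box] for _ in range(batch)]
        pts += new
        vals += [f(p) for p in new]
    i = min(range(len(pts)), key=lambda i: vals[i])
    return pts[i], vals[i]
```

Because only function values are compared, the sketch works unchanged on nonsmooth objectives such as f(x) = |x1 − 1| + |x2 + 2|.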


Geophysics ◽  
1985 ◽  
Vol 50 (12) ◽  
pp. 2784-2796 ◽  
Author(s):  
Daniel H. Rothman

Nonlinear inverse problems are usually solved with linearized techniques that depend strongly on the accuracy of initial estimates of the model parameters. With linearization, objective functions can be minimized efficiently, but the risk of local rather than global optimization can be severe. I address the problem confronted in nonlinear inversion when no good initial guess of the model parameters can be made. The fully nonlinear approach presented is rooted in statistical mechanics. Although a large nonlinear problem might appear computationally intractable without linearization, reformulation of the same problem into smaller, interdependent parts can lead to tractable computation while preserving nonlinearities. I formulate inversion as a problem of Bayesian estimation, in which the prior probability distribution is the Gibbs distribution of statistical mechanics. Solutions are then obtained by maximizing the posterior probability of the model parameters. Optimization is performed with a Monte Carlo technique that was originally introduced to simulate the statistical mechanics of systems in equilibrium. The technique is applied to residual statics estimation when statics are unusually large and data are contaminated by noise. Poorly picked correlations (“cycle skips” or “leg jumps”) appear as local minima of the objective function, but global optimization is successfully performed. Further applications to deconvolution and velocity estimation are proposed.
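The Monte Carlo technique referred to above is the Metropolis-style sampling underlying simulated annealing. A generic minimal sketch follows, with an illustrative multimodal test function and a hypothetical geometric cooling schedule standing in for the residual-statics objective.

```python
# Minimal simulated annealing for global minimization via the Metropolis rule.

import math
import random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.995,
                        iters=4000, seed=1):
    rng = random.Random(seed)
    x = list(x0)
    fx = f(x)
    best, fbest = x[:], fx
    t = t0
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]
        fc = f(cand)
        # Metropolis rule: always accept downhill moves; accept uphill moves
        # with probability exp(-(fc - fx)/t), so local minima can be escaped.
        if fc <= fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x[:], fx
        t *= cooling  # geometric cooling schedule (illustrative choice)
    return best, fbest
```

On a multimodal objective such as f(x) = x² + 3(1 − cos 2πx), a purely greedy descent started in an outer well would stall at a local minimum, while the temperature-controlled acceptance lets the chain migrate toward the global one.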


Author(s):  
Antanas Žilinskas

The single-objective P-algorithm is a global optimization algorithm based on a statistical model of objective functions and the axiomatic theory of rational decisions. It has proven quite suitable for the optimization of expensive black-box functions. Recently, the P-algorithm has been generalized to multi-objective optimization. In the present paper, the implementation of that algorithm is considered using the new computing paradigm of the arithmetic of infinity. A strong homogeneity of the multi-objective P-algorithm is proven, enabling a rather simple application of the algorithm to problems involving infinities and infinitesimals.
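Informally, homogeneity means the points the algorithm selects are invariant under positive affine scaling of the objective values. A minimal single-objective sketch illustrates this; the inverse-distance mean and distance-based spread below are hypothetical stand-ins for the statistical models used in the P-algorithm literature.

```python
# Sketch of a P-algorithm-style selection rule: pick the candidate maximizing
# the probability-of-improvement criterion (y_min - eps - mu(x)) / sigma(x).
# The Gaussian CDF is monotone, so it can be dropped from the argmax.

def next_point(xs, ys, candidates, delta=0.1):
    y_min, y_max = min(ys), max(ys)
    eps = delta * (y_max - y_min)  # improvement threshold, scales with the data
    def mu(x):                     # inverse-distance weighted mean (stand-in)
        ws = [1.0 / (abs(x - xi) + 1e-12) for xi in xs]
        return sum(w * y for w, y in zip(ws, ys)) / sum(ws)
    def sigma(x):                  # spread grows away from observed points
        return min(abs(x - xi) for xi in xs) + 1e-12
    return max(candidates, key=lambda x: (y_min - eps - mu(x)) / sigma(x))
```

Replacing the observations y by a·y + b with a > 0 multiplies the criterion by a and leaves the chosen point unchanged, which is the homogeneity property in miniature.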


2019 ◽  
Vol 23 (1) ◽  
pp. 351-369 ◽  
Author(s):  
Guillaume Pirot ◽  
Tipaluck Krityakierne ◽  
David Ginsbourger ◽  
Philippe Renard

Abstract. Contaminant source localization problems require efficient and robust methods that can account for geological heterogeneities and accommodate relatively small data sets of noisy observations. As realism demands high-fidelity simulations, computational costs call for global optimization algorithms under parsimonious evaluation budgets. Bayesian optimization approaches are well adapted to such settings, as they allow the exploration of parameter spaces in a principled way so as to iteratively locate the point(s) of global optimum while maintaining an approximation of the objective function with an instrumental quantification of prediction uncertainty. Here, we adapt a Bayesian optimization approach to localize a contaminant source in a discretized spatial domain. We thus demonstrate the potential of such a method for hydrogeological applications and also provide test cases for the optimization community. The localization problem is illustrated for cases where the geology is assumed to be perfectly known. Two 2-D synthetic cases that display sharp hydraulic conductivity contrasts and specific connectivity patterns are investigated. These cases generate highly nonlinear objective functions with multiple local minima. A derivative-free global optimization algorithm relying on a Gaussian process model and on the expected improvement criterion is used to efficiently localize the minimum of the objective functions, which corresponds to the contaminant source location. Even though the concentration measurements contain a significant level of proportional noise, the algorithm efficiently localizes the contaminant source. The variations of the objective function are driven primarily by the geology, followed by the design of the monitoring well network. The data and scripts used to generate the objective functions are shared to support reproducible research. 
This contribution is important because the functions present multiple local minima and are inspired by a practical field application. Sharing these complex objective functions provides a source of test cases for global optimization benchmarks and should help with the design of new and efficient methods for this type of problem.
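The expected improvement criterion used above has a closed form under a Gaussian process posterior. A minimal sketch follows; the candidate locations and posterior values are illustrative, not those of the study.

```python
# Expected improvement (EI) for minimization, given a Gaussian posterior
# Y ~ N(mu, sigma^2) at a candidate location and the best value f_min so far.

import math

def expected_improvement(mu, sigma, f_min):
    """EI = E[max(f_min - Y, 0)] = (f_min - mu)*Phi(z) + sigma*phi(z)."""
    if sigma <= 0:
        return max(f_min - mu, 0.0)
    z = (f_min - mu) / sigma
    phi = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))           # standard normal cdf
    return (f_min - mu) * Phi + sigma * phi

# The next evaluation is placed at the EI maximizer; note how a candidate with
# a worse mean but larger posterior uncertainty ("B") can win, which is the
# exploration/exploitation trade-off at work. (Values are made up.)
candidates = {"A": (0.8, 0.05), "B": (1.0, 0.6), "C": (1.4, 0.9)}  # mu, sigma
f_min = 1.0
best = max(candidates,
           key=lambda k: expected_improvement(*candidates[k], f_min))
```

With these illustrative numbers, EI prefers the uncertain candidate "B" over the slightly-better-mean but near-certain "A".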


2012 ◽  
Vol 16 (3) ◽  
pp. 873-891 ◽  
Author(s):  
W. J. Vanhaute ◽  
S. Vandenberghe ◽  
K. Scheerlinck ◽  
B. De Baets ◽  
N. E. C. Verhoest

Abstract. The calibration of stochastic point process rainfall models, such as those of the Bartlett-Lewis type, suffers from the presence of multiple local minima, which local search algorithms usually fail to escape. To address this shortcoming, four relatively new global optimization methods are presented and tested for their ability to calibrate the Modified Bartlett-Lewis model. The tested methods are the Downhill Simplex Method, Simplex-Simulated Annealing, Particle Swarm Optimization, and Shuffled Complex Evolution. The parameters of these algorithms are first optimized to ensure optimal performance, after which they are used to calibrate the Modified Bartlett-Lewis model. Furthermore, this paper addresses the choice of weights in the objective function: three alternative weighting methods are compared to determine whether simulation results (obtained after calibration with the best optimization method) are influenced by the choice of weights.
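Of the methods listed, Particle Swarm Optimization is the simplest to sketch. Below is the generic inertia-weight textbook form; the parameter values are illustrative, not those tuned in the study.

```python
# Minimal bound-constrained Particle Swarm Optimization (inertia-weight form).

import random

def pso(f, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]          # each particle's best position
    pval = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]   # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Velocity update: inertia + cognitive pull + social pull.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d],
                                    bounds[d][0]), bounds[d][1])
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

In a calibration setting, f would be the weighted objective comparing simulated and observed rainfall statistics, and bounds would constrain the model parameters.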

