Convex Estimators for Optimization of Kriging Model Problems

Author(s):  
Karim Hamza ◽  
Mohammed Shalaby

This paper presents a framework for identification of the global optimum of Kriging models. The framework is based on a branch and bound scheme that subdivides the search space into hypercubes while constructing convex under-estimators of the Kriging models. The convex under-estimators, which are a key development in this paper, provide a relaxation of the original problem. The relaxed problem has two key features: (i) convex optimization algorithms such as sequential quadratic programming (SQP) are guaranteed to find the global optimum of the relaxed problem, and (ii) the objective value of the relaxed problem is a lower bound on the best attainable solution within a hypercube for the original (Kriging model) problem. The convex under-estimators improve in accuracy as the size of a hypercube shrinks during the branching search. A hypercube branch is terminated when either: (i) the solution of the relaxed problem within the hypercube is no better than the current best solution of the original problem, or (ii) the best solution of the original problem and that of the relaxed problem are within tolerance limits. To assess the significance of the proposed framework, comparison studies against a genetic algorithm (GA) are conducted using Kriging models that approximate standard nonlinear test functions, as well as application problems of water desalination and vehicle crashworthiness. Results of the studies show the proposed framework deterministically providing a solution within tolerance limits of the global optimum, while GA is observed not to reliably discover the best solutions in problems with a larger number of design variables.
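
Although the paper's underestimator construction is specific to Kriging models, the overall branch-and-bound flow described above can be sketched generically. The Python sketch below is an assumed, minimal illustration: kriging_predict is a toy stand-in for the Kriging predictor, convex_lower_bound is a crude placeholder for a valid box lower bound, and the longest-edge bisection rule is an assumption rather than the paper's branching strategy.

```python
import heapq
import numpy as np

# --- Placeholders (assumptions, not the paper's actual models) ---------------
def kriging_predict(x):
    """Toy stand-in for the Kriging model's predicted objective."""
    return float(np.sum(x**2) + np.sin(5 * x).sum())

def convex_lower_bound(lo, hi):
    """Placeholder for the convex under-estimator: returns a cheap lower bound
    of kriging_predict over the box [lo, hi]. Here we evaluate the corners and
    center and subtract a slack term that shrinks as the box shrinks."""
    pts = [lo, hi, 0.5 * (lo + hi)]
    slack = 2.0 * np.linalg.norm(hi - lo)
    return min(kriging_predict(p) for p in pts) - slack

def branch_and_bound(lo, hi, tol=1e-3, max_nodes=2000):
    """Generic hypercube branch-and-bound driven by relaxation lower bounds."""
    x_mid = 0.5 * (lo + hi)
    best_x, best_f = x_mid, kriging_predict(x_mid)        # incumbent (upper bound)
    heap = [(convex_lower_bound(lo, hi), 0, lo, hi)]       # (bound, tie-breaker, box)
    tie = 1
    while heap and max_nodes > 0:
        bound, _, box_lo, box_hi = heapq.heappop(heap)
        max_nodes -= 1
        if bound >= best_f - tol:                          # branch cannot improve: prune
            continue
        x_mid = 0.5 * (box_lo + box_hi)
        f_mid = kriging_predict(x_mid)
        if f_mid < best_f:                                 # update incumbent
            best_x, best_f = x_mid, f_mid
        j = int(np.argmax(box_hi - box_lo))                # bisect the longest edge
        left_hi = box_hi.copy();  left_hi[j] = x_mid[j]
        right_lo = box_lo.copy(); right_lo[j] = x_mid[j]
        for new_lo, new_hi in ((box_lo, left_hi), (right_lo, box_hi)):
            heapq.heappush(heap, (convex_lower_bound(new_lo, new_hi), tie, new_lo, new_hi))
            tie += 1
    return best_x, best_f

if __name__ == "__main__":
    lo, hi = np.full(3, -2.0), np.full(3, 2.0)
    print(branch_and_bound(lo, hi))
```

The single pruning test (bound >= best_f - tol) captures both termination criteria from the abstract: a branch is dropped either because its relaxation cannot beat the incumbent at all, or because the two are already within the tolerance limit.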

2012 ◽  
Vol 134 (11) ◽  
Author(s):  
Karim Hamza ◽  
Mohammed Shalaby

This paper presents a framework for identification of the global optimum of Kriging models that have been tuned to approximate the response of some generic objective function and constraints. The framework is based on a branch and bound scheme for subdivision of the search space into hypercubes while constructing convex underestimators of the Kriging models. The convex underestimators, which are the key development in this paper, provide a relaxation of the original problem. The relaxed problem has two main features: (i) convex optimization algorithms such as sequential quadratic programming (SQP) are guaranteed to find the global optimum of the relaxed problem, and (ii) the objective value of the relaxed problem is a lower bound, within a hypercube, on the best attainable objective of the original (Kriging model) problem. As the accuracy of the convex underestimators improves with subdivision of a hypercube, a branch is terminated when either: (i) the solution of the relaxed problem within the hypercube is no better than the current best solution of the original problem, or (ii) the best solution of the original problem and that of the relaxed problem are within tolerance limits. To assess the significance of the proposed framework, comparison studies against a genetic algorithm (GA), particle swarm optimization (PSO), random multistart sequential quadratic programming (mSQP), and DIRECT are conducted. The studies include four standard nonlinear test functions and two design application problems of water desalination and vehicle crashworthiness. The studies show the proposed framework deterministically finding the optimum for all the test problems. Among the tested stochastic search techniques (GA, PSO, mSQP), mSQP had the best performance, consistently finding the optimum in less computational time than the proposed approach except on the water desalination problem. DIRECT deterministically found the optima for the nonlinear test functions, but completely failed to find them for the water desalination and vehicle crashworthiness problems.
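
The abstract does not reproduce the underestimator formulas, which are tailored to the Kriging correlation functions. As a generic illustration of how a convex relaxation of this kind works, the classical alpha-BB construction subtracts a separable quadratic that vanishes at the hypercube bounds (this is an illustration of the relaxation idea, not the paper's Kriging-specific form):

```latex
% Generic alpha-BB convex under-estimator over the hypercube [l, u]
% (illustrative; not the paper's Kriging-specific construction):
L(\mathbf{x}) = f(\mathbf{x}) - \alpha \sum_{i=1}^{n} (x_i - l_i)(u_i - x_i),
\qquad
\alpha \ge \max\Bigl\{ 0,\; -\tfrac{1}{2}\,
\min_{\mathbf{x}\in[\mathbf{l},\mathbf{u}]}
\lambda_{\min}\bigl(\nabla^2 f(\mathbf{x})\bigr) \Bigr\},
% so that L is convex on [l, u] and L(x) <= f(x) there.
```

Since the maximum gap between f and L over the box is alpha times the sum of (u_i - l_i)^2 / 4, such a relaxation tightens quadratically as the branching search shrinks the hypercubes, which is consistent with the bound improvement described above.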


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽  
Author(s):  
Hui Lu ◽  
Zheng Zhu ◽  
Xiaoteng Wang ◽  
Lijuan Yin

The test task scheduling problem (TTSP) is a typical combinatorial optimization scheduling problem. This paper proposes a variable neighborhood MOEA/D (VNM) to solve the multiobjective TTSP. Two minimization objectives, the maximal completion time (makespan) and the mean workload, are considered together. To bring the obtained solutions closer to the true Pareto front, a variable neighborhood strategy is adopted; the variable neighborhood approach keeps the crossover span reasonable. Additionally, because the search space of the TTSP is so large that many duplicate solutions and local optima exist, the Starting Mutation is applied to prevent solutions from becoming trapped in local optima. Using a Markov chain and its transition matrix, it is proved that the solutions obtained by VNM converge to the global optimum. Comparative experiments among VNM, MOEA/D, and CNSGA (chaotic nondominated sorting genetic algorithm) indicate that VNM performs better than MOEA/D and CNSGA in solving the TTSP. The results demonstrate that the proposed algorithm VNM is an efficient approach to solving the multiobjective TTSP.
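
The two operators the abstract names can be sketched in a few lines of Python. This is a minimal illustration under assumed schedules and data structures; the function names, the shrinking-neighborhood rule, and the duplicate test are assumptions, not the paper's exact operators.

```python
import random

def variable_neighborhood(iteration, max_iter, n_subproblems, t_min=2, t_max=20):
    """Assumed variable-neighborhood rule: the mating neighborhood shrinks as the
    run progresses, keeping the crossover span wide early (exploration) and
    narrow later (exploitation). Returns indices of neighboring subproblems."""
    t = max(t_min, int(round(t_max * (1.0 - iteration / max_iter))))
    center = random.randrange(n_subproblems)
    return [(center + k) % n_subproblems for k in range(-t, t + 1)]

def starting_mutation(offspring, archive, mutate):
    """Assumed duplicate-escape rule: if the offspring already appears in the
    archive (a symptom of stagnation in the huge TTSP search space), apply a
    fresh mutation so the search can leave the local optimum."""
    return mutate(offspring) if tuple(offspring) in archive else offspring

# Tiny usage example with a permutation encoding of test tasks.
if __name__ == "__main__":
    print(variable_neighborhood(iteration=10, max_iter=100, n_subproblems=50))
    archive = {(0, 1, 2, 3)}
    shuffle = lambda s: random.sample(s, len(s))
    print(starting_mutation([0, 1, 2, 3], archive, shuffle))
```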


Author(s):  
Guangyu Zhou ◽  
Aijia Ouyang ◽  
Yuming Xu

To overcome the shortcomings of the basic glowworm swarm optimization (GSO) algorithm, such as low accuracy, slow convergence, and a tendency to fall into local minima, a chaos algorithm and a cloud model algorithm are introduced to optimize the evolution mechanism of GSO, and a chaos GSO algorithm based on the cloud model (CMCGSO) is proposed in this paper. Simulation results on global optimization benchmark functions show that the CMCGSO algorithm performs better than cuckoo search (CS), invasive weed optimization (IWO), hybrid particle swarm optimization (HPSO), and chaos glowworm swarm optimization (CGSO), and that CMCGSO has the advantages of high accuracy, fast convergence, and strong robustness in finding the global optimum. Finally, the CMCGSO algorithm is applied to the problem of face recognition, and the results are better than those of methods reported in the literature.
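
The abstract does not give the update equations, but the two ingredients it names, a chaotic map and the cloud model, have standard forms. The sketch below combines them in one assumed glowworm position update; the coupling constants and the way the cloud drop perturbs the step are illustrative only, not the CMCGSO rules.

```python
import random

def logistic_map(x, mu=4.0):
    """Chaotic logistic map used to generate well-spread values in (0, 1)."""
    return mu * x * (1.0 - x)

def normal_cloud_drop(ex, en, he):
    """One drop from the standard normal cloud generator (Ex, En, He):
    the entropy itself is sampled, giving the cloud model's controlled randomness."""
    en_prime = random.gauss(en, he)
    return random.gauss(ex, abs(en_prime))

def chaotic_cloud_step(position, best, chaos_state, step=0.03):
    """Assumed update: move a glowworm toward the current best, then add a
    chaotic perturbation scaled by a cloud-model drop."""
    chaos_state = logistic_map(chaos_state)
    drop = normal_cloud_drop(ex=0.0, en=step, he=step / 10)
    new_pos = [p + step * (b - p) + (chaos_state - 0.5) * drop
               for p, b in zip(position, best)]
    return new_pos, chaos_state

if __name__ == "__main__":
    pos, chaos = [0.8, -1.2], 0.37
    for _ in range(3):
        pos, chaos = chaotic_cloud_step(pos, best=[0.0, 0.0], chaos_state=chaos)
    print(pos)
```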


2004 ◽  
Vol 4 (3) ◽  
pp. 201-206
Author(s):  
L. Grover ◽  
T. Rudolph

Quantum search is a technique for searching $N$ possibilities for a desired target in $O(\sqrt{N})$ steps. It has been applied in the design of quantum algorithms for several structured problems. Many of these algorithms require a significant amount of quantum hardware. In this paper we propose the criterion that an algorithm which requires $O(S)$ hardware should be considered significant if it produces a speedup of better than $O\left(\sqrt{S}\right)$ over a simple quantum search algorithm. This is because a speedup of $O\left(\sqrt{S}\right)$ can be trivially obtained by dividing the search space into $S$ separate parts and handing the problem to $S$ independent processors that each perform a quantum search (in this paper we drop all logarithmic factors when discussing time/space complexity). Known algorithms for collision and element distinctness exactly saturate this criterion.
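
The trivial baseline behind the proposed criterion can be made explicit. Splitting the $N$ possibilities among $S$ independent quantum-search processors gives each one a subspace of size $N/S$:

```latex
% Why dividing the search among S quantum processors yields only an O(\sqrt{S}) speedup:
T_{\text{serial}} = O\!\left(\sqrt{N}\right), \qquad
T_{\text{parallel}} = O\!\left(\sqrt{N/S}\right)
= \frac{O\!\left(\sqrt{N}\right)}{\sqrt{S}}
\;\Longrightarrow\;
\frac{T_{\text{serial}}}{T_{\text{parallel}}} = O\!\left(\sqrt{S}\right).
```

Any algorithm using $O(S)$ hardware must therefore beat this $O\left(\sqrt{S}\right)$ factor to be doing something genuinely better than brute-force parallelization.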


Author(s):  
George H. Cheng ◽  
Adel Younis ◽  
Kambiz Haji Hajikolaei ◽  
G. Gary Wang

Mode Pursuing Sampling (MPS) was developed as a global optimization algorithm for problems involving expensive black box functions. MPS has been found to be effective and efficient for problems of low dimensionality, i.e., fewer than ten design variables. A previous conference publication integrated the concept of trust regions into the MPS framework to create a new algorithm, TRMPS, which dramatically improved performance and efficiency for high dimensional problems. However, although TRMPS performed better than MPS, it had not been benchmarked against other established algorithms such as the genetic algorithm (GA). This paper introduces an improved algorithm, TRMPS2, which incorporates guided sampling and a low function value criterion to further improve performance on high dimensional problems. TRMPS2 is benchmarked against MPS and GA using a suite of test problems. The results show that TRMPS2 performs better than MPS and GA on average for high dimensional, expensive, and black box (HEB) problems.
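
To make the trust-region-plus-biased-sampling idea concrete, here is a minimal generic sketch in Python. None of the constants, the recentring rule, or the expansion/shrink factors are the authors' actual TRMPS2 settings; they only illustrate how a trust region around the incumbent and a low-function-value sampling bias can work together on a high dimensional black box function.

```python
import numpy as np

def trust_region_sampling(f, lo, hi, n_iter=50, batch=20, seed=0):
    """Generic sketch: keep a trust region around the best point found so far,
    bias sampling toward low observed function values inside it, and expand or
    shrink the region depending on whether the incumbent improves."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    center = rng.uniform(lo, hi)
    best_x, best_f = center, f(center)
    radius = 0.5 * (hi - lo)
    for _ in range(n_iter):
        # Draw a batch inside the trust region, clipped to the global bounds.
        cand = np.clip(center + rng.uniform(-radius, radius, (batch, dim)), lo, hi)
        vals = np.array([f(x) for x in cand])
        # Low function value criterion: recentre on the better half of the batch.
        keep = cand[np.argsort(vals)[: batch // 2]]
        center = keep.mean(axis=0)
        i = int(np.argmin(vals))
        if vals[i] < best_f:                       # improvement: expand the region
            best_x, best_f = cand[i], vals[i]
            radius = np.minimum(radius * 1.5, hi - lo)
        else:                                      # no improvement: shrink it
            radius *= 0.5
    return best_x, best_f

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x**2))
    lo, hi = np.full(20, -5.0), np.full(20, 5.0)
    print(trust_region_sampling(sphere, lo, hi))
```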


2017 ◽  
Vol 10 (2) ◽  
pp. 67
Author(s):  
Vina Ayumi ◽  
L.M. Rasdi Rere ◽  
Mohamad Ivan Fanany ◽  
Aniati Murni Arymurthy

Metaheuristic algorithms are powerful optimization methods that can solve problems believed to be hard in general by exploring their ordinarily large solution search spaces. However, the performance of these algorithms depends significantly on the setting of their parameters, which is not easy to do accurately and which depends heavily on the problem's characteristics. Many methods have been proposed to fine-tune the parameters automatically, including fuzzy logic, chaos, random adjustment, and others. For many years these methods have been developed independently for the automatic setting of metaheuristic parameters, and the integration of two or more of them has rarely been attempted. Thus, a method that derives its advantage from combining chaos and random adjustment is proposed. Several popular metaheuristic algorithms are used to test the performance of the proposed method, namely simulated annealing, particle swarm optimization, differential evolution, and harmony search. The case study in this research is contrast enhancement of the Cameraman, Lena, Boat, and Rice images. In general, the simulation results show that the proposed methods perform better than the original metaheuristics, the chaotic metaheuristics, and the metaheuristics with random adjustment.
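
One plausible way to combine the two tuning ideas the abstract names is sketched below. The blend of a chaotic deterministic component with a random perturbation, the spread constant, and the choice of a simulated-annealing cooling factor as the tuned parameter are all assumptions made for illustration, not the paper's specific scheme.

```python
import random

def logistic(x, mu=4.0):
    """Chaotic logistic map on (0, 1)."""
    return mu * x * (1.0 - x)

def tuned_parameter(base, chaos_state, spread=0.2):
    """Assumed combination of chaos and random adjustment: the chaotic sequence
    supplies a well-spread deterministic component, and a random term adds a
    stochastic perturbation around the base parameter value."""
    chaos_state = logistic(chaos_state)
    value = base * (1.0 + spread * (chaos_state - 0.5)
                    + spread * random.uniform(-0.5, 0.5))
    return value, chaos_state

# Example: re-tuning a simulated-annealing cooling factor each outer iteration.
if __name__ == "__main__":
    cooling, chaos = 0.95, 0.37
    for _ in range(5):
        cooling, chaos = tuned_parameter(0.95, chaos)
        print(round(cooling, 4))
```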


2016 ◽  
Vol 19 (1) ◽  
pp. 115-122 ◽  
Author(s):  
Milan Cisty ◽  
Zbynek Bajtek ◽  
Lubomir Celar

In this work, an optimal design approach for water distribution networks is proposed for large irrigation networks. The proposed approach is built upon an existing optimization method (NSGA-II), but the authors propose its effective application in a new two-step optimization process. The aim of the paper is to demonstrate that good optimization results depend not only on the choice of method but also on how that method is applied. The most important feature of the proposed methodology is its ensemble approach, in which multiple optimization runs cooperate and are used together. The authors assume that the main difficulty in finding the optimal solution of a water distribution optimization problem is the very large size of the search space in which the optimal solution must be found. In the proposed method, a reduction of the search space is suggested, so the final solution is easier to find and offers greater guarantees of accuracy (closeness to the global optimum). The method has been successfully tested on a large benchmark irrigation network.
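
As a rough illustration of how an ensemble of runs can shrink the search space before a second optimization step, consider the sketch below. The per-variable range reduction, the margin, and the pipe-diameter-index encoding are assumptions for illustration; the paper's actual reduction rule may differ.

```python
def reduce_search_space(ensemble_results, margin=1):
    """Assumed reduction step: from the best solutions of several independent
    optimization runs, keep for each decision variable only the range of values
    that actually appeared (plus a small safety margin), so the second-step run
    searches a much smaller space."""
    n_vars = len(ensemble_results[0])
    reduced = []
    for i in range(n_vars):
        values = sorted(sol[i] for sol in ensemble_results)
        reduced.append((max(values[0] - margin, 0), values[-1] + margin))
    return reduced

# Example: best pipe-diameter indices from three hypothetical NSGA-II runs.
runs = [[3, 5, 2, 7], [4, 5, 2, 6], [3, 6, 3, 7]]
print(reduce_search_space(runs))   # per-variable (lower, upper) index bounds
```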


2019 ◽  
Vol 15 ◽  
pp. 117693431882053 ◽  
Author(s):  
Pavel Avdeyev ◽  
Shuai Jiang ◽  
Max A Alekseyev

Reconstruction of the median genome consisting of linear chromosomes from three given genomes is known to be intractable. There exist efficient methods for solving a relaxed version of this problem, where the median genome is allowed to have circular chromosomes. We propose a method for construction of an approximate solution to the original problem from a solution to the relaxed problem and prove a bound on its approximation error. Our method also provides insights into the combinatorial structure of genome transformations with respect to appearance of circular chromosomes.


2020 ◽  
Vol 54 (3) ◽  
pp. 275-296 ◽  
Author(s):  
Najmeh Sadat Jaddi ◽  
Salwani Abdullah

Purpose: Metaheuristic algorithms are classified into two categories, namely single-solution and population-based algorithms. Single-solution algorithms perform a local search process by employing a single candidate solution and trying to improve it within its neighborhood. In contrast, population-based algorithms guide the search process by maintaining multiple solutions located at different points of the search space. The main drawback of single-solution algorithms is that the global optimum may never be reached and the search may get stuck in a local optimum. Population-based algorithms, on the other hand, start from several points, maintain the diversity of the solutions globally in the search space, and therefore explore better during the search process. In this paper, single-solution algorithms are given a greater chance of finding the global optimum by searching different regions of the search space.

Design/methodology/approach: In this method, different starting points in the initial step, each searched locally within its neighborhood, together construct a global search over the search space for the single-solution algorithm.

Findings: The proposed method was tested on three single-solution algorithms, namely hill climbing (HC), simulated annealing (SA), and tabu search (TS), applied to 25 benchmark test functions. The results of the basic versions of these algorithms were then compared with the same algorithms integrated with the global search proposed in this paper. The statistical analysis of the results shows that the proposed method outperforms the basic versions. Finally, 18 benchmark feature selection problems were used to test the algorithms, and the results were compared with recent methods proposed in the literature.

Originality/value: In this paper, single-solution algorithms are given a greater chance of finding the global optimum by searching different regions of the search space.
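
The design described above, several starting points each refined by a single-solution local search, can be illustrated with a short sketch. This is a generic multi-start hill-climbing illustration under assumed step sizes and iteration counts, not the authors' exact integration of the global search into HC, SA, and TS.

```python
import random

def hill_climb(f, start, step=0.1, iters=200):
    """Basic single-solution local search: accept only improving neighbors."""
    x, fx = list(start), f(start)
    for _ in range(iters):
        cand = [xi + random.uniform(-step, step) for xi in x]
        fc = f(cand)
        if fc < fx:
            x, fx = cand, fc
    return x, fx

def global_hill_climb(f, bounds, n_starts=10):
    """The idea described above: launch the same single-solution search from
    several starting points spread over different regions of the search space
    and keep the best local result, giving the method a global character."""
    best = None
    for _ in range(n_starts):
        start = [random.uniform(lo, hi) for lo, hi in bounds]
        result = hill_climb(f, start)
        if best is None or result[1] < best[1]:
            best = result
    return best

if __name__ == "__main__":
    sphere = lambda x: sum(xi * xi for xi in x)
    print(global_hill_climb(sphere, [(-5.12, 5.12)] * 5))
```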

