Gradient-Based Cuckoo Search for Global Optimization

2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Seif-Eddeen K. Fateen ◽  
Adrián Bonilla-Petriciolet

One of the major advantages of stochastic global optimization methods is that they do not require the gradient of the objective function. However, in some cases this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we propose a gradient-based modification to the cuckoo search algorithm, a nature-inspired, swarm-based stochastic global optimization method. We introduce the gradient-based cuckoo search (GBCS) and evaluate its performance against the original algorithm on twenty-four benchmark functions. GBCS improved the reliability and effectiveness of the algorithm in all but four of the tested benchmark problems. GBCS proved to be a strong candidate for solving difficult optimization problems for which the gradient of the objective function is readily available.
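The gradient-based modification can be sketched roughly as follows. This is an illustrative Python sketch only: the sign-alignment rule, the extra steepest-descent refinement of the incumbent, and all parameter values are assumptions, not the authors' exact GBCS update.

```python
import math

import numpy as np


def levy_step(dim, beta=1.5, rng=None):
    """Levy-distributed step lengths via Mantegna's algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)


def gbcs_minimize(f, grad, bounds, n_nests=15, n_iter=200, pa=0.25,
                  alpha=0.01, seed=0):
    """Cuckoo search with a (hypothetical) gradient-guided step direction."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    nests = rng.uniform(lo, hi, size=(n_nests, dim))
    fit = np.array([f(x) for x in nests])
    for _ in range(n_iter):
        best = nests[fit.argmin()].copy()
        for i in range(n_nests):
            # Classic cuckoo move: Levy flight scaled by distance to the best nest.
            step = alpha * levy_step(dim, rng=rng) * (nests[i] - best)
            # Gradient-based modification (illustrative): force each component
            # downhill by aligning its sign with the negative gradient.
            step = -np.sign(grad(nests[i])) * np.abs(step)
            cand = np.clip(nests[i] + step, lo, hi)
            fc = f(cand)
            if fc < fit[i]:  # greedy acceptance
                nests[i], fit[i] = cand, fc
        # Refine the incumbent with one small steepest-descent step.
        b = int(fit.argmin())
        cand = np.clip(nests[b] - alpha * grad(nests[b]), lo, hi)
        if f(cand) < fit[b]:
            fit[b] = f(cand)
            nests[b] = cand
        # Abandon a fraction pa of the worst nests and rebuild them at random.
        n_aband = int(pa * n_nests)
        if n_aband:
            worst = np.argsort(fit)[-n_aband:]
            nests[worst] = rng.uniform(lo, hi, size=(n_aband, dim))
            fit[worst] = [f(x) for x in nests[worst]]
    b = int(fit.argmin())
    return nests[b], float(fit[b])
```

On a smooth convex test function such as the sphere, the sign-aligned steps and the descent refinement drive the incumbent steadily downhill; on multimodal problems the Levy flights and nest abandonment retain the global character of the original search.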

2021 ◽  
Author(s):  
Himanshu Verma ◽  
Yogendra Kumar

The cuckoo search algorithm is one of the more efficient algorithms for solving global optimization problems.


2018 ◽  
Vol 35 (1) ◽  
pp. 71-90 ◽  
Author(s):  
Xiwen Cai ◽  
Haobo Qiu ◽  
Liang Gao ◽  
Xiaoke Li ◽  
Xinyu Shao

Purpose This paper aims to propose a hybrid global optimization method based on multiple metamodels to improve the efficiency of global optimization. Design/methodology/approach The method fully utilizes the information provided by different metamodels during the optimization process. It not only imparts the expected improvement criterion of kriging into other metamodels but also intelligently selects appropriate metamodeling techniques to guide the search direction, making the search process very efficient. In addition, corresponding local search strategies are put forward to further improve optimization efficiency. Findings To validate the method, it is tested on several numerical benchmark problems and applied to two engineering design optimization problems. Moreover, an overall comparison between the proposed method and several other typical global optimization methods has been made. Results show that the global optimization efficiency of the proposed method is higher than that of the other methods in most situations. Originality/value The proposed method sufficiently utilizes multiple metamodels in the optimization process. Thus, good optimization results are obtained, showing great applicability to engineering design optimization problems that involve costly simulations.
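The expected improvement criterion imparted from kriging has the standard closed form EI(x) = (f_min − μ(x))Φ(z) + σ(x)φ(z) with z = (f_min − μ(x))/σ(x), where μ and σ are the metamodel's predicted mean and standard deviation. A minimal stdlib-only sketch (the kriging predictor itself is assumed given):

```python
import math


def expected_improvement(mu, sigma, f_min):
    # EI of sampling a candidate whose metamodel prediction is N(mu, sigma^2),
    # relative to the current best observed value f_min (minimization).
    if sigma <= 0.0:
        return 0.0  # no predictive uncertainty -> no expected improvement
    z = (f_min - mu) / sigma
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_min - mu) * cdf + sigma * pdf
```

Candidates with either a low predicted mean or a high predictive uncertainty score well, which is what lets the criterion balance local refinement against exploring untried regions.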


2019 ◽  
Vol 8 (4) ◽  
pp. 9465-9471

This paper presents a novel technique based on the Cuckoo Search Algorithm (CSA) for enhancing the performance of a multiline transmission network by substantially reducing transmission line congestion. Optimal IPFC placement is determined using the subtracting line utilization factor (SLUF) together with CSA-based optimal tuning. The multi-objective function consists of real power loss, security margin, bus voltage limit violation, and the capacity of the installed IPFC. This multi-objective function is tuned by CSA, and the optimal location for minimizing transmission line congestion is obtained. Simulations are performed in MATLAB for the IEEE 30-bus test system, and the performance of CSA is evaluated under various loading conditions. Results show that the proposed CSA technique performs better through optimal placement of the IPFC while maintaining power system performance.
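The line utilization factor underlying SLUF is simply the ratio of a line's loading to its rating. A minimal sketch with made-up flows and ratings (the line names and MVA values are hypothetical, and the paper's full SLUF-based ranking is more involved than this):

```python
def line_utilization_factor(mva_flow, mva_rating):
    # Fraction of a line's MVA rating currently in use (1.0 = fully loaded).
    return mva_flow / mva_rating


# Hypothetical (flow, rating) pairs in MVA for three lines of a small system.
lines = {"1-2": (95.0, 130.0), "2-5": (110.0, 130.0), "4-6": (40.0, 90.0)}

# Rank lines from most to least congested to shortlist IPFC locations.
ranked = sorted(lines, key=lambda k: line_utilization_factor(*lines[k]),
                reverse=True)
```

The most heavily utilized lines head the ranking, giving candidate locations for the IPFC before the CSA-tuned multi-objective evaluation.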


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function but can only express a preference such as "this is better than that" between two candidate decision vectors. The algorithm aims at reaching the global optimizer by iteratively proposing a new comparison to the decision-maker, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision-maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimize a combination of the surrogate and an inverse-distance weighting function, to balance exploitation of the surrogate against exploration of the decision space; or maximize a function related to the probability that the new candidate will be preferred. Compared with active preference learning based on Bayesian optimization, our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. The paper describes applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to the optimal tuning of a cost-sensitive neural network classifier for object recognition from images. MATLAB and Python implementations of the algorithms are available at http://cse.lab.imtlucca.it/~bemporad/glis.
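The first acquisition criterion, surrogate value minus an inverse-distance-weighting exploration term, can be sketched as follows. A generic IDW form is used here; the exact expression and the trade-off weight `delta` are assumptions, not the paper's formulas.

```python
import numpy as np


def idw_exploration(x, X_sampled):
    # Inverse-distance-weighting exploration term: zero at already-sampled
    # points and growing in unexplored regions (a generic form; the paper's
    # exact expression may differ).
    d2 = np.sum((np.asarray(X_sampled) - np.asarray(x)) ** 2, axis=1)
    if np.any(d2 == 0.0):
        return 0.0
    return float(np.arctan(1.0 / np.sum(1.0 / d2)))


def acquisition(x, surrogate, X_sampled, delta=1.0):
    # Exploitation-exploration trade-off: minimizing this picks the next
    # decision vector to compare against the current best candidate.
    return surrogate(x) - delta * idw_exploration(x, X_sampled)
```

Because the exploration term vanishes at sampled points, minimizing the acquisition never re-proposes an existing sample; larger `delta` pushes the search toward unexplored regions.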


Mathematics ◽  
2021 ◽  
Vol 9 (16) ◽  
pp. 1840
Author(s):  
Nicolás Caselli ◽  
Ricardo Soto ◽  
Broderick Crawford ◽  
Sergio Valdivia ◽  
Rodrigo Olivares

Metaheuristics are intelligent problem-solvers that have proven very efficient at solving large optimization problems for more than two decades. However, their main drawback is the need for problem-dependent and complex parameter settings to reach good results. This paper presents a new cuckoo search algorithm able to self-adapt its configuration, particularly its population size and abandon probability. The self-tuning process is governed by machine learning: cluster analysis is employed to autonomously and properly compute the number of agents needed at each step of the solving process. The goal is to explore the space of possible solutions efficiently while alleviating the human effort of parameter configuration. We report experimental results on the well-known set covering problem, where the proposed approach competes against various state-of-the-art algorithms, achieving better results in a single run than 20 different configurations. In addition, the results are compared with those of similar hybrid bio-inspired algorithms, further supporting the proposal.
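The cluster-driven self-tuning idea can be caricatured with a crude one-dimensional clustering of the swarm's fitness values. The gap threshold `tol`, the `3 * k` sizing rule, and the population bounds are illustrative assumptions only, not the paper's machine-learning procedure.

```python
def cluster_count(values, tol=0.05):
    # Crude 1-D clustering: sort the fitness values and open a new cluster
    # whenever the gap to the previous value exceeds tol * (value range).
    vals = sorted(values)
    span = (vals[-1] - vals[0]) or 1.0
    clusters = 1
    for a, b in zip(vals, vals[1:]):
        if (b - a) > tol * span:
            clusters += 1
    return clusters


def adapt_population(fitnesses, n_min=5, n_max=50):
    # Shrink or grow the swarm according to how many distinct fitness
    # clusters remain (hypothetical sizing rule, bounded for safety).
    k = cluster_count(fitnesses)
    return max(n_min, min(n_max, 3 * k))
```

As agents collapse into fewer clusters, fewer of them contribute distinct information, so the adapted population shrinks; fresh diversity re-expands it.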

