benchmark function
Recently Published Documents

TOTAL DOCUMENTS: 27 (five years: 10)
H-INDEX: 4 (five years: 1)

Electronics ◽ 2022 ◽ Vol 11 (2) ◽ pp. 262
Author(s): Jing Nan ◽ Zhonghua Jian ◽ Chuanfeng Ning ◽ Wei Dai

Stochastic configuration networks (SCNs) become time-consuming on complex modeling tasks, which usually require a large number of hidden nodes and thus an enormous network. An important reason behind this issue is that SCNs always employ the Moore–Penrose generalized inverse, a method of high computational complexity, to update the output weights at each increment. To tackle this problem, this paper proposes lightweight SCNs, called L-SCNs. First, to avoid the Moore–Penrose generalized inverse, a positive definite system of equations is proposed to replace the over-determined system, and the consistency of their solutions is proved. Then, to reduce the complexity of computing the output weights, a low-complexity method based on Cholesky decomposition is proposed. Experimental results on both benchmark function approximation and real-world problems, including regression and classification applications, show that L-SCNs are sufficiently lightweight.
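A minimal sketch of the core idea, assuming a standard least-squares setup (the variable names and the small ridge term are illustrative, not taken from the paper): instead of computing the Moore–Penrose pseudoinverse of the hidden-layer output matrix H, one solves the positive definite normal equations via a Cholesky factorisation.

```python
import numpy as np

def output_weights_pinv(H, T):
    # Standard SCN-style update: Moore–Penrose pseudoinverse of the
    # hidden-layer output matrix H (N samples x L hidden nodes).
    return np.linalg.pinv(H) @ T

def output_weights_cholesky(H, T, ridge=1e-8):
    # Replace the over-determined system H @ beta = T with the
    # positive definite normal equations (H^T H) @ beta = H^T T,
    # then solve via Cholesky factorisation (cheaper than pinv).
    A = H.T @ H + ridge * np.eye(H.shape[1])  # tiny ridge keeps A positive definite
    L = np.linalg.cholesky(A)                 # A = L @ L^T
    y = np.linalg.solve(L, H.T @ T)           # forward substitution
    return np.linalg.solve(L.T, y)            # backward substitution

# Quick check on random data: both solvers should agree closely.
rng = np.random.default_rng(0)
H = rng.standard_normal((200, 25))
T = rng.standard_normal((200, 1))
assert np.allclose(output_weights_pinv(H, T),
                   output_weights_cholesky(H, T), atol=1e-5)
```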


2020 ◽ Vol 34 (03) ◽ pp. 2376-2383
Author(s): Andrei Lissovoi ◽ Pietro Oliveto ◽ John Alasdair Warwicker

Recent analyses have shown that a random gradient hyper-heuristic (HH) using randomised local search (RLS_k) low-level heuristics with different neighbourhood sizes k can optimise the unimodal benchmark function LeadingOnes in the best expected time achievable with the available heuristics, if sufficiently long learning periods τ are employed. In this paper, we examine the impact of the learning period on the performance of the hyper-heuristic for standard unimodal benchmark functions with different characteristics: Ridge, where the HH has to learn that RLS_1 is always the best low-level heuristic, and OneMax, where different low-level heuristics are preferable in different areas of the search space. We rigorously prove that super-linear learning periods τ are required for the HH to achieve optimal expected runtime for Ridge. Conversely, a sub-logarithmic learning period is the best static choice for OneMax, while using super-linear values for τ increases the expected runtime above the asymptotic unary unbiased black-box complexity of the problem. We prove that a random gradient HH which automatically adapts the learning period throughout the run has optimal asymptotic expected runtime for both OneMax and Ridge. Additionally, we show experimentally that it outperforms any static learning period for realistic problem sizes.
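An illustrative sketch of a random gradient hyper-heuristic with a fixed learning period τ on LeadingOnes (a simplification of the mechanism analysed in the paper; the parameter defaults are assumptions):

```python
import random

def leading_ones(x):
    # Number of consecutive one-bits from the left.
    count = 0
    for bit in x:
        if bit == 0:
            break
        count += 1
    return count

def rls_k(x, k):
    # Randomised local search RLS_k: flip k distinct, uniformly chosen bits.
    y = x[:]
    for i in random.sample(range(len(y)), k):
        y[i] ^= 1
    return y

def random_gradient_hh(n=100, ks=(1, 2), tau=None):
    # Random gradient mechanism: keep the current low-level heuristic
    # while it finds improvements; pick a new one uniformly at random
    # only after a learning period of tau steps without improvement.
    tau = tau or n  # illustrative default; the paper studies how tau should scale
    x = [random.randint(0, 1) for _ in range(n)]
    k = random.choice(ks)
    steps = 0
    while leading_ones(x) < n:
        improved = False
        for _ in range(tau):
            steps += 1
            y = rls_k(x, k)
            if leading_ones(y) > leading_ones(x):
                x, improved = y, True
                break  # success: retain the current heuristic
        if not improved:
            k = random.choice(ks)  # learning period expired: choose anew
    return steps
```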


Enfoque UTE ◽ 2019 ◽ Vol 10 (3) ◽ pp. 67-80
Author(s): Dannyll Michellc Zambrano Zambrano ◽ Darío Vélez ◽ Yohanna Daza ◽ José Manuel Palomares

This paper presents the social foraging behavior of Escherichia coli (E. coli) bacteria as modeled by the Bacterial Foraging Optimization Algorithm (BFOA), used to find optimal values for optimization and distributed control. The search strategy of E. coli is very difficult to express analytically, so the dynamics of the simulated chemotaxis stage in BFOA are analyzed with the help of a simple mathematical model. The methodology starts from a detailed analysis of the bacterial swimming and tumbling step size (C) and the probability of elimination and dispersal (Ped); an adaptive variant of BFOA is then proposed, in which the size of the chemotactic step is adjusted according to the current fitness of a virtual bacterium. To evaluate the performance of the algorithm in obtaining optimal values, it was applied to one of the benchmark functions, in this case the Ackley minimization function, and a comparative analysis of the BFOA was then performed. The simulation results show the validity of the optimal values (minima or maxima) obtained on a specific function from the benchmark group of optimization functions, with applicability to real-world problems.
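For reference, a sketch of the Ackley benchmark and a single BFOA-style chemotactic move (tumble in a random direction, then swim while fitness improves); the fitness-based step-size adaptation proposed in the paper is omitted, and the function signatures are illustrative assumptions:

```python
import numpy as np

def ackley(x):
    # Ackley benchmark function; global minimum f(0) = 0.
    x = np.asarray(x, dtype=float)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n)
            + 20.0 + np.e)

def chemotactic_step(x, C, f=ackley, n_swim=4):
    # One chemotaxis move: tumble to a random unit direction, then
    # swim with step size C while fitness keeps improving (swim
    # length capped at n_swim, as in the classical algorithm).
    d = np.random.uniform(-1, 1, size=x.shape)
    d /= np.linalg.norm(d)
    for _ in range(n_swim):
        y = x + C * d
        if f(y) >= f(x):
            break
        x = y
    return x
```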


2019 ◽ Vol 25 (3) ◽ pp. 227-237
Author(s): Lihao Zhang ◽ Zeyang Ye ◽ Yuefan Deng

We introduce a parallel scheme for simulated annealing, a widely used Markov chain Monte Carlo (MCMC) method for optimization. Our method is constructed and analyzed under the classical framework of MCMC. A benchmark optimization function is used for validation and verification of the parallel scheme. The experimental results, along with a proof based on statistical theory, provide insights into the mechanics of parallelizing simulated annealing to achieve high parallel efficiency and scalability on large parallel computers.
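A minimal sketch of the independent-chains view of parallel simulated annealing (the paper's scheme is analysed within the MCMC framework and may well differ; the toy objective, proposal, and cooling schedule here are assumptions):

```python
import math
import random

def quadratic(x):
    # Simple stand-in benchmark; any 1-D objective works here.
    return (x - 3.0) ** 2

def anneal(f, steps=10_000, t0=1.0, alpha=0.999, seed=0):
    # One simulated-annealing chain: a Metropolis MCMC walk with a
    # geometrically decreasing temperature schedule.
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    fx, t = f(x), t0
    for _ in range(steps):
        y = x + rng.gauss(0.0, 0.5)                       # Gaussian proposal
        fy = f(y)
        if fy < fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy                                 # Metropolis acceptance
        t *= alpha                                        # geometric cooling
    return x, fx

# Chains are mutually independent, so in a parallel implementation each
# would run on its own worker; here they run in a loop for simplicity.
best = min((anneal(quadratic, seed=s) for s in range(4)), key=lambda r: r[1])
```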


Author(s): Rui Leng ◽ Aijia Ouyang ◽ Yanmin Liu ◽ Lian Yuan ◽ Zongyue Wu

In modern intelligent algorithms and real industrial applications, many fields involve multi-objective particle swarm optimization (MOPSO) algorithms, but conflicts between objectives during the optimization process easily cause such algorithms to fall into local optima. To prevent premature convergence to local optima and improve robustness, a multi-objective particle swarm optimization algorithm based on grid distance (GDMOPSO) is proposed, which improves the diversity and search ability of the algorithm. Building on the MOPSO algorithm, a new external archive control strategy is established using grid technology and the Pareto dominance ordering principle, and the learning samples are improved. The proposed GDMOPSO is evaluated on a group of benchmark test functions and compared with four classical algorithms. The experimental results show that, in terms of the generational distance and hyper-volume (HV) indicators, the proposed algorithm effectively avoids premature convergence compared with the four classical MOPSO algorithms.
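An illustrative sketch of a grid-based external archive, not the paper's exact GDMOPSO strategy: non-dominated candidates are kept, and when the archive overflows, a member of the most crowded grid cell is removed to preserve spread. All names and parameters below are assumptions.

```python
import numpy as np

def dominates(a, b):
    # Pareto dominance (minimisation): a is no worse in every
    # objective and strictly better in at least one.
    return np.all(a <= b) and np.any(a < b)

def grid_index(point, lo, hi, divisions=10):
    # Map an objective vector to its hypergrid cell; densely populated
    # cells are pruned first when the archive overflows.
    cell = np.floor((point - lo) / (hi - lo + 1e-12) * divisions)
    return tuple(cell.astype(int).clip(0, divisions - 1))

def update_archive(archive, candidate, capacity=100):
    # Keep only non-dominated solutions; on overflow, drop a member
    # from the most crowded grid cell to preserve diversity.
    if any(dominates(a, candidate) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(candidate, a)]
    archive.append(candidate)
    if len(archive) > capacity:
        pts = np.array(archive)
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        cells = [grid_index(p, lo, hi) for p in pts]
        crowded = max(set(cells), key=cells.count)
        archive.pop(cells.index(crowded))
    return archive
```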


Author(s): Andrei Lissovoi ◽ Pietro S. Oliveto ◽ John Alasdair Warwicker

Selection hyper-heuristics are automated algorithm selection methodologies that choose between different heuristics during the optimisation process. Recently, selection hyper-heuristics choosing between a collection of elitist randomised local search heuristics with different neighbourhood sizes have been shown to optimise a standard unimodal benchmark function from evolutionary computation in the optimal expected runtime achievable with the available low-level heuristics. In this paper we extend our understanding to the domain of multimodal optimisation by considering a hyper-heuristic from the literature that can switch between elitist and non-elitist heuristics during the run. We first identify the range of parameters that allow the hyper-heuristic to hill-climb efficiently and prove that it can optimise a standard hill-climbing benchmark function in the best expected asymptotic time achievable by unbiased mutation-based randomised search heuristics. Afterwards, we use standard multimodal benchmark functions to highlight function characteristics where the hyper-heuristic is efficient by swiftly escaping local optima and ones where it is not. For a function class called CLIFF_d, where a new gradient of increasing fitness can be identified after escaping local optima, the hyper-heuristic is extremely efficient, while a wide range of established elitist and non-elitist algorithms are not, including the well-studied Metropolis algorithm. We complete the picture with an analysis of another standard benchmark function called JUMP_d as an example to highlight problem characteristics where the hyper-heuristic is inefficient. Yet, it still outperforms the well-established non-elitist Metropolis algorithm.
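For concreteness, textbook-style definitions of the two multimodal benchmarks discussed above (exact constant offsets vary slightly across papers):

```python
def cliff(x, d):
    # CLIFF_d (maximisation): fitness equals the number of ones up to
    # n - d ones, then drops by d - 1/2, creating a cliff that elitist
    # hill-climbers cannot step over but non-elitist methods can.
    n, ones = len(x), sum(x)
    return ones if ones <= n - d else ones - d + 0.5

def jump(x, d):
    # JUMP_d (maximisation): every string with more than n - d ones,
    # except the optimum itself, scores worse than the plateau edge,
    # so reaching the optimum requires a simultaneous d-bit jump.
    n, ones = len(x), sum(x)
    return d + ones if ones == n or ones <= n - d else n - ones
```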

