Griewank Function
Recently Published Documents

TOTAL DOCUMENTS: 7 (five years: 3)
H-INDEX: 2 (five years: 1)

2021 ◽  
Vol 7 (6) ◽  
pp. 55341-55350
Author(s):  
Carlos Eduardo Rambalducci Dalla ◽  
Wellington Betencurte da Silva ◽  
Júlio Cesar Sampaio Dutra ◽  
Marcelo José Colaço

Optimization methods are frequently applied to solve real-world problems in engineering design, computer science, and computational chemistry. This paper compares gradient-based algorithms with the meta-heuristic particle swarm optimization for minimizing the multidimensional Griewank benchmark function, a multimodal function with widespread local minima. Several gradient-based methods were compared: steepest descent, conjugate gradient with the Fletcher-Reeves and Polak-Ribiere formulations, and the quasi-Newton Davidon-Fletcher-Powell approach. The results showed that the meta-heuristic method is recommended for functions with this behavior because it requires no prior information about the search space. The performance comparison covers computation time and convergence to global and local optima.
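The benchmark at the center of this digest is compact enough to state directly. A minimal Python sketch of the standard Griewank definition (the usual 1/4000 quadratic scaling and the product of cosines over a 1-based index):

```python
import math

def griewank(x):
    """Griewank benchmark: f(0, ..., 0) = 0 is the global minimum,
    surrounded by many regularly spaced local minima."""
    quad = sum(xi * xi for xi in x) / 4000.0      # slowly growing bowl term
    osc = 1.0
    for i, xi in enumerate(x, start=1):           # 1-based index under the sqrt
        osc *= math.cos(xi / math.sqrt(i))
    return 1.0 + quad - osc
```

The oscillatory product is what defeats pure gradient methods: every local basin looks locally like a minimum, which is why the abstract above favors a population-based search on this landscape.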


2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Yongli Zhang ◽  
Jianguang Niu ◽  
Sanggyun Na

Nonlinear function fitting is an essential research issue. At present, the main function-fitting methods are statistical methods and artificial neural networks, but statistical methods carry many inherent, strict limits in application, and the widely used back-propagation (BP) neural network has too many parameters to optimize. To address these gaps in existing research, the FOA-GRNN was proposed and compared with the GRNN, GA-BP, PSO-BP, and BP on three nonlinear functions of increasing complexity, to verify the accuracy and robustness of the FOA-GRNN. The experimental results showed that the FOA-GRNN had the best fitting precision and the fastest convergence speed; its predictions were stable and reliable on the Mexican Hat and Rastrigin functions. On the most complex function, the Griewank function, the predictions of the FOA-GRNN became unstable and the model did not outperform a GRNN tuned by an equal-step-length search, but its performance remained superior to that of GA-BP, PSO-BP, and BP. The paper presents a new approach to optimizing the parameter of the GRNN and also provides a new nonlinear function-fitting method with better fitting precision, faster computation, fewer adjustable parameters, and stronger handling of small samples. The capacity of FOA to handle highly complex nonlinear functions needs further improvement and development in future study.
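The GRNN prediction step that the FOA tunes is essentially kernel-weighted regression. A minimal sketch, assuming a Gaussian kernel with a single smoothing parameter `sigma` (the quantity the FOA or equal-step search would optimize; the function and parameter names here are illustrative, not taken from the paper):

```python
import math

def grnn_predict(x, train_x, train_y, sigma):
    """GRNN output: a weighted average of training targets, where each
    weight is a Gaussian of the squared distance to that training input."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2.0 * sigma ** 2))
               for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)
```

A small `sigma` makes the estimate interpolate the training points; a large `sigma` smooths toward the global mean. That single-parameter sensitivity is why the choice of the smoothing parameter dominates the fitting precision reported above.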


2018 ◽  
Vol 232 ◽  
pp. 03015
Author(s):  
Changjun Wen ◽  
Changlian Liu ◽  
Heng Zhang ◽  
Hongliang Wang

Particle swarm optimization (PSO) is a widely used tool for solving optimization problems in engineering. However, PSO is prone to falling into local optima and suffers from slow convergence speed and low convergence precision. To address these shortcomings, a particle swarm optimization with Gaussian disturbance (GDPSO) is proposed. By introducing a Gaussian disturbance into the self-cognition and social-cognition parts of the algorithm, this method improves the convergence speed and precision of the algorithm and strengthens its ability to escape local optima. After several evolutionary modes of the GDPSO algorithm are analyzed, the algorithm is simulated on the Griewank function. The experimental results show that the convergence speed and optimization precision of GDPSO are better than those of PSO.
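The Gaussian-disturbance idea can be sketched in a few lines. This is a hedged reconstruction, not the paper's exact update rule: it assumes the disturbance is an additive zero-mean Gaussian term injected into the cognitive (`pbest`) and social (`gbest`) attraction terms, and the coefficient values are illustrative defaults.

```python
import random

def gdpso_velocity(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5, sigma=0.1):
    """One per-dimension GDPSO velocity update. random.gauss(0, sigma) is the
    disturbance assumed here: it jitters both attractors, so a particle stuck
    in a local basin can still be pulled in a new direction."""
    return [w * vi
            + c1 * random.random() * (pi + random.gauss(0.0, sigma) - xi)
            + c2 * random.random() * (gi + random.gauss(0.0, sigma) - xi)
            for vi, xi, pi, gi in zip(v, x, pbest, gbest)]
```

With `sigma = 0` this reduces to the standard PSO velocity update, so the disturbance strength directly trades exploitation for the escape ability the abstract describes.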


2008 ◽  
Vol 204 (2) ◽  
pp. 694-701 ◽  
Author(s):  
Huidae Cho ◽  
Francisco Olivera ◽  
Seth D. Guikema

2008 ◽  
Vol 2008 ◽  
pp. 1-15 ◽  
Author(s):  
J. L. Fernández Martínez ◽  
E. García Gonzalo

A generalized form of the particle swarm optimization (PSO) algorithm is presented. Generalized PSO (GPSO) is derived from a continuous version of PSO adopting a time step different from unity. Generalized continuous particle swarm optimizations are compared in terms of attenuation and oscillation. The deterministic and stochastic stability regions, and their respective asymptotic velocities of convergence, are analyzed as functions of the time step and the GPSO parameters. The sampling distribution of the GPSO algorithm helps to study the effect of stochasticity on the stability of trajectories. The stability regions for the second-, third-, and fourth-order moments depend on the inertia, the local and global accelerations, and the time step, and lie inside the deterministic stability region for the same time step. We prove that the stability regions are the same under stagnation and with a moving center of attraction. Properties of the second-order moments, variance and covariance, serve to propose some promising parameter sets. High variance and temporal uncorrelation improve the exploration task when solving ill-posed inverse problems. Finally, PSO and GPSO are compared by means of numerical experiments using well-known benchmark functions with two types of ill-posedness commonly found in inverse problems: the Rosenbrock and the “elongated” DeJong functions (global minimum located in a very flat area), and the Griewank function (global minimum surrounded by multiple minima). Numerical simulations support the results provided by the theoretical analysis. Based on these results, two variants of the generalized PSO algorithm are proposed that improve convergence and exploration when solving real applications of inverse problems.
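The time-step generalization can be sketched as a single iteration. This is a minimal reading of the continuous-time derivation, assuming a discretization in which `dt = 1` recovers the standard PSO update; the variable names and default coefficients are illustrative, not the paper's notation.

```python
import random

def gpso_step(x, v, pbest, gbest, dt=1.0, w=0.7, a_l=2.0, a_g=2.0):
    """One GPSO iteration: velocity damped by (1 - (1 - w) * dt), attraction
    terms scaled by dt, and the position advanced by the new velocity times dt.
    Setting dt = 1.0 recovers the usual discrete PSO update."""
    new_v = [(1.0 - (1.0 - w) * dt) * vi
             + a_l * random.random() * dt * (li - xi)   # local (pbest) pull
             + a_g * random.random() * dt * (gi - xi)   # global (gbest) pull
             for vi, xi, li, gi in zip(v, x, pbest, gbest)]
    new_x = [xi + nvi * dt for xi, nvi in zip(x, new_v)]
    return new_x, new_v
```

Shrinking `dt` damps and slows the trajectories, which is the knob behind the attenuation-versus-oscillation comparison the abstract describes.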

