Unconstrained Optimization Problems
Recently Published Documents


TOTAL DOCUMENTS: 304 (FIVE YEARS: 94)

H-INDEX: 20 (FIVE YEARS: 2)

2022 ◽  
Vol 20 ◽  
pp. 736-744
Author(s):  
Olawale J. Adeleke ◽  
Idowu A. Osinuga ◽  
Raufu A. Raji

In this paper, a new conjugate gradient (CG) parameter is proposed through a convex combination of the Fletcher-Reeves (FR) and Polak-Ribière-Polyak (PRP) CG update parameters such that the Dai-Liao conjugacy condition is satisfied. The computational efficiency of the PRP method and the convergence profile of the FR method motivated the choice of these two CG methods. The corresponding CG algorithm satisfies the sufficient descent property and is shown to be globally convergent under the strong Wolfe line search procedure. Numerical tests on selected benchmark functions show that the algorithm is efficient and very competitive with some existing classical methods.
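
As a concrete illustration of the convex-combination idea, the sketch below computes beta_k = (1 - theta_k) * beta_FR + theta_k * beta_PRP, with theta_k chosen so that the new direction satisfies a Dai-Liao-type condition d^T y = -t g^T s. The clipping of theta_k to [0, 1], the fallback step, and the value of t are assumptions made for illustration; the paper's exact rule and safeguards are not reproduced here. SciPy's line_search enforces the (strong) Wolfe conditions.

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def hybrid_cg(f, grad, x0, t=0.1, tol=1e-6, max_iter=1000):
    """Sketch of a hybrid FR/PRP conjugate gradient method (assumptions noted
    in the lead-in; not the paper's exact algorithm)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d)[0]   # strong Wolfe line search
        if alpha is None:
            alpha = 1e-4                        # fallback if the search fails
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        beta_fr = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves
        beta_prp = (g_new @ y) / (g @ g)        # Polak-Ribiere-Polyak
        # theta solving (1 - theta)*beta_FR + theta*beta_PRP = beta_DL,
        # where beta_DL enforces the Dai-Liao condition d^T y = -t g^T s
        beta_dl = (g_new @ y - t * (g_new @ s)) / (d @ y)
        theta = np.clip((beta_dl - beta_fr) / (beta_prp - beta_fr + 1e-12),
                        0.0, 1.0)
        beta = (1.0 - theta) * beta_fr + theta * beta_prp
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# Usage: minimize the 5-dimensional Rosenbrock function.
x_star = hybrid_cg(rosen, rosen_der, np.zeros(5))
```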


2021 ◽  
pp. 543-550
Author(s):  
Nataliya Boyko ◽  
Andriy Pytel

Artificial intelligence has recently become increasingly popular, yet a stereotype has formed that AI is based solely on neural networks, even though neural networks are only one of its many directions. This paper aims to bring attention to other directions of AI, such as genetic algorithms. We study the process of solving the travelling salesman problem (TSP) with genetic algorithms (GA) and consider the issues this method raises. The genetic algorithm is a method for solving both constrained and unconstrained optimization problems that is based on natural selection, the process that drives biological evolution. The travelling salesman problem is one of the classic problems in programming; many methods can be used to solve it, and here we consider genetic algorithms. This study aims at developing the most efficient application of genetic algorithms to the travelling salesman problem.
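
As a minimal sketch of how a GA is typically applied to the TSP, the code below uses permutation chromosomes, truncation selection, order crossover (OX), and swap mutation; these operators, the population size, and the rates are common textbook choices and may differ from the paper's.

```python
import random

def tour_length(tour, dist):
    """Length of the closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def order_crossover(p1, p2):
    """OX: copy a random slice from p1, fill the rest in p2's order."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    for i in list(range(0, a)) + list(range(b, n)):
        child[i] = fill.pop(0)
    return child

def ga_tsp(dist, pop_size=100, generations=500, mutation_rate=0.05):
    """Plain generational GA for the TSP (illustrative parameters)."""
    n = len(dist)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        elite = pop[:pop_size // 5]              # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = order_crossover(p1, p2)
            if random.random() < mutation_rate:  # swap mutation
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=lambda t: tour_length(t, dist))
```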


2021 ◽  
Author(s):  
Min-Rong Chen ◽  
Liu-Qing Yang ◽  
Guo-Qiang Zeng ◽  
Kang-Di Lu ◽  
Yi-Yuan Huang

As one of the evolutionary algorithms, the firefly algorithm (FA) has been widely used to solve various complex optimization problems. However, FA suffers from a slow convergence rate and is easily trapped in local optima. To tackle these defects, this paper proposes an improved FA combined with extremal optimization (EO), named IFA-EO, in which three strategies are incorporated. First, to balance exploration and exploitation, we adopt a new attraction model for the FA operation that combines the full attraction model and the single attraction model through a probabilistic choice strategy; in the single attraction model, a worse solution is accepted with small probability to improve the diversity of the offspring. Second, an adaptive step size based on the iteration count is proposed to dynamically shift attention between exploration and exploitation. Third, we incorporate into FA an EO algorithm with a powerful local-search ability. Experiments are conducted on two groups of popular benchmarks comprising complex unimodal and multimodal functions. The results demonstrate that the proposed IFA-EO algorithm can handle various complex optimization problems and performs similarly to or better than eight other FA variants, three EO-based algorithms, and one advanced differential evolution variant in terms of accuracy and statistical results.
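
The three strategies can be illustrated with a compact sketch. The probability values, the decaying step-size schedule, and the single-component EO perturbation below are assumptions made for illustration; the paper's exact formulas are not reproduced.

```python
import numpy as np

def ifa_sketch(f, dim, n=25, iters=300, beta0=1.0, gamma=1.0,
               p_single=0.5, p_accept_worse=0.05, seed=0):
    """Illustrative sketch of the IFA-EO ingredients named in the abstract:
    a probabilistic switch between full and single attraction, an
    iteration-based adaptive step size, and an EO-style local search."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5.0, 5.0, (n, dim))
    fit = np.apply_along_axis(f, 1, X)
    for t in range(iters):
        alpha = 0.5 * (1.0 - t / iters)   # adaptive step (assumed schedule)
        best = int(np.argmin(fit))
        for i in range(n):
            if rng.random() < p_single:
                targets = [best]          # single attraction model
            else:                         # full attraction model
                targets = [j for j in range(n) if fit[j] < fit[i]]
            x = X[i].copy()
            for j in targets:
                r2 = float(np.sum((X[j] - x) ** 2))
                x += (beta0 * np.exp(-gamma * r2) * (X[j] - x)
                      + alpha * (rng.random(dim) - 0.5))
            fx = f(x)
            # accept improvements, or a worse move with small probability
            if fx < fit[i] or rng.random() < p_accept_worse:
                X[i], fit[i] = x, fx
        # EO-style local search: perturb one component of the worst solution
        w = int(np.argmax(fit))
        trial = X[w].copy()
        trial[rng.integers(dim)] += rng.normal(scale=0.1)
        if f(trial) < fit[w]:
            X[w], fit[w] = trial, f(trial)
    b = int(np.argmin(fit))
    return X[b], fit[b]
```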


Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2093
Author(s):  
Huiping Cao ◽  
Xiaomin An

In this paper, we introduce a sparse and symmetric matrix-completion quasi-Newton model using automatic differentiation for solving unconstrained optimization problems in which the sparse structure of the Hessian is available. The proposed matrix-completion quasi-Newton method keeps the sparsity of the Hessian exactly while satisfying the quasi-Newton equation approximately. Under the usual assumptions, local and superlinear convergence are established. Numerical tests show that the new method is effective and superior to matrix-completion quasi-Newton updating with the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method and to the limited-memory BFGS method.
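
One simple way to realize the two advertised properties, exact Hessian sparsity and an approximately satisfied quasi-Newton (secant) equation, is to project a standard BFGS update onto the known sparsity pattern, as sketched below. The paper's matrix-completion model is more refined than this plain projection, and the pattern (obtainable, e.g., via automatic differentiation of the objective) is assumed given.

```python
import numpy as np

def sparse_symmetric_update(B, s, y, pattern):
    """Sparsity-preserving quasi-Newton update (illustrative, not the paper's
    completion model): a BFGS update followed by projection onto the
    Hessian's known symmetric sparsity pattern.  After the projection the
    secant equation B_new @ s = y holds only approximately."""
    Bs = B @ s
    B_bfgs = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    B_new = np.where(pattern, B_bfgs, 0.0)   # keep the sparsity exactly
    return 0.5 * (B_new + B_new.T)           # enforce symmetry

# pattern is a boolean matrix marking the Hessian's structural nonzeros;
# s = x_{k+1} - x_k and y = grad(x_{k+1}) - grad(x_k) as usual.
```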


2021 ◽  
Vol 5 (3) ◽  
pp. 110
Author(s):  
Shashi Kant Mishra ◽  
Predrag Rajković ◽  
Mohammad Esmael Samei ◽  
Suvra Kanti Chakraborty ◽  
Bhagwat Ram ◽  
...  

We present an algorithm for solving unconstrained optimization problems based on the q-gradient vector. The main idea is to approximate the classical gradient by the q-gradient vector. For a convex objective function, the quasi-Fejér convergence of the algorithm is proved. The proposed method does not require the boundedness assumption on any level set. Numerical experiments are reported to show the performance of the proposed method.
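
For a scalar argument, the q-derivative is D_q f(x) = (f(qx) - f(x)) / ((q - 1) x), which recovers f'(x) as q -> 1; applying this quotient coordinate-wise gives the q-gradient. The sketch below uses a fixed q and a fixed step size, both assumptions; the paper's step-size rule and any schedule driving q toward 1 are not reproduced.

```python
import numpy as np

def q_gradient(f, x, q=0.95, eps=1e-10):
    """Coordinate-wise q-difference quotient approximating the gradient;
    tends to the classical gradient as q -> 1."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    fx = f(x)
    for i in range(x.size):
        xi = x[i] if abs(x[i]) > eps else eps   # the quotient needs x_i != 0
        xq = x.copy()
        xq[i] = q * xi
        g[i] = (f(xq) - fx) / ((q - 1.0) * xi)
    return g

def q_gradient_descent(f, x0, q=0.95, lr=1e-2, iters=2000):
    """Minimal q-gradient descent sketch with a fixed step size."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - lr * q_gradient(f, x, q)
    return x

# Usage: minimize a simple convex quadratic.
x_min = q_gradient_descent(lambda v: np.sum((v - 3.0) ** 2), np.ones(4))
```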


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255269
Author(s):  
Muhammad Zubair Rehman ◽  
Abdullah Khan ◽  
Rozaida Ghazali ◽  
Muhammad Aamir ◽  
Nazri Mohd Nawi

The Sine-Cosine algorithm (SCA) is a population-based metaheuristic that uses sine and cosine functions to drive the search. To enable the search process, SCA incorporates several search parameters, but these parameters can leave the search vulnerable to local minima/maxima. To overcome this problem, a new Multi Sine-Cosine algorithm (MSCA) is proposed in this paper. First, MSCA utilizes multiple swarm clusters to diversify and intensify the search and thereby avoid the local minima/maxima problem. Second, during updates, MSCA checks for better search clusters that offer effective convergence to the global minimum. To assess its performance, we tested MSCA on unimodal, multimodal, and composite benchmark functions taken from the literature. Experimental results reveal that MSCA is statistically superior in terms of convergence to recent state-of-the-art metaheuristic algorithms, including the original SCA.
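
For reference, the core SCA position update is X_i <- X_i + r1*sin(r2)*|r3*P - X_i|, or the cosine analogue chosen by r4, where P is the best solution found so far and r1 decays over the iterations. The sketch below implements this standard single-swarm update; the bounds and parameters are assumptions, and MSCA's multi-cluster bookkeeping is only noted in the docstring.

```python
import numpy as np

def sca(f, dim, n=30, iters=500, a=2.0, lb=-10.0, ub=10.0, seed=0):
    """Core single-swarm Sine-Cosine algorithm update.  MSCA would run
    several such swarm clusters and steer agents toward the
    best-performing cluster; that extension is not shown here."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(f, 1, X)
    best = X[np.argmin(fit)].copy()
    best_fit = fit.min()
    for t in range(iters):
        r1 = a - t * a / iters                   # linearly decreasing amplitude
        for i in range(n):
            r2 = rng.uniform(0.0, 2.0 * np.pi, dim)
            r3 = rng.uniform(0.0, 2.0, dim)
            r4 = rng.random(dim)
            step = np.abs(r3 * best - X[i])
            X[i] = np.where(r4 < 0.5,
                            X[i] + r1 * np.sin(r2) * step,   # sine branch
                            X[i] + r1 * np.cos(r2) * step)   # cosine branch
            X[i] = np.clip(X[i], lb, ub)
            fx = f(X[i])
            if fx < best_fit:
                best, best_fit = X[i].copy(), fx
    return best, best_fit
```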


Author(s):  
Ibrahim Mohammed Sulaiman ◽  
Norsuhaily Abu Bakar ◽  
Mustafa Mamat ◽  
Basim A. Hassan ◽  
Maulana Malik ◽  
...  

The hybrid conjugate gradient (CG) method is among the efficient variants of the CG method for solving optimization problems, owing to its low memory requirements and nice convergence properties. In this paper, we present an efficient hybrid CG method for solving unconstrained optimization models and show that the method satisfies the sufficient descent condition. The global convergence of the proposed method is established under an inexact line search. An application of the proposed method to a statistical regression model describing the global outbreak of the novel COVID-19 is presented; the study parameterized the model using the weekly increase/decrease of recorded cases from December 30, 2019 to March 30, 2020. Preliminary numerical results on some unconstrained optimization problems show that the proposed method is efficient and promising. Furthermore, the proposed method produced a good regression equation for COVID-19 confirmed cases globally.
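
The abstract does not specify the regression model, so as a generic illustration the sketch below fits a hypothetical logistic growth curve to weekly case counts by minimizing a least-squares loss, which is an unconstrained problem; SciPy's built-in CG minimizer stands in for the paper's hybrid CG, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def logistic(t, theta):
    """Logistic growth curve K / (1 + exp(-r (t - t0))), a common choice
    for epidemic curves; assumed here purely for illustration."""
    K, r, t0 = theta
    return K / (1.0 + np.exp(-r * (t - t0)))

def fit_curve(t, y, theta0=(1e5, 0.5, 7.0)):
    """Least-squares fit posed as an unconstrained optimization problem;
    SciPy's CG method stands in for the paper's hybrid CG."""
    loss = lambda th: np.sum((logistic(t, th) - y) ** 2)
    return minimize(loss, theta0, method="CG").x

# Usage with synthetic weekly counts (real data would replace this).
t = np.arange(14.0)                       # weeks since Dec 30, 2019
rng = np.random.default_rng(1)
y = logistic(t, (9e4, 0.8, 7.0)) + rng.normal(scale=500.0, size=t.size)
theta_hat = fit_curve(t, y)
```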

