The exact worst-case convergence rate of the gradient method with fixed step lengths for L-smooth functions

Author(s): Hadi Abbaszadehpeivasti, Etienne de Klerk, Moslem Zamani

Abstract In this paper, we study the convergence rate of the gradient (or steepest descent) method with fixed step lengths for finding a stationary point of an L-smooth function. We establish a new convergence rate and show that the bound may be exact in some cases, in particular when all step lengths lie in the interval (0, 1/L]. In addition, we derive an optimal step length with respect to the new bound.
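For reference, the loop below is a minimal sketch of the regime this paper analyzes: the gradient method with a fixed step length t in (0, 1/L]. The quadratic test function and all names (gradient_method, grad_f) are illustrative assumptions, not the paper's code or its bound.

```python
# Minimal sketch: gradient method with a fixed step length on an L-smooth f.
import numpy as np

def gradient_method(grad_f, x0, L, t=None, max_iter=10_000, tol=1e-8):
    """Iterate x_{k+1} = x_k - t * grad_f(x_k) with a fixed step t in (0, 1/L]."""
    t = 1.0 / L if t is None else t      # default: the classical step 1/L
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) <= tol:     # stationarity measure: ||grad f(x)|| small
            break
        x = x - t * g
    return x, k

# Example: f(x) = 0.5 * x^T A x is L-smooth with L = lambda_max(A).
A = np.diag([1.0, 10.0])
x_star, iters = gradient_method(lambda x: A @ x, x0=[5.0, 5.0], L=10.0)
print(iters, x_star)
```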

Author(s): Ran Gu, Qiang Du

Abstract How to choose the step size of the gradient descent method has been a popular subject of research. In this paper we propose a modified limited memory steepest descent method (MLMSD). In each iteration, a selection rule picks a unique step size from a candidate set computed by Fletcher's limited memory steepest descent method (LMSD), instead of sweeping through all the step sizes as in Fletcher's original LMSD algorithm. MLMSD is motivated by an inexact super-linear convergence rate analysis. The R-linear convergence of MLMSD is proved for a strictly convex quadratic minimization problem. Numerical tests are presented to show that our algorithm is efficient and robust.
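The sketch below illustrates only the general shape of such a method: a steepest-descent loop that picks one step size per iteration from a small candidate set. It is an assumption-laden proxy, not MLMSD: Fletcher's LMSD builds its candidates from Ritz values of the Hessian recovered from stored gradients, whereas here the two Barzilai-Borwein step sizes stand in for the candidate set, and "take the smaller candidate" is an invented selection rule, not the paper's.

```python
# Illustrative proxy only: steepest descent on 0.5 x^T A x - b^T x,
# choosing one step size per iteration from a candidate set.
import numpy as np

def sd_with_step_selection(A, b, x0, max_iter=500, tol=1e-10):
    x = np.asarray(x0, dtype=float)
    g = A @ x - b
    x_prev, g_prev = x.copy(), g.copy()
    x = x - (g @ g) / (g @ (A @ g)) * g        # exact (Cauchy) first step
    for k in range(1, max_iter):
        g = A @ x - b
        if np.linalg.norm(g) <= tol:
            break
        s, y = x - x_prev, g - g_prev          # Barzilai-Borwein ingredients
        candidates = [(s @ s) / (s @ y),       # BB1 step size
                      (s @ y) / (y @ y)]       # BB2 step size
        t = min(candidates)                    # assumed selection rule, for illustration
        x_prev, g_prev = x.copy(), g.copy()
        x = x - t * g
    return x, k

A = np.diag([1.0, 4.0, 25.0]); b = np.ones(3)
x_star, iters = sd_with_step_selection(A, b, np.zeros(3))
print(iters, x_star)
```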


2020, Vol. 157, pp. 04019
Author(s): Ella Okolelova, Marina Shibaeva, Alexey Efimiev, Victoria Kolesnikova

The article discusses methods of unconstrained optimization for the problem of choosing the most effective energy-saving technology in construction. The optimization criterion chosen is the rate of reduction of energy consumption during operation of the facility, so determining the most effective energy-saving technology amounts to evaluating how quickly consumption of the i-th type of energy decreases. Two unconstrained optimization methods were applied: the steepest descent method and the gradient method. An algorithm has been developed to search for the minimum value of the function using coordinate-wise descent. The article also presents an algorithm for finding the unconstrained minimum by the Nelder-Mead method, a derivative-free (non-gradient) search of the solution space. The methods considered are classical optimization methods. When it is difficult to identify the function on which the objective attains its minimum, these methods may converge poorly. In many problems, in particular when sufficiently complex functions with a large number of parameters are involved, it is advisable to use methods with a high convergence rate, namely methods that move along the gradient, i.e. gradient descent. The task of finding the minimum of the energy-consumption function is posed as following the anti-gradient of the objective function, since the function decreases in the direction opposite to the gradient; the direction of the anti-gradient is the direction of steepest descent.
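As a minimal sketch of the coordinate-wise descent idea mentioned above (not the authors' algorithm or their energy-consumption objective), the loop below cyclically probes each coordinate and keeps any improving move; the test function, step size, and names are assumptions for illustration.

```python
# Minimal sketch: cyclic coordinate-wise descent with a fixed probe step.
import numpy as np

def coordinate_descent(f, x0, step=1e-2, max_sweeps=200, tol=1e-8):
    """Cyclically decrease f along each coordinate axis by a simple line probe."""
    x = np.asarray(x0, dtype=float)
    for sweep in range(max_sweeps):
        x_old = x.copy()
        for i in range(x.size):
            for d in (+step, -step):        # probe both directions on axis i
                trial = x.copy()
                trial[i] += d
                while f(trial) < f(x):      # march while the value improves
                    x = trial.copy()
                    trial[i] += d
        if np.linalg.norm(x - x_old) <= tol:
            break                           # a full sweep made no progress
    return x

# Example on a convex quadratic with interacting coordinates.
f = lambda x: (x[0] - 1)**2 + 2 * (x[1] + 0.5)**2 + 0.5 * x[0] * x[1]
print(coordinate_descent(f, x0=[0.0, 0.0]))
```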


2021, Vol. 7 (1), pp. 1-11
Author(s): Noureddine Rahali, Mohammed Belloufi, Rachid Benzine

Abstract An accelerated variant of the steepest descent method for solving unconstrained optimization problems is presented. It amounts to a fundamentally different conjugate gradient method, in which the well-known parameter βk is computed by a new formula. Under common assumptions, using a modified Wolfe line search, the descent property and global convergence are established for the new method. Experimental results provide evidence that the proposed method is in general superior to the classical steepest descent method and has the potential to significantly enhance the computational efficiency and robustness of the training process.
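The skeleton below shows where the parameter βk enters a nonlinear conjugate gradient loop. The Fletcher-Reeves formula is used only as a standard placeholder: the paper's new βk formula and its modified Wolfe line search are not reproduced here, and a simple Armijo backtracking search stands in for the Wolfe conditions (an assumption).

```python
# Sketch of nonlinear CG; beta uses Fletcher-Reeves as a generic placeholder.
import numpy as np

def nonlinear_cg(f, grad_f, x0, max_iter=2_000, tol=1e-6):
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    d = -g                                   # first direction: steepest descent
    for k in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        t = 1.0                              # Armijo backtracking line search
        while t > 1e-12 and f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x_new = x + t * d
        g_new = grad_f(x_new)
        beta = (g_new @ g_new) / (g @ g)     # Fletcher-Reeves beta_k (placeholder)
        d = -g_new + beta * d                # conjugate direction update
        if d @ g_new >= 0:                   # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x, k

# Example: the Rosenbrock function.
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
x_star, iters = nonlinear_cg(f, grad, [-1.2, 1.0])
print(iters, x_star)
```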


2021, Vol. 1 (1), pp. 20-31
Author(s): Dana Taha Mohammed Salih, Bawar Mohammed Faraj

The steepest descent method and the conjugate gradient method for minimizing nonlinear functions are studied in this work. Algorithms for both methods are presented and implemented in Matlab. A comparison is then made between the steepest descent method and the conjugate gradient method, evaluating the results obtained in Matlab in terms of runtime and efficiency. It is shown that the conjugate gradient method needs fewer iterations and is more efficient than the steepest descent method. On the other hand, the steepest descent method converges in less time than the conjugate gradient method.
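The snippet below is a small, self-contained analogue of this kind of comparison, counting iterations and wall-clock time for both methods on a convex quadratic. It is a Python stand-in under assumed settings (the article's experiments are in Matlab on nonlinear functions), so it does not reproduce the article's results.

```python
# Iteration and timing comparison: steepest descent vs. conjugate gradient
# on the quadratic 0.5 x^T A x - b^T x (an assumed test problem).
import time
import numpy as np

A = np.diag(np.linspace(1.0, 50.0, 20)); b = np.ones(20)

def steepest_descent(x, tol=1e-8):
    for k in range(10_000):
        g = A @ x - b
        if np.linalg.norm(g) <= tol:
            return x, k
        x = x - (g @ g) / (g @ (A @ g)) * g   # exact line-search step
    return x, k

def linear_cg(x, tol=1e-8):
    r = b - A @ x; d = r.copy()               # residual doubles as -gradient
    for k in range(10_000):
        if np.linalg.norm(r) <= tol:
            return x, k
        Ad = A @ d
        alpha = (r @ r) / (d @ Ad)
        x = x + alpha * d
        r_new = r - alpha * Ad
        d = r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return x, k

for name, solver in [("steepest descent", steepest_descent),
                     ("conjugate gradient", linear_cg)]:
    t0 = time.perf_counter()
    _, iters = solver(np.zeros(20))
    print(f"{name}: {iters} iterations, {time.perf_counter() - t0:.6f} s")
```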

