A Distributed Conjugate Gradient Online Learning Method over Networks

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Cuixia Xu ◽  
Junlong Zhu ◽  
Youlin Shang ◽  
Qingtao Wu

In a distributed online optimization problem over an undirected multiagent network with a convex constraint set, the local objective functions are convex and vary over time. Most existing methods for this problem are based on gradient descent, but their convergence slows as the number of iterations grows. To accelerate convergence, we present a distributed online conjugate gradient algorithm. Unlike a gradient method, its search directions are a set of mutually conjugate vectors, and its step sizes are obtained through an exact line search. We analyze the convergence of the algorithm theoretically and obtain a regret bound of O(√T), where T is the number of iterations. Finally, numerical experiments conducted on a sensor network demonstrate the performance of the proposed algorithm.
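As a rough illustration only (not the authors' algorithm), the sketch below combines a consensus averaging step with a conjugate-gradient-style direction and an exact line search. The quadratic local losses, the Fletcher-Reeves coefficient, the projection radius, and the uniform mixing matrix W are all illustrative assumptions.

import numpy as np

def project_ball(x, radius=10.0):
    # Euclidean projection onto the convex set {x : ||x|| <= radius}
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def distributed_online_cg(A, W, dim, radius=10.0):
    # A has shape (T, n_agents, dim); A[t, i] defines the (assumed) local loss
    # f_{i,t}(x) = 0.5 * ||x - A[t, i]||^2 revealed to agent i at round t.
    T, n_agents, _ = A.shape
    x = np.zeros((n_agents, dim))
    d_prev = np.zeros((n_agents, dim))
    g_prev = np.zeros((n_agents, dim))
    for t in range(T):
        x = W @ x                                # consensus step: mix with neighbours
        for i in range(n_agents):
            g = x[i] - A[t, i]                   # gradient of the local quadratic loss
            if t == 0:
                d = -g
            else:
                beta = (g @ g) / max(g_prev[i] @ g_prev[i], 1e-12)  # Fletcher-Reeves
                d = -g + beta * d_prev[i]        # conjugate-gradient-style direction
            denom = d @ d
            alpha = -(g @ d) / denom if denom > 1e-12 else 0.0  # exact line search (quadratic loss)
            x[i] = project_ball(x[i] + alpha * d, radius)
            d_prev[i], g_prev[i] = d, g
    return x

# illustrative run: 4 agents on a complete graph with uniform weights
rng = np.random.default_rng(0)
W = np.full((4, 4), 0.25)
print(distributed_online_cg(rng.normal(size=(200, 4, 3)), W, dim=3))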

2018 ◽  
Vol 98 (2) ◽  
pp. 331-338 ◽  
Author(s):  
STEFAN PANIĆ ◽  
MILENA J. PETROVIĆ ◽  
MIROSLAVA MIHAJLOV CAREVIĆ

We improve the convergence properties of the iterative scheme for solving unconstrained optimisation problems introduced in Petrovic et al. [‘Hybridization of accelerated gradient descent method’, Numer. Algorithms (2017), doi:10.1007/s11075-017-0460-4] by optimising the value of the initial step length parameter in the backtracking line search procedure. We prove the validity of the algorithm and illustrate its advantages by numerical experiments and comparisons.
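A minimal sketch of a backtracking (Armijo) line search in which the initial step length t0 is an explicit parameter, the quantity tuned in the scheme above; the shrinking factor, the Armijo constant, and the quadratic test function are illustrative assumptions, not the authors' settings.

import numpy as np

def backtracking_step(f, grad_val, x, d, t0=1.0, beta=0.5, sigma=1e-4, max_halvings=50):
    # Armijo backtracking: start from the initial step length t0 and shrink by beta
    # until f(x + t*d) <= f(x) + sigma * t * (grad(x) . d) holds.
    fx = f(x)
    slope = grad_val @ d          # must be negative, i.e. d is a descent direction
    t = t0
    for _ in range(max_halvings):
        if f(x + t * d) <= fx + sigma * t * slope:
            break
        t *= beta
    return t

# illustrative use on a simple quadratic with the steepest-descent direction
f = lambda x: 0.5 * x @ x
x = np.array([4.0, -2.0])
g = x                              # gradient of f at x
print(backtracking_step(f, g, x, -g, t0=1.0))   # accepts t = 1.0 for this example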


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Shengwei Yao ◽  
Yuping Wu ◽  
Jielan Yang ◽  
Jieqiong Xu

We propose a three-term gradient descent method that is well suited to the optimization problems considered in this article. The search direction of the method is generated in a specific subspace; specifically, a quadratic approximation model is applied in the process of constructing the direction. To reduce the amount of computation and make the best use of the available information, the subspace is spanned by the gradients at the current and previous iterates and the previous search direction. Using this subspace-based optimization technique, a global convergence result is established under the Wolfe line search. Numerical experiments show that the new method is effective and robust.
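A minimal sketch of the general idea, under simplifying assumptions: the search direction is obtained by minimising a quadratic model (with the identity standing in for the Hessian) over the subspace spanned by the current gradient, the previous gradient, and the previous direction, and the step is taken with SciPy's Wolfe line search. None of the formulas below are the authors' exact coefficients.

import numpy as np
from scipy.optimize import line_search

def subspace_direction(g, g_prev, d_prev):
    # Minimise the model m(d) = g.d + 0.5 * ||d||^2 over d in span{-g, g_prev, d_prev}
    # (identity Hessian is an assumption). Reduced problem: (S^T S) y = -S^T g.
    S = np.column_stack([-g, g_prev, d_prev])
    y, *_ = np.linalg.lstsq(S.T @ S, -S.T @ g, rcond=None)
    d = S @ y
    # fall back to steepest descent if the model direction is not a descent direction
    return d if g @ d < 0 else -g

def three_term_descent(f, grad, x0, iters=50, tol=1e-8):
    x = x0.copy()
    g = grad(x)
    g_prev, d_prev = g.copy(), -g.copy()
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        d = subspace_direction(g, g_prev, d_prev)
        alpha = line_search(f, grad, x, d)[0]    # step satisfying the Wolfe conditions
        if alpha is None:
            alpha = 1e-3                          # crude fallback if the search fails
        x_new = x + alpha * d
        g_prev, d_prev, g, x = g, d, grad(x_new), x_new
    return x

# illustrative use on a simple convex function
print(three_term_descent(lambda x: (x - 1) @ (x - 1), lambda x: 2 * (x - 1), np.array([5.0, -3.0])))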


Filomat ◽  
2009 ◽  
Vol 23 (3) ◽  
pp. 23-36 ◽  
Author(s):  
Predrag Stanimirovic ◽  
Marko Miladinovic ◽  
Snezana Djordjevic

We introduce an algorithm for unconstrained optimization based on reducing the modified Newton method with line search to a gradient descent method. The main idea in the construction of the algorithm is the approximation of the Hessian by a diagonal matrix. The step-length calculation is based on a Taylor expansion at two successive iterative points combined with a backtracking line search procedure.
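A minimal sketch of the general idea, not the authors' exact formulas: the Hessian is approximated by a scalar multiple of the identity (the simplest diagonal approximation) estimated from a second-order Taylor expansion at two successive iterates, and the step length is chosen by Armijo backtracking. The update rule, safeguards, and test function below are illustrative assumptions.

import numpy as np

def backtrack(f, x, d, fx, slope, t0=1.0, beta=0.5, sigma=1e-4, max_halvings=50):
    # plain Armijo backtracking starting from the trial step t0
    t = t0
    for _ in range(max_halvings):
        if f(x + t * d) <= fx + sigma * t * slope:
            break
        t *= beta
    return t

def scalar_hessian_descent(f, grad, x0, iters=100, tol=1e-8):
    x = x0.copy()
    gamma = 1.0                            # scalar ("diagonal") Hessian estimate
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        d = -g / gamma                     # Newton-like step with Hessian ~ gamma * I
        fx = f(x)
        t = backtrack(f, x, d, fx, g @ d)
        x_new = x + t * d
        s = x_new - x
        # Taylor expansion at two successive iterates:
        # f(x_new) ~ f(x) + g.s + 0.5 * gamma_new * ||s||^2, solved for gamma_new
        gamma_new = 2.0 * (f(x_new) - fx - g @ s) / max(s @ s, 1e-16)
        gamma = gamma_new if gamma_new > 1e-12 else 1.0   # keep the estimate positive
        x = x_new
    return x

# illustrative use on a convex quadratic
f = lambda x: 1.5 * x[0] ** 2 + 0.5 * x[1] ** 2
grad = lambda x: np.array([3.0 * x[0], x[1]])
print(scalar_hessian_descent(f, grad, np.array([2.0, -3.0])))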

