A Decentralized Second-Order Method with Exact Linear Convergence Rate for Consensus Optimization

Author(s): Aryan Mokhtari, Wei Shi, Qing Ling, Alejandro Ribeiro

2019, Vol 2019 (1)
Author(s): Shijie Sun, Meiling Feng, Luoyi Shi

Abstract This paper considers an iterative algorithm for solving the multiple-sets split equality problem (MSSEP) whose step size is independent of the norms of the related operators, and investigates its sublinear and linear convergence rates. In particular, we present a notion of bounded Hölder regularity for the MSSEP, which generalizes the well-known concept of bounded linear regularity, and give several sufficient conditions that ensure it. We then use this property to establish the sublinear and linear convergence rates of the algorithm. Finally, some numerical experiments are provided to verify the validity of our results.
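
The sketch below illustrates the kind of projection-type iteration the abstract refers to, for a two-set instance of the split equality problem (find x in C, y in Q with Ax = By) with a self-adaptive step size built from the current residual, so no operator norms are needed. The box projections, the helper names, and the specific step-size rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box [lo, hi]^n (stand-in for the sets C and Q)."""
    return np.clip(x, lo, hi)

def split_equality_iteration(A, B, x, y, proj_C, proj_Q, rho=1.0):
    """One projected-gradient step for: find x in C, y in Q with A x = B y.

    The step size is built from the current residual r = A x - B y, so no
    operator norms ||A||, ||B|| are required (an illustrative self-adaptive rule).
    """
    r = A @ x - B @ y
    gx = A.T @ r           # gradient of 0.5*||Ax - By||^2 with respect to x
    gy = -B.T @ r          # gradient with respect to y
    denom = np.linalg.norm(gx) ** 2 + np.linalg.norm(gy) ** 2
    if denom == 0.0:       # residual already zero: (x, y) solves the problem
        return x, y
    tau = rho * np.linalg.norm(r) ** 2 / denom   # norm-free step size
    return proj_C(x - tau * gx), proj_Q(y - tau * gy)

# Tiny usage example on random data with box constraints.
rng = np.random.default_rng(0)
A, B = rng.standard_normal((5, 4)), rng.standard_normal((5, 3))
x, y = rng.standard_normal(4), rng.standard_normal(3)
proj_C = lambda v: project_box(v, -1.0, 1.0)
proj_Q = lambda v: project_box(v, -1.0, 1.0)
for _ in range(200):
    x, y = split_equality_iteration(A, B, x, y, proj_C, proj_Q)
print("residual:", np.linalg.norm(A @ x - B @ y))
```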





Author(s): Ran Gu, Qiang Du

Abstract How to choose the step size of the gradient descent method has been a popular subject of research. In this paper we propose a modified limited memory steepest descent method (MLMSD). In each iteration, a selection rule picks a unique step size from a candidate set computed by Fletcher's limited memory steepest descent method (LMSD), instead of sweeping through all the step sizes as in Fletcher's original LMSD algorithm. MLMSD is motivated by an inexact super-linear convergence rate analysis. The R-linear convergence of MLMSD is proved for a strictly convex quadratic minimization problem. Numerical tests are presented to show that our algorithm is efficient and robust.
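
As a minimal sketch of the idea, the code below forms LMSD-style candidate step sizes as reciprocals of Ritz values of the Hessian on the span of a few recent gradients, for a strictly convex quadratic. For simplicity it uses the Hessian A directly (Fletcher's LMSD recovers the same quantities from gradient recurrences alone), and the single-step selection rule shown (take the most conservative candidate) is only a placeholder assumption, not the paper's rule.

```python
import numpy as np

def ritz_step_sizes(G, A):
    """Candidate step sizes 1/theta from Ritz values of A on span(G).

    G stacks the last few gradients as columns; here we use the Hessian A
    directly, which Fletcher's LMSD avoids by using gradient recurrences.
    """
    Q, _ = np.linalg.qr(G)            # orthonormal basis of the gradient subspace
    T = Q.T @ A @ Q                   # small projected Hessian
    theta = np.linalg.eigvalsh(T)     # Ritz values (estimates of Hessian eigenvalues)
    theta = theta[theta > 1e-12]      # keep positive estimates only
    return 1.0 / theta

def mlmsd_like(A, b, x0, memory=5, iters=200):
    """Gradient descent on f(x) = 0.5 x'Ax - b'x with Ritz-value step sizes.

    The selection rule below (smallest candidate step, i.e. largest Ritz value)
    is an illustrative placeholder for MLMSD's selection rule.
    """
    x, history = x0.copy(), []
    for _ in range(iters):
        g = A @ x - b
        if np.linalg.norm(g) < 1e-10:
            break
        history = (history + [g.copy()])[-memory:]
        steps = ritz_step_sizes(np.column_stack(history), A)
        x -= steps.min() * g          # placeholder selection: most conservative step
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
A = M @ M.T / 20 + np.eye(20)         # strictly convex quadratic
b = rng.standard_normal(20)
x = mlmsd_like(A, b, np.zeros(20))
print("gradient norm:", np.linalg.norm(A @ x - b))
```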



2015, Vol 48 (4), pp. 1510-1522
Author(s): Jorge López, José R. Dorronsoro


Author(s): Hongchang Gao, Heng Huang

Sparse learning models have shown promising performance in high-dimensional machine learning applications. The main challenge of sparse learning models is how to optimize them efficiently. Most existing methods address this by relaxing the problem to a convex one, which incurs a large estimation bias. The sparse learning model with a nonconvex constraint has therefore attracted much attention due to its better performance, but it is difficult to optimize because of the non-convexity. In this paper, we propose a linearly convergent stochastic second-order method to optimize this nonconvex problem for large-scale datasets. The proposed method incorporates second-order information to improve the convergence speed. Theoretical analysis shows that our method enjoys a linear convergence rate and is guaranteed to converge to the underlying true model parameter. Experimental results verify the efficiency and correctness of the proposed method.
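
To make the general idea concrete, here is a minimal sketch of a stochastic second-order step combined with hard thresholding to enforce a nonconvex sparsity constraint (||w||_0 <= s) on a sparse linear regression problem. The mini-batch size, damping, function names, and thresholding schedule are illustrative assumptions, not the paper's algorithm or its analysis.

```python
import numpy as np

def hard_threshold(w, s):
    """Keep the s largest-magnitude entries of w and zero out the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-s:]
    out[idx] = w[idx]
    return out

def stochastic_newton_iht(X, y, s, batch=128, damping=1e-2, iters=50, seed=0):
    """Sparse least squares with ||w||_0 <= s via mini-batch Newton steps + hard thresholding.

    Each iteration draws a mini-batch, forms the sub-sampled gradient and damped
    Hessian, takes a Newton-type step, and projects back onto the sparsity constraint.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=min(batch, n), replace=False)
        Xb, yb = X[idx], y[idx]
        grad = Xb.T @ (Xb @ w - yb) / len(idx)              # mini-batch gradient
        hess = Xb.T @ Xb / len(idx) + damping * np.eye(d)   # sub-sampled, damped Hessian
        w = hard_threshold(w - np.linalg.solve(hess, grad), s)
    return w

# Usage on synthetic data with a 5-sparse ground-truth parameter.
rng = np.random.default_rng(42)
n, d, s = 500, 100, 5
w_true = np.zeros(d); w_true[:s] = rng.standard_normal(s)
X = rng.standard_normal((n, d))
y = X @ w_true + 0.01 * rng.standard_normal(n)
w_hat = stochastic_newton_iht(X, y, s)
print("recovery error:", np.linalg.norm(w_hat - w_true))
```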


