Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization

Author(s):  
Yi Zhou ◽  
Zhe Wang ◽  
Kaiyi Ji ◽  
Yingbin Liang ◽  
Vahid Tarokh

Various types of parameter restart schemes have been proposed for the proximal gradient algorithm with momentum to facilitate its convergence in convex optimization. However, under parameter restart, the convergence of the proximal gradient algorithm with momentum remains unclear in nonconvex optimization. In this paper, we propose a novel proximal gradient algorithm with momentum and parameter restart for solving nonconvex and nonsmooth problems. Our algorithm is designed to 1) allow for flexible parameter restart schemes that cover many existing ones; 2) have a global sub-linear convergence rate in nonconvex and nonsmooth optimization; and 3) have guaranteed convergence to a critical point, with various asymptotic convergence rates depending on the parameterization of the local geometry, in nonconvex and nonsmooth optimization. Numerical experiments demonstrate the convergence and effectiveness of our proposed algorithm.
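To make the restart mechanism concrete, here is a minimal Python sketch of a proximal gradient step with momentum and a function-value restart test; the l1 proximal operator, the fixed step size, and the particular restart trigger are illustrative assumptions rather than the exact scheme proposed in the paper.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def apg_with_restart(grad_f, F, x0, step, lam, iters=500):
    """Proximal gradient with momentum; F is the full composite objective.
    Illustrative sketch only: the restart rule and step size are assumptions."""
    x_prev, x = x0.copy(), x0.copy()
    t_prev, t = 1.0, 1.0
    for _ in range(iters):
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)        # extrapolated (momentum) point
        x_next = prox_l1(y - step * grad_f(y), step * lam)
        if F(x_next) > F(x):                               # restart trigger (illustrative)
            t_prev, t = 1.0, 1.0                           # reset momentum
            x_next = prox_l1(x - step * grad_f(x), step * lam)
        else:
            t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        x_prev, x = x, x_next
    return x
```

The intent of such a restart test is to discard the momentum whenever extrapolation fails to decrease the objective, which is what makes momentum schemes usable beyond the convex setting.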

Author(s):  
Ehsan Kazemi ◽  
Liqiang Wang

Nonconvex and nonsmooth problems have recently attracted considerable attention in machine learning. However, developing efficient methods for nonconvex and nonsmooth optimization problems with performance guarantees remains a challenge. Proximal coordinate descent (PCD) has been widely used for solving optimization problems, but knowledge of PCD methods in the nonconvex setting is very limited. On the other hand, asynchronous proximal coordinate descent (APCD) has recently received much attention as a way to solve large-scale problems. However, accelerated variants of APCD algorithms are rarely studied. In this paper, we extend the APCD method to an accelerated algorithm (AAPCD) for nonsmooth and nonconvex problems that satisfy the sufficient descent property, by comparing the function values at the proximal update and at a linearly extrapolated point using a delay-aware momentum value. To the best of our knowledge, we are the first to provide stochastic and deterministic accelerated extensions of APCD algorithms for general nonconvex and nonsmooth problems, ensuring that every limit point is a critical point for both bounded and unbounded delays. By leveraging the Kurdyka-Łojasiewicz property, we show linear and sublinear convergence rates for the deterministic AAPCD with bounded delays. Numerical results demonstrate the practical speed of our algorithm.
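As a rough illustration of the acceptance rule described above, the following synchronous, single-machine sketch updates one random coordinate from both the current point and a momentum (extrapolated) point and keeps whichever candidate yields the smaller composite objective; the l1 prox, the fixed momentum value, and all names are assumptions for illustration, and the asynchronous, delay-aware bookkeeping is omitted.

```python
import numpy as np

def prox_l1_scalar(v, lam):
    """Scalar soft-thresholding (prox of lam * |.|)."""
    return np.sign(v) * max(abs(v) - lam, 0.0)

def accelerated_pcd(F, grad_f, x0, step, lam, beta=0.5, iters=1000, seed=0):
    """Synchronous sketch: per iteration, one coordinate is updated from
    either the current point or an extrapolated point, whichever gives the
    smaller composite objective F. Illustrative assumptions throughout."""
    rng = np.random.default_rng(seed)
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        i = rng.integers(x.size)                 # random coordinate
        z = x + beta * (x - x_prev)              # extrapolated (momentum) point
        cand_plain = x.copy()
        cand_plain[i] = prox_l1_scalar(x[i] - step * grad_f(x)[i], step * lam)
        cand_momentum = x.copy()
        cand_momentum[i] = prox_l1_scalar(z[i] - step * grad_f(z)[i], step * lam)
        x_prev = x
        # keep the candidate with the smaller composite objective
        x = cand_momentum if F(cand_momentum) <= F(cand_plain) else cand_plain
    return x
```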


2019 ◽  
Vol 2019 (1) ◽  
Author(s):  
Shijie Sun ◽  
Meiling Feng ◽  
Luoyi Shi

Abstract This paper considers an iterative algorithm for solving the multiple-sets split equality problem (MSSEP) whose step size is independent of the norms of the related operators, and investigates its sublinear and linear convergence rates. In particular, we present a notion of bounded Hölder regularity for the MSSEP, which generalizes the well-known concept of bounded linear regularity, and give several sufficient conditions that ensure it. We then use this property to establish the sublinear and linear convergence rates of the algorithm. Finally, some numerical experiments are provided to verify the validity of our results.
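For intuition, here is a minimal sketch for the two-set special case of the split equality problem (find x in C and y in Q with Ax = By), using a step size computed from the current residual so that no operator norms are needed; the box projections, the relaxation parameter rho, and the specific step-size formula are illustrative assumptions and may differ from the algorithm analyzed in the paper.

```python
import numpy as np

def project_box(v, lo, hi):
    """Projection onto the box [lo, hi] (an illustrative choice of C and Q)."""
    return np.clip(v, lo, hi)

def split_equality(A, B, x, y, lo_C, hi_C, lo_Q, hi_Q, rho=1.0, iters=500):
    """Projected-gradient iteration for 0.5 * ||Ax - By||^2 over C x Q,
    with a self-adaptive step that avoids computing ||A|| or ||B||."""
    for _ in range(iters):
        w = A @ x - B @ y                        # residual of Ax = By
        gx, gy = A.T @ w, -(B.T @ w)             # gradients w.r.t. x and y
        denom = np.linalg.norm(gx) ** 2 + np.linalg.norm(gy) ** 2
        if denom == 0.0:
            break                                # Ax = By already holds
        tau = rho * np.linalg.norm(w) ** 2 / denom   # norm-free step size
        x = project_box(x - tau * gx, lo_C, hi_C)
        y = project_box(y - tau * gy, lo_Q, hi_Q)
    return x, y
```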


2019 ◽  
Vol 35 (3) ◽  
pp. 371-378
Author(s):  
PORNTIP PROMSINCHAI ◽  
NARIN PETROT

In this paper, we consider convex constrained optimization problems with composite objective functions over the set of minimizers of another function. The main aim is to numerically test a new algorithm, namely a stochastic block coordinate proximal-gradient algorithm with penalization, by comparing both the number of iterations and the CPU time of this algorithm with those of other well-known block coordinate descent algorithms for finding solutions of randomly generated optimization problems with a regularization term.
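As a hypothetical sketch of the kind of algorithm being tested, the following routine applies one penalized proximal-gradient step to a randomly chosen block per iteration; the block partition, the increasing penalty schedule, the diminishing step size, and the l1 regularizer are all illustrative assumptions, not the exact method or parameters used in the experiments.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def stochastic_block_pg(grad_f, grad_h, x0, blocks, step, lam,
                        beta0=1.0, iters=1000, seed=0):
    """Sketch: minimize f(x) + lam*||x||_1 over argmin h via penalization.
    blocks is a list of index arrays; one random block is updated per step."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(1, iters + 1):
        beta_k = beta0 * k                            # increasing penalty on h
        idx = blocks[rng.integers(len(blocks))]       # random block of indices
        g = grad_f(x)[idx] + beta_k * grad_h(x)[idx]  # penalized block gradient
        x[idx] = prox_l1(x[idx] - (step / k) * g, (step / k) * lam)
    return x
```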

