Nonconvex and Nonsmooth Optimization
Recently Published Documents

TOTAL DOCUMENTS: 13 (five years: 6)
H-INDEX: 4 (five years: 1)

Author(s): Yi Zhou, Zhe Wang, Kaiyi Ji, Yingbin Liang, Vahid Tarokh

Various parameter restart schemes have been proposed for proximal gradient algorithms with momentum to facilitate their convergence in convex optimization. However, under parameter restart, the convergence of proximal gradient algorithms with momentum remains unclear in nonconvex optimization. In this paper, we propose a novel proximal gradient algorithm with momentum and parameter restart for solving nonconvex and nonsmooth problems. Our algorithm is designed to 1) allow flexible parameter restart schemes that cover many existing ones; 2) achieve a global sub-linear convergence rate in nonconvex and nonsmooth optimization; and 3) guarantee convergence to a critical point, with various asymptotic convergence rates depending on the parameterization of the local geometry. Numerical experiments demonstrate the convergence and effectiveness of the proposed algorithm.
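To make the restart mechanism concrete, here is a minimal sketch, not the authors' exact scheme, of an accelerated proximal gradient step with a function-value-based restart. It assumes a composite objective f(x) + lam*||x||_1 with L-smooth f; the function names and the choice of an l1 regularizer are illustrative.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_restart(grad_f, obj, x0, step, lam, iters=500):
    """Accelerated proximal gradient with a simple function-value restart.

    grad_f: gradient of the smooth part f (assumed L-smooth)
    obj:    full objective f(x) + lam * ||x||_1, used only for the restart test
    step:   step size, e.g. 1/L
    """
    x, x_prev, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x + ((t - 1.0) / t_next) * (x - x_prev)    # momentum extrapolation
        x_new = soft_threshold(y - step * grad_f(y), step * lam)
        if obj(x_new) > obj(x):                         # restart: drop momentum
            t_next = 1.0
            x_new = soft_threshold(x - step * grad_f(x), step * lam)
        x_prev, x, t = x, x_new, t_next
    return x
```

A restart of this kind discards the momentum whenever the extrapolated step increases the objective; it is one example of the flexible restart criteria a framework like the one above is designed to cover.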


Author(s): Ehsan Kazemi, Liqiang Wang

Nonconvex and nonsmooth problems have recently attracted considerable attention in machine learning. However, developing efficient methods for nonconvex and nonsmooth optimization problems with performance guarantees remains a challenge. Proximal coordinate descent (PCD) has been widely used for solving optimization problems, but knowledge of PCD methods in the nonconvex setting is very limited. On the other hand, asynchronous proximal coordinate descent (APCD) methods have recently received much attention as a way to solve large-scale problems, yet accelerated variants of APCD algorithms are rarely studied. In this paper, we extend the APCD method to an accelerated algorithm (AAPCD) for nonsmooth and nonconvex problems that satisfies the sufficient descent property, by comparing the function values at the proximal update and at a linearly extrapolated point using a delay-aware momentum value. To the best of our knowledge, we are the first to provide stochastic and deterministic accelerated extensions of APCD algorithms for general nonconvex and nonsmooth problems, ensuring that every limit point is a critical point under both bounded and unbounded delays. By leveraging the Kurdyka-Łojasiewicz property, we show linear and sublinear convergence rates for the deterministic AAPCD with bounded delays. Numerical results demonstrate the practical efficiency and speed of our algorithm.
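As a rough illustration of the descent safeguard described above, the following synchronous, deterministic sketch compares the objective after an extrapolated proximal coordinate update against the current point and falls back to a plain proximal step when acceleration does not help. The paper's asynchronous, delay-aware version is more involved; the function names and the l1 regularizer here are assumptions.

```python
import numpy as np

def prox_coord(v, t):
    # Scalar proximal operator of t * |.| (soft-thresholding)
    return np.sign(v) * max(abs(v) - t, 0.0)

def accel_pcd_step(x, x_prev, grad_f, obj, step, lam, beta, i):
    """One accelerated proximal coordinate update on coordinate i.

    x, x_prev: current and previous iterates (for extrapolation)
    beta:      momentum weight; the paper ties this to the update delay
    """
    y = x + beta * (x - x_prev)                 # linearly extrapolated point
    cand = y.copy()
    cand[i] = prox_coord(y[i] - step * grad_f(y)[i], step * lam)
    # Sufficient-descent safeguard (simplified version of the paper's test):
    # if the accelerated update fails to decrease the objective, fall back
    # to a plain proximal coordinate step at x.
    if obj(cand) > obj(x):
        cand = x.copy()
        cand[i] = prox_coord(x[i] - step * grad_f(x)[i], step * lam)
    return cand
```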


2018, Vol. 1 (1), pp. 1-10
Author(s): S. Dempe, G. Luo, S. Franke

In this paper, we investigate the pessimistic bilevel linear optimization problem (PBLOP). Based on the lower-level optimal value function and duality, the PBLOP can be transformed into a single-level, yet nonconvex and nonsmooth, optimization problem. Using linear optimization duality, we obtain a tractable and equivalent transformation and propose algorithms for computing global or local optimal solutions. A small example is presented to illustrate the feasibility of the method.
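As a hedged illustration of the duality step, not the paper's full algorithm, the sketch below verifies on made-up data that the lower-level LP value equals its dual value. This strong-duality identity is what allows the lower level to be replaced by dual feasibility conditions in a single-level reformulation; all data and function names are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Toy lower-level LP:  phi(x) = min_y { d @ y : A @ y <= b - C @ x, y >= 0 }
A = np.array([[1.0, 2.0], [3.0, 1.0]])
C = np.array([[1.0], [0.5]])
b = np.array([6.0, 7.0])
d = np.array([-1.0, -1.0])

def lower_level_value(x):
    # Primal LP: optimal value phi(x) for the given upper-level decision x
    res = linprog(c=d, A_ub=A, b_ub=b - C @ x, bounds=[(0, None)] * 2)
    return res.fun

def dual_value(x):
    # Dual LP: max { -(b - C x) @ u : A.T @ u >= -d, u >= 0 },
    # written as a minimization for linprog
    bp = b - C @ x
    res = linprog(c=bp, A_ub=-A.T, b_ub=d, bounds=[(0, None)] * 2)
    return -res.fun

x = np.array([1.0])
# Strong LP duality: both values coincide (here both equal -3.3)
print(lower_level_value(x), dual_value(x))
```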

