Subgradient-Based Neural Networks for Nonsmooth Nonconvex Optimization Problems

2009 ◽  
Vol 20 (6) ◽  
pp. 1024-1038 ◽  
Author(s):  
Wei Bian ◽  
Xiaoping Xue

2021 ◽  
Author(s):  
Tianyi Liu ◽  
Zhehui Chen ◽  
Enlu Zhou ◽  
Tuo Zhao

The momentum stochastic gradient descent (MSGD) algorithm has been widely applied to nonconvex optimization problems in machine learning (e.g., training deep neural networks and variational Bayesian inference). Despite its empirical success, theoretical understanding of the convergence properties of MSGD is still lacking. To fill this gap, we analyze the algorithmic behavior of MSGD through diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that momentum helps the iterates escape saddle points but hurts convergence within neighborhoods of optima (unless the step size or the momentum is annealed). This theoretical finding partially corroborates the empirical success of MSGD in training deep neural networks.
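The MSGD update analyzed in this abstract can be sketched on a toy problem. The objective f(x) = x⁴ − x², the noise level, and the hyperparameter values below are illustrative assumptions, not taken from the paper; the point is only the structure of the momentum update and the escape from a strict saddle point at x = 0.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, sigma=0.1):
    # gradient of the toy objective x**4 - x**2, plus Gaussian noise
    # to model the stochastic-gradient setting (assumed noise model)
    return 4 * x**3 - 2 * x + sigma * rng.normal()

x, v = 0.0, 0.0          # start exactly at the strict saddle point x = 0
eta, beta = 0.01, 0.9    # step size and momentum parameter (assumed values)
for _ in range(2000):
    v = beta * v - eta * noisy_grad(x)   # momentum accumulation
    x = x + v                            # MSGD iterate update

# momentum amplifies the noise-driven escape from the saddle at 0;
# the local minima of x**4 - x**2 lie at x = ±1/sqrt(2) ≈ ±0.707
print(x)
```

Consistent with the abstract, running this with a fixed (non-annealed) step size leaves the iterate fluctuating in a noise-dependent neighborhood of a local minimum rather than converging to it exactly.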


Author(s):  
Abdelkrim El Mouatasim ◽  
Rachid Ellaia ◽  
Eduardo de Cursi

Random Perturbation of the Projected Variable Metric Method for Nonsmooth Nonconvex Optimization Problems with Linear Constraints

We present a random perturbation of the projected variable metric method for solving linearly constrained nonsmooth (i.e., nondifferentiable) nonconvex optimization problems, and we establish convergence to a global minimum for a locally Lipschitz continuous objective function that may be nondifferentiable on a countable set of points. Numerical results show the effectiveness of the proposed approach.
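A minimal sketch of the underlying idea, under simplifying assumptions: a projected subgradient step with a decaying random perturbation, applied to the toy nonsmooth problem min |x| − cos(x) over the box −2 ≤ x ≤ 2. The paper's actual method uses a variable metric and general linear constraints; here the metric is the identity and the constraint is a box, both illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def subgrad(x):
    # a subgradient of the toy objective |x| - cos(x),
    # which is nondifferentiable at x = 0
    return np.sign(x) + np.sin(x)

def project(x, lo=-2.0, hi=2.0):
    # projection onto the feasible box (a simple linear-constraint case)
    return min(max(x, lo), hi)

x = 1.5
for k in range(1, 3001):
    step = 1.0 / k               # diminishing step size
    noise = rng.normal() / k     # random perturbation that fades out
    x = project(x - step * subgrad(x) + noise)

# the global minimizer of |x| - cos(x) on [-2, 2] is x = 0
print(x)
```

The perturbation lets early iterates jump between basins, while its decay lets the projected subgradient dynamics take over and settle at a minimizer, mirroring the global-convergence mechanism the abstract describes.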


2013 ◽  
Vol 2013 ◽  
pp. 1-9
Author(s):  
Yuan Lu ◽  
Wei Wang ◽  
Li-Ping Pang ◽  
Dan Li

A class of constrained nonsmooth nonconvex optimization problems, namely piecewise C² objectives with smooth inequality constraints, is discussed in this paper. Based on the 𝒱𝒰-theory, a superlinearly convergent 𝒱𝒰-algorithm, which uses a nonconvex redistributed proximal bundle subroutine, is designed to solve these optimization problems. An illustrative example shows how the convergent method works on a second-order cone programming problem.

