Vanishing Price of Decentralization in Large Coordinative Nonconvex Optimization

2017 · Vol 27 (3) · pp. 1977-2009
Author(s): Mengdi Wang
2015 · Vol 2015 · pp. 1-8
Author(s): Yunlong Lu, Weiwei Yang, Wenyu Li, Xiaowei Jiang, Yueting Yang

A new trust region method is presented, which combines a nonmonotone line search technique, a self-adaptive update rule for the trust region radius, and a weighting technique for the ratio between the actual reduction and the predicted reduction. Under reasonable assumptions, the global convergence of the method is established for unconstrained nonconvex optimization. Numerical results show that the new method is efficient and robust for solving unconstrained optimization problems.
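A minimal sketch of a trust region loop in this spirit, assuming a Cauchy-point model step, a nonmonotone acceptance test against the worst of the last few objective values, and a ratio-driven self-adaptive radius rule; the specific constants, the radius update, and the acceptance threshold are illustrative placeholders, not the authors' exact scheme.

```python
import numpy as np

def trust_region_nonmonotone(f, grad, hess, x0, delta0=1.0, max_iter=200,
                             memory=5, eta=0.1, tol=1e-6):
    """Illustrative trust region method with a nonmonotone acceptance test
    and a self-adaptive radius update (placeholder rules)."""
    x, delta = np.asarray(x0, dtype=float), delta0
    f_hist = [f(x)]                          # recent objective values for the nonmonotone test
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        gnorm = np.linalg.norm(g)
        if gnorm < tol:
            break
        # Cauchy-point step: minimize the quadratic model along -g inside the ball.
        gBg = g @ B @ g
        tau = 1.0 if gBg <= 0 else min(1.0, gnorm**3 / (delta * gBg))
        p = -(tau * delta / gnorm) * g
        # Nonmonotone reduction ratio: compare against the max of recent values.
        f_ref = max(f_hist[-memory:])
        actual = f_ref - f(x + p)
        predicted = -(g @ p + 0.5 * p @ B @ p)
        rho = actual / predicted if predicted > 0 else 0.0
        if rho > eta:                        # accept the trial step
            x = x + p
            f_hist.append(f(x))
        # Self-adaptive radius update driven by the reduction ratio (placeholder rule).
        delta *= np.clip(0.5 + rho, 0.25, 2.0)
    return x
```

Because the acceptance test compares the trial point against the maximum of the last few accepted values rather than the current one, the loop can accept occasional uphill steps, which is the usual motivation for nonmonotone strategies on nonconvex problems.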


2021
Author(s): Tianyi Liu, Zhehui Chen, Enlu Zhou, Tuo Zhao

The momentum stochastic gradient descent (MSGD) algorithm has been widely applied to many nonconvex optimization problems in machine learning (e.g., training deep neural networks, variational Bayesian inference, etc.). Despite its empirical success, there is still a lack of theoretical understanding of the convergence properties of MSGD. To fill this gap, we propose to analyze the algorithmic behavior of MSGD by diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that momentum helps escape from saddle points but hurts convergence within the neighborhood of optima (in the absence of step size annealing or momentum annealing). Our theoretical discovery partially corroborates the empirical success of MSGD in training deep neural networks.
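A minimal sketch of the MSGD iteration being analyzed, written in the heavy-ball form; the toy saddle-point objective, hyperparameters, and Gaussian gradient noise are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def msgd(grad, x0, lr=0.01, momentum=0.9, n_iter=2000, noise_std=0.1, seed=0):
    """Momentum SGD: v <- mu * v - lr * (noisy gradient); x <- x + v.
    Gradient noise is simulated with additive Gaussian perturbations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_iter):
        g = grad(x) + noise_std * rng.standard_normal(x.shape)
        v = momentum * v - lr * g
        x = x + v
    return x

# Toy nonconvex objective with a strict saddle at the origin (placeholder example):
# f(x, y) = x^2 - y^2 + y^4, so grad f = (2x, -2y + 4y^3).
grad = lambda z: np.array([2 * z[0], -2 * z[1] + 4 * z[1]**3])
print(msgd(grad, x0=[1.0, 1e-3]))
```

Starting near the saddle, the momentum term accumulates the small negative-curvature signal along the y direction and pushes the iterate toward one of the two local minima, while the same accumulated velocity keeps the iterate oscillating around the optimum once it gets there unless the step size or momentum is annealed.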

