Stochastic Gradient Descent on a Tree: an Adaptive and Robust Approach to Stochastic Convex Optimization

Author(s): Sattar Vakili, Sudeep Salgia, Qing Zhao

Author(s): Yi Xu, Zhuoning Yuan, Sen Yang, Rong Jin, Tianbao Yang

Extrapolation is a well-known technique for solving convex optimization problems and variational inequalities, and it has recently attracted attention for non-convex optimization. Several recent works have empirically demonstrated its success on machine learning tasks. However, it has not been analyzed for non-convex minimization, and a gap remains between theory and practice. In this paper, we analyze gradient descent and stochastic gradient descent with extrapolation for finding an approximate first-order stationary point of smooth non-convex optimization problems. Our convergence upper bounds show that the algorithms with extrapolation converge faster than their counterparts without extrapolation.
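As a concrete illustration of the kind of update analyzed in the abstract, the sketch below runs gradient descent with extrapolation, evaluating the gradient at an extrapolated point y_t = x_t + gamma * (x_t - x_{t-1}) rather than at x_t itself. This is a minimal sketch under stated assumptions, not the paper's exact algorithm: the objective, the step size eta, the extrapolation coefficient gamma, and the iteration count are illustrative choices.

import numpy as np

def gd_with_extrapolation(grad, x0, eta=0.05, gamma=0.3, n_iters=500):
    # Gradient descent with extrapolation: the gradient is evaluated at the
    # extrapolated point y_t = x_t + gamma * (x_t - x_{t-1}) instead of at x_t.
    # eta and gamma are illustrative hyperparameters, not values from the paper.
    x_prev = np.array(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(n_iters):
        y = x + gamma * (x - x_prev)   # extrapolated point
        g = grad(y)                    # gradient at the extrapolated point
        x_prev, x = x, x - eta * g     # descent step taken from x_t
    return x

# Illustrative smooth non-convex objective f(x) = x^4 - 3*x^2 + x (hypothetical example).
grad_f = lambda x: 4 * x**3 - 6 * x + 1
x_hat = gd_with_extrapolation(grad_f, [2.0])
print(x_hat)  # an approximate first-order stationary point of f

Replacing grad with an unbiased stochastic gradient estimate gives the stochastic variant discussed in the abstract; the analysis there concerns how fast the gradient norm at the iterates shrinks in each case.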

