One-dimensional system arising in stochastic gradient descent

2021, Vol. 53 (2), pp. 575–607
Author(s): Konstantinos Karatapanis

Abstract We consider stochastic differential equations of the form $dX_t = |f(X_t)|/t^{\gamma}\, dt + 1/t^{\gamma}\, dB_t$, where f(x) behaves comparably to $|x|^k$ in a neighborhood of the origin, for $k\in [1,\infty)$. We show that there exists a threshold value $\tilde{\gamma}$ for $\gamma$, depending on k, such that if $\gamma \in (1/2, \tilde{\gamma})$, then $\mathbb{P}(X_t\rightarrow 0) = 0$, while for the remaining permissible values of $\gamma$, $\mathbb{P}(X_t\rightarrow 0)>0$. These results extend to discrete processes that satisfy $X_{n+1}-X_n = f(X_n)/n^\gamma +Y_n/n^\gamma$. Here, the $Y_{n+1}$ are martingale differences that are almost surely bounded. This result shows that, for a function F whose second derivative at degenerate saddle points is of polynomial order, it is always possible to escape saddle points via the iteration $X_{n+1}-X_n = F'(X_n)/n^\gamma +Y_n/n^\gamma$ for a suitable choice of $\gamma$.
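
As a rough illustration of the discrete recursion above, the sketch below simulates $X_{n+1}-X_n = f(X_n)/n^\gamma + Y_n/n^\gamma$ with an assumed drift comparable to $|x|^k$ near the origin and bounded Rademacher noise. The values of k and $\gamma$ are arbitrary illustrative choices; the theorem's threshold $\tilde{\gamma}(k)$ is not computed here.

```python
import numpy as np

def simulate(k=3.0, gamma=0.6, n_steps=100_000, x0=0.0, seed=0):
    """Simulate X_{n+1} = X_n + f(X_n)/n**gamma + Y_n/n**gamma with
    f(x) = sign(x) * min(|x|**k, 1): comparable to |x|**k near the origin and
    clipped away from it purely to keep the simulation numerically stable.
    The Y_n are i.i.d. Rademacher variables, a bounded martingale-difference
    sequence (an illustrative assumption, not the paper's setup)."""
    rng = np.random.default_rng(seed)
    x = x0
    for n in range(1, n_steps + 1):
        drift = np.sign(x) * min(abs(x) ** k, 1.0)
        noise = rng.choice((-1.0, 1.0))
        x += (drift + noise) / n ** gamma
    return x

# The theorem separates gamma in (1/2, gamma~) (no convergence to 0) from the
# remaining permissible gamma (convergence to 0 with positive probability).
# The two values below are arbitrary illustrations; gamma~(k) is not computed.
for gamma in (0.55, 0.9):
    print(gamma, simulate(gamma=gamma))
```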

2021
Author(s): Tianyi Liu, Zhehui Chen, Enlu Zhou, Tuo Zhao

The momentum stochastic gradient descent (MSGD) algorithm has been widely applied to many nonconvex optimization problems in machine learning (e.g., training deep neural networks and variational Bayesian inference). Despite its empirical success, there is still a lack of theoretical understanding of the convergence properties of MSGD. To fill this gap, we propose to analyze the algorithmic behavior of MSGD by diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that momentum helps escape saddle points but hurts convergence within the neighborhood of optima (in the absence of step size or momentum annealing). Our theoretical discovery partially corroborates the empirical success of MSGD in training deep neural networks.
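
As a hedged sketch (not the authors' experimental setup), the heavy-ball MSGD update discussed above can be written as follows; the toy objective, step size, momentum parameter, and noise level are all illustrative assumptions.

```python
import numpy as np

def msgd(grad, x0, lr=0.01, momentum=0.9, noise_std=0.1, n_steps=5_000, seed=0):
    """Heavy-ball momentum SGD: v <- momentum * v - lr * g;  x <- x + v,
    where g is a noisy gradient of the objective."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(n_steps):
        g = grad(x) + noise_std * rng.standard_normal(x.shape)  # stochastic gradient
        v = momentum * v - lr * g
        x = x + v
    return x

def grad_F(z):
    """Gradient of F(x, y) = (x**2 - 1)**2 / 4 + y**2 / 2, which has a strict
    saddle at (0, 0) and isolated minima at (+/-1, 0)."""
    return np.array([z[0] ** 3 - z[0], z[1]])

# Started near the saddle, the iterates drift toward one of the minima; with a
# constant step size and momentum they keep fluctuating around it rather than
# settling down, in line with the annealing caveat above.
print(msgd(grad_F, x0=[1e-3, 1e-3]))
```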


2019, Vol. 25 (1), pp. 37–60
Author(s): Antoon Pelsser, Kossi Gnameho

Abstract Backward stochastic differential equations (BSDEs) appear in many problems in stochastic optimal control theory, mathematical finance, insurance and economics. This work deals with the numerical approximation of the class of Markovian BSDEs where the terminal condition is a functional of a Brownian motion. Using Hermite martingales, we show that the problem of solving a BSDE is identical to solving a countably infinite system of ordinary differential equations (ODEs). The family of ODEs belongs to the class of stiff ODEs, where the associated functional is one-sided Lipschitz. On this basis, we derive a numerical scheme and provide numerical applications.
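
The sketch below is not the authors' scheme; it only illustrates the Hermite-martingale basis $H_n(x,t) = t^{n/2}\,\mathrm{He}_n(x/\sqrt{t})$ in which a terminal condition $g(W_T)$ can be expanded. The terminal function, horizon, truncation level, and quadrature size are arbitrary illustrative choices.

```python
import math

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def hermite_martingale_coeffs(g, T, n_terms=12, n_quad=64):
    """Coefficients c_n in g(W_T) ~= sum_n c_n * T**(n/2) * He_n(W_T / sqrt(T)),
    where He_n are probabilists' Hermite polynomials and H_n(x, t) =
    t**(n/2) * He_n(x / sqrt(t)) is the n-th Hermite martingale.
    N(0, 1) expectations are computed by Gauss-Hermite quadrature."""
    z, w = hermegauss(n_quad)          # nodes/weights for the weight exp(-z**2/2)
    w = w / np.sqrt(2.0 * np.pi)       # normalize to the standard normal density
    coeffs = np.empty(n_terms)
    for n in range(n_terms):
        one_hot = np.zeros(n + 1)
        one_hot[n] = 1.0                                       # selects He_n
        moment = np.sum(w * g(np.sqrt(T) * z) * hermeval(z, one_hot))
        coeffs[n] = moment / (math.factorial(n) * T ** (n / 2.0))
    return coeffs

# Example: expand a call-type terminal condition g(x) = max(x, 0) at T = 1 and
# compare the truncated expansion with the exact payoff on a few grid points.
T = 1.0
c = hermite_martingale_coeffs(lambda x: np.maximum(x, 0.0), T)
x = np.linspace(-2.0, 2.0, 5)
expansion = sum(c[n] * T ** (n / 2.0) * hermeval(x / np.sqrt(T), np.eye(len(c))[n])
                for n in range(len(c)))
print(np.round(expansion, 3))
print(np.maximum(x, 0.0))
```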

