On the Convergence Rate of Stochastic Gradient Descent for Strongly Convex Functions

2016 ◽ Vol 64 (2) ◽ pp. 141-145
Author(s): Md Rajib Arefin ◽ M Asadujjaman

This paper deals with minimizing the average of loss functions using Gradient Descent (GD) and Stochastic Gradient Descent (SGD). We present these two algorithms for minimizing the average of a large number of smooth convex functions, discuss their complexity analysis, and illustrate the algorithms geometrically. Finally, we compare their performance through numerical experiments. Dhaka Univ. J. Sci. 64(2): 141-145, 2016 (July)
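The contrast the abstract draws between GD and SGD can be sketched on a synthetic finite-sum problem. The least-squares objective, step sizes, and problem dimensions below are illustrative assumptions, not the paper's actual experiments: GD uses the full gradient (one pass over all n terms per step), while SGD uses the gradient of a single randomly sampled term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic finite-sum problem: minimize (1/n) * sum_i (a_i^T x - b_i)^2,
# an average of n smooth convex loss functions.
n, d = 200, 5
A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true

def full_gradient(x):
    # Gradient of the average loss: (2/n) * A^T (A x - b).
    return (2.0 / n) * A.T @ (A @ x - b)

def stochastic_gradient(x):
    # Gradient of one randomly sampled loss function.
    i = rng.integers(n)
    return 2.0 * A[i] * (A[i] @ x - b[i])

def gd(steps=500, lr=0.1):
    x = np.zeros(d)
    for _ in range(steps):
        x -= lr * full_gradient(x)        # touches all n terms per step
    return x

def sgd(steps=5000, lr=0.01):
    x = np.zeros(d)
    for _ in range(steps):
        x -= lr * stochastic_gradient(x)  # touches one term per step
    return x

print(np.linalg.norm(gd() - x_true), np.linalg.norm(sgd() - x_true))
```

Each SGD step is n times cheaper than a GD step, which is the complexity trade-off the paper analyzes.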


2020 ◽ Vol 10 (2) ◽ pp. 124-151
Author(s): Justin Sirignano ◽ Konstantinos Spiliopoulos

Stochastic gradient descent in continuous time (SGDCT) provides a computationally efficient method for the statistical learning of continuous-time models, which are widely used in science, engineering, and finance. The SGDCT algorithm follows a (noisy) descent direction along a continuous stream of data. The parameter updates occur in continuous time and satisfy a stochastic differential equation. This paper analyzes the asymptotic convergence rate of the SGDCT algorithm by proving a central limit theorem for strongly convex objective functions and, under slightly stronger conditions, for nonconvex objective functions as well. An [Formula: see text] convergence rate is also proven for the algorithm in the strongly convex case. The mathematical analysis lies at the intersection of stochastic analysis and statistical learning.
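The continuous-time update described in the abstract can be illustrated by discretizing it with Euler-Maruyama steps. The setup below is a minimal illustrative assumption (not the paper's experiments): the drift parameter theta of an Ornstein-Uhlenbeck process is learned from a single continuous stream of data, with the parameter update following a noisy descent direction at every increment of the stream.

```python
import numpy as np

rng = np.random.default_rng(1)

# Data stream: an Ornstein-Uhlenbeck process dX_t = -theta* X_t dt + sigma dW_t.
# Model drift f(x; theta) = -theta * x; the continuous-time parameter update
#   dtheta_t = alpha_t * grad_theta f(X_t; theta_t) * (dX_t - f(X_t; theta_t) dt)
# is discretized with Euler-Maruyama steps of size dt.
theta_true, sigma = 1.5, 1.0
dt, n_steps = 0.01, 50_000

X, theta = 0.0, 0.0
for k in range(n_steps):
    t = k * dt
    alpha = 1.0 / (1.0 + t)                 # decaying learning rate alpha_t
    dW = rng.normal(0.0, np.sqrt(dt))       # Brownian increment
    dX = -theta_true * X * dt + sigma * dW  # observed data increment
    # grad_theta f(x; theta) = -x; noisy descent step along the data stream
    theta += alpha * (-X) * (dX - (-theta * X) * dt)
    X += dX

print(theta)  # should drift toward theta_true = 1.5
```

In expectation each increment moves theta toward theta_true at a rate proportional to alpha_t * X_t^2, so the estimate converges as the learning rate decays, mirroring the asymptotic behavior the paper analyzes.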
