A zeroth order method for stochastic weakly convex optimization
Abstract: In this paper, we consider stochastic weakly convex optimization problems in which no stochastic subgradient oracle is available. We present a derivative-free algorithm that uses a two-point approximation to compute a gradient estimate of the smoothed function. We prove convergence at a rate similar to that of state-of-the-art methods, albeit with a larger constant, and report numerical results showing the effectiveness of the approach.
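The abstract's central ingredient, a two-point gradient estimate of the smoothed function, can be sketched as follows. This is a minimal illustration of the standard two-point scheme (sample a direction u on the unit sphere, query f at x + mu*u and x - mu*u, and scale the finite difference), not the paper's exact algorithm; the function name, signature, and default smoothing radius `mu` are assumptions for illustration.

```python
import math
import random

def two_point_gradient_estimate(f, x, mu=1e-4, rng=random):
    """Two-point zeroth-order estimate of the gradient at x.

    Samples u uniformly on the unit sphere and returns
    d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u,
    an estimate of the gradient of the mu-smoothed function,
    using only two function evaluations and no derivatives.
    This is a generic sketch of the classical estimator, not
    the specific method analyzed in the paper.
    """
    d = len(x)
    # Draw u uniformly on the unit sphere by normalizing a Gaussian vector.
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]
    norm = math.sqrt(sum(gi * gi for gi in g))
    u = [gi / norm for gi in g]
    # Two function queries at symmetric perturbations of x.
    x_plus = [xi + mu * ui for xi, ui in zip(x, u)]
    x_minus = [xi - mu * ui for xi, ui in zip(x, u)]
    # Central finite difference along u, rescaled by the dimension.
    scale = d * (f(x_plus) - f(x_minus)) / (2.0 * mu)
    return [scale * ui for ui in u]
```

Averaged over many random directions, the estimate approximates the true gradient; a single sample is noisy but cheap, which is why such estimators are typically paired with stochastic (sub)gradient-style iterations, as in the method described above.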