Variable Sample Size
Recently Published Documents

TOTAL DOCUMENTS: 88 (five years: 15)
H-INDEX: 15 (five years: 1)

Author(s): Wai Chung Yeong, Yen Yoon Tan, Sok Li Lim, Khai Wah Khaw, Michael Boon Chong Khoo

Author(s): Afrooz Jalilzadeh, Angelia Nedić, Uday V. Shanbhag, Farzad Yousefian

Classical theory for quasi-Newton schemes has focused on smooth, deterministic, unconstrained optimization, whereas recent forays into stochastic convex optimization have largely resided in smooth, unconstrained, and strongly convex regimes. Naturally, there is a compelling need to address nonsmoothness, the lack of strong convexity, and the presence of constraints. Accordingly, this paper presents a quasi-Newton framework that can process merely convex and possibly nonsmooth (but smoothable) stochastic convex problems. We propose a framework that combines iterative smoothing and regularization with a variance-reduced scheme reliant on using an increasing sample size of gradients. We make the following contributions. (i) We develop a regularized and smoothed variable sample-size BFGS update (rsL-BFGS) that generates a sequence of Hessian approximations and can accommodate nonsmooth convex objectives by utilizing iterative regularization and smoothing. (ii) In strongly convex regimes with state-dependent noise, the proposed variable sample-size stochastic quasi-Newton (VS-SQN) scheme admits a nonasymptotic linear rate of convergence, whereas the oracle complexity of computing an $\epsilon$-solution is $\mathcal{O}(\kappa^{m+1}/\epsilon)$, where $\kappa$ denotes the condition number and $m \geq 1$. In nonsmooth (but smoothable) regimes, using Moreau smoothing retains the linear convergence rate for the resulting smoothed VS-SQN (or sVS-SQN) scheme. Notably, the nonsmooth regime allows for accommodating convex constraints. To contend with the possible unavailability of Lipschitzian and strong convexity parameters, we also provide sublinear rates for diminishing step-length variants that do not rely on the knowledge of such parameters. (iii) In merely convex but smooth settings, the regularized VS-SQN scheme rVS-SQN displays a rate of $\mathcal{O}(1/k^{1-\bar{\epsilon}})$ with an oracle complexity of $\mathcal{O}(1/\epsilon^{3})$. When the smoothness requirements are weakened, the rate for the regularized and smoothed VS-SQN scheme rsVS-SQN worsens to $\mathcal{O}(k^{-1/3})$. Such statements allow for a state-dependent noise assumption under a quadratic growth property on the objective. To the best of our knowledge, the rate results are among the first available rates for QN methods in nonsmooth regimes. Preliminary numerical evidence suggests that the schemes compare well with accelerated gradient counterparts on selected problems in stochastic optimization and machine learning with significant benefits in ill-conditioned regimes.
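To make the two core ingredients of the abstract concrete, here is a minimal sketch, not the paper's implementation: it pairs a geometrically increasing sample size (the variance-reduction device) with a standard L-BFGS two-loop recursion, on a synthetic $\ell_1$-regularized least-squares objective whose nonsmooth term is replaced by its Moreau envelope (the Huber function). All function names, the data model, and every parameter value are illustrative assumptions; the sketch omits the paper's iterative regularization and the specific rsL-BFGS update rules.

```python
# Minimal VS-SQN-style sketch (illustrative, not the paper's rsL-BFGS scheme).
# Assumed problem: minimize E[0.5*(a'x - b)^2] + mu*||x||_1, with the l1 term
# replaced by its Moreau envelope (the Huber function, smoothing parameter eta).
import numpy as np
from collections import deque

def huber_grad(x, eta):
    """Gradient of the Moreau envelope of |.|, applied elementwise (Huber)."""
    return np.where(np.abs(x) <= eta, x / eta, np.sign(x))

def two_loop(g, pairs):
    """L-BFGS two-loop recursion: apply the inverse-Hessian approximation to g."""
    alphas, q = [], g.copy()
    for s, y in reversed(pairs):                      # newest pair first
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    s, y = pairs[-1]
    q *= (s @ y) / (y @ y)                            # scaled initial Hessian H0
    for (s, y), a in zip(pairs, reversed(alphas)):    # oldest pair first
        b = (y @ q) / (y @ s)
        q += (a - b) * s
    return q

def sampled_grad(x, n, mu, eta, rng):
    """Average of n stochastic gradients of the smoothed objective."""
    A = rng.standard_normal((n, x.size))
    b = A @ np.ones(x.size) + 0.1 * rng.standard_normal(n)  # synthetic stream
    return A.T @ (A @ x - b) / n + mu * huber_grad(x, eta)

def vs_sqn(dim=20, iters=30, mu=0.1, eta=1e-2, rho=0.7, step=0.05, mem=5):
    rng = np.random.default_rng(0)
    x = np.zeros(dim)
    pairs = deque(maxlen=mem)                 # limited-memory curvature pairs
    x_prev = g_prev = None
    for k in range(iters):
        n_k = int(np.ceil(rho ** (-k)))       # geometrically increasing sample size
        g = sampled_grad(x, n_k, mu, eta, rng)
        # Curvature pair from successive iterates; g and g_prev come from
        # different batches, so y is noisy -- the paper's rsL-BFGS update adds
        # regularization precisely to keep the approximation well behaved.
        if x_prev is not None:
            s, y = x - x_prev, g - g_prev
            if y @ s > 1e-10:                 # keep only positive-curvature pairs
                pairs.append((s, y))
        d = two_loop(g, list(pairs)) if pairs else g
        x_prev, g_prev = x, g
        x = x - step * d
    return x

print(vs_sqn()[:5])   # roughly approaches the synthetic optimum near ones(20)
```

The geometric batch growth is what drives the gradient variance to zero across iterations; under the abstract's strong convexity assumption this is the mechanism that allows a deterministic-style linear rate despite stochastic gradients.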


2020, Vol. 88, pp. 106041
Author(s): Xiao-han Wang, Yong Zhang, Xiao-yan Sun, Yong-li Wang, Chang-he Du
