oracle complexity
Recently Published Documents

TOTAL DOCUMENTS: 14 (FIVE YEARS: 6)
H-INDEX: 4 (FIVE YEARS: 0)

Author(s):  
Renbo Zhao

We develop stochastic first-order primal-dual algorithms to solve a class of convex-concave saddle-point problems. When the saddle function is strongly convex in the primal variable, we develop the first stochastic restart scheme for this problem. When the gradient noise is sub-Gaussian, the oracle complexity of our restart scheme is strictly better than that of any existing method, even in the deterministic case. Furthermore, for each problem parameter of interest, whenever a lower bound exists, the oracle complexity of our restart scheme is either optimal or nearly optimal (up to a log factor). The subroutine used in this scheme is itself a new stochastic algorithm, developed for the case where the saddle function is non-strongly convex in the primal variable. This new algorithm, based on the primal-dual hybrid gradient framework, achieves state-of-the-art oracle complexity and may be of independent interest.
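For concreteness, the following is a minimal sketch of the primal-dual hybrid gradient (Chambolle-Pock) template on which the new algorithm builds, applied to a regularized bilinear saddle problem. The problem instance, step sizes, and the Gaussian perturbation standing in for a stochastic gradient oracle are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def pdhg_saddle(K, b, lam=1.0, sigma_noise=0.0, iters=500, seed=0):
    """Minimal primal-dual hybrid gradient (Chambolle-Pock) sketch for
        min_x max_y  (lam/2)||x||^2 + <Kx, y> - (1/2)||y||^2 - <b, y>.
    `sigma_noise` adds Gaussian noise to the matrix-vector products to
    mimic a stochastic first-order oracle (an illustrative assumption,
    not the paper's exact oracle model)."""
    rng = np.random.default_rng(seed)
    m, n = K.shape
    L = np.linalg.norm(K, 2)           # ||K||; convergence needs tau*sig*L^2 <= 1
    tau = sig = 0.9 / L
    x, y, x_bar = np.zeros(n), np.zeros(m), np.zeros(n)
    for _ in range(iters):
        # dual ascent step; prox of (1/2)||y||^2 + <b, y> is closed form
        Kx = K @ x_bar + sigma_noise * rng.standard_normal(m)
        y = (y + sig * (Kx - b)) / (1.0 + sig)
        # primal descent step; prox of (lam/2)||x||^2 is closed form
        KTy = K.T @ y + sigma_noise * rng.standard_normal(n)
        x_new = (x - tau * KTy) / (1.0 + tau * lam)
        x_bar = 2.0 * x_new - x        # extrapolation with theta = 1
        x = x_new
    return x, y
```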


Author(s):  
Afrooz Jalilzadeh ◽  
Angelia Nedić ◽  
Uday V. Shanbhag ◽  
Farzad Yousefian

Classical theory for quasi-Newton schemes has focused on smooth, deterministic, unconstrained optimization, whereas recent forays into stochastic convex optimization have largely resided in smooth, unconstrained, and strongly convex regimes. Naturally, there is a compelling need to address nonsmoothness, the lack of strong convexity, and the presence of constraints. Accordingly, this paper presents a quasi-Newton framework that can process merely convex and possibly nonsmooth (but smoothable) stochastic convex problems. The framework combines iterative smoothing and regularization with a variance-reduced scheme reliant on an increasing sample size of gradients. We make the following contributions. (i) We develop a regularized and smoothed variable sample-size BFGS update (rsL-BFGS) that generates a sequence of Hessian approximations and can accommodate nonsmooth convex objectives by utilizing iterative regularization and smoothing. (ii) In strongly convex regimes with state-dependent noise, the proposed variable sample-size stochastic quasi-Newton (VS-SQN) scheme admits a nonasymptotic linear rate of convergence, whereas the oracle complexity of computing an $\epsilon$-solution is [Formula: see text], where $\kappa$ denotes the condition number and [Formula: see text]. In nonsmooth (but smoothable) regimes, Moreau smoothing retains the linear convergence rate for the resulting smoothed VS-SQN (sVS-SQN) scheme. Notably, the nonsmooth regime allows for accommodating convex constraints. To contend with the possible unavailability of the Lipschitz and strong convexity parameters, we also provide sublinear rates for diminishing step-length variants that do not rely on knowledge of such parameters. (iii) In merely convex but smooth settings, the regularized VS-SQN scheme (rVS-SQN) displays a rate of [Formula: see text] with an oracle complexity of [Formula: see text]. When the smoothness requirements are weakened, the rate for the regularized and smoothed variant (rsVS-SQN) worsens to [Formula: see text]. These statements allow for a state-dependent noise assumption under a quadratic growth property on the objective. To the best of our knowledge, these are among the first rate results for quasi-Newton methods in nonsmooth regimes. Preliminary numerical evidence suggests that the schemes compare well with accelerated gradient counterparts on selected problems in stochastic optimization and machine learning, with significant benefits in ill-conditioned regimes.
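The following is a rough sketch of the variable sample-size idea: average a geometrically growing batch of sampled gradients (the variance-reduction device) and take a quasi-Newton step. The memory-1 BFGS update, the regularization term, the `grad_sample` interface, and all constants are simplifying assumptions and do not reproduce the paper's rsL-BFGS construction.

```python
import numpy as np

def vs_sqn_sketch(grad_sample, x0, n_iters=30, m0=4, growth=1.5,
                  mu_reg=1e-3, seed=0):
    """Illustrative variable sample-size stochastic quasi-Newton loop:
    average N_k sampled gradients (N_k grows geometrically) and take a
    BFGS-style step. grad_sample(x, rng) returns one noisy gradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                     # inverse-Hessian approximation
    g_prev, x_prev = None, None
    for k in range(n_iters):
        N_k = min(int(m0 * growth**k), 4096)   # cap keeps the sketch cheap
        g = np.mean([grad_sample(x, rng) for _ in range(N_k)], axis=0)
        g += mu_reg * x                    # Tikhonov regularization (merely convex case)
        if g_prev is not None:
            s, y = x - x_prev, g - g_prev
            if s @ y > 1e-12:              # curvature check keeps H positive definite
                rho = 1.0 / (s @ y)
                V = np.eye(x.size) - rho * np.outer(s, y)
                H = V @ H @ V.T + rho * np.outer(s, s)   # memory-1 BFGS update
        x_prev, g_prev = x.copy(), g.copy()
        x = x - 0.5 * H @ g                # damped quasi-Newton step
    return x

# Example: noisy gradients of f(x) = 0.5||x - b||^2 (a hypothetical instance)
b = np.ones(5)
noisy_grad = lambda x, rng: (x - b) + 0.1 * rng.standard_normal(x.size)
print(vs_sqn_sketch(noisy_grad, x0=np.zeros(5)))
```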


Author(s):  
Dan Garber

We revisit the problem of online linear optimization in the case where the set of feasible actions is accessible through an approximate linear optimization oracle with a factor-α multiplicative approximation guarantee. This setting is of particular interest because it captures natural online extensions of well-studied offline linear optimization problems that are NP-hard yet admit efficient approximation algorithms. The goal is to minimize the α-regret, the natural extension of the standard regret in online learning to this setting. We present new algorithms with significantly improved oracle complexity for both the full-information and bandit variants of the problem. Mainly, for both variants, we present α-regret bounds of [Formula: see text], where T is the number of prediction rounds, using only [Formula: see text] calls to the approximation oracle per iteration, on average. These are the first results to obtain both an average oracle complexity of [Formula: see text] (or even polylogarithmic in T) and an α-regret bound of [Formula: see text] for a constant c > 0, for both variants.
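For reference, the α-regret can be written as follows, stated here under the maximization convention with an α-approximation oracle (0 < α ≤ 1); the minimization convention is symmetric and may be the one the paper actually uses.

```latex
% alpha-regret against the best fixed action, discounted by the
% approximation factor alpha that the offline oracle guarantees
\[
  \mathrm{Regret}_{\alpha}(T)
  \;=\;
  \alpha \cdot \max_{x \in \mathcal{K}} \sum_{t=1}^{T} r_t(x)
  \;-\;
  \sum_{t=1}^{T} r_t(x_t)
\]
```

Here $x_t \in \mathcal{K}$ is the action played in round $t$ and $r_t$ is the reward function revealed afterwards; competing against $\alpha$ times the best fixed action is what makes sublinear guarantees attainable with only approximate oracle access.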


Author(s):  
Yuanyuan Liu ◽  
Fanhua Shang ◽  
Licheng Jiao

Recently, research on variance-reduced incremental gradient descent methods (e.g., SAGA) has made exciting progress (e.g., linear convergence for strongly convex (SC) problems). However, existing accelerated methods (e.g., point-SAGA) suffer from drawbacks such as inflexibility. In this paper, we design a novel and simple momentum term to accelerate the classical SAGA algorithm and propose a direct accelerated incremental gradient descent algorithm. In particular, our theoretical results show that the algorithm attains the best-known oracle complexity for strongly convex problems and an improved convergence rate in the case $n \geq L/\mu$. We also present experimental results that corroborate the theory and demonstrate the effectiveness of our algorithm.
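A minimal sketch of the SAGA baseline with a momentum term bolted on is given below. The heavy-ball-style momentum used here is only an illustrative stand-in; the paper's momentum couples the iterates differently.

```python
import numpy as np

def saga_momentum(grad_i, n, x0, eta=0.05, beta=0.2, epochs=20, seed=0):
    """Plain SAGA with a heavy-ball momentum term (the momentum is an
    assumption for illustration, not the paper's construction).
    grad_i(i, x) returns the gradient of the i-th component function."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    x_old = x.copy()
    table = np.array([grad_i(i, x) for i in range(n)])  # stored gradients
    g_avg = table.mean(axis=0)
    for _ in range(epochs * n):
        j = rng.integers(n)
        g_j = grad_i(j, x)
        v = g_j - table[j] + g_avg                 # unbiased variance-reduced gradient
        x_new = x - eta * v + beta * (x - x_old)   # heavy-ball momentum (assumption)
        g_avg += (g_j - table[j]) / n              # keep running average in sync
        table[j] = g_j
        x, x_old = x_new, x
    return x

# Example: least squares, f_i(x) = 0.5 * (a_i @ x - y_i)^2
rng = np.random.default_rng(1)
A_mat, y_obs = rng.standard_normal((50, 5)), rng.standard_normal(50)
g = lambda i, x: (A_mat[i] @ x - y_obs[i]) * A_mat[i]
x_hat = saga_momentum(g, n=50, x0=np.zeros(5))
```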


2019 ◽  
Vol 19 (7&8) ◽  
pp. 555-574
Author(s):  
Abhijith J. ◽  
Apoorva Patel

The question of whether quantum spatial search in two dimensions can be made optimal has long been an open problem. We report progress towards its resolution by showing that the oracle complexity for locating the target can be made optimal, at the cost of a logarithmic-factor increase in the number of calls to the walk operator that incorporates the graph structure. Our algorithm does not require amplitude amplification. An important ingredient of our algorithm is the implementation of multi-step quantum walks by graph powering, using a coin space of walk-length-dependent dimension, which may be of independent interest. Finally, we demonstrate how to implement quantum walks arising from powers of symmetric Markov chains using our methods.
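The graph-powering ingredient has a simple classical analogue that may help fix intuition: the k-th power of the single-step transition matrix is the transition matrix of the k-step walk, so one "powered" step advances the walk k hops at once. The numpy sketch below illustrates only this classical idea, not the paper's unitary construction.

```python
import numpy as np

def k_step_walk(A, k):
    """Transition matrix of the k-step random walk on a graph with
    adjacency matrix A, obtained by powering the single-step walk."""
    deg = A.sum(axis=1)
    P = A / deg[:, None]                 # row-stochastic single-step walk
    return np.linalg.matrix_power(P, k)  # k-step transition probabilities

# Example: 4-cycle; two powered steps spread probability two hops out.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(k_step_walk(A, 2)[0])              # distribution after 2 hops from node 0
```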


2018 ◽  
Vol 18 (15&16) ◽  
pp. 1295-1331
Author(s):  
Abhijith J. ◽  
Apoorva Patel

We analyse the eigenvalue and eigenvector structure of the flip-flop quantum walk on regular graphs, explicitly demonstrating how it is quadratically faster than the classical random walk. We then use it in a controlled spatial search algorithm with multiple target states and determine the oracle complexity as a function of the spectral gap and the number of target states. The oracle complexity is optimal as a function of the graph size and the number of target states when the spectral gap of the adjacency matrix is $\Theta(1)$. It is also optimal for spatial search on $D > 4$-dimensional hypercubic lattices. Otherwise, it matches the best result available in the literature, with a much simpler algorithm. Our results also yield bounds on the classical hitting time of random walks on regular graphs, which may be of independent interest.
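Since the complexity statement is driven by the spectral gap, the sketch below computes that quantity for a regular graph. The hitting-time scalings in the comments are stated loosely, only to fix intuition; they are not the paper's precise bounds.

```python
import numpy as np

def spectral_gap(A):
    """Spectral gap delta = 1 - lambda_2 of the normalized walk matrix A/d
    for a connected d-regular graph with adjacency matrix A. Roughly,
    classical hitting scales like 1/(delta * eps) while the quantum walk
    scales like 1/sqrt(delta * eps), with eps the marked fraction
    (a loose statement of the quadratic speedup, for intuition only)."""
    d = int(A.sum(axis=1)[0])            # d-regular: constant row sum
    lam = np.linalg.eigvalsh(A / d)      # symmetric => real spectrum in [-1, 1]
    return 1.0 - lam[-2]                 # second-largest eigenvalue sets mixing

# Example: complete graph K_8 has gap 1 + 1/7, i.e. Theta(1),
# the regime in which the stated oracle complexity is optimal.
n = 8
A = np.ones((n, n)) - np.eye(n)
print(spectral_gap(A))                   # ~1.142857
```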


2012 ◽  
Vol 58 (5) ◽  
pp. 3235-3249 ◽  
Author(s):  
Alekh Agarwal ◽  
Peter L. Bartlett ◽  
Pradeep Ravikumar ◽  
Martin J. Wainwright
