Strong Convexity: Recently Published Documents

Total documents: 70 (five years: 23)
H-index: 8 (five years: 3)

Author(s): Afrooz Jalilzadeh, Angelia Nedić, Uday V. Shanbhag, Farzad Yousefian

Classical theory for quasi-Newton schemes has focused on smooth, deterministic, unconstrained optimization, whereas recent forays into stochastic convex optimization have largely resided in smooth, unconstrained, and strongly convex regimes. Naturally, there is a compelling need to address nonsmoothness, the lack of strong convexity, and the presence of constraints. Accordingly, this paper presents a quasi-Newton framework that can process merely convex and possibly nonsmooth (but smoothable) stochastic optimization problems. The framework combines iterative smoothing and regularization with a variance-reduced scheme reliant on an increasing sample size of gradients. We make the following contributions. (i) We develop a regularized and smoothed variable sample-size BFGS update (rsL-BFGS) that generates a sequence of Hessian approximations and can accommodate nonsmooth convex objectives by utilizing iterative regularization and smoothing. (ii) In strongly convex regimes with state-dependent noise, the proposed variable sample-size stochastic quasi-Newton (VS-SQN) scheme admits a nonasymptotic linear rate of convergence, whereas the oracle complexity of computing an ε-solution is [Formula: see text], where κ denotes the condition number and [Formula: see text]. In nonsmooth (but smoothable) regimes, using Moreau smoothing retains the linear convergence rate for the resulting smoothed VS-SQN (or sVS-SQN) scheme. Notably, the nonsmooth regime allows for accommodating convex constraints. To contend with the possible unavailability of Lipschitzian and strong convexity parameters, we also provide sublinear rates for diminishing step-length variants that do not rely on knowledge of such parameters. (iii) In merely convex but smooth settings, the regularized VS-SQN scheme (rVS-SQN) displays a rate of [Formula: see text] with an oracle complexity of [Formula: see text]. When the smoothness requirements are weakened, the rate for the regularized and smoothed VS-SQN scheme (rsVS-SQN) worsens to [Formula: see text]. These statements allow for a state-dependent noise assumption under a quadratic growth property on the objective. To the best of our knowledge, these are among the first available rate results for QN methods in nonsmooth regimes. Preliminary numerical evidence suggests that the schemes compare well with accelerated gradient counterparts on selected problems in stochastic optimization and machine learning, with significant benefits in ill-conditioned regimes.
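To make the variable sample-size idea concrete, here is a minimal Python sketch of an L-BFGS-style stochastic quasi-Newton loop in which the mini-batch size grows geometrically across iterations. The synthetic least-squares problem, the schedule N_k = ceil(N0 * rho**k), the memory size, the step length, and the names `batch_grad`, `two_loop`, and `x_star` are all illustrative assumptions, not the paper's exact VS-SQN scheme (which additionally incorporates regularization and smoothing).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic strongly convex problem: f(x) = E[0.5 * (a^T x - b)^2].
d, pool = 20, 100_000
A = rng.normal(size=(pool, d))
x_star = rng.normal(size=d)
b = A @ x_star + 0.1 * rng.normal(size=pool)

def batch_grad(x, n):
    """Mini-batch gradient with sample size n drawn from the pool."""
    idx = rng.choice(pool, size=n, replace=False)
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / n

def two_loop(g, S, Y):
    """Standard L-BFGS two-loop recursion: returns H_k @ g."""
    q, alphas = g.copy(), []
    for s, y in zip(reversed(S), reversed(Y)):
        a = (s @ q) / (y @ s)
        alphas.append(a)
        q -= a * y
    if S:  # initial scaling gamma_k = s^T y / y^T y
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
    for (s, y), a in zip(zip(S, Y), reversed(alphas)):
        q += (a - (y @ q) / (y @ s)) * s
    return q

# Quasi-Newton loop with geometrically increasing sample size
# (assumed schedule N_k = ceil(N0 * rho**k), capped at the pool size).
x, S, Y, m, step = np.zeros(d), [], [], 5, 0.5
N0, rho = 8, 1.5
for k in range(25):
    n = min(pool, int(np.ceil(N0 * rho**k)))
    g = batch_grad(x, n)
    x_new = x - step * two_loop(g, S, Y)
    s, y = x_new - x, batch_grad(x_new, n) - g
    if y @ s > 1e-10:  # keep only pairs preserving positive curvature
        S.append(s); Y.append(y)
        S, Y = S[-m:], Y[-m:]
    x = x_new
print("distance to x*:", np.linalg.norm(x - x_star))
```

The growing batch size is what drives the variance reduction: gradient noise shrinks at a rate matched to the contraction of the deterministic error, which is how such schemes retain a linear rate in strongly convex regimes.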


2021, Vol. 2021, pp. 1-16
Author(s): Muhammad Adil Khan, Saeed Anwar, Sadia Khalid, Zaid Mohammed Mohammed Mahdi Sayed

By using the Jensen–Mercer inequality for strongly convex functions, we present a Hermite–Hadamard–Mercer inequality for strongly convex functions. Furthermore, we present some new Hermite–Hadamard–Mercer-type inequalities for differentiable functions whose derivatives are convex in absolute value.
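For orientation, the classical ingredients named here can be stated in their standard forms for a convex function f on [a, b]; the paper's results sharpen these under strong convexity:

```latex
% Hermite–Hadamard inequality:
\[
  f\!\Big(\frac{a+b}{2}\Big)
  \;\le\; \frac{1}{b-a}\int_a^b f(t)\,dt
  \;\le\; \frac{f(a)+f(b)}{2}.
\]
% Jensen–Mercer inequality, for x_i \in [a,b] and weights
% \lambda_i \ge 0 with \sum_i \lambda_i = 1:
\[
  f\!\Big(a+b-\sum_i \lambda_i x_i\Big)
  \;\le\; f(a)+f(b)-\sum_i \lambda_i f(x_i).
\]
% Strong convexity with modulus \mu adds a quadratic correction term:
\[
  f(\lambda x+(1-\lambda)y)
  \;\le\; \lambda f(x)+(1-\lambda)f(y)
  -\frac{\mu}{2}\,\lambda(1-\lambda)(x-y)^2 .
\]
```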


2021, Vol. 2021, pp. 1-11
Author(s): Hengxiao Qi, Waqas Nazeer, Sami Ullah Zakir, Kamsing Nonlaopon

In the present research, we generalize the midpoint inequalities for strongly convex functions to weighted fractional integral settings. Our results generalize many existing results and can be considered as extensions of them.
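For context, weighted fractional integral operators in such settings typically generalize the Riemann–Liouville fractional integrals; their standard definitions, for order α > 0, are given below (an assumption about the setting, since the abstract does not spell it out):

```latex
% Left- and right-sided Riemann–Liouville fractional integrals:
\[
  (I^{\alpha}_{a^{+}}f)(x)
  = \frac{1}{\Gamma(\alpha)}\int_a^x (x-t)^{\alpha-1} f(t)\,dt,
  \qquad
  (I^{\alpha}_{b^{-}}f)(x)
  = \frac{1}{\Gamma(\alpha)}\int_x^b (t-x)^{\alpha-1} f(t)\,dt .
\]
% Setting \alpha = 1 recovers the ordinary integrals appearing in the
% classical (non-fractional) midpoint and Hermite–Hadamard inequalities.
```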


Author(s): Jakub Wiktor Both

Abstract In this paper, the convergence of the fundamental alternating minimization scheme is established for non-smooth, non-strongly convex optimization problems in Banach spaces, and novel rates of convergence are provided. The objective function is assumed to be a composition of a smooth part and a block-separable, non-smooth part, covering a large range of applications. For the former, three different relaxations of strong convexity are considered: (i) quasi-strong convexity; (ii) quadratic functional growth; and (iii) plain convexity. With new and improved rates benefiting from both separate steps of the scheme, linear convergence is proved for (i) and (ii), whereas sublinear convergence is shown for (iii).
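As a concrete illustration of the scheme being analyzed, the following Python sketch runs exact two-block alternating minimization on a finite-dimensional composite objective with a smooth, strongly convex quadratic part and a block-separable ℓ1 part. The specific objective, the closed-form block updates, and the names `soft` and `F` are illustrative assumptions; the paper itself works in Banach spaces.

```python
import numpy as np

def soft(v, t):
    """Proximal map of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# F(x, y) = 0.5*||x - c||^2 + 0.5*rho*||x - y||^2   (smooth part)
#         + lam*||y||_1                              (block-separable, non-smooth)
rng = np.random.default_rng(1)
d, rho, lam = 50, 1.0, 0.1
c = rng.normal(size=d)

def F(x, y):
    return (0.5 * np.sum((x - c)**2)
            + 0.5 * rho * np.sum((x - y)**2)
            + lam * np.abs(y).sum())

x, y, vals = np.zeros(d), np.zeros(d), []
for k in range(50):
    x = (c + rho * y) / (1.0 + rho)  # exact minimization over block x
    y = soft(x, lam / rho)           # exact minimization over block y (prox step)
    vals.append(F(x, y))

# Linear convergence: optimality gaps shrink geometrically
# (last iterate used as a proxy for F*).
print([f"{v - vals[-1]:.2e}" for v in vals[:6]])
```

Each block subproblem is solved exactly here, which is the regime in which the rates above apply; the geometric decay of the printed gaps reflects the linear rate guaranteed under (i) and (ii).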


2021, Vol. 0 (0)
Author(s): Nazarii Tupitsa, Pavel Dvurechensky, Alexander Gasnikov, Sergey Guminov

Abstract We consider alternating minimization procedures for convex and non-convex optimization problems in which the vector of variables is divided into several blocks, each block being amenable to minimization with respect to its own variables while the other blocks are held constant. In the case of two blocks, we prove a linear convergence rate for an alternating minimization procedure under the Polyak–Łojasiewicz (PL) condition, which can be seen as a relaxation of the strong convexity assumption. Under the strong convexity assumption in the many-blocks setting, we provide an accelerated alternating minimization procedure with a linear convergence rate depending on the square root of the condition number, as opposed to just the condition number for the non-accelerated method. We also consider the problem of finding an approximate non-negative solution to a linear system of equations Ax = y by alternating minimization of the Kullback–Leibler (KL) divergence between Ax and y.
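For reference, the Polyak–Łojasiewicz condition invoked above reads, in its standard form for a differentiable function f with optimal value f*:

```latex
% PL condition with constant \mu > 0:
\[
  \tfrac{1}{2}\,\lVert \nabla f(x) \rVert^{2}
  \;\ge\; \mu\,\bigl(f(x)-f^{*}\bigr)
  \quad \text{for all } x .
\]
% Every \mu-strongly convex function satisfies PL, but PL functions need
% not be convex, which is why linear rates extend beyond strong convexity.
```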

