On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions

2016 ◽  
Vol 11 (7) ◽  
pp. 1185-1199 ◽  
Author(s):  
Etienne de Klerk ◽  
François Glineur ◽  
Adrien B. Taylor
Author(s):  
Bin Shi ◽  
Simon S. Du ◽  
Michael I. Jordan ◽  
Weijie J. Su

Abstract: Gradient-based optimization algorithms can be studied from the perspective of limiting ordinary differential equations (ODEs). Motivated by the fact that existing ODEs do not distinguish between two fundamentally different algorithms, Nesterov’s accelerated gradient method for strongly convex functions (NAG-SC) and Polyak’s heavy-ball method, we study an alternative limiting process that yields high-resolution ODEs. We show that these ODEs permit a general Lyapunov function framework for the analysis of convergence in both continuous and discrete time. We also show that these ODEs are more accurate surrogates for the underlying algorithms; in particular, they not only distinguish between NAG-SC and Polyak’s heavy-ball method, but they allow the identification of a term that we refer to as the “gradient correction”, which is present in NAG-SC but not in the heavy-ball method and is responsible for the qualitative difference in convergence of the two methods. We also use the high-resolution ODE framework to study Nesterov’s accelerated gradient method for (non-strongly) convex functions (NAG-C), uncovering a hitherto unknown result: NAG-C minimizes the squared gradient norm at an inverse cubic rate. Finally, by modifying the high-resolution ODE of NAG-C, we obtain a family of new optimization methods that are shown to maintain the accelerated convergence rates of NAG-C for smooth convex functions.
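A minimal numerical sketch (not taken from the paper) of the two discrete algorithms on an illustrative strongly convex quadratic, written so that the only difference between the two loops is the extra gradient-correction term in NAG-SC; the matrix, step size, and iteration count are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative strongly convex quadratic f(x) = 0.5 * x^T A x (an assumption,
# not an example from the paper); its minimizer is x* = 0.
A = np.diag([1.0, 100.0])
grad = lambda x: A @ x

mu, L = 1.0, 100.0               # strong convexity / smoothness constants of A
s = 1.0 / L                      # step size, an illustrative choice
beta = (1 - np.sqrt(mu * s)) / (1 + np.sqrt(mu * s))   # momentum coefficient
x0 = np.array([1.0, 1.0])

# Polyak's heavy-ball method: gradient step plus momentum.
x_prev, x = x0.copy(), x0.copy()
for _ in range(200):
    x, x_prev = x + beta * (x - x_prev) - s * grad(x), x
print("heavy-ball distance to minimizer:", np.linalg.norm(x))

# NAG-SC written as a single recursion: the same update plus the extra
# "gradient correction" term  -beta * s * (grad(x_k) - grad(x_{k-1})).
x_prev, x = x0.copy(), x0.copy()
for _ in range(200):
    correction = beta * s * (grad(x) - grad(x_prev))
    x, x_prev = x + beta * (x - x_prev) - s * grad(x) - correction, x
print("NAG-SC distance to minimizer:   ", np.linalg.norm(x))
```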


Author(s):  
Young Jae Sim ◽  
Adam Lecko ◽  
Derek K. Thomas

Abstract: Let f be analytic in the unit disk $${\mathbb {D}}=\{z\in {\mathbb {C}}:|z|<1 \}$$, and let $${{\mathcal {S}}}$$ be the class of normalized univalent functions given by $$f(z)=z+\sum _{n=2}^{\infty }a_n z^n$$ for $$z\in {\mathbb {D}}$$. We give sharp bounds for the modulus of the second Hankel determinant $$H_2(2)(f)=a_2a_4-a_3^2$$ for the subclass $${\mathcal {F}}_{O}(\lambda ,\beta )$$ of strongly Ozaki close-to-convex functions, where $$1/2\le \lambda \le 1$$ and $$0<\beta \le 1$$. Sharp bounds are also given for $$|H_2(2)(f^{-1})|$$, where $$f^{-1}$$ is the inverse function of f. The results settle an invariance property of $$|H_2(2)(f)|$$ and $$|H_2(2)(f^{-1})|$$ for strongly convex functions.
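As a quick illustration of the quantity being bounded, the following sketch computes $$H_2(2)(f)=a_2a_4-a_3^2$$ for the Koebe function $$k(z)=z/(1-z)^2$$; this function is chosen only because its Taylor coefficients are simple, and it is not claimed to belong to the class $${\mathcal {F}}_{O}(\lambda ,\beta )$$ studied in the paper.

```python
import sympy as sp

z = sp.symbols('z')

# Illustrative normalized univalent function: the Koebe function
# k(z) = z/(1-z)^2 = z + 2z^2 + 3z^3 + 4z^4 + ...
f = z / (1 - z)**2

taylor = sp.series(f, z, 0, 5).removeO()
a = [taylor.coeff(z, n) for n in range(5)]   # a[n] is the coefficient of z^n

# Second Hankel determinant H_2(2)(f) = a_2*a_4 - a_3^2.
H2 = a[2] * a[4] - a[3]**2
print(H2)   # 2*4 - 3**2 = -1, so |H_2(2)(f)| = 1
```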


2017 ◽  
Vol 54 (2) ◽  
pp. 221-240 ◽  
Author(s):  
Muhammad Aslam Noor ◽  
Gabriela Cristescu ◽  
Muhammad Uzair Awan

Author(s):  
Nur Syarafina Mohamed ◽  
Mustafa Mamat ◽  
Mohd Rivaie ◽  
Shazlyn Milleana Shaharudin

One of the popular approaches to modifying the Conjugate Gradient (CG) method is hybridization. In this paper, a new hybrid CG method is introduced and its performance is compared to two classical CG methods: Rivaie-Mustafa-Ismail-Leong (RMIL) and Syarafina-Mustafa-Rivaie (SMR). The proposed hybrid CG is constructed as a convex combination of the RMIL and SMR methods. Their performance is analyzed under the exact line search. The comparison shows that the hybrid CG is promising and outperforms the classical RMIL and SMR methods in terms of the number of iterations and central processing unit (CPU) time.
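A minimal sketch of one way such a hybrid step can be organized, assuming a quadratic test problem so that the exact line search has a closed form. The RMIL update parameter below follows the CG literature; the SMR formula is not reproduced in this abstract, so a Fletcher-Reeves rule is used as a stand-in for the second term of the convex combination, and the mixing weight theta is an assumption.

```python
import numpy as np

# Illustrative quadratic test problem f(x) = 0.5 x^T A x - b^T x (an assumption),
# for which the exact line search along direction d has the closed form
# alpha = -(g^T d) / (d^T A d), with g the current gradient.
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
grad = lambda x: A @ x - b

theta = 0.5                      # mixing weight of the convex combination (assumed)
x = np.zeros(3)
g = grad(x)
d = -g
for _ in range(100):
    alpha = -(g @ d) / (d @ A @ d)                 # exact line search step
    x = x + alpha * d
    g_new = grad(x)
    if np.linalg.norm(g_new) < 1e-10:
        break
    beta_rmil = g_new @ (g_new - g) / (d @ d)      # RMIL update parameter
    beta_other = (g_new @ g_new) / (g @ g)         # Fletcher-Reeves stand-in for SMR
    beta = theta * beta_rmil + (1 - theta) * beta_other   # convex combination
    d = -g_new + beta * d
    g = g_new
print("gradient norm at final iterate:", np.linalg.norm(grad(x)))
```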


2015 ◽  
Vol 10 (4) ◽  
pp. 699-708 ◽  
Author(s):  
M. Dodangeh ◽  
L. N. Vicente ◽  
Z. Zhang
