Deriving collinear scaling algorithms as extensions of quasi-Newton methods and the local convergence of DFP- and BFGS-related collinear scaling algorithms

1990 ◽  
Vol 49 (1-3) ◽  
pp. 23-48 ◽  
Author(s):  
K. A. Ariyawansa
2013 ◽  
Vol 58 (1) ◽  
pp. 225-247 ◽  
Author(s):  
F. J. Aragón Artacho ◽  
A. Belyakov ◽  
A. L. Dontchev ◽  
M. López

1992 ◽  
Vol 56 (1-3) ◽  
pp. 71-89 ◽  
Author(s):  
Chi-Ming Ip ◽  
Jerzy Kyparisis

Author(s):  
Anton Rodomanov ◽  
Yurii Nesterov

Abstract: We study the local convergence of classical quasi-Newton methods for nonlinear optimization. Although it has long been established that these methods converge superlinearly in the limit, the corresponding rates of convergence have remained unknown. In this paper, we address this problem. We obtain the first explicit non-asymptotic rates of superlinear convergence for the standard quasi-Newton methods based on the updating formulas from the convex Broyden class. In particular, for the well-known DFP and BFGS methods, we obtain rates of the form $$\left(\frac{n L^2}{\mu^2 k}\right)^{k/2}$$ and $$\left(\frac{n L}{\mu k}\right)^{k/2}$$, respectively, where $$k$$ is the iteration counter, $$n$$ is the dimension of the problem, $$\mu$$ is the strong convexity parameter, and $$L$$ is the Lipschitz constant of the gradient.
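To make the setting concrete, the following is a minimal sketch of the classical BFGS method the abstract refers to, maintaining an inverse-Hessian approximation via the standard rank-two update. The backtracking line search, tolerances, and stopping rule are illustrative choices, not taken from the paper.

```python
import numpy as np

def bfgs_minimize(f, grad, x0, tol=1e-8, max_iter=100):
    """Minimize f with the classical BFGS quasi-Newton update.

    H approximates the inverse Hessian. The Armijo backtracking line
    search below is a simple illustrative choice; the paper's analysis
    concerns the update formula itself.
    """
    n = x0.size
    x = x0.astype(float)
    H = np.eye(n)                      # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                     # quasi-Newton search direction
        t = 1.0                        # backtracking (Armijo) line search
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5
        s = t * p                      # step taken
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                  # gradient difference
        sy = s @ y
        if sy > 1e-12:                 # curvature condition keeps H positive definite
            rho = 1.0 / sy
            I = np.eye(n)
            # BFGS inverse-Hessian update:
            # H <- (I - rho s y^T) H (I - rho y s^T) + rho s s^T
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x
```

On a strongly convex quadratic with strong-convexity parameter $$\mu$$ and gradient Lipschitz constant $$L$$ given by the extreme eigenvalues of the Hessian, this iteration converges rapidly to the unique minimizer, consistent with the superlinear rates discussed above.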


2015 ◽  
Vol 25 (3) ◽  
pp. 1660-1685 ◽  
Author(s):  
Wen Huang ◽  
K. A. Gallivan ◽  
P.-A. Absil

1985 ◽  
Vol 47 (4) ◽  
pp. 393-399 ◽  
Author(s):  
F. Biegler-König