Broyden Class
Recently Published Documents


TOTAL DOCUMENTS: 7 (FIVE YEARS: 4)

H-INDEX: 3 (FIVE YEARS: 0)

Author(s): David Ek, Anders Forsgren

Abstract: The main focus of this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give a class of limited-memory quasi-Newton Hessian approximations which generate search directions parallel to those of the BFGS method, or equivalently, to those of the method of preconditioned conjugate gradients. In the setting of reduced Hessians, the class provides a dynamical framework for the construction of limited-memory quasi-Newton methods. These methods attain finite termination on quadratic optimization problems in exact arithmetic. We show the performance of methods within this framework in finite-precision arithmetic by numerical simulations on sequences of related systems of linear equations, which originate from the CUTEst test collection. In addition, we give a compact representation of the Hessian approximations in the full Broyden class for the general unconstrained optimization problem. This representation consists of explicit matrices, with gradients as the only vector components.
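For orientation, a minimal sketch of the setting this abstract builds on (the standard BFGS update with exact linesearch on a strictly convex quadratic, not the authors' reduced-Hessian construction): with

$$ f(x) = \tfrac{1}{2}x^{T}Hx - b^{T}x, \qquad s_k = x_{k+1} - x_k, \qquad y_k = \nabla f(x_{k+1}) - \nabla f(x_k) = Hs_k, $$

the BFGS approximation, the next direction, and the exact steplength are

$$ B_{k+1} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k}, \qquad p_{k+1} = -B_{k+1}^{-1}\nabla f(x_{k+1}), \qquad \alpha_{k+1} = -\frac{\nabla f(x_{k+1})^{T} p_{k+1}}{p_{k+1}^{T} H p_{k+1}}. $$

With $$B_0$$ symmetric positive definite and exact linesearch, the directions $$p_k$$ are parallel to those of the conjugate gradient method preconditioned by $$B_0$$, which is the equivalence the paper exploits.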


Author(s): Anton Rodomanov, Yurii Nesterov

Abstract: We study the local convergence of classical quasi-Newton methods for nonlinear optimization. Although it was established long ago that these methods converge superlinearly asymptotically, the corresponding rates of convergence have remained unknown. In this paper, we address this problem. We obtain the first explicit non-asymptotic rates of superlinear convergence for the standard quasi-Newton methods, which are based on the updating formulas from the convex Broyden class. In particular, for the well-known DFP and BFGS methods, we obtain rates of the form $$\left(\frac{n L^2}{\mu^2 k}\right)^{k/2}$$ and $$\left(\frac{n L}{\mu k}\right)^{k/2}$$ respectively, where k is the iteration counter, n is the dimension of the problem, $$\mu$$ is the strong convexity parameter, and L is the Lipschitz constant of the gradient.
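For reference (these are the textbook definitions, not results from the paper itself), the convex Broyden class interpolates between the BFGS and DFP updates: with $$s_k = x_{k+1} - x_k$$ and $$y_k = \nabla f(x_{k+1}) - \nabla f(x_k)$$,

$$ B_{k+1}^{\phi} = B_k - \frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k} + \frac{y_k y_k^{T}}{y_k^{T} s_k} + \phi\,(s_k^{T} B_k s_k)\, v_k v_k^{T}, \qquad v_k = \frac{y_k}{y_k^{T} s_k} - \frac{B_k s_k}{s_k^{T} B_k s_k}, $$

where $$\phi = 0$$ gives BFGS, $$\phi = 1$$ gives DFP, and the convex class corresponds to $$\phi \in [0,1]$$. The rates quoted above are for the two endpoints of this family.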


Author(s): Martin Buhmann, Dirk Siegel

Abstract: We consider Broyden class updates for large-scale optimization problems in n dimensions, restricting attention to the case when the initial second-derivative approximation is the identity matrix. Under this assumption we present an implementation of the Broyden class based on a coordinate transformation on each iteration. It requires only $$2nk + O(k^{2}) + O(n)$$ multiplications on the kth iteration and stores $$nK + O(K^2) + O(n)$$ numbers, where K is the total number of iterations. We investigate a modification of this algorithm by a scaling approach and show a substantial improvement in performance over the BFGS method. We also study several adaptations of the new implementation to the limited-memory situation, presenting algorithms that work with a fixed amount of storage independent of the number of iterations. We show that one such algorithm retains the property of quadratic termination. The practical performance of the new methods is compared with that of Nocedal's method (Math Comput 35:773-782, 1980), which is considered the benchmark among limited-memory algorithms. The tests show that the new algorithms can be significantly more efficient than Nocedal's method. Finally, we show how a scaling technique can significantly improve both Nocedal's method and the new generalized conjugate gradient algorithm.
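The benchmark named in this abstract is Nocedal's limited-memory BFGS recursion. As a point of comparison, here is a minimal sketch of that standard two-loop recursion (not of the coordinate-transformation implementation proposed in the paper); the function name and the fixed scaling gamma are illustrative choices:

import numpy as np

def lbfgs_direction(grad, s_list, y_list, gamma=1.0):
    """Two-loop recursion (Nocedal, 1980): returns the search direction
    -H_k * grad built from the stored curvature pairs (s_i, y_i), with
    s_i = x_{i+1} - x_i and y_i = g_{i+1} - g_i, oldest pair first.
    gamma scales the initial inverse-Hessian approximation H_0 = gamma * I."""
    q = grad.copy()
    rhos = [1.0 / np.dot(y, s) for s, y in zip(s_list, y_list)]
    alphas = []
    # First loop: walk the pairs from newest to oldest
    for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
        alpha = rho * np.dot(s, q)
        alphas.append(alpha)
        q -= alpha * y
    r = gamma * q  # apply the initial approximation H_0
    # Second loop: walk the pairs from oldest to newest
    for (s, y, rho), alpha in zip(zip(s_list, y_list, rhos), reversed(alphas)):
        beta = rho * np.dot(y, r)
        r += (alpha - beta) * s
    return -r

Storage here is the fixed set of m stored pairs, roughly 2mn numbers, which is the quantity the $$nK + O(K^2) + O(n)$$ storage of the full-memory implementation above is compared against.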


2020, Vol 77 (2), pp. 433-463
Author(s): S. Cipolla, C. Di Fiore, P. Zellini

2018, Vol 25 (5), pp. e2186
Author(s): Omar DeGuchy, Jennifer B. Erway, Roummel F. Marcia

2015, Vol 25 (3), pp. 1660-1685
Author(s): Wen Huang, K. A. Gallivan, P.-A. Absil
