Exact linesearch limited-memory quasi-Newton methods for minimizing a quadratic function

Author(s): David Ek, Anders Forsgren

Abstract: The main focus in this paper is exact linesearch methods for minimizing a quadratic function whose Hessian is positive definite. We give a class of limited-memory quasi-Newton Hessian approximations which generate search directions parallel to those of the BFGS method, or equivalently, to those of the method of preconditioned conjugate gradients. In the setting of reduced Hessians, the class provides a dynamical framework for the construction of limited-memory quasi-Newton methods. These methods attain finite termination on quadratic optimization problems in exact arithmetic. We show performance of the methods within this framework in finite precision arithmetic by numerical simulations on sequences of related systems of linear equations, which originate from the CUTEst test collection. In addition, we give a compact representation of the Hessian approximations in the full Broyden class for the general unconstrained optimization problem. This representation consists of explicit matrices and gradients only as vector components.
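As a concrete illustration of the exact linesearch setting described above, the sketch below applies the classical conjugate gradient iteration, whose search directions the abstract relates to those of BFGS, to a strictly convex quadratic. It is a generic textbook routine, not the authors' limited-memory construction; the function name, tolerance, and stopping rule are illustrative choices.

import numpy as np

def cg_exact_linesearch(A, b, x0, tol=1e-10, max_iter=None):
    """Conjugate gradient on f(x) = 0.5 x^T A x - b^T x with A symmetric
    positive definite. The step length alpha is the exact minimizer of f
    along the search direction; in exact arithmetic the iteration
    terminates in at most n steps."""
    n = len(b)
    max_iter = max_iter or n
    x = x0.copy()
    r = A @ x - b            # gradient of f at x
    p = -r                   # initial direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tol:
            break
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)   # exact linesearch step
        x = x + alpha * p
        r_new = r + alpha * Ap       # updated gradient
        beta = (r_new @ r_new) / (r @ r)
        p = -r_new + beta * p        # next A-conjugate direction
        r = r_new
    return x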

Author(s): Basim A. Hassan, Ranen M. Sulaiman

The quasi-Newton method is an efficient method for solving unconstrained optimization problems. Self-scaling is one of the common approaches to modifying quasi-Newton methods, and a large variety of self-scaling quasi-Newton methods is well known. In this paper, based on a quadratic function, we derive a new self-scaling quasi-Newton method and study its convergence properties. Numerical results on a collection of test problems show that the self-scaling quasi-Newton method improves the overall numerical performance of the BFGS method.
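The abstract does not state the scaling formula itself. The sketch below uses the classical Oren-Luenberger self-scaling factor applied to the inverse-Hessian BFGS update, purely to show where a self-scaling factor enters the update; the paper's own scaling, derived from a quadratic function, may differ.

import numpy as np

def self_scaled_bfgs_update(H, s, y):
    """One self-scaled BFGS update of the inverse Hessian approximation H,
    where s is the step and y the gradient difference. The factor
    tau = (y^T s) / (y^T H y) is the classical Oren-Luenberger choice
    (an assumed form, not necessarily the paper's); the secant condition
    H_new @ y == s holds for any tau."""
    rho = 1.0 / (y @ s)
    tau = (y @ s) / (y @ (H @ y))        # self-scaling factor
    n = len(s)
    I = np.eye(n)
    V = I - rho * np.outer(s, y)
    return tau * (V @ H @ V.T) + rho * np.outer(s, s)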


2014, Vol. 2014, pp. 1-6
Author(s): Zhijun Luo, Lirong Wang

A new parallel variable distribution algorithm based on an interior-point SSLE algorithm is proposed for solving inequality-constrained optimization problems whose constraints are block-separable, using the technique of a sequential system of linear equations. Each iteration of this algorithm needs to solve only three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
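The computational appeal of solving several systems that share one coefficient matrix is that a single factorization can be reused for every right-hand side. The sketch below illustrates only this point; the matrix M and the right-hand sides r1, r2, r3 are placeholders and do not reproduce the actual SSLE systems of the algorithm.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

def solve_three_systems(M, r1, r2, r3):
    """Factor the (assumed nonsingular) coefficient matrix M once and
    reuse the factorization for each right-hand side, which is cheaper
    than solving three independent systems from scratch."""
    lu, piv = lu_factor(M)          # one LU factorization
    d1 = lu_solve((lu, piv), r1)    # reuse for each right-hand side
    d2 = lu_solve((lu, piv), r2)
    d3 = lu_solve((lu, piv), r3)
    return d1, d2, d3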


2015, Vol. 25 (3), pp. 1660-1685
Author(s): Wen Huang, K. A. Gallivan, P.-A. Absil

1974, Vol. 7 (3), pp. 311-322
Author(s): H. Schmitter, E. Straub

Abstract and Introduction: Quadratic programming means maximizing or minimizing a quadratic function of one or more variables subject to linear restrictions, i.e. linear equations and/or inequalities. Among the numerous insurance problems which can be formulated as quadratic programs we shall only discuss four, namely the Credibility, Retention, IBNR and Cost Distribution problems. Generally, there is no explicit solution to quadratic optimization problems; only statements about the existence of a solution can be made, or some algorithm may be recommended in order to obtain exact or approximate numerical solutions. Restricting ourselves to typical problems of the above-mentioned type, however, enables us to give an explicit solution in terms of general formulae for quite a number of cases, such as the one-dimensional credibility problem, the retention problem and, under relatively weak assumptions, the IBNR problem. The results given here are by no means new. The only goal of this paper is to describe a few fundamental insurance problems from a common mathematical standpoint, namely that of quadratic programming, and at the same time to draw attention to a few special aspects and open questions in this field.
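For reference, the quadratic programs discussed here can be written in the generic standard form below; the symbols Q, c, A, b, G and h are conventional notation and are not taken from the paper.

\min_{x \in \mathbb{R}^n} \ \tfrac{1}{2}\, x^{\top} Q x + c^{\top} x
\quad \text{subject to} \quad A x = b, \quad G x \le h,

where Q is symmetric, and positive semidefinite in the convex case; the credibility, retention and IBNR problems mentioned above are special cases of this form.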


2014, Vol. 26 (5), pp. 566-572
Author(s): Ailan Liu, Dingguo Pu

[Figure: Algorithm flow chart]
We propose a nonmonotone QP-free infeasible method for inequality-constrained nonlinear optimization problems based on a 3-1 piecewise linear NCP function. This nonmonotone QP-free infeasible method is iterative and is based on a nonsmooth reformulation of the KKT first-order optimality conditions. It does not use a penalty function or a filter in its nonmonotone line searches. The algorithm solves only two systems of linear equations with the same nonsingular coefficient matrix, and it is implementable and globally convergent without a linear independence constraint qualification or a strict complementarity condition. Preliminary numerical results are presented.
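The specific 3-1 piecewise linear NCP function used in the paper is not reproduced here. The sketch below instead uses the simplest NCP function, the elementwise minimum, only to illustrate how the KKT complementarity conditions are recast as a nonsmooth system of equations; the helper names and the residual layout are illustrative.

import numpy as np

def ncp_min(a, b):
    """The elementwise minimum is the simplest NCP function:
    min(a, b) = 0  <=>  a >= 0, b >= 0, a * b = 0.
    It stands in here for the 3-1 piecewise linear NCP function of the
    paper, purely to show the reformulation idea."""
    return np.minimum(a, b)

def kkt_residual(grad_L, g, lam):
    """Nonsmooth KKT residual for min f(x) s.t. g(x) <= 0: stationarity
    grad_L = 0 together with complementarity between the multipliers lam
    and the constraint values -g(x), written via an NCP function. The
    residual is zero exactly at a KKT point."""
    return np.concatenate([grad_L, ncp_min(lam, -g)])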

