Optimization With Discrete Variables Via Recursive Quadratic Programming: Part 2—Algorithm and Results

1989 ◽  
Vol 111 (1) ◽  
pp. 130-136 ◽  
Author(s):  
J. Z. Cha ◽  
R. W. Mayne

A discrete recursive quadratic programming algorithm is developed for a class of mixed discrete constrained nonlinear programming (MDCNP) problems. The symmetric rank one (SR1) Hessian update formula is used to generate second order information. Strategies such as the watchdog technique (WT), the monotonicity analysis technique (MA), the contour analysis technique (CA), and the restoration of feasibility have also been considered. Heuristic aspects of handling discrete variables are treated via the concepts and convergence discussions of Part I. This paper summarizes the details of the algorithm and its implementation. Test results for 25 different problems are presented to allow evaluation of the approach and provide a basis for performance comparison. The results show that the suggested method is a promising one, efficient and robust for the MDCNP problem.
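For reference, the SR1 secant update cited in this abstract has the following standard textbook form (notation is mine, not necessarily the authors'), where B_k approximates the Hessian of the Lagrangian:

```latex
% Standard SR1 secant update (textbook notation)
s_k = x_{k+1} - x_k, \qquad
y_k = \nabla_x L(x_{k+1}, \lambda_k) - \nabla_x L(x_k, \lambda_k),
\qquad
B_{k+1} = B_k
  + \frac{(y_k - B_k s_k)\,(y_k - B_k s_k)^{\mathsf T}}
         {(y_k - B_k s_k)^{\mathsf T} s_k}.
```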

Author(s):  
J. Z. Cha ◽  
R. W. Mayne

Abstract A discrete recursive quadratic programming algorithm is developed for mixed discrete constrained nonlinear programming (MDCNP) problems. The symmetric rank one (SR1) Hessian update formula is used to generate second order information. Strategies such as the watchdog technique (WT), the monotonicity analysis technique (MA), the contour analysis technique (CA), and the restoration of feasibility have also been considered. Heuristic aspects of handling discrete variables are treated via the concepts and convergence discussions of Part I. This paper summarizes the details of the algorithm and its implementation. Test results for 25 different problems are presented to allow evaluation of this approach and provide a basis for performance comparison. The results show that the suggested method is a promising one, efficient and robust for the MDCNP problem.


1991 ◽  
Vol 113 (3) ◽  
pp. 280-285 ◽  
Author(s):  
T. J. Beltracchi ◽  
G. A. Gabriele

The Recursive Quadratic Programming (RQP) method has become known as one of the most effective and efficient algorithms for solving engineering optimization problems. The RQP method uses variable metric updates to build approximations of the Hessian of the Lagrangian. If the approximation of the Hessian of the Lagrangian converges to the true Hessian of the Lagrangian, then the RQP method converges quadratically. The choice of a variable metric update has a direct effect on the convergence of the Hessian approximation. Most of the research performed with the RQP method uses some modification of the Broyden-Fletcher-Shanno (BFS) variable metric update. This paper describes a hybrid variable metric update that yields good approximations to the Hessian of the Lagrangian. The hybrid update combines the best features of the Symmetric Rank One (SR1) and BFS updates, but is less sensitive to inexact line searches than the BFS update and more stable than the SR1 update. Testing of the method shows that the efficiency of the RQP method is unaffected by the new update but more accurate Hessian approximations are produced. This should increase the accuracy of the solutions obtained with the RQP method and, more importantly, provide more reliable information for post-optimality analyses, such as parameter sensitivity studies.
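The abstract does not spell out the switching rule, so the Python fragment below is only a minimal sketch of one plausible hybrid scheme: apply the SR1 rank-one update whenever its denominator passes the usual safeguard, and otherwise fall back to a BFGS-type rank-two update (used here as a stand-in for the BFS update). The function name, the tolerance r, and the specific tests are illustrative assumptions, not the authors' method.

```python
import numpy as np

def hybrid_update(B, s, y, r=1e-8):
    """Illustrative hybrid secant update (not the authors' exact rule).

    Uses the SR1 rank-one update when its denominator is safely nonzero,
    otherwise falls back to a BFGS-type rank-two update.
    B : current symmetric Hessian approximation (n x n)
    s : step x_{k+1} - x_k
    y : corresponding change in the gradient of the Lagrangian
    r : safeguard tolerance for the SR1 denominator (assumed value)
    """
    v = y - B @ s                                    # SR1 residual vector
    denom = v @ s
    if abs(denom) > r * np.linalg.norm(s) * np.linalg.norm(v):
        return B + np.outer(v, v) / denom            # SR1 rank-one update
    sy = y @ s
    if sy <= 1e-12:                                  # curvature condition fails
        return B                                     # skip the update entirely
    Bs = B @ s                                       # BFGS-type rank-two update
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / sy
```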


Author(s):  
T. J. Beltracchi ◽  
G. A. Gabriele

Abstract The Recursive Quadratic Programming (RQP) method has been shown to be one of the most effective and efficient algorithms for solving engineering optimization problems. The RQP method uses variable metric updates to build approximations of the Hessian of the Lagrangian. If the approximation of the Hessian of the Lagrangian converges to the true Hessian of the Lagrangian, then the RQP method converges quadratically. The convergence of the Hessian approximation is affected by the choice of the variable metric update. Most of the research that has been performed with the RQP method uses the Broyden-Fletcher-Shanno (BFS) or Symmetric Rank One (SR1) variable metric update. The SR1 update has been shown to yield better estimates of the Hessian of the Lagrangian than those found when the BFS update is used, though there are cases where the SR1 update becomes unstable. This paper describes a hybrid variable metric update that is shown to yield good approximations of the Hessian of the Lagrangian. The hybrid update combines the best features of the SR1 and BFS updates and is more stable than the SR1 update. Testing of the method shows that the efficiency of the RQP method is not affected by the new update, but more accurate Hessian approximations are produced. This should increase the accuracy of the solutions and provide more reliable information for post-optimality analyses, such as parameter sensitivity studies.


Author(s):  
J. Z. Cha ◽  
R. W. Mayne

Abstract The hereditary properties of the Symmetric Rank One (SR1) update formula for numerically accumulating second order derivative information are studied. The unique advantage of the SR1 formula is that it does not require specific search directions for development of the Hessian matrix. This is an attractive feature for optimization applications where arbitrary search directions may be necessary. This paper explores the use of the SR1 formula within a procedure based on recursive quadratic programming (RQP) for solving a class of mixed discrete constrained nonlinear programming (MDCNP) problems. Theoretical considerations are presented along with numerical examples which illustrate the procedure and the utility of SR1.


1991 ◽  
Vol 113 (3) ◽  
pp. 312-317 ◽  
Author(s):  
J. Z. Cha ◽  
R. W. Mayne

The Symmetric Rank One (SR1) update formula is studied for its use in numerically accumulating second order derivative information for optimization. The unique advantage of the SR1 formula is that it does not require specific search directions for development of the Hessian matrix. This is an attractive feature for optimization applications where arbitrary search directions may be necessary. This paper explores the use of the SR1 formula within a procedure based on recursive quadratic programming (RQP) for solving a class of mixed discrete constrained nonlinear programming (MDCNP) problems. Theoretical considerations are presented along with numerical examples which illustrate the procedure and the utility of SR1.
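As a small numerical illustration (my own, not taken from the paper) of why arbitrary search directions suffice: for a quadratic objective, the SR1 formula reproduces the exact Hessian after n linearly independent steps, provided the update denominators stay away from zero. A sketch in Python, assuming NumPy is available:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = A @ A.T + n * np.eye(n)          # Hessian of a test quadratic f(x) = 0.5 x^T A x

B = np.eye(n)                        # initial Hessian approximation
for _ in range(n):
    s = rng.standard_normal(n)       # arbitrary step direction, no line search
    y = A @ s                        # exact gradient change for a quadratic
    v = y - B @ s
    denom = v @ s
    if abs(denom) > 1e-12 * np.linalg.norm(s) * np.linalg.norm(v):
        B = B + np.outer(v, v) / denom   # SR1 update

print(np.allclose(B, A))             # typically prints True
```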


1989 ◽  
Vol 111 (1) ◽  
pp. 124-129 ◽  
Author(s):  
J. Z. Cha ◽  
R. W. Mayne

Although a variety of algorithms for discrete nonlinear programming have been proposed, the solution of discrete optimization problems remains far less mature than that of continuous optimization. This paper focuses on the recursive quadratic programming strategy, which has proven to be efficient and robust for continuous optimization. The procedure is adapted to consider a class of mixed discrete nonlinear programming problems and utilizes the analytical properties of functions and constraints. This first part of the paper considers definitions, concepts, and possible convergence criteria. Part II includes the development and testing of the algorithm.


1985 ◽  
Vol 107 (4) ◽  
pp. 459-462 ◽  
Author(s):  
J. Zhou ◽  
R. W. Mayne

This paper considers the use of an active set strategy based on monotonicity analysis as an integral part of a recursive quadratic programming (RQP) algorithm for constrained nonlinear optimization. Biggs’ RQP method employing equality constrained subproblems is the basis for the algorithm developed here and requires active set information. The monotonicity analysis strategy is applied to the sequence of search directions selected by the RQP method. As each direction is considered, progress toward the optimum occurs and a new constraint is added to the active set. Once the active set is finalized, the basic RQP method is followed unless a constraint is to be dropped. Testing of the proposed algorithm illustrates its promise as an enhancement to Biggs’ original procedure.
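The paper's monotonicity rules are not reproduced in the abstract, so the fragment below is only a rough illustration of the general idea, under the assumed constraint form g_i(x) <= 0: along the current RQP search direction, the inactive constraint that increases toward its bound and whose boundary is reached first is the natural candidate to enter the active set. All names and the selection test are hypothetical.

```python
import numpy as np

def candidate_active_constraint(d, g_vals, g_grads, active):
    """Hypothetical illustration of growing the active set along direction d.

    Constraints are assumed to have the form g_i(x) <= 0, with g_vals[i]
    and g_grads[i] their values and gradients at the current point.
    Returns the index of the inactive constraint that increases along d and
    whose boundary g_i = 0 is reached first, or None if no such constraint.
    """
    best_i, best_step = None, np.inf
    for i, (gi, dgi) in enumerate(zip(g_vals, g_grads)):
        if i in active:
            continue
        slope = float(np.dot(dgi, d))        # directional derivative of g_i
        if slope > 0.0:                      # g_i grows toward its bound along d
            step = max(-gi, 0.0) / slope     # step length at which g_i hits zero
            if step < best_step:
                best_i, best_step = i, step
    return best_i
```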

