LMBOPT: a limited memory method for bound-constrained optimization

Author(s):  
Morteza Kimiaei ◽  
Arnold Neumaier ◽  
Behzad Azmi

Abstract: Recently, Neumaier and Azmi gave a comprehensive convergence theory for a generic algorithm for bound-constrained optimization problems with a continuously differentiable objective function. The algorithm combines an active set strategy with a gradient-free line search along a piecewise linear search path defined by directions chosen to reduce zigzagging. This paper describes LMBOPT, an efficient implementation of this scheme. It employs new limited memory techniques for computing the search directions, adds various safeguards relevant when finite precision arithmetic is used, and includes many practical enhancements in other details. The paper compares LMBOPT and several other solvers on the unconstrained and bound-constrained problems from the collection and makes recommendations on which solver to use and when. Depending on the problem class, the problem dimension, and the precise goal, the best solvers are , , and .
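A minimal sketch of the kind of gradient-free search along a piecewise linear path that the abstract describes, where the path alpha -> P(x + alpha*d) is bent by the projection P onto the box. This is not the LMBOPT implementation; the function names, the simple "strict decrease" acceptance rule, and the toy problem are assumptions made for illustration.

```python
import numpy as np

def project(x, lo, hi):
    """Project a point onto the box [lo, hi] componentwise."""
    return np.minimum(np.maximum(x, lo), hi)

def piecewise_linear_search(f, x, d, lo, hi, alpha0=1.0, shrink=0.5, max_tries=30):
    """Gradient-free backtracking search along the piecewise linear path
    alpha -> project(x + alpha*d, lo, hi): accept the first trial point whose
    function value is strictly below f(x) (a simplified acceptance rule)."""
    fx = f(x)
    alpha = alpha0
    for _ in range(max_tries):
        trial = project(x + alpha * d, lo, hi)
        if f(trial) < fx:
            return trial
        alpha *= shrink
    return x  # no improving step found; the caller may try another direction

# Toy usage: a strictly convex quadratic on a box, started at a feasible point.
f = lambda x: 0.5 * np.dot(x, x) - x[0]
lo, hi = np.array([0.0, -1.0]), np.array([3.0, 3.0])
x = np.array([2.0, -1.0])
d = -(x - np.array([1.0, 0.0]))   # negative gradient of f at x
print(piecewise_linear_search(f, x, d, lo, hi))
```

In LMBOPT the directions along such a path come from limited memory information and are chosen to reduce zigzagging; here a plain negative gradient stands in for that machinery.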

Author(s):  
Morteza Kimiaei

Abstract: This paper discusses an active set trust-region algorithm for bound-constrained optimization problems. A sufficient descent condition is used as a computational measure to identify whether the function value is reduced or not. To obtain our complexity result, a criticality measure is used that is computationally better than the other known criticality measures. Under the positive definiteness of the approximated Hessian matrices restricted to the subspace of non-active variables, it is shown that unlimited zigzagging cannot occur. It is also shown that our algorithm is competitive with state-of-the-art solvers on an ill-conditioned bound-constrained least-squares problem.
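A compact, illustrative sketch of one active set trust-region iteration of the kind outlined above. The active set estimate, the Cauchy model step (used here in place of the paper's actual subproblem solver), and the standard actual-versus-predicted reduction test standing in for the sufficient descent condition are all assumptions for the purpose of the example.

```python
import numpy as np

def active_set(x, g, lo, hi, tol=1e-10):
    """Estimate the active set: variables at a bound with the gradient
    pushing further outward."""
    return ((x <= lo + tol) & (g > 0)) | ((x >= hi - tol) & (g < 0))

def trust_region_step(f, x, g, B, lo, hi, delta=1.0, eta=1e-4):
    """One illustrative trust-region iteration restricted to the inactive
    (free) variables: a Cauchy model step, clipped to the bounds, accepted
    only if the actual reduction is a sufficient fraction of the predicted
    reduction (the usual ared/pred test)."""
    free = ~active_set(x, g, lo, hi)
    gf, Bf = g[free], B[np.ix_(free, free)]
    gBg = gf @ Bf @ gf
    t = min(delta / (np.linalg.norm(gf) + 1e-16),       # trust-region cap
            (gf @ gf) / gBg if gBg > 0 else np.inf)     # unconstrained model minimizer
    s = np.zeros_like(x)
    s[free] = -t * gf
    trial = np.clip(x + s, lo, hi)                      # keep the trial feasible
    step = trial - x
    pred = -(g @ step) - 0.5 * step @ B @ step          # predicted reduction
    ared = f(x) - f(trial)                              # actual reduction
    return trial if pred > 0 and ared >= eta * pred else x

# Toy usage: the first variable sits at its lower bound and is kept there.
c = np.array([-1.0, 1.0])
f = lambda x: 0.5 * np.sum((x - c) ** 2)
x = np.array([0.0, 2.0]); g = x - c; B = np.eye(2)
print(trust_region_step(f, x, g, B, np.zeros(2), np.full(2, 3.0)))
```

The positive definiteness assumption in the abstract applies to B restricted to the free variables, which is exactly the submatrix Bf used by the model step here.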


2014 ◽  
Vol 2014 ◽  
pp. 1-9
Author(s):  
Qiuyu Wang ◽  
Yingtao Che

A practical algorithm for solving large-scale box-constrained optimization problems is developed, analyzed, and tested. In the proposed algorithm, an identification strategy is used to estimate the active set at each iteration. The components of the inactive variables are determined by the steepest descent method for a first finite number of steps and by a conjugate gradient method subsequently. Under some appropriate conditions, we show that the algorithm converges globally. Numerical experiments and comparisons using box-constrained problems from the CUTEr library are reported. The numerical comparisons illustrate that the proposed method is promising and competitive with the well-known L-BFGS-B method.
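A brief sketch of the two-phase direction choice on the inactive components described above. The switch index, the Fletcher-Reeves formula, and the restart rule are illustrative choices, not necessarily the paper's exact scheme.

```python
import numpy as np

def inactive_direction(g, g_prev, d_prev, k, k_sd=5):
    """Search direction on the inactive components: plain steepest descent for
    the first k_sd iterations, then a Fletcher-Reeves conjugate gradient
    update, restarted with -g whenever it fails to be a descent direction."""
    if k < k_sd or g_prev is None:
        return -g
    beta = (g @ g) / (g_prev @ g_prev + 1e-16)   # Fletcher-Reeves coefficient
    d = -g + beta * d_prev
    return d if d @ g < 0 else -g
```

Here g, g_prev, and d_prev are understood as the gradient and previous direction restricted to the currently inactive variables, with the active components handled separately by the identification strategy.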


1995 ◽  
Vol 16 (5) ◽  
pp. 1190-1208 ◽  
Author(s):  
Richard H. Byrd ◽  
Peihuang Lu ◽  
Jorge Nocedal ◽  
Ciyou Zhu
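The 1995 reference above is the original L-BFGS-B paper of Byrd, Lu, Nocedal, and Zhu, the baseline solver cited in the preceding comparison. A minimal usage sketch through SciPy's standard interface on an illustrative bound-constrained quadratic; the objective, bounds, and starting point are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy bound-constrained problem: min 0.5*||x - c||^2 subject to 0 <= x <= 1.
c = np.array([2.0, -0.5, 0.3])
fun = lambda x: 0.5 * np.sum((x - c) ** 2)
jac = lambda x: x - c   # gradient of the objective

res = minimize(fun, x0=np.zeros_like(c), method="L-BFGS-B",
               jac=jac, bounds=[(0.0, 1.0)] * c.size)
print(res.x)   # the unconstrained minimizer c clipped to the box: [1.0, 0.0, 0.3]
```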
