Issues on the use of a modified Bunch and Kaufman decomposition for large scale Newton’s equation

2020
Vol 77 (3)
pp. 627-651
Author(s):  
Andrea Caliciotti
Giovanni Fasano
Florian Potra
Massimo Roma

Abstract: In this work, we deal with Truncated Newton methods for solving large scale (possibly nonconvex) unconstrained optimization problems. In particular, we consider the use of a modified Bunch and Kaufman factorization for solving the Newton equation at each (outer) iteration of the method. The Bunch and Kaufman factorization of a tridiagonal matrix is an effective and stable matrix decomposition, well exploited in the widely adopted SYMMBK routine (Bunch and Kaufman in Math Comput 31:163-179, 1977; Chandra in Conjugate gradient methods for partial differential equations, vol 129, 1978; Conn et al. in Trust-region methods. MPS-SIAM series on optimization, Society for Industrial Mathematics, Philadelphia, 2000; HSL, A collection of Fortran codes for large scale scientific computation, http://www.hsl.rl.ac.uk/; Marcia in Appl Numer Math 58:449-458, 2008). It can be used to provide conjugate directions with both $1\times 1$ and $2\times 2$ pivoting steps. The main drawback is that the resulting solution of Newton's equation might not be gradient-related when the objective function is nonconvex. Here we first focus on some theoretical properties which ensure that, at each iteration of the Truncated Newton method, the search direction obtained using an adapted Bunch and Kaufman factorization is gradient-related. This allows a standard Armijo-type linesearch procedure to be performed with a bounded descent direction. Furthermore, the results of extensive numerical experiments on large scale CUTEst problems are reported, showing the reliability and efficiency of the proposed approach on both convex and nonconvex problems.
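As a concrete illustration of the Armijo-type linesearch the abstract mentions, here is a minimal sketch (not the authors' code): it assumes a direction d with g·d < 0, which is exactly the gradient-related property the adapted Bunch and Kaufman factorization is designed to guarantee.

```python
import numpy as np

def armijo_linesearch(f, x, d, g, c1=1e-4, rho=0.5, max_iter=50):
    """Backtracking Armijo linesearch along a descent direction d.

    Assumes g.dot(d) < 0, i.e. d is gradient-related (the property the
    adapted Bunch-Kaufman factorization is designed to guarantee).
    """
    alpha = 1.0
    fx = f(x)
    slope = g.dot(d)          # directional derivative, negative by assumption
    for _ in range(max_iter):
        if f(x + alpha * d) <= fx + c1 * alpha * slope:
            return alpha      # sufficient decrease achieved
        alpha *= rho          # otherwise shrink the step
    return alpha

# Usage on a simple convex quadratic, with the exact Newton step as d:
f = lambda x: 0.5 * x.dot(x)
x0 = np.array([3.0, -4.0])
g0 = x0                       # gradient of f at x0
d0 = -g0                      # Newton step for this quadratic
print(armijo_linesearch(f, x0, d0, g0))  # accepts alpha = 1.0
```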

2014
Vol 19 (4)
pp. 469-490
Author(s):  
Hamid Esmaeili
Morteza Kimiaei

In this study, we propose a trust-region-based procedure for solving unconstrained optimization problems that takes advantage of a nonmonotone technique to introduce an efficient adaptive radius strategy. In our approach, the adaptive technique decreases the total number of iterations, while the structure of the nonmonotone formula helps us handle large-scale problems. The new algorithm preserves global convergence and attains quadratic convergence under suitable conditions. Preliminary numerical experiments on standard test problems indicate the efficiency and robustness of the proposed approach for solving unconstrained optimization problems.
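The nonmonotone ingredient can be sketched as follows; this uses the classical Grippo-Lampariello-Lucidi maximum over recent objective values and a textbook radius update as a stand-in for the paper's specific adaptive formula (function names and thresholds are illustrative).

```python
import numpy as np

def nonmonotone_ratio(f_hist, f_new, pred_reduction):
    """Nonmonotone acceptance ratio for a trust-region step.

    Replaces f(x_k) in the classical ratio with the maximum of the last
    few objective values (Grippo-Lampariello-Lucidi style), so an
    occasional increase in f is tolerated.
    """
    f_max = max(f_hist)                      # worst of the recent iterates
    return (f_max - f_new) / pred_reduction

def update_radius(radius, rho, step_norm, eta1=0.25, eta2=0.75):
    """Classical radius update; the paper's adaptive rule refines this."""
    if rho < eta1:
        return 0.25 * step_norm              # shrink after a poor step
    if rho > eta2:
        return max(radius, 2.0 * step_norm)  # expand after a very good step
    return radius                            # otherwise keep the radius
```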


2013
Vol 2013
pp. 1-6
Author(s):  
Can Li

We are concerned with optimization problems with nonnegative constraints. It is well known that conjugate gradient methods are efficient for solving large-scale unconstrained optimization problems due to their simplicity and low storage. Combining the modified Polak-Ribière-Polyak method proposed by Zhang, Zhou, and Li with the Zoutendijk feasible direction method, we propose a conjugate gradient type method for solving optimization problems with nonnegative constraints. If the current iterate is a feasible point, the direction generated by the proposed method is always a feasible descent direction at that iterate. Under appropriate conditions, we show that the proposed method is globally convergent. We also present some numerical results to show the efficiency of the proposed method.
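For reference, a common statement of the Zhang-Zhou-Li modified PRP direction mentioned above is sketched below (the coupling with the Zoutendijk feasible direction step is only indicated in a comment); it satisfies g·d = -||g||² by construction, so it is a descent direction independently of the line search.

```python
import numpy as np

def mprp_direction(g, g_prev, d_prev):
    """Three-term modified PRP direction (Zhang-Zhou-Li style).

    Satisfies g.dot(d) = -||g||**2 regardless of the line search; the
    proposed method then combines it with a Zoutendijk-type feasible
    direction step at iterates on the boundary of the feasible set.
    """
    y = g - g_prev                      # gradient difference
    denom = g_prev.dot(g_prev)
    beta = g.dot(y) / denom             # PRP parameter
    theta = g.dot(d_prev) / denom       # extra term restoring descent
    return -g + beta * d_prev - theta * y
```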


Author(s):  
O.B. Akinduko

In this paper, by linearly combining the numerator and denominator terms of the Dai-Liao (DL) and Bamigbola-Ali-Nwaeze (BAN) conjugate gradient methods (CGMs), a general form of the DL-BAN method is proposed. From this general form, a new hybrid CGM, which was found to possess the sufficient descent property, is generated. Numerical experiments were carried out on the new CGM in comparison with four existing CGMs, using a set of large scale unconstrained optimization problems. The results showed superior performance of the new method over the majority of the existing methods.
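Since the abstract does not reproduce the formulas, the construction can only be sketched schematically: writing each conjugate parameter as a numerator over a denominator, a linear combination of the two methods gives, for some weight $\lambda$ (the DL parameter is shown for concreteness; the BAN terms are left symbolic because they are not given here):

$$
\beta_k^{\mathrm{hyb}} \;=\; \frac{\lambda\, N_k^{\mathrm{DL}} + (1-\lambda)\, N_k^{\mathrm{BAN}}}{\lambda\, D_k^{\mathrm{DL}} + (1-\lambda)\, D_k^{\mathrm{BAN}}},
\qquad
\beta_k^{\mathrm{DL}} \;=\; \frac{N_k^{\mathrm{DL}}}{D_k^{\mathrm{DL}}} \;=\; \frac{g_k^{\top}\left(y_{k-1} - t\, s_{k-1}\right)}{d_{k-1}^{\top} y_{k-1}}, \quad t > 0.
$$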


2014
Vol 2014
pp. 1-6
Author(s):  
Mohd Asrul Hery Ibrahim
Mustafa Mamat
Wah June Leong

For large scale problems, the quasi-Newton method is known as one of the most efficient methods for solving unconstrained optimization problems. Hence, a new hybrid method, known as the BFGS-CG method, has been created based on these properties, combining the search directions of the conjugate gradient and quasi-Newton methods. In comparison to standard BFGS methods and conjugate gradient methods, the BFGS-CG method shows significant improvement in the total number of iterations and CPU time required to solve large scale unconstrained optimization problems. We also prove that the hybrid method is globally convergent.
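The structural idea, mixing a quasi-Newton step with a CG correction, can be sketched as follows. The BFGS update is the standard one, but the combination shown is purely illustrative, not the authors' exact BFGS-CG formula.

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of the inverse Hessian approximation H,
    with s = x_new - x_old and y = g_new - g_old."""
    rho = 1.0 / y.dot(s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

def hybrid_direction(H, g, d_prev, beta):
    """Illustrative BFGS-CG hybrid: a quasi-Newton step corrected by a
    CG-style term beta * d_prev. Shows only the structural idea of
    mixing the two directions; the paper's combination differs."""
    return -H @ g + beta * d_prev
```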


Author(s):  
Sulaiman Mohammed Ibrahim
Usman Abbas Yakubu
Mustafa Mamat

Conjugate gradient (CG) methods are among the most efficient numerical methods for solving unconstrained optimization problems, owing to their simplicity and low computational cost in solving large-scale nonlinear problems. In this paper, we propose some spectral CG methods using the classical CG search direction. The proposed methods are applied to real-life problems in regression analysis. Their convergence proof was established under exact line search. Numerical results have shown that the proposed methods are efficient and promising.
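A typical spectral CG direction of the kind described combines a Barzilai-Borwein-style scaling of the gradient with a classical CG parameter; the sketch below uses the Fletcher-Reeves choice as one example of a classical CG search direction (the paper's exact parameters may differ).

```python
import numpy as np

def spectral_cg_direction(g, g_prev, x, x_prev, d_prev):
    """Spectral CG direction d = -theta*g + beta*d_prev (Birgin-Martinez
    style). theta is the Barzilai-Borwein spectral parameter; beta is
    the classical Fletcher-Reeves parameter, used here for illustration."""
    s = x - x_prev
    y = g - g_prev
    theta = s.dot(s) / s.dot(y)              # spectral (BB) scaling
    beta = g.dot(g) / g_prev.dot(g_prev)     # Fletcher-Reeves parameter
    return -theta * g + beta * d_prev
```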


Author(s):  
Hawraz N. Jabbar
Basim A. Hassan

Conjugate gradient methods are noted to be exceedingly valuable for solving large-scale unconstrained optimization problems, since they do not require the storage of matrices. The conjugate parameter is usually the focus of research on these methods. The current paper proposes new conjugate gradient parameters for solving large-scale unconstrained optimization problems. A Hessian approximation in diagonal matrix form, based on second- and third-order Taylor series expansions, was employed in this study. The sufficient descent property of the proposed algorithm is proved, and the new method is shown to converge globally. The new algorithm is found to be competitive with the Fletcher-Reeves (FR) algorithm in a number of numerical experiments.
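One standard route to such a diagonal Hessian estimate (shown here with a scalar multiple of the identity for simplicity; the paper also exploits third-order terms) starts from the second-order Taylor expansion around $x_k$, with $s_{k-1} = x_k - x_{k-1}$:

$$
f(x_{k-1}) \;\approx\; f(x_k) - g_k^{\top} s_{k-1} + \tfrac{1}{2}\, s_{k-1}^{\top} B_k\, s_{k-1},
\qquad
B_k = \mu_k I \;\Rightarrow\;
\mu_k \;\approx\; \frac{2\left(f(x_{k-1}) - f(x_k) + g_k^{\top} s_{k-1}\right)}{s_{k-1}^{\top} s_{k-1}}.
$$

The curvature estimate $\mu_k$ then enters the new conjugate parameter in place of exact Hessian information.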


2011
Vol 141
pp. 92-97
Author(s):  
Miao Hu
Tai Yong Wang
Bo Geng
Qi Chen Wang
Dian Peng Li

Nonlinear least squares is one class of unconstrained optimization problems. In order to solve the least squares trust region subproblem, a globally convergent genetic algorithm (GA) was applied; the premature convergence of genetic algorithms was overcome by restricting the search range of the GA with the trust region method (TRM), and the convergence rate of the genetic algorithm was increased by the randomness of the genetic search. Finally, an example based on the banana (Rosenbrock) function was used to verify the GA, and the results show the practicability and precision of this algorithm.
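A toy version of the experiment is easy to reproduce: the sketch below (all parameter values are illustrative) runs a simple GA whose search range is clipped to a trust-region box around the current point, on the banana function named in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def banana(p):
    """Rosenbrock 'banana' function, the test problem named in the abstract."""
    x, y = p
    return 100.0 * (y - x**2) ** 2 + (1.0 - x) ** 2

def ga_in_trust_region(center, radius, pop=40, gens=60, elite=10, sigma=0.1):
    """Toy GA whose search range is the trust-region box around `center`,
    illustrating how the TRM bound curbs premature convergence."""
    lo, hi = center - radius, center + radius
    P = rng.uniform(lo, hi, size=(pop, 2))           # initial population
    for _ in range(gens):
        P = P[np.argsort([banana(p) for p in P])]    # rank by fitness
        children = P[rng.integers(0, elite, pop - elite)] \
                   + sigma * radius * rng.standard_normal((pop - elite, 2))
        P = np.vstack([P[:elite], np.clip(children, lo, hi)])  # stay in box
    return min(P, key=banana)

best = ga_in_trust_region(center=np.array([0.0, 0.0]), radius=2.0)
print(best, banana(best))   # should approach the minimizer (1, 1)
```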


2015
Vol 2015
pp. 1-8
Author(s):  
Yunlong Lu
Weiwei Yang
Wenyu Li
Xiaowei Jiang
Yueting Yang

A new trust region method is presented, which combines a nonmonotone line search technique, a self-adaptive update rule for the trust region radius, and a weighting technique for the ratio between the actual reduction and the predicted reduction. Under reasonable assumptions, the global convergence of the method is established for unconstrained nonconvex optimization. Numerical results show that the new method is efficient and robust for solving unconstrained optimization problems.
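A hedged reading of the weighted ratio: with $f_{\max,k}$ the maximum objective value over the last few iterates and $m_k$ the quadratic model, the acceptance test blends the monotone and nonmonotone ratios, for example as

$$
\hat{\rho}_k \;=\; \frac{w_k\, f_{\max,k} + (1-w_k)\, f(x_k) \;-\; f(x_k + s_k)}{m_k(0) - m_k(s_k)}, \qquad w_k \in [0,1],
$$

where $w_k = 0$ recovers the classical ratio; the exact weights used in the paper may differ.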

