Accumulative Approach in Multistep Diagonal Gradient-Type Method for Large-Scale Unconstrained Optimization

2012 ◽  
Vol 2012 ◽  
pp. 1-11 ◽  
Author(s):  
Mahboubeh Farid ◽  
Wah June Leong ◽  
Lihong Zheng

This paper focuses on developing diagonal gradient-type methods that employ an accumulative approach in multistep diagonal updating to determine a better Hessian approximation at each step. An interpolating curve is used to derive a generalization of the weak secant equation, which carries information about the local Hessian. The new parameterization of the interpolating curve in the variable space is obtained via an accumulative approach with a norm weighting defined by two positive definite weighting matrices. We also note that the storage needed for all computations of the proposed method is just O(n). Numerical results show that the proposed algorithm is efficient and outperforms some other gradient-type methods.
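
As a point of reference, here is a minimal sketch of a single-step diagonal update built on the weak secant equation s_k^T B_{k+1} s_k = s_k^T y_k; it is the least-change update that equation admits for a diagonal matrix, not the paper's accumulative multistep variant, and the function name and tolerances are illustrative.

```python
import numpy as np

def weak_secant_diagonal_update(B_diag, s, y, tol=1e-12):
    """Least-change update of a diagonal Hessian approximation diag(B_diag)
    so that the weak secant equation s^T B_new s = s^T y holds.
    Sketch of the single-step case only."""
    s2 = s * s                           # componentwise s_i^2
    denom = np.dot(s2, s2)               # sum of s_i^4
    if denom < tol:                      # degenerate step: keep old approximation
        return B_diag
    coeff = (np.dot(s, y) - np.dot(s2, B_diag)) / denom
    B_new = B_diag + coeff * s2
    # safeguard: keep the diagonal positive so search directions remain descent
    return np.where(B_new > tol, B_new, B_diag)
```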

2013 ◽  
Vol 2013 ◽  
pp. 1-5 ◽  
Author(s):  
Mahboubeh Farid ◽  
Wah June Leong ◽  
Najmeh Malekmohammadi ◽  
Mustafa Mamat

We present a new gradient method that uses scaling and extra updating within diagonal updating for solving unconstrained optimization problems. The new method is in the frame of the Barzilai-Borwein (BB) method, except that the Hessian matrix is approximated by a diagonal matrix rather than a multiple of the identity matrix as in the BB method. The main idea is to design a new diagonal updating scheme that incorporates scaling to instantly reduce large eigenvalues of the diagonal approximation and employs extra updates to increase small ones. These devices give rapid control over the eigenvalues of the updating matrix and thus improve stepwise convergence. We show that our method is globally convergent. The effectiveness of the method is evaluated by numerical comparison with the BB method and its variant.
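
A baseline the paper builds on, sketched below: the plain BB1 gradient iteration, in which the Hessian is approximated by (1/alpha_k) I. The paper replaces this scalar matrix with a scaled diagonal one; the names and safeguards here are illustrative.

```python
import numpy as np

def bb_gradient(grad, x0, tol=1e-6, max_iter=1000):
    """Plain BB1 gradient method: x_{k+1} = x_k - alpha_k * g_k with
    alpha_k = s^T s / s^T y; the diagonal refinement is omitted."""
    x = np.asarray(x0, dtype=float).copy()
    g = grad(x)
    alpha = 1.0                                       # safeguarded initial step
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = np.dot(s, y)
        alpha = np.dot(s, s) / sy if sy > 1e-12 else 1.0  # BB1 step length
        x, g = x_new, g_new
    return x
```

For a strictly convex quadratic f(x) = 0.5 x^T A x - b^T x, calling bb_gradient(lambda x: A @ x - b, np.zeros_like(b)) recovers the classical BB behavior.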


2014 ◽  
Vol 2014 ◽  
pp. 1-14 ◽ 
Author(s):  
San-Yang Liu ◽  
Yuan-Yuan Huang

This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfies the descent condition $g_k^T d_k \le -\left(1 - \frac{1}{4\theta_k}\right)\|g_k\|^2$ with $\theta_k > 1/4$, and which is strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization.
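
The stated condition is a one-line test in code; the helper below simply makes the inequality explicit (the name and the use of NumPy are our own).

```python
import numpy as np

def guaranteed_descent(g, d, theta):
    """Check g_k^T d_k <= -(1 - 1/(4*theta_k)) * ||g_k||^2 for theta_k > 1/4,
    the descent condition quoted in the abstract."""
    assert theta > 0.25, "the condition requires theta > 1/4"
    return np.dot(g, d) <= -(1.0 - 1.0 / (4.0 * theta)) * np.dot(g, g)
```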


2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽ 
Author(s):  
Can Li

We are concerned with nonnegative constrained optimization problems. It is well known that conjugate gradient methods are efficient for solving large-scale unconstrained optimization problems owing to their simplicity and low storage. Combining the modified Polak-Ribière-Polyak method proposed by Zhang, Zhou, and Li with the Zoutendijk feasible direction method, we propose a conjugate gradient-type method for solving nonnegative constrained optimization problems. If the current iterate is a feasible point, the direction generated by the proposed method is always a feasible descent direction at that iterate. Under appropriate conditions, we show that the proposed method is globally convergent. We also present some numerical results to show the efficiency of the proposed method.
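
The feasibility device can be sketched as follows for the bound x >= 0: components of the search direction that point out of the feasible set at active bounds are zeroed. This is a generic Zoutendijk-style repair, not the paper's exact rule, which must also preserve sufficient descent of the modified PRP direction.

```python
import numpy as np

def make_feasible_direction(x, d, eps=1e-12):
    """At a feasible point x >= 0, zero the components of d that would
    immediately violate an active bound (d_i < 0 where x_i = 0).
    Generic sketch only."""
    d = np.asarray(d, dtype=float).copy()
    blocked = (x <= eps) & (d < 0.0)   # active bounds with outward-pointing components
    d[blocked] = 0.0
    return d
```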


Algorithms ◽  
2021 ◽  
Vol 14 (8) ◽  
pp. 227 ◽ 
Author(s):  
Zabidin Salleh ◽  
Ghaliah Alhamzi ◽  
Ibitsam Masmali ◽  
Ahmad Alhawarat

The conjugate gradient method is one of the most popular methods for solving large-scale unconstrained optimization problems, since it requires no second derivatives, unlike Newton's method or its approximations. Moreover, the conjugate gradient method can be applied in many fields, such as neural networks, image restoration, etc. Many elaborate methods with two- or three-term search directions have been proposed for these problems. In this paper, we propose a simple, efficient, and robust conjugate gradient method. The new method is constructed based on the Liu and Storey method to overcome the convergence problem and to guarantee the descent property. The new modified method satisfies the convergence properties and the sufficient descent condition under some assumptions. The numerical results, which include the number of iterations and CPU time, show that the new method outperforms well-known CG methods such as CG-Descent 5.3, Liu and Storey, and Dai and Liao.
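
For reference, the classical Liu-Storey parameter that the new method starts from is sketched below; the paper's modification itself is not reproduced, and the fallback and tolerance are our own safeguards.

```python
import numpy as np

def beta_ls(g_new, g_old, d_old, eps=1e-12):
    """Classical Liu-Storey conjugacy parameter
    beta_k^LS = g_{k+1}^T (g_{k+1} - g_k) / (-d_k^T g_k)."""
    denom = -np.dot(d_old, g_old)      # positive when d_old is a descent direction
    if abs(denom) < eps:
        return 0.0                     # fall back to steepest descent
    return np.dot(g_new, g_new - g_old) / denom
```

The next search direction is then d_{k+1} = -g_{k+1} + beta_k * d_k.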


2012 ◽  
Vol 29 (04) ◽  
pp. 1250021 ◽  
Author(s):  
LUJIN GONG

This paper presents a trust region subspace method for minimizing large-scale unconstrained problems. We choose a subspace consisting of some old directions, which are kept fixed, and some of the newest directions, which change at each iteration. A restart technique is used when the old directions contribute little. Numerical results are reported which indicate that the method is promising.
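
The generic subspace idea can be sketched as follows: collect a few directions as columns of V and approximately solve the trust-region subproblem restricted to span(V). Everything below, including the crude shift-until-it-fits boundary solve, illustrates the general technique rather than this paper's algorithm.

```python
import numpy as np

def subspace_tr_step(g, hess_vec, V, delta):
    """Approximately minimize g^T p + 0.5 p^T H p over p in span(V),
    subject to ||p|| <= delta. V holds the subspace directions; hess_vec
    computes Hessian-vector products."""
    Q, _ = np.linalg.qr(V)                            # orthonormal subspace basis
    g_r = Q.T @ g                                     # reduced gradient
    H_r = Q.T @ np.column_stack([hess_vec(Q[:, j]) for j in range(Q.shape[1])])
    H_r = 0.5 * (H_r + H_r.T)                         # symmetrize the reduced Hessian
    w, U = np.linalg.eigh(H_r)
    b = U.T @ g_r
    lam = max(0.0, -w.min()) + 1e-10                  # shift so H_r + lam*I > 0
    z = -b / (w + lam)
    while np.linalg.norm(z) > delta:                  # enlarge shift until step fits
        lam *= 2.0
        z = -b / (w + lam)
    return Q @ (U @ z)                                # map back to the full space
```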


2010 ◽  
Vol 27 (01) ◽  
pp. 55-69 ◽  
Author(s):  
YAN ZHANG ◽  
WENYU SUN ◽  
LIQUN QI

In this paper we present a new globalization strategy for the Barzilai-Borwein gradient method for large-scale unconstrained optimization. Based on the filter technique introduced by Fletcher and Leyffer, a modified Barzilai-Borwein method is presented. We prove the global convergence of this method. Extensive numerical results on a set of CUTEr test problems show that the proposed method is competitive.
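
The filter idea, in its generic form, accepts a trial point only if it is not dominated by any stored pair; for unconstrained problems the pair can be, e.g., (||g||, f). The pairing, margin, and names below are illustrative, not the paper's exact acceptance rule.

```python
def filter_acceptable(h_new, f_new, filter_pairs, gamma=1e-5):
    """Accept (h_new, f_new) if, against every stored pair (h_i, f_i),
    it sufficiently improves either the criticality measure h or the
    objective f. Generic Fletcher-Leyffer-style sketch."""
    return all(h_new <= (1.0 - gamma) * h_i or f_new <= f_i - gamma * h_i
               for h_i, f_i in filter_pairs)
```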

