GLOBAL CONVERGENCE OF TWO KINDS OF THREE-TERM CONJUGATE GRADIENT METHODS WITHOUT LINE SEARCH

2013, Vol 30 (01), pp. 1250043
Author(s): LIANG YIN, XIONGDA CHEN

The conjugate gradient method is widely used in unconstrained optimization, especially for large-scale problems. Recently, Zhang et al. proposed a three-term PRP method (TTPRP) and a three-term HS method (TTHS), both of which satisfy the sufficient descent condition. In this paper, the global convergence of the TTPRP and TTHS methods is studied when the line search procedure is replaced by a fixed stepsize formula. This feature is significant in applications where the line search is expensive. Relevant computational results are also presented.
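For orientation, the sketch below implements the three-term PRP direction in the form commonly quoted in the literature, d_k = -g_k + beta_k d_{k-1} - theta_k y_{k-1} with y_{k-1} = g_k - g_{k-1}, which yields g_k^T d_k = -||g_k||^2 regardless of the stepsize; the fixed stepsize formula analysed in the paper is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def ttprp_direction(g, g_prev, d_prev):
    """Three-term PRP (TTPRP) direction as commonly stated in the literature:
    d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1}, y_{k-1} = g_k - g_{k-1}."""
    y = g - g_prev
    denom = np.dot(g_prev, g_prev)     # ||g_{k-1}||^2
    beta = np.dot(g, y) / denom        # PRP coefficient
    theta = np.dot(g, d_prev) / denom  # third-term coefficient
    return -g + beta * d_prev - theta * y

# the two extra terms cancel in g_k^T d_k, leaving -||g_k||^2
rng = np.random.default_rng(0)
g, g_prev, d_prev = rng.normal(size=5), rng.normal(size=5), rng.normal(size=5)
d = ttprp_direction(g, g_prev, d_prev)
print(np.isclose(np.dot(g, d), -np.dot(g, g)))  # True
```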

2018, Vol 2018, pp. 1-13
Author(s): Bakhtawar Baluch, Zabidin Salleh, Ahmad Alhawarat

This paper describes a modified three-term Hestenes–Stiefel (HS) method. The original HS method is the earliest conjugate gradient method. Although the HS method achieves global convergence with an exact line search, this is not guaranteed under an inexact line search, and the HS method does not in general satisfy the descent property. Our modified three-term conjugate gradient method possesses the sufficient descent property regardless of the type of line search and is globally convergent under the inexact Wolfe–Powell line search. The numerical efficiency of the modified three-term HS method is assessed on 75 standard test functions. Three-term conjugate gradient methods are generally reported to be more efficient numerically than two-term methods; importantly, this paper quantifies how much better the three-term performance is. Thus, in the numerical results, we compare the new modification with an efficient two-term conjugate gradient method and with a state-of-the-art three-term HS method. We conclude that the proposed modification is globally convergent and numerically efficient.
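As background, here is a minimal sketch of the classical three-term HS direction (not the modification proposed in this paper); the coefficients beta_k = g_k^T y_{k-1} / (d_{k-1}^T y_{k-1}) and theta_k = g_k^T d_{k-1} / (d_{k-1}^T y_{k-1}) are the ones usually quoted, and the same cancellation gives g_k^T d_k = -||g_k||^2 independently of the line search.

```python
import numpy as np

def tths_direction(g, g_prev, d_prev):
    """Classical three-term HS direction (not the paper's modification):
    d_k = -g_k + beta_k * d_{k-1} - theta_k * y_{k-1}, y_{k-1} = g_k - g_{k-1}."""
    y = g - g_prev
    denom = np.dot(d_prev, y)          # d_{k-1}^T y_{k-1}
    beta = np.dot(g, y) / denom        # HS coefficient
    theta = np.dot(g, d_prev) / denom  # third-term coefficient
    return -g + beta * d_prev - theta * y

# the two extra terms cancel in g_k^T d_k, leaving -||g_k||^2
rng = np.random.default_rng(1)
g, g_prev, d_prev = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
d = tths_direction(g, g_prev, d_prev)
print(np.isclose(np.dot(g, d), -np.dot(g, g)))  # True
```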


2014, Vol 2014, pp. 1-12
Author(s): San-Yang Liu, Yuan-Yuan Huang, Hong-Wei Jiao

Two unified frameworks for sufficient descent conjugate gradient methods are considered. Combined with the hyperplane projection method of Solodov and Svaiter, they are extended to solve convex constrained nonlinear monotone equations. Their global convergence is proved under mild conditions. Numerical results illustrate that these methods are efficient and can be applied to solve large-scale nonsmooth equations.
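The projection step these frameworks borrow from Solodov and Svaiter has a simple closed form; a minimal sketch is given below, assuming a trial point z_k produced by the method's own direction and line-search rule (which is not reproduced here) and a user-supplied projector onto the convex feasible set.

```python
import numpy as np

def hyperplane_projection_step(x, z, Fz, project_C=lambda v: v):
    """One Solodov-Svaiter projection step for a monotone equation F(x) = 0:
    given the current iterate x and a trial point z with F(z) = Fz satisfying
    F(z)^T (x - z) > 0, project x onto the hyperplane {u : F(z)^T (u - z) = 0},
    then onto the convex feasible set C via the user-supplied `project_C`
    (identity by default, i.e. the unconstrained case)."""
    step = np.dot(Fz, x - z) / np.dot(Fz, Fz)
    return project_C(x - step * Fz)
```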


2005, Vol 22 (04), pp. 529-538
Author(s): XIA LI, XIONGDA CHEN

The shortest-residual family of conjugate gradient methods was first proposed by Hestenes and later studied by Pytlak and by Dai and Yuan. Recently, a no-line-search scheme for conjugate gradient methods was given by Sun and Zhang and by Chen and Sun. In this paper, we show the global convergence of two shortest-residual conjugate gradient methods (FRSR and PRPSR) without line search. In addition, computational results are presented showing that the methods with line search and those without have similar numerical behavior.
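The building block of the shortest-residual family is the operator Nr{u, v}, the shortest vector in the convex hull of two vectors, from which the direction is formed roughly as d_k = -Nr{g_k, -beta_k d_{k-1}}; the sketch below computes Nr only, and the particular beta_k defining FRSR and PRPSR, as well as the no-line-search stepsize, should be taken from the paper.

```python
import numpy as np

def shortest_in_hull(u, v):
    """Return Nr{u, v}: the shortest vector in the convex hull of u and v,
    i.e. the minimizer of ||lam*u + (1-lam)*v|| over lam in [0, 1]."""
    diff = u - v
    denom = np.dot(diff, diff)
    if denom == 0.0:          # u == v: the hull is a single point
        return u.copy()
    # unconstrained minimizer of the quadratic in lam, clipped to [0, 1]
    lam = np.clip(np.dot(v, v - u) / denom, 0.0, 1.0)
    return lam * u + (1.0 - lam) * v
```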


Complexity, 2020, Vol 2020, pp. 1-13
Author(s): Meixing Liu, Guodong Ma, Jianghua Yin

The conjugate gradient method is very effective for solving large-scale unconstrained optimization problems. In this paper, on the basis of the conjugate parameter of the conjugate descent (CD) method and the second inequality of the strong Wolfe line search, two new conjugate parameters are devised. Using the strong Wolfe line search to obtain the step lengths, two modified conjugate gradient methods are proposed for general unconstrained optimization. Under standard assumptions, the two methods are proved to satisfy the sufficient descent condition and to be globally convergent. Finally, preliminary numerical results are reported to show that the proposed methods are promising.
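For reference, the two ingredients the paper starts from can be written down directly: the CD parameter of Fletcher, beta_k^CD = ||g_k||^2 / (-d_{k-1}^T g_{k-1}), and the second (curvature) inequality of the strong Wolfe line search. The sketch below states both; the two new conjugate parameters devised in the paper are not reproduced, and the constants c1, c2 are conventional illustrative choices.

```python
import numpy as np

def beta_cd(g, g_prev, d_prev):
    """Conjugate-descent (CD) parameter of Fletcher:
    beta_k^CD = ||g_k||^2 / (-d_{k-1}^T g_{k-1})."""
    return np.dot(g, g) / (-np.dot(d_prev, g_prev))

def strong_wolfe_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.1):
    """Check the strong Wolfe conditions for a trial step alpha; the second
    inequality |grad(x+alpha*d)^T d| <= -c2 * grad(x)^T d is the one the new
    parameters build on."""
    g0_d = np.dot(grad(x), d)
    armijo = f(x + alpha * d) <= f(x) + c1 * alpha * g0_d
    curvature = abs(np.dot(grad(x + alpha * d), d)) <= -c2 * g0_d
    return armijo and curvature
```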


2014, Vol 2014, pp. 1-14
Author(s): San-Yang Liu, Yuan-Yuan Huang

This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfies the descent condition $g_k^T d_k \le -\left(1 - \frac{1}{4\theta_k}\right)\|g_k\|^2$ with $\theta_k > 1/4$, and which is strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization.
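Read concretely, a direction d_k passes the test exactly when g_k^T d_k <= -(1 - 1/(4*theta_k)) ||g_k||^2 with theta_k > 1/4; a one-line check (illustrative only):

```python
import numpy as np

def guaranteed_descent_ok(g, d, theta):
    """Check the descent condition g^T d <= -(1 - 1/(4*theta)) * ||g||^2."""
    assert theta > 0.25, "the condition is stated for theta > 1/4"
    return np.dot(g, d) <= -(1.0 - 1.0 / (4.0 * theta)) * np.dot(g, g)
```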


Author(s): Amina Boumediene, Tahar Bechouat, Rachid Benzine, Ghania Hadji

The nonlinear conjugate gradient method (CGM) is a very effective approach for solving large-scale optimization problems. Zhang et al. proposed a new CG coefficient defined by [Formula: see text]. They proved the sufficient descent condition and global convergence for nonconvex minimization under the strong Wolfe line search. In this paper, we prove that this CG coefficient also satisfies the sufficient descent condition and possesses global convergence properties under the exact line search.
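Since the coefficient itself is elided in the abstract ([Formula: see text]), it is not reproduced here; the sketch below only shows the generic nonlinear-CG iteration with an exact line search on a strictly convex quadratic, where the exact step has the closed form alpha_k = -g_k^T d_k / (d_k^T A d_k), and beta_fn is a hypothetical placeholder for whichever CG coefficient is used.

```python
import numpy as np

def cg_exact_line_search(A, b, x0, beta_fn, iters=50, tol=1e-10):
    """Generic CG skeleton applied to f(x) = 0.5*x^T A x - b^T x, where the
    exact line search has the closed form alpha_k = -g_k^T d_k / (d_k^T A d_k).
    `beta_fn(g, g_prev, d_prev)` is a placeholder for a CG coefficient (the
    paper's coefficient is not shown in the abstract and is not reproduced)."""
    x = x0.copy()
    g = A @ x - b              # gradient of the quadratic
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        alpha = -np.dot(g, d) / np.dot(d, A @ d)   # exact step along d
        x = x + alpha * d
        g_prev, g = g, A @ x - b
        d = -g + beta_fn(g, g_prev, d) * d         # CG direction update
    return x

# example with the Fletcher-Reeves coefficient (illustrative choice only)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
fr = lambda g, g_prev, d_prev: np.dot(g, g) / np.dot(g_prev, g_prev)
print(cg_exact_line_search(A, b, np.zeros(2), fr))
```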

