Global convergence of a two-parameter family of conjugate gradient methods without line search

2002 ◽  
Vol 146 (1) ◽  
pp. 37-45 ◽  
Author(s):  
Xiongda Chen ◽  
Jie Sun


2008 ◽  
Vol 25 (03) ◽  
pp. 411-420 ◽  
Author(s):  
HUI ZHU ◽  
XIONGDA CHEN

Conjugate gradient methods are efficient for minimizing differentiable objective functions over large-dimensional spaces. Recently, Dai and Yuan introduced a three-parameter family of nonlinear conjugate gradient methods and proved their convergence. However, line search strategies usually bring a considerable computational burden. To overcome this problem, in this paper we study the global convergence of a special case of the three-parameter family (the CD-DY family) in which the line search procedures are replaced by fixed stepsize formulae.
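As a rough illustration of the scheme just described (not the authors' exact CD-DY update, stepsize formula, or constants), a nonlinear conjugate gradient iteration in which the line search is replaced by a fixed stepsize rule might be sketched as follows; the Dai–Yuan beta and the simplified Sun–Zhang-type stepsize with Q_k = I and a constant delta are assumptions made for illustration.

```python
import numpy as np

def cg_fixed_stepsize(f_grad, x0, delta=0.5, max_iter=1000, tol=1e-6):
    """Nonlinear CG with a fixed stepsize formula in place of a line search.

    Sketch only: beta is the Dai-Yuan choice and the stepsize
    alpha_k = -delta * g_k^T d_k / ||d_k||^2 is a simplified
    Sun-Zhang-type rule (with Q_k = I); the papers above use more
    general formulae and carefully chosen constants.
    """
    x = np.asarray(x0, dtype=float)
    g = f_grad(x)
    d = -g                                    # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -delta * (g @ d) / (d @ d)    # fixed stepsize, no line search
        x = x + alpha * d
        g_new = f_grad(x)
        y = g_new - g
        beta = (g_new @ g_new) / (d @ y)      # Dai-Yuan beta (one member of the family)
        d = -g_new + beta * d
        g = g_new
    return x
```

For example, `cg_fixed_stepsize(lambda x: A @ x - b, np.zeros(n))` minimizes the quadratic 0.5 x^T A x - b^T x for a symmetric positive definite A.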


2019 ◽  
Vol 38 (6) ◽  
pp. 127-140
Author(s):  
Bouaziz Khelifa ◽  
Laskri Yamina

We prove the global convergence of a two-parameter family of conjugate gradient methods that uses a stepsize formula which is new and different from that of Wu [14]. Numerical results are presented to confirm the effectiveness of the proposed stepsizes by comparison with the stepsizes suggested by Sun and his colleagues [2, 12].


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Bakhtawar Baluch ◽  
Zabidin Salleh ◽  
Ahmad Alhawarat

This paper describes a modified three-term Hestenes–Stiefel (HS) method. The original HS method is the earliest conjugate gradient method. Although the HS method achieves global convergence using an exact line search, this is not guaranteed in the case of an inexact line search. In addition, the HS method does not usually satisfy the descent property. Our modified three-term conjugate gradient method possesses a sufficient descent property regardless of the type of line search and guarantees global convergence using the inexact Wolfe–Powell line search. The numerical efficiency of the modified three-term HS method is checked using 75 standard test functions. It is known that three-term conjugate gradient methods are numerically more efficient than two-term conjugate gradient methods. Importantly, this paper quantifies how much better three-term methods perform than two-term methods. Thus, in the numerical results, we compare our new modification with an efficient two-term conjugate gradient method. We also compare our modification with a state-of-the-art three-term HS method. Finally, we conclude that our proposed modification is globally convergent and numerically efficient.
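For reference, the weak Wolfe–Powell conditions that the convergence result above relies on can be sketched as a simple acceptance test; the constants c1 and c2 below are typical textbook choices, not necessarily those used in the paper, and the sketch does not implement the authors' modified three-term direction itself.

```python
import numpy as np

def wolfe_powell_ok(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the weak Wolfe-Powell conditions for a trial stepsize alpha.

    Sufficient decrease: f(x + a d) <= f(x) + c1 * a * g^T d
    Curvature:           grad(x + a d)^T d >= c2 * g^T d
    (c1, c2 are typical choices with 0 < c1 < c2 < 1.)
    """
    gtd = grad(x) @ d
    x_new = x + alpha * d
    sufficient_decrease = f(x_new) <= f(x) + c1 * alpha * gtd
    curvature = grad(x_new) @ d >= c2 * gtd
    return sufficient_decrease and curvature
```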


2005 ◽  
Vol 22 (04) ◽  
pp. 529-538 ◽  
Author(s):  
XIA LI ◽  
XIONGDA CHEN

The shortest-residual family of conjugate gradient methods was first proposed by Hestenes and was later studied by Pytlak and by Dai and Yuan. Recently, a no-line-search scheme for conjugate gradient methods was given by Sun and Zhang and by Chen and Sun. In this paper, we show the global convergence of two shortest-residual conjugate gradient methods (FRSR and PRPSR) without line search. In addition, computational results are presented to show that the methods with line search and the methods without line search exhibit similar numerical behavior.
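The core operation behind shortest-residual methods is taking the minimum-norm point of the convex hull (line segment) spanned by two vectors; which two vectors are paired in FRSR and PRPSR is not restated in the abstract, so the sketch below only shows that generic nearest-point computation, with `u` and `v` as placeholder arguments.

```python
import numpy as np

def shortest_in_hull(u, v):
    """Return the minimum-norm point lam*u + (1 - lam)*v with lam in [0, 1].

    Minimizing ||lam*u + (1 - lam)*v||^2 over lam gives
    lam* = v^T (v - u) / ||u - v||^2, clipped to [0, 1].
    Shortest-residual CG methods build the search direction from such a
    nearest-point operation; which vectors play the roles of u and v
    depends on the particular variant (FRSR, PRPSR, ...).
    """
    w = u - v
    denom = w @ w
    if denom == 0.0:            # u == v: the hull is a single point
        return np.array(u, dtype=float)
    lam = np.clip((v @ (v - u)) / denom, 0.0, 1.0)
    return lam * u + (1.0 - lam) * v
```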


2013 ◽  
Vol 30 (01) ◽  
pp. 1250043
Author(s):  
LIANG YIN ◽  
XIONGDA CHEN

The conjugate gradient method is widely used in unconstrained optimization, especially for large-scale problems. Recently, Zhang et al. proposed a three-term PRP method (TTPRP) and a three-term HS method (TTHS), both of which guarantee the sufficient descent condition. In this paper, the global convergence of the TTPRP and TTHS methods is studied, with the line search procedure replaced by a fixed stepsize formula. This feature is significant when the line search is expensive in particular applications. In addition, relevant computational results are also presented.
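Assuming the TTPRP and TTHS directions referred to above are the forms commonly attributed to Zhang et al., they can be written down directly; the key point is that either choice yields g_k^T d_k = -||g_k||^2 by construction, so the sufficient descent property holds independently of how the stepsize is chosen.

```python
import numpy as np

def ttprp_direction(g, g_prev, d_prev):
    """Three-term PRP direction as commonly written:
    d = -g + beta*d_prev - theta*y,  y = g - g_prev,
    beta = g^T y / ||g_prev||^2,  theta = g^T d_prev / ||g_prev||^2.
    By construction g^T d = -||g||^2 (sufficient descent)."""
    y = g - g_prev
    denom = g_prev @ g_prev
    beta = (g @ y) / denom
    theta = (g @ d_prev) / denom
    return -g + beta * d_prev - theta * y

def tths_direction(g, g_prev, d_prev):
    """Three-term HS direction: same form, but with denominator
    d_prev^T y in place of ||g_prev||^2; again g^T d = -||g||^2."""
    y = g - g_prev
    denom = d_prev @ y
    beta = (g @ y) / denom
    theta = (g @ d_prev) / denom
    return -g + beta * d_prev - theta * y
```

A quick sanity check is that `g @ ttprp_direction(g, g_prev, d_prev)` equals `-(g @ g)` for any vectors with a nonzero denominator, and likewise for `tths_direction`.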


2014 ◽  
Vol 2014 ◽  
pp. 1-14
Author(s):  
San-Yang Liu ◽  
Yuan-Yuan Huang

This paper investigates a general form of guaranteed descent conjugate gradient methods which satisfies the descent condition $g_k^T d_k \le -\left(1 - \frac{1}{4\theta_k}\right)\|g_k\|^2$ (with $\theta_k > 1/4$) and which is strongly convergent whenever the weak Wolfe line search is fulfilled. Moreover, we present several specific guaranteed descent conjugate gradient methods and give their numerical results for large-scale unconstrained optimization.
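In code, the descent condition above is a single scalar inequality; a minimal check (with illustrative names) might read:

```python
import numpy as np

def guaranteed_descent_ok(g, d, theta):
    """Check g^T d <= -(1 - 1/(4*theta)) * ||g||^2 for some theta > 1/4."""
    assert theta > 0.25, "the condition requires theta > 1/4"
    return g @ d <= -(1.0 - 1.0 / (4.0 * theta)) * (g @ g)
```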

