A Penalized Linear and Nonlinear Combined Conjugate Gradient Method for the Reconstruction of Fluorescence Molecular Tomography

2007 ◽ Vol 2007 ◽ pp. 1-19
Author(s): Shang Shang, Jing Bai, Xiaolei Song, Hongkai Wang, Jaclyn Lau

The conjugate gradient method has proven efficient for nonlinear optimization problems with high-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear and nonlinear conjugate gradient methods through a restart strategy, in order to exploit the advantages of each and compensate for their respective drawbacks. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast, and that it outperforms conventional conjugate gradient-based reconstruction algorithms. It offers an effective approach to reconstructing fluorochrome information for FMT.
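The quadratic-penalty idea can be sketched as follows. The snippet below is a minimal illustration, not the authors' exact algorithm: it assumes a generic linear forward model A x ≈ b, penalizes negative entries of x with a quadratic term weighted by mu, and minimizes the result with a nonlinear conjugate gradient loop that restarts periodically, a crude stand-in for the paper's linear/nonlinear switching strategy. A, b, mu, and the restart interval are illustrative placeholders.

```python
import numpy as np

def penalized_cg(A, b, mu=10.0, restart=20, iters=200, tol=1e-8):
    """Minimize ||Ax - b||^2 + mu * ||min(x, 0)||^2 with restarted nonlinear CG."""
    def obj(x):
        return np.sum((A @ x - b) ** 2) + mu * np.sum(np.minimum(x, 0.0) ** 2)
    def grad(x):
        return 2.0 * A.T @ (A @ x - b) + 2.0 * mu * np.minimum(x, 0.0)

    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for k in range(iters):
        if np.linalg.norm(g) < tol:
            break
        # backtracking (Armijo) line search; stands in for an exact/Wolfe search
        alpha, f0, slope = 1.0, obj(x), g @ d
        while obj(x + alpha * d) > f0 + 1e-4 * alpha * slope and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        # Fletcher-Reeves scalar, reset to steepest descent every `restart` steps
        beta = 0.0 if (k + 1) % restart == 0 else (g_new @ g_new) / (g @ g)
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```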

2018 ◽ Vol 29 (1) ◽ pp. 133
Author(s): Basim A. Hassan, Haneen A. Alashoor

The nonlinear conjugate gradient method is widely used to solve unconstrained optimization problems. In this paper, different versions of nonlinear conjugate gradient methods are developed and their global convergence properties are proved. Numerical results indicate that the proposed methods are very efficient.
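For orientation only, the classical scalar choices that distinguish one nonlinear conjugate gradient "version" from another are sketched below. The specific updates proposed in this paper are not reproduced; these are the standard textbook formulas, differing only in how beta is computed in d_{k+1} = -g_{k+1} + beta_k * d_k.

```python
import numpy as np

def cg_beta(g_new, g, d, variant="FR"):
    """Classical beta formulas for nonlinear CG (illustrative, not the paper's)."""
    y = g_new - g
    if variant == "FR":    # Fletcher-Reeves
        return (g_new @ g_new) / (g @ g)
    if variant == "PRP":   # Polak-Ribiere-Polyak
        return (g_new @ y) / (g @ g)
    if variant == "HS":    # Hestenes-Stiefel
        return (g_new @ y) / (d @ y)
    if variant == "CD":    # Conjugate Descent (Fletcher)
        return (g_new @ g_new) / (-(d @ g))
    raise ValueError(variant)
```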


2020 ◽ Vol 2020 ◽ pp. 1-14
Author(s): Zhan Wang, Pengyuan Li, Xiangrong Li, Hongtruong Pham

Conjugate gradient methods are well-known methods that are widely applied in many practical fields, and the CD conjugate gradient method is one of the classical variants. In this paper, a modified three-term CD-type conjugate gradient algorithm is proposed. Its main features are as follows: (i) a modified three-term CD-type conjugate gradient formula is presented; (ii) the resulting algorithm possesses the sufficient descent property and a trust region property; (iii) the algorithm is globally convergent for general functions under the modified weak Wolfe–Powell (MWWP) line search technique combined with a projection technique. The new algorithm performs well in numerical experiments, showing that the modified three-term CD-type conjugate gradient method is more competitive than the classical CD conjugate gradient method.
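As a hedged illustration of the three-term CD idea, not necessarily the exact formula proposed here, the sketch below combines the classical CD scalar with a correction term chosen so that the new direction satisfies the sufficient descent condition d^T g = -||g||^2 by construction.

```python
import numpy as np

def three_term_cd_direction(g_new, g, d):
    """Illustrative three-term, CD-type search direction.

    beta is the classical CD scalar ||g_new||^2 / (-d^T g); the theta term
    cancels the d contribution so that d_new @ g_new == -(g_new @ g_new),
    i.e. sufficient descent holds regardless of the line search.
    """
    beta = (g_new @ g_new) / (-(d @ g))
    theta = beta * (g_new @ d) / (g_new @ g_new)
    return -g_new + beta * d - theta * g_new
```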


2014 ◽ Vol 2014 ◽ pp. 1-9
Author(s): Shengwei Yao, Xiwen Lu, Bin Qin

The conjugate gradient (CG) method has played a special role in solving large-scale nonlinear optimization problems due to its simplicity and very low memory requirements. In this paper, we propose a new conjugacy condition that is similar to that of Dai and Liao (2001). Based on this condition, a related nonlinear conjugate gradient method is derived. Under some mild conditions, the method is globally convergent for general functions under the strong Wolfe-Powell line search. Numerical experiments show that the proposed method is very robust and efficient.
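For reference, the Dai-Liao (2001) conjugacy condition d_{k+1}^T y_k = -t g_{k+1}^T s_k, with y_k = g_{k+1} - g_k and s_k = x_{k+1} - x_k, leads to the scalar sketched below. The authors' modified condition and the method built on it are defined in the paper and are not reproduced here; t is a user-chosen nonnegative parameter.

```python
import numpy as np

def beta_dai_liao(g_new, g, d, s, t=0.1):
    """Standard Dai-Liao scalar (reference point, not the paper's new formula)."""
    y = g_new - g
    return (g_new @ y - t * (g_new @ s)) / (d @ y)
```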


2014 ◽ Vol 2014 ◽ pp. 1-9
Author(s): Shengwei Yao, Bin Qin

The conjugate gradient method is an efficient method for solving large-scale nonlinear optimization problems. In this paper, we propose a nonlinear conjugate gradient method that can be considered a hybrid of the DL and WYL conjugate gradient methods. The proposed method possesses the sufficient descent property under the Wolfe-Powell line search and is globally convergent for general functions. Our numerical results show that the proposed method is very robust and efficient on the test problems.
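Of the two ingredients being hybridized, the DL scalar is sketched after the Dai-Liao abstract above; the WYL scalar, a modified PRP formula that keeps the coefficient nonnegative, is sketched below. How the two are actually combined is specified in the paper and is not reproduced here.

```python
import numpy as np

def beta_wyl(g_new, g):
    """Wei-Yao-Liu scalar: a PRP-type formula with a rescaled previous gradient."""
    scale = np.linalg.norm(g_new) / np.linalg.norm(g)
    return (g_new @ (g_new - scale * g)) / (g @ g)
```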


Author(s): Samson Akinwale, O. O. Okundalaye

For unconstrained optimization problems, the conjugate gradient method has proved efficient owing to its modest storage requirements and computational cost. In this work, a class of penalty algorithms based on three-term conjugate gradient methods is developed and extended to the solution of unconstrained minimization problems arising in portfolio management, where the objective function is a piecewise quadratic polynomial. Applying the proposed algorithm to selected unconstrained optimization problems yields improvements in the total number of iterations and CPU time, indicating that the algorithm is promising.
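As a purely illustrative sketch of how a constrained portfolio problem can be turned into an unconstrained, piecewise-quadratic penalty objective that a (three-term) conjugate gradient solver could minimize, consider the placeholder formulation below. Sigma, rho, the budget constraint, and the no-short-selling constraint are assumptions; the paper's own model is not reproduced.

```python
import numpy as np

def penalized_portfolio_objective(w, Sigma, rho=100.0):
    """Minimum-variance objective with quadratic penalties on sum(w)=1 and w>=0."""
    budget_violation = np.sum(w) - 1.0
    shorting = np.minimum(w, 0.0)        # nonzero only where a weight is negative
    return (w @ Sigma @ w
            + rho * budget_violation ** 2
            + rho * np.sum(shorting ** 2))
```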


2014 ◽ Vol 989-994 ◽ pp. 2406-2409
Author(s): Ting Feng Li, Zhi Yuan Liu, Zhao Bin Du

In this paper, we introduce an algorithm for solving large-scale box-constrained optimization problems. At each iteration, we first estimate the active set by means of an active-set identification technique. The components of the search direction corresponding to the active set are defined in a simple way, while the remaining components are determined by a nonlinear conjugate gradient method. Under some additional conditions, we show that the algorithm converges globally. Preliminary numerical experiments indicate that the proposed algorithm is practicable and effective on the test problems.
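The general active-set pattern can be sketched as follows. The identification rule and CG update actually used in the paper are not reproduced; the bound tolerance eps, the FR-type scalar, and the final projection onto the box are illustrative choices.

```python
import numpy as np

def box_constrained_direction(x, g, d_prev, g_prev, lo, hi, eps=1e-8):
    """Illustrative active-set + nonlinear CG direction for lo <= x <= hi."""
    # estimate the active set: at a bound with the gradient pushing outward
    active = ((x <= lo + eps) & (g > 0)) | ((x >= hi - eps) & (g < 0))
    free = ~active
    d = np.zeros_like(x)
    d[active] = -g[active]                          # simple choice on the active set
    beta = (g[free] @ g[free]) / max(g_prev[free] @ g_prev[free], eps)  # FR-type scalar
    d[free] = -g[free] + beta * d_prev[free]        # nonlinear CG on the free variables
    return np.clip(x + d, lo, hi) - x               # keep the trial step inside the box
```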

