The Smoothing FR Conjugate Gradient Method for Solving a Kind of Nonsmooth Optimization Problem with l1-Norm

2018, Vol 2018, pp. 1-9
Author(s): Miao Chen, Shou-qiang Du

We study a method for solving a class of nonsmooth optimization problems with the l1-norm, which arises widely in compressed sensing, image processing, and related engineering applications. By a transformation based on absolute value equations, this nonsmooth problem is rewritten as a general unconstrained optimization problem, and the transformed problem is then solved by a smoothing FR (Fletcher-Reeves) conjugate gradient method. Finally, numerical experiments show the effectiveness of the given smoothing FR conjugate gradient method.
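
A minimal sketch of the smoothing idea, not the authors' exact algorithm: each |x_i| is replaced by the smooth surrogate sqrt(x_i^2 + mu^2), and the smoothed unconstrained problem is minimized by Fletcher-Reeves CG. The fixed smoothing parameter mu, the Armijo backtracking step, and the synthetic test data are illustrative assumptions; the paper works through an absolute-value-equation reformulation and manages the smoothing parameter within the method.

```python
import numpy as np

def smoothed_l1(x, mu):
    """Smooth approximation of ||x||_1: sum_i sqrt(x_i^2 + mu^2)."""
    return np.sum(np.sqrt(x * x + mu * mu))

def grad_smoothed_l1(x, mu):
    return x / np.sqrt(x * x + mu * mu)

def fr_cg_smoothing(A, b, lam=0.05, mu=1e-2, tol=1e-6, max_iter=500):
    """Minimize f(x) = 0.5*||Ax-b||^2 + lam*sum_i sqrt(x_i^2+mu^2) by FR CG."""
    f = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + lam * smoothed_l1(x, mu)
    grad = lambda x: A.T @ (A @ x - b) + lam * grad_smoothed_l1(x, mu)
    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:           # safeguard: restart with steepest descent
            d = -g
        # Armijo backtracking along d
        t, f0, slope = 1.0, f(x), g @ d
        while t > 1e-12 and f(x + t * d) > f0 + 1e-4 * t * slope:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves parameter
        d = -g_new + beta * d
        g = g_new
    return x

# Tiny synthetic compressed-sensing-style instance
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = fr_cg_smoothing(A, b)
print("data misfit:", np.linalg.norm(A @ x_hat - b))
```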

2014, Vol 2014, pp. 1-9
Author(s): Shengwei Yao, Xiwen Lu, Bin Qin

The conjugate gradient (CG) method has played a special role in solving large-scale nonlinear optimization problems due to its simplicity and very low memory requirements. In this paper, we propose a new conjugacy condition similar to that of Dai-Liao (2001). Based on this condition, a related nonlinear conjugate gradient method is given. Under some mild conditions, the given method is globally convergent with the strong Wolfe-Powell line search for general functions. Numerical experiments show that the proposed method is robust and efficient.
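
The abstract does not state the new condition explicitly; as background, the sketch below shows the classical Dai-Liao (2001) parameter that the proposed condition is modeled on, together with a numerical check that the resulting direction satisfies the Dai-Liao conjugacy condition d_{k+1}^T y_k = -t * g_{k+1}^T s_k. The weight t and the random test vectors are illustrative.

```python
import numpy as np

def dai_liao_beta(g_new, g_old, d_old, step, t=0.1):
    """Classical Dai-Liao (2001) CG parameter:

        beta = g_{k+1}^T (y_k - t*s_k) / (d_k^T y_k),
        y_k = g_{k+1} - g_k,   s_k = step * d_k,   t > 0.
    """
    y = g_new - g_old
    s = step * d_old
    return g_new @ (y - t * s) / (d_old @ y)

rng = np.random.default_rng(1)
g_old, g_new, d_old = rng.standard_normal((3, 5))
beta = dai_liao_beta(g_new, g_old, d_old, step=0.5, t=0.1)
d_new = -g_new + beta * d_old
y, s = g_new - g_old, 0.5 * d_old
# The direction satisfies the Dai-Liao conjugacy condition exactly:
print(np.isclose(d_new @ y, -0.1 * (g_new @ s)))   # True
```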


Author(s): Fahimeh Abdollahi, M. Fatemi

We propose an effective conjugate gradient method belonging to the class of Dai-Liao methods for solving unconstrained optimization problems. We employ a variant of the modified secant condition and introduce a new conjugate gradient parameter by solving an optimization problem that combines the well-known features of the linear conjugate gradient method with some penalty functions. This new parameter exploits function values as well as gradient information to build the iterations. Our proposed method is globally convergent under mild assumptions. We examine the ability of the method to solve some real-world problems from the image processing field; numerical results show that the proposed method is efficient in the sense of the PSNR test. We also compare our proposed method with some well-known existing algorithms on a collection of CUTEr problems to show its efficiency.
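
For reference, the PSNR used to score the restorations can be computed as follows; this is the standard definition, with the peak value 255 assumed for 8-bit images rather than anything specific to the paper.

```python
import numpy as np

def psnr(clean, restored, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means a better restoration."""
    mse = np.mean((np.asarray(clean, float) - np.asarray(restored, float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Example: heavier noise gives a lower PSNR
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
print(psnr(img, img + rng.normal(0, 5, img.shape)))    # ~34 dB
print(psnr(img, img + rng.normal(0, 25, img.shape)))   # ~20 dB
```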


2019, Vol 2019, pp. 1-9
Author(s): Jiankun Liu, Shouqiang Du

We propose a modified three-term conjugate gradient method with the Armijo line search for solving unconstrained optimization problems. The proposed method possesses the sufficient descent property. Under mild assumptions, the global convergence of the proposed method with the Armijo line search is proved. Owing to its simplicity, low storage, and good convergence properties, the proposed method is used to solve M-tensor systems and a class of nonsmooth optimization problems with the l1-norm. Finally, numerical experiments show the efficiency of the proposed method.
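
The paper's exact three-term formula is not given in the abstract; the sketch below uses the well-known three-term PRP construction of Zhang et al. to illustrate how a third term makes the sufficient descent property g_k^T d_k = -||g_k||^2 hold exactly, independent of the line search.

```python
import numpy as np

def three_term_direction(g_new, g_old, d_old):
    """Three-term direction (illustrative Zhang-Zhou-Li form, not
    necessarily the authors' modified version):

        d+ = -g+ + beta*d - theta*y,
        beta = g+^T y / ||g||^2,  theta = g+^T d / ||g||^2,  y = g+ - g.

    By construction g+^T d+ = -||g+||^2 (sufficient descent).
    """
    y = g_new - g_old
    gg = g_old @ g_old
    beta = (g_new @ y) / gg
    theta = (g_new @ d_old) / gg
    return -g_new + beta * d_old - theta * y

rng = np.random.default_rng(2)
g_old, g_new, d_old = rng.standard_normal((3, 4))
d_new = three_term_direction(g_new, g_old, d_old)
print(np.isclose(g_new @ d_new, -(g_new @ g_new)))   # True: descent holds
```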


2014, Vol 2014, pp. 1-7
Author(s): Min Sun, Jing Liu

Recently, Zhang et al. proposed a sufficient descent Polak-Ribière-Polyak (SDPRP) conjugate gradient method for large-scale unconstrained optimization problems and proved its global convergence in the sense that $\liminf_{k\to\infty}\|\nabla f(x_k)\|=0$ when an Armijo-type line search is used. In this paper, motivated by the line searches proposed by Shi et al. and Zhang et al., we propose two new Armijo-type line searches and show that the SDPRP method has strong convergence in the sense that $\lim_{k\to\infty}\|\nabla f(x_k)\|=0$ under the two new line searches. Numerical results are reported to show the efficiency of the SDPRP method with the new Armijo-type line searches in practical computation.
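
The two new line searches are not reproduced here; as a baseline, a standard Armijo backtracking search (which the paper's variants extend) looks like this. The parameter values are conventional defaults, not the paper's.

```python
import numpy as np

def armijo_step(f, x, d, g, rho=0.5, c=1e-4, t0=1.0, t_min=1e-12):
    """Standard Armijo backtracking: find t with
        f(x + t*d) <= f(x) + c*t*g^T d
    for a descent direction d (g^T d < 0)."""
    f0, slope, t = f(x), g @ d, t0
    while t > t_min and f(x + t * d) > f0 + c * t * slope:
        t *= rho
    return t

# Example on f(x) = ||x||^2 along the steepest-descent direction
f = lambda x: x @ x
x = np.array([1.0, -2.0])
g = 2 * x
print(armijo_step(f, x, -g, g))   # 0.5, which jumps straight to the minimizer
```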


Author(s): B. K. Kannan, Steven N. Kramer

An algorithm for solving nonlinear optimization problems involving discrete, integer, zero-one, and continuous variables is presented. The augmented Lagrange multiplier method, combined with Powell's method and the Fletcher-Reeves conjugate gradient method, is used to solve the optimization problem, where penalties are imposed on the constraints for integer/discrete violations. The use of zero-one variables as a tool for conceptual design optimization is also described with an example. Several case studies are presented to illustrate the practical use of this algorithm, and the results are compared with those obtained by the branch-and-bound algorithm. A comparison is also made between the use of Powell's method (zeroth order) and the conjugate gradient method (first order) in the solution of these mixed-variable optimization problems.
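
As a hedged illustration of penalizing integer/discrete violations (the paper's exact penalty form is not reproduced here), one can charge each discrete variable its squared distance to the nearest allowed value, leaving continuous variables unpenalized:

```python
import numpy as np

def discrete_violation_penalty(x, discrete_sets, weight=10.0):
    """Quadratic penalty for discrete/integer violations (a sketch of the
    idea, not the paper's exact penalty): each variable with an allowed
    set is penalized by its squared distance to the nearest allowed
    value; continuous variables pass None."""
    p = 0.0
    for xi, allowed in zip(x, discrete_sets):
        if allowed is not None:
            nearest = min(allowed, key=lambda v: abs(v - xi))
            p += (xi - nearest) ** 2
    return weight * p

# x[0] must be an integer in {0..5}, x[1] is continuous, x[2] is zero-one
x = np.array([2.3, 0.7, 0.9])
sets = [range(6), None, (0, 1)]
print(discrete_violation_penalty(x, sets))   # penalizes 2.3 -> 2 and 0.9 -> 1
```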


2007, Vol 2007, pp. 1-19
Author(s): Shang Shang, Jing Bai, Xiaolei Song, Hongkai Wang, Jaclyn Lau

The conjugate gradient method has been verified to be efficient for large-scale nonlinear optimization problems. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear and nonlinear conjugate gradient methods via a restart strategy, in order to exploit the advantages of both kinds of conjugate gradient methods while compensating for their disadvantages. A quadratic penalty method is adopted to impose a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast, and that it performs better than conventional conjugate-gradient-based reconstruction algorithms. It offers an effective approach to reconstructing fluorochrome information for FMT.
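
To make the penalty idea concrete, here is a hedged sketch of a quadratic penalty enforcing nonnegativity in a generic linear reconstruction model; A, b, and rho stand in for the FMT system matrix, measurements, and penalty weight, and the paper's exact penalized objective may differ.

```python
import numpy as np

def penalized_objective_and_grad(x, A, b, rho):
    """Least-squares data fit plus a quadratic penalty on negative entries:

        F(x) = 0.5*||Ax - b||^2 + 0.5*rho*||min(x, 0)||^2

    The penalty gradient is rho*min(x, 0): zero where x >= 0, so the
    penalty only pushes negative entries back toward zero."""
    r = A @ x - b
    neg = np.minimum(x, 0.0)
    F = 0.5 * (r @ r) + 0.5 * rho * (neg @ neg)
    grad = A.T @ r + rho * neg
    return F, grad

# Tiny usage example with a placeholder 2x2 system
A = np.array([[1.0, 0.0], [1.0, 1.0]])
b = np.array([1.0, 0.5])
print(penalized_objective_and_grad(np.array([0.8, -0.2]), A, b, rho=10.0))
```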


2020, Vol 2020, pp. 1-14
Author(s): Zhan Wang, Pengyuan Li, Xiangrong Li, Hongtruong Pham

Conjugate gradient methods are well-known methods that are widely applied in many practical fields, and the CD (conjugate descent) method is one of the classical variants. In this paper, a modified three-term CD conjugate gradient algorithm is proposed with the following features: (i) a modified three-term CD conjugate gradient formula is presented; (ii) the given algorithm possesses the sufficient descent property and the trust region property; (iii) the algorithm is globally convergent for general functions with the modified weak Wolfe-Powell (MWWP) line search technique and a projection technique. The new algorithm performs well in numerical experiments, showing that the modified three-term CD conjugate gradient method is more competitive than the classical CD conjugate gradient method.
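
The abstract names the two key properties but not the formula. The sketch below is a generic three-term CD-style construction, an assumption rather than the authors' method: classical CD uses beta_k = ||g_{k+1}||^2 / (-d_k^T g_k), and here that denominator is safeguarded so that both the sufficient descent property and the trust region bound hold by design.

```python
import numpy as np

def modified_cd_direction(g_new, g_old, d_old, t=2.0):
    """Illustrative three-term CD-type direction (not the authors' exact
    formula).  With the safeguarded denominator D,

        g+^T d+ = -||g+||^2             (sufficient descent)
        ||d+|| <= (1 + 2/t)*||g+||      (trust region property)
    """
    D = max(t * np.linalg.norm(g_new) * np.linalg.norm(d_old),
            -(d_old @ g_old))                    # CD-style denominator
    return -g_new + ((g_new @ g_new) * d_old - (g_new @ d_old) * g_new) / D

rng = np.random.default_rng(3)
g_old, g_new, d_old = rng.standard_normal((3, 5))
d_new = modified_cd_direction(g_new, g_old, d_old)
print(np.isclose(g_new @ d_new, -(g_new @ g_new)))             # descent
print(np.linalg.norm(d_new) <= 2.0 * np.linalg.norm(g_new))    # bounded
```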

