Multivariate spectral gradient method for unconstrained optimization

2008 ◽  
Vol 201 (1-2) ◽  
pp. 621-630 ◽  
Author(s):  
Le Han ◽  
Gaohang Yu ◽  
Lutai Guan

2021 ◽  
Vol 36 ◽  
pp. 04007
Author(s):  
Gillian Yi Han Woo ◽  
Hong Seng Sim ◽  
Yong Kheng Goh ◽  
Wah June Leong

In this paper, we propose a spectral proximal method for solving sparse optimization problems. Sparse optimization refers to optimization problems involving the ℓ0-norm in the objective or constraints. Previous research showed that the spectral gradient method outperforms other standard unconstrained optimization methods, because it replaces the full-rank matrix with a diagonal matrix, reducing the memory requirement from O(n²) to O(n). Since the ℓ0-norm term is nonconvex and nonsmooth, the problem cannot be solved by standard optimization algorithms. We consider the ℓ0-norm problem with an underdetermined system as its constraint. Using the Lagrangian method, this problem is transformed into an unconstrained optimization problem. A new method, the spectral proximal method, is proposed as a combination of the proximal method and the spectral gradient method, and is applied to the ℓ0-norm unconstrained optimization problem. The code is written in Python to compare the efficiency of the proposed method with some existing methods. The comparison benchmarks are the number of iterations, the number of function calls, and the computational time. Theoretically, the proposed method requires less storage and less computational time.
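The combination described above can be sketched in Python. This is an illustrative reading, not the authors' exact algorithm: a Barzilai–Borwein (spectral) stepsize supplies the O(n) curvature information, and the proximal operator of the ℓ0-norm (hard thresholding) handles the nonsmooth term for a problem of the form min ½‖Ax − b‖² + λ‖x‖₀. The function names and the regularized formulation are assumptions for illustration.

```python
import numpy as np

def hard_threshold(x, t, lam):
    # Proximal operator of lam * ||x||_0 with step t: keep entries
    # whose magnitude exceeds sqrt(2 * t * lam), zero the rest.
    out = x.copy()
    out[x**2 <= 2 * t * lam] = 0.0
    return out

def spectral_proximal(A, b, lam, x0, max_iter=200, tol=1e-8):
    # Sketch of a spectral proximal iteration for
    #     min 0.5 * ||A x - b||^2 + lam * ||x||_0
    # A Barzilai-Borwein (spectral) stepsize stands in for a full
    # quasi-Newton matrix, so memory stays O(n) instead of O(n^2).
    x = x0.copy()
    grad = A.T @ (A @ x - b)
    alpha = 1.0
    for _ in range(max_iter):
        x_new = hard_threshold(x - alpha * grad, alpha, lam)
        grad_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, grad_new - grad
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0  # BB1 stepsize
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x, grad = x_new, grad_new
    return x
```

With A the identity the method reduces to a single hard-thresholding step, which makes the role of the proximal operator easy to see.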


2021 ◽  
Vol 0 (0) ◽  
pp. 0
Author(s):  
Hong Seng Sim ◽  
Chuei Yee Chen ◽  
Wah June Leong ◽  
Jiao Li

This paper proposes a nonmonotone spectral gradient method for solving large-scale unconstrained optimization problems. The spectral parameter is derived from the eigenvalues of an optimally sized memoryless symmetric rank-one matrix obtained under the measure defined as a ratio of the determinant of the updating matrix over its largest eigenvalue. Coupled with a nonmonotone line search strategy where backtracking-type line search is applied selectively, the spectral parameter acts as a stepsize during iterations when no line search is performed and as a milder form of quasi-Newton update when backtracking line search is employed. Convergence properties of the proposed method are established for uniformly convex functions. Extensive numerical experiments are conducted and the results indicate that our proposed spectral gradient method outperforms some standard conjugate-gradient methods.
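The selective-backtracking idea in this abstract can be sketched as follows. This is a generic nonmonotone spectral gradient loop, not the paper's rank-one-based spectral parameter: the spectral (BB) step is tried first, and a backtracking line search is invoked only when the trial point fails a nonmonotone sufficient-decrease test taken over the last M function values (a Grippo-style rule). All names and constants here are illustrative assumptions.

```python
import numpy as np

def nonmonotone_spectral_gradient(f, grad, x0, M=10, max_iter=500, tol=1e-6):
    # Illustrative nonmonotone spectral gradient method: the BB stepsize
    # acts as the trial step, and backtracking is applied selectively,
    # only when the nonmonotone Armijo-type test fails.
    x = x0.copy()
    g = grad(x)
    alpha = 1.0
    f_hist = [f(x)]
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        f_ref = max(f_hist[-M:])          # max over last M values (nonmonotone)
        t = alpha
        x_new = x - t * g
        # Backtrack only if the spectral step fails sufficient decrease.
        while f(x_new) > f_ref - 1e-4 * t * (g @ g):
            t *= 0.5
            x_new = x - t * g
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0  # BB1 spectral stepsize
        x, g = x_new, g_new
        f_hist.append(f(x))
    return x
```

Allowing the reference value to be a maximum over recent iterates, rather than the last value alone, is what lets the raw spectral step be accepted on most iterations even when it temporarily increases f.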

