On Gradient Descent and Coordinate Descent Methods and Their Variants

2020 ◽ Vol 19 (3) ◽ pp. 107–115
Author(s): Sajjadul Bari, Md. Rajib Arefin, Sohana Jahan

This research focuses on unconstrained optimization problems. Among the many methods available for solving such problems, we study gradient descent and coordinate descent methods. Step size plays an important role in optimization, so we perform numerical experiments with gradient and coordinate descent methods under several step-size choices. The efficiency of different variants of the gradient and coordinate descent methods is compared by applying them to loss-function minimization problems.
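As a rough illustration of this kind of experiment, the sketch below (an assumed least-squares setup, not the paper's) compares plain gradient descent with a fixed step 1/L against cyclic coordinate descent with exact coordinate-wise steps 1/L_i:

```python
import numpy as np

# Minimal sketch (not the paper's exact setup): compare gradient descent and
# cyclic coordinate descent on a least-squares loss f(x) = 0.5*||Ax - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
b = rng.standard_normal(50)

def f(x):
    return 0.5 * np.sum((A @ x - b) ** 2)

def grad(x):
    return A.T @ (A @ x - b)

def gradient_descent(step, iters=200):
    x = np.zeros(10)
    for _ in range(iters):
        x -= step * grad(x)
    return f(x)

def coordinate_descent(iters=200):
    # Exact minimization along each coordinate (one common step-size choice):
    # the step 1/L_i uses the coordinate-wise Lipschitz constant L_i = ||A_i||^2.
    x = np.zeros(10)
    col_norms = np.sum(A ** 2, axis=0)
    for _ in range(iters):
        for i in range(10):
            g_i = A[:, i] @ (A @ x - b)
            x[i] -= g_i / col_norms[i]
    return f(x)

L = np.linalg.norm(A, 2) ** 2   # global Lipschitz constant of the gradient
print("GD,  step 1/L  :", gradient_descent(1.0 / L))
print("CD,  steps 1/Li:", coordinate_descent())
```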

2018 ◽ Vol 16 (05) ◽ pp. 741–755
Author(s): Qin Fang, Min Xu, Yiming Ying

The problem of minimizing a separable convex function under linearly coupled constraints arises in various application domains such as economic systems, distributed control, and network flow. The main challenge in solving this problem is that the data size is very large, which makes the usual gradient-based methods infeasible. Recently, Necoara, Nesterov and Glineur [Random block coordinate descent methods for linearly constrained optimization over networks, J. Optim. Theory Appl. 173(1) (2017) 227–254] proposed an efficient randomized coordinate descent method for this type of optimization problem and presented an appealing convergence analysis. In this paper, we develop new techniques for analyzing the convergence of such algorithms, which greatly improve the results presented in the above work. This refined result is achieved by extending Nesterov’s second technique [Efficiency of coordinate descent methods on huge-scale optimization problems, SIAM J. Optim. 22 (2012) 341–362] to general optimization problems with linearly coupled constraints. A novel technique in our analysis is to establish basis vectors for the subspace defined by the linear constraints.
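A minimal sketch of the flavor of method being analyzed, under an assumed toy setup (separable quadratic terms f_i and a single sum constraint; not the cited paper's exact algorithm): a random 2-coordinate step moves along the direction e_i - e_j, so the coupled constraint is preserved by construction at every iteration.

```python
import numpy as np

# Illustrative sketch (assumed setup): minimize a separable convex function
# f(x) = sum_i 0.5 * a_i * x_i^2 subject to the coupled constraint sum_i x_i = c.
rng = np.random.default_rng(1)
n = 100
a = rng.uniform(1.0, 5.0, n)        # coordinate-wise curvatures (Lipschitz constants)
c = 10.0

x = np.full(n, c / n)               # feasible start: sum(x) == c
for _ in range(20000):
    i, j = rng.choice(n, size=2, replace=False)
    gi, gj = a[i] * x[i], a[j] * x[j]      # partial derivatives of f
    # Exact minimization along the feasible direction e_i - e_j: moving mass
    # between coordinates i and j keeps sum(x) unchanged.
    t = (gj - gi) / (a[i] + a[j])
    x[i] += t
    x[j] -= t

print("constraint residual:", np.sum(x) - c)    # stays ~0 throughout
print("objective:", 0.5 * np.sum(a * x * x))
```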


2021 ◽ Vol 5 (3) ◽ pp. 110
Author(s): Shashi Kant Mishra, Predrag Rajković, Mohammad Esmael Samei, Suvra Kanti Chakraborty, Bhagwat Ram, ...

We present an algorithm for solving unconstrained optimization problems based on the q-gradient vector. The main idea used in the algorithm construction is the approximation of the classical gradient by a q-gradient vector. For a convex objective function, the quasi-Fejér convergence of the algorithm is proved. The proposed method does not require the boundedness assumption on any level set. Further, numerical experiments are reported to show the performance of the proposed method.
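A minimal sketch of a q-gradient step, assuming the standard Jackson q-derivative; the fixed step size and the fallback near zero are illustrative choices, not the authors' algorithm:

```python
import numpy as np

# Sketch of q-gradient descent (illustrative, not the paper's exact method).
# The classical partial derivative is replaced by the Jackson q-derivative
#   D_q f(x) = (f(qx) - f(x)) / ((q - 1) * x),  x != 0,
# which tends to f'(x) as q -> 1.
def q_gradient(f, x, q=0.99, eps=1e-12):
    g = np.empty_like(x)
    for i in range(x.size):
        xq = x.copy()
        if abs(x[i]) < eps:
            # q-derivative is undefined at 0; fall back to a small
            # classical difference quotient (an assumed safeguard).
            h = 1e-8
            xq[i] = x[i] + h
            g[i] = (f(xq) - f(x)) / h
        else:
            xq[i] = q * x[i]
            g[i] = (f(xq) - f(x)) / ((q - 1.0) * x[i])
    return g

f = lambda x: (x[0] - 1) ** 2 + 2 * (x[1] + 3) ** 2   # convex test function
x = np.array([5.0, 5.0])
for _ in range(500):
    x -= 0.1 * q_gradient(f, x)     # fixed step; the paper proves quasi-Fejér convergence
print(x)                             # approaches the minimizer (1, -3)
```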


2010 ◽ Vol 2010 ◽ pp. 1–9
Author(s): Ming-Liang Zhang, Yun-Hai Xiao, Dangzhen Zhou

We develop a sufficient descent method for solving large-scale unconstrained optimization problems. At each iteration, the search direction is a linear combination of the gradients at the current and previous steps. An attractive property of this method is that the generated directions are always descent directions. Under some appropriate conditions, we show that the proposed method converges globally. Numerical experiments on some unconstrained minimization problems from the CUTEr library are reported, which illustrate that the proposed method is promising.
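A generic sketch of such a scheme (the mixing coefficient and safeguard below are illustrative assumptions, not the paper's formula): the direction combines the current and previous gradients and is reset to steepest descent whenever the sufficient-descent test fails.

```python
import numpy as np

# Generic sufficient-descent sketch: d_k mixes the current and previous
# gradients, with a safeguard enforcing g_k^T d_k <= -c * ||g_k||^2.
def sufficient_descent(f, grad, x0, c=0.5, beta=0.4, iters=300):
    x, g_prev = x0.copy(), None
    for _ in range(iters):
        g = grad(x)
        d = -g if g_prev is None else -g + beta * g_prev
        if g @ d > -c * (g @ g):    # safeguard: keep d a descent direction
            d = -g
        t = 1.0                     # backtracking (Armijo) line search
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5
        x, g_prev = x + t * d, g
    return x

f = lambda x: (x[0] - 2) ** 2 + 5 * x[1] ** 2
grad = lambda x: np.array([2 * (x[0] - 2), 10 * x[1]])
print(sufficient_descent(f, grad, np.array([8.0, 8.0])))   # -> approx (2, 0)
```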


Author(s): Branislav Ivanov, Predrag S. Stanimirović, Gradimir V. Milovanović, Snežana Djordjević, Ivona Brajević

2021 ◽ Vol 15 ◽ pp. 174830262110311
Author(s): Donghong Zhao, Yonghua Fan, Haoyu Liu, Yafeng Yang

The split Bregman algorithm and the coordinate descent method are efficient tools for solving optimization problems and have proven effective for the total variation model. In this paper we propose an algorithm for the fractional total variation model: the coordinate descent method decomposes the fractional-order minimization problem into scalar subproblems, each of which is then solved with the split Bregman algorithm. Numerical results are presented to demonstrate the superiority of the proposed algorithm.
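The decomposition can be sketched on an assumed scalar subproblem of the form min_x 0.5(x - z)^2 + λ|x| (the paper's fractional-order subproblems are more involved); split Bregman decouples the nonsmooth term through an auxiliary variable d = x.

```python
import numpy as np

# Split Bregman sketch on a 1-D subproblem (assumed form): coordinate descent
# reduces the full model to scalar pieces like  min_x 0.5*(x - z)^2 + lam*|x|,
# and split Bregman alternates three cheap updates.
def shrink(v, t):
    return np.sign(v) * max(abs(v) - t, 0.0)     # soft-thresholding

def split_bregman_scalar(z, lam, mu=1.0, iters=50):
    x, d, b = 0.0, 0.0, 0.0
    for _ in range(iters):
        x = (z + mu * (d - b)) / (1.0 + mu)      # quadratic x-subproblem
        d = shrink(x + b, lam / mu)              # shrinkage d-subproblem
        b += x - d                               # Bregman variable update
    return x

print(split_bregman_scalar(z=2.0, lam=0.5))      # -> soft-threshold(2.0, 0.5) = 1.5
```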


2014 ◽ Vol 8 (1) ◽ pp. 218–221
Author(s): Ping Hu, Zong-yao Wang

We propose a non-monotone line search combination rule for unconstrained optimization problems, establish the corresponding non-monotone line search algorithm, and prove its global convergence. Finally, numerical experiments illustrate the effectiveness of the new algorithm.
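A sketch of a non-monotone Armijo rule in the Grippo–Lampariello–Lucidi style (an assumption; the paper's combination rule is its own): sufficient decrease is tested against the maximum of the last M function values, so individual steps may increase f while the overall sequence still converges.

```python
import numpy as np
from collections import deque

# Non-monotone Armijo line search (GLL-style sketch): the decrease test uses
# max of the last M function values instead of the latest one.
def nonmonotone_gd(f, grad, x0, M=5, sigma=1e-4, iters=200):
    x = x0.copy()
    recent = deque([f(x)], maxlen=M)   # memory of the last M values of f
    for _ in range(iters):
        g = grad(x)
        d = -g
        t, f_ref = 1.0, max(recent)
        while f(x + t * d) > f_ref + sigma * t * (g @ d):
            t *= 0.5
        x = x + t * d
        recent.append(f(x))
    return x

f = lambda x: 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2   # Rosenbrock
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0] ** 2)])
# Slowly approaches the minimizer (1, 1); plain gradient steps are sluggish here.
print(nonmonotone_gd(f, grad, np.array([-1.2, 1.0]), iters=5000))
```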

