General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems

2019, Vol. 73(1), pp. 129-158
Author(s): Zhongming Wu, Min Li
2020, Vol. 30(1), pp. 210-239
Author(s): Shixiang Chen, Shiqian Ma, Anthony Man-Cho So, Tong Zhang

2018, Vol. 2018, pp. 1-9
Author(s): Miao Chen, Shou-qiang Du

We study a method for solving a class of nonsmooth optimization problems involving the l1-norm, which arise widely in compressed sensing, image processing, and related optimization problems with broad engineering applications. Using a transformation based on absolute value equations, this nonsmooth problem is rewritten as a general unconstrained optimization problem, and the transformed problem is solved by a smoothing Fletcher-Reeves (FR) conjugate gradient method. Numerical experiments demonstrate the effectiveness of the proposed smoothing FR conjugate gradient method.
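To make the idea concrete, the following is a minimal sketch of a smoothing FR conjugate gradient method. It assumes a least-squares data term 0.5*||Ax - b||^2 + lam*||x||_1 (a standard compressed-sensing instance; the abstract does not specify the data term) and smooths |x_i| as sqrt(x_i^2 + mu^2) with mu driven to zero, rather than using the paper's exact absolute-value-equation transformation.

```python
import numpy as np

def smoothing_fr_cg(A, b, lam=0.1, mu=0.1, mu_min=1e-8,
                    max_iter=500, tol=1e-6):
    """Sketch: minimize 0.5*||Ax-b||^2 + lam*||x||_1 by applying
    FR conjugate gradients to the smoothed surrogate in which
    |x_i| is replaced by sqrt(x_i^2 + mu^2), shrinking mu."""
    n = A.shape[1]
    x = np.zeros(n)

    def obj(x, mu):
        return 0.5 * np.linalg.norm(A @ x - b)**2 \
               + lam * np.sum(np.sqrt(x**2 + mu**2))

    def grad(x, mu):
        # gradient of the smoothed objective
        return A.T @ (A @ x - b) + lam * x / np.sqrt(x**2 + mu**2)

    g = grad(x, mu)
    d = -g
    for _ in range(max_iter):
        # Armijo backtracking line search along d
        t, fx = 1.0, obj(x, mu)
        while True:
            xn = x + t * d
            if obj(xn, mu) <= fx + 1e-4 * t * (g @ d) or t < 1e-12:
                break
            t *= 0.5
        x = xn
        g_new = grad(x, mu)
        if np.linalg.norm(g_new) <= tol:
            if mu <= mu_min:
                break
            mu *= 0.1              # tighten the smoothing parameter
            g_new = grad(x, mu)
            d = -g_new             # restart with steepest descent
        else:
            beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
            d = -g_new + beta * d
        g = g_new
    return x
```

Restarting with the steepest-descent direction each time mu is reduced is a common safeguard; the names and parameter schedule here are illustrative, not taken from the paper.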


Author(s): A. V. Luita, S. O. Zhilina, V. V. Semenov

In this paper, problems of bi-level convex minimization in a Hilbert space are considered. The bi-level convex minimization problem is to minimize a first convex function over the set of minimizers of a second convex function. This setting has many applications, but the implicit constraints generated by the inner problem make it difficult to derive optimality conditions and construct algorithms. Multilevel optimization problems are formulated similarly; they originate in operations research (optimization with sequentially specified criteria, i.e., lexicographic optimization). Attention is focused on solving the problem with two proximal methods, and the main theoretical results are convergence theorems for these methods in various settings. The first method is obtained by combining the penalty function method with the proximal method. Strong convergence is proved when the function of the outer problem is strongly convex; in the general case, only weak convergence is proved. The second, the so-called proximal-gradient method, combines a variant of the fast proximal-gradient algorithm with the penalty function method. Convergence rates and weak convergence of the proximal-gradient method are proved.
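The following sketch illustrates the general shape of such a proximal-gradient penalty scheme for min f1(x) over argmin f2: at step k, one forward-backward step is taken on f1(x) + sigma_k*f2(x) with a slowly growing penalty sigma_k. It is an illustrative scheme under the assumptions that grad f2 is L2-Lipschitz and the prox of f1 is computable; it is not the authors' exact algorithm, and the toy instance (outer f1(x) = 0.5*||x - c||^2 selecting among least-squares inner minimizers) is hypothetical.

```python
import numpy as np

def prox_grad_penalty(grad_f2, prox_f1, x0, L2, sigma0=1.0, iters=2000):
    """Sketch of a proximal-gradient penalty scheme for the bi-level
    problem  min f1(x)  subject to  x in argmin f2.
    Step k performs one forward-backward step on f1 + sigma_k*f2."""
    x = x0.copy()
    for k in range(iters):
        sigma = sigma0 * np.sqrt(k + 1)   # slow growth so the prox steps sum to infinity
        t = 1.0 / (sigma * L2)            # keeps the forward step on sigma*f2 stable
        x = prox_f1(x - t * sigma * grad_f2(x), t)
    return x

# Toy instance: inner f2(x) = 0.5*||Ax - b||^2 (underdetermined, so many
# minimizers); outer f1(x) = 0.5*||x - c||^2 selects the one closest to c.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))
b = rng.standard_normal(5)
c = np.ones(10)
L2 = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of grad f2
x = prox_grad_penalty(lambda x: A.T @ (A @ x - b),
                      lambda v, t: (v + t * c) / (1 + t),  # prox of t*f1
                      np.zeros(10), L2)
print(np.linalg.norm(A @ x - b))          # inner residual: should be near zero
```

Because the penalty grows while the prox weight t shrinks, the iterates drift into the inner solution set while f1 acts as a selection criterion on it, which is the mechanism the abstract describes.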

