Accelerated Primal-Dual Gradient Descent with Linesearch for Convex, Nonconvex, and Nonsmooth Optimization Problems

2019 ◽  
Vol 99 (2) ◽  
pp. 125-128 ◽  
Author(s):  
S. V. Guminov ◽  
Yu. E. Nesterov ◽  
P. E. Dvurechensky ◽  
A. V. Gasnikov

In this paper, a new variant of accelerated gradient descent is proposed. The proposed method does not require any information about the objective function, uses an exact line search for practical acceleration of convergence, converges according to the well-known lower bounds for both convex and nonconvex objective functions, and possesses primal-dual properties. We also provide a universal version of the method, which converges according to the known lower bounds for both smooth and nonsmooth problems.
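For intuition, here is a minimal sketch of the exact line-search component only, applied to plain steepest descent on a toy quadratic; it is not the full accelerated primal-dual method described above, and the function names, bounds, and problem data are illustrative assumptions.

```python
# A minimal sketch of exact line search within steepest descent (not the full
# accelerated primal-dual scheme of the paper): each step length is chosen by
# minimizing the one-dimensional restriction of f along the descent direction.
import numpy as np
from scipy.optimize import minimize_scalar

def gradient_descent_exact_ls(f, grad, x0, n_iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        d = -grad(x)                                   # steepest-descent direction
        # exact line search: minimize t -> f(x + t*d) over a bounded interval
        t = minimize_scalar(lambda t: f(x + t * d),
                            bounds=(0.0, 10.0), method="bounded").x
        x = x + t * d
    return x

# usage on a simple ill-conditioned quadratic f(x) = 0.5 * x^T A x - b^T x
A = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
f    = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(gradient_descent_exact_ls(f, grad, np.zeros(3)))  # should approach A^{-1} b
```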


2020 ◽  
Vol 65 (4) ◽  
pp. 1800-1806 ◽  
Author(s):  
Yue Wei ◽  
Hao Fang ◽  
Xianlin Zeng ◽  
Jie Chen ◽  
Panos Pardalos

2015 ◽  
Vol 2015 ◽  
pp. 1-9
Author(s):  
Jing Liu ◽  
Huicheng Liu

This paper presents an application of the canonical duality theory to box-constrained nonconvex and nonsmooth optimization problems. Using the recently developed canonical dual transformation method, these very difficult constrained optimization problems in R^n can be converted into canonical dual problems, which can be solved by deterministic methods. The global and local extrema can be identified by the triality theory. Several examples are given to illustrate the applications of the theory presented in the paper.
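As a rough illustration of the canonical dual transformation (applied here to an unconstrained one-dimensional double-well P(x) = a/2 (x^2/2 - lam)^2 - f x, not the box-constrained R^n setting of the paper), the sketch below evaluates the standard canonical dual P^d(s) = -f^2/(2s) - s^2/(2a) - lam*s, solves its stationarity condition, and checks numerically that the positive dual root recovers the global minimizer with zero duality gap; the problem data are arbitrary.

```python
# A minimal illustrative sketch of canonical duality on a 1-D double-well
# (not the paper's box-constrained setting). With xi = x^2/2 and
# W(xi) = a/2*(xi - lam)^2, the dual variable is s = a*(xi - lam) and the
# canonical dual function is P^d(s) = -f^2/(2s) - s^2/(2a) - lam*s.
import numpy as np
from scipy.optimize import brentq

a, lam, f = 1.0, 2.0, 0.5                 # illustrative problem data

P  = lambda x: 0.5 * a * (0.5 * x**2 - lam)**2 - f * x
Pd = lambda s: -f**2 / (2 * s) - s**2 / (2 * a) - lam * s

# Stationarity of the dual: d P^d / d s = f^2/(2 s^2) - s/a - lam = 0.
dPd = lambda s: f**2 / (2 * s**2) - s / a - lam
s_bar = brentq(dPd, 1e-6, 10.0)           # the positive dual root
x_bar = f / s_bar                         # primal point recovered from the dual

# Triality (min-min case): with s_bar > 0, x_bar is the global minimizer.
grid = np.linspace(-4, 4, 100001)
print(x_bar, grid[np.argmin(P(grid))])    # the two values should (approximately) agree
print(P(x_bar), Pd(s_bar))                # zero duality gap: P(x_bar) == P^d(s_bar)
```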


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Ming Huang ◽  
Li-Ping Pang ◽  
Xi-Jun Liang ◽  
Zun-Quan Xia

We study optimization problems involving eigenvalues of symmetric matrices. We present a nonsmooth optimization technique for a class of nonsmooth functions which are semi-infinite maxima of eigenvalue functions. Our strategy uses generalized gradients and 𝒰𝒱-space decomposition techniques suited for the norm and other nonsmooth performance criteria. For the class of max-functions, which possesses the so-called primal-dual gradient structure, we compute smooth trajectories along which certain second-order expansions can be obtained. We also give the first- and second-order derivatives of the primal-dual function in the space of decision variables R^m under some assumptions.
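As a small, generic illustration (not the 𝒰𝒱-decomposition algorithm of the paper), the sketch below evaluates the maximum-eigenvalue function of an affine symmetric matrix map and returns one subgradient obtained from a unit eigenvector of the largest eigenvalue; the matrix data are randomly generated for the example.

```python
# A generic sketch: f(x) = lambda_max(A(x)) for an affine map
# A(x) = A0 + sum_i x_i * A_i. Since lambda_max(A) = max_{||v||=1} v^T A v,
# g_i = v^T A_i v is a subgradient of f at x, where v is a unit eigenvector
# associated with the largest eigenvalue.
import numpy as np

def lambda_max_and_subgradient(A0, As, x):
    A = A0 + sum(xi * Ai for xi, Ai in zip(x, As))     # affine matrix map A(x)
    w, V = np.linalg.eigh(A)                           # eigen-decomposition (symmetric)
    v = V[:, -1]                                       # eigenvector of the largest eigenvalue
    g = np.array([v @ Ai @ v for Ai in As])            # subgradient with respect to x
    return w[-1], g

# usage on a small random symmetric instance
rng = np.random.default_rng(0)
sym = lambda M: 0.5 * (M + M.T)
A0  = sym(rng.standard_normal((4, 4)))
As  = [sym(rng.standard_normal((4, 4))) for _ in range(3)]
print(lambda_max_and_subgradient(A0, As, np.zeros(3)))
```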


2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Hamid Reza Erfanian ◽  
M. H. Noori Skandari ◽  
A. V. Kamyad

We present a new approach, based on the generalized derivative, for solving nonsmooth optimization problems and systems of nonsmooth equations. For this purpose, we introduce a first-order generalized Taylor expansion of nonsmooth functions and use it to replace the nonsmooth function with a smooth one. In other words, the nonsmooth function is approximated by a piecewise linear function based on the generalized derivative. In the next step, we solve a smooth linear optimization problem whose optimal solution is an approximate solution of the main problem. We then apply the results to solving systems of nonsmooth equations. Finally, numerical examples are presented to demonstrate the efficiency of our approach.
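The following sketch illustrates the general idea of replacing a nonsmooth function by a piecewise-linear surrogate and minimizing the surrogate; it is a simplified stand-in, not the authors' generalized-derivative construction, and the test function and grid are arbitrary.

```python
# A generic sketch: approximate a nonsmooth univariate function by a
# piecewise-linear interpolant on a grid and minimize the interpolant;
# its minimizer approximates that of the original nonsmooth problem.
import numpy as np

f = lambda x: np.abs(x - 1.0) + 0.5 * np.abs(x + 2.0)   # nonsmooth test function

grid = np.linspace(-4.0, 4.0, 41)                        # breakpoints of the approximation
vals = f(grid)

# A piecewise-linear interpolant attains its minimum at a breakpoint,
# so minimizing it reduces to picking the best grid node.
i = np.argmin(vals)
print(grid[i], vals[i])                                  # approximate minimizer and value

# refine on a finer grid around the current estimate
fine = np.linspace(grid[i] - 0.5, grid[i] + 0.5, 1001)
print(fine[np.argmin(f(fine))])                          # should approach x* = 1
```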


2018 ◽  
Vol 30 (7) ◽  
pp. 2005-2023 ◽  
Author(s):  
Tomoumi Takase ◽  
Satoshi Oyama ◽  
Masahito Kurihara

We present a comprehensive framework of search methods, such as simulated annealing and batch training, for solving nonconvex optimization problems. These methods search a wider range by gradually decreasing the randomness added to standard gradient descent. The formulation we define on the basis of this framework can be applied directly to neural network training, yielding an effective approach that gradually increases the batch size during training. We also explain why training with large batches degrades generalization performance, which previous studies have not clarified.
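A minimal, framework-free sketch of the "grow the batch size during training" idea on a toy least-squares problem; it illustrates only the schedule, not the full simulated-annealing formulation, and all problem data and hyperparameters are made up.

```python
# Plain SGD on a toy least-squares problem, doubling the batch size each epoch:
# larger batches mean less gradient noise, analogous to lowering the "temperature".
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

w, lr = np.zeros(d), 0.05
batch_size = 8
for epoch in range(20):
    for _ in range(n // batch_size):
        idx = rng.integers(0, n, batch_size)             # sample a mini-batch
        g = X[idx].T @ (X[idx] @ w - y[idx]) / batch_size
        w -= lr * g
    batch_size = min(2 * batch_size, n)                  # grow the batch each epoch

print(np.linalg.norm(w - w_true))                        # should be small
```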

