Reachability of optimal convergence rate estimates for high-order numerical convex optimization methods

2019 ◽  
Vol 484 (6) ◽  
pp. 667-671
Author(s):  
A. V. Gasnikov ◽  
E. A. Gorbunov ◽  
D. A. Kovalev ◽  
A. A. M. Mokhammed ◽  
E. A. Chernousova

The Monteiro-Svaiter accelerated hybrid proximal extragradient method (2013), with one step of Newton's method used at every iteration for the approximate solution of an auxiliary problem, is considered. The Monteiro-Svaiter method is optimal (with respect to the number of gradient and Hessian evaluations of the optimized function) for sufficiently smooth convex optimization problems in the class of methods that use only the gradient and Hessian of the optimized function. An optimal tensor method involving higher derivatives is proposed by replacing the Newton step with a step of Yu. E. Nesterov's recently proposed tensor method (2018) and by using a special generalization of the step size selection condition in the outer accelerated proximal extragradient method. The tensor method with derivatives up to the third order inclusive proves to be fairly practical, since the complexity of its iteration is comparable with that of a Newton iteration. Thus, a constructive solution is obtained for Nesterov's problem (2018) of closing the gap between the tight lower bounds and the overstated upper bounds on the convergence rate of existing tensor methods of order p ≥ 3.
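For context, the building block swapped in for the Newton step is Nesterov's tensor step, which minimizes a regularized p-th order Taylor model of the objective f (a schematic statement; conventions for the regularization constant M vary between papers):

\[
T_{p,M}(x) \;=\; \operatorname*{arg\,min}_{y}\,\Big\{ \Phi_{x,p}(y) + \frac{M}{(p+1)!}\,\lVert y - x\rVert^{p+1} \Big\},
\qquad
\Phi_{x,p}(y) \;=\; \sum_{i=0}^{p} \frac{1}{i!}\, D^{i} f(x)[y-x]^{i}.
\]

For p = 1 this reduces to a gradient step and for p = 2 to a cubically regularized Newton step, which is why the per-iteration cost at p = 3 can remain comparable to Newton's.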

2021 ◽  
Vol 12 (4) ◽  
pp. 98-116
Author(s):  
Noureddine Boukhari ◽  
Fatima Debbat ◽  
Nicolas Monmarché ◽  
Mohamed Slimane

Evolution strategies (ES) are a family of powerful stochastic methods for global optimization that have proved more capable of escaping local optima than many other optimization methods. Many researchers have investigated different versions of the original evolution strategy, with good results on a variety of optimization problems. However, the algorithm's convergence to the global optimum remains only asymptotic. To accelerate the convergence rate, a hybrid approach is proposed that combines ES with the nonlinear simplex method (Nelder-Mead) and uses an adaptive scheme to control when the local search is applied; the authors demonstrate that this combination yields significantly better convergence. The proposed method has been tested on 15 complex benchmark functions, applied to the bi-objective portfolio optimization problem, and compared with other state-of-the-art techniques. Experimental results show that the hybridization improves performance in terms of both solution quality and convergence.
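The hybridization idea can be sketched in a few lines: run a standard ES loop and fire a Nelder-Mead refinement of the incumbent only when progress stalls. The sketch below is a minimal illustration of that scheme, not the authors' algorithm; the stagnation trigger, population sizes, and bounds are all assumptions.

```python
# Minimal ES / Nelder-Mead hybrid sketch (illustrative only).
import numpy as np
from scipy.optimize import minimize

def hybrid_es(f, dim, mu=5, lam=20, sigma=0.3, gens=200, stall_limit=10, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, size=(mu, dim))
    best_x, best_f, stall = None, np.inf, 0
    for _ in range(gens):
        # (mu, lambda)-style reproduction: Gaussian mutation of random parents.
        parents = pop[rng.integers(mu, size=lam)]
        children = parents + sigma * rng.standard_normal((lam, dim))
        fitness = np.array([f(x) for x in children])
        order = np.argsort(fitness)
        pop = children[order[:mu]]
        if fitness[order[0]] < best_f:
            best_f, best_x, stall = fitness[order[0]], children[order[0]], 0
        else:
            stall += 1
        # Adaptive local search: refine the incumbent with Nelder-Mead
        # only after the ES has stagnated for `stall_limit` generations.
        if stall >= stall_limit:
            res = minimize(f, best_x, method="Nelder-Mead")
            if res.fun < best_f:
                best_f, best_x = res.fun, res.x
                pop[0] = res.x  # re-inject the refined point into the population
            stall = 0
    return best_x, best_f

# Example usage on the Rosenbrock function.
rosen = lambda x: sum(100 * (x[1:] - x[:-1]**2)**2 + (1 - x[:-1])**2)
print(hybrid_es(rosen, dim=5))
```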



2019 ◽  
Vol 63 (4) ◽  
pp. 726-737
Author(s):  
Azita Mayeli

In this paper, we introduce a class of nonsmooth nonconvex optimization problems, and we propose to use a local iterative majorization-minimization (MM) algorithm to find an optimal solution. The cost functions in our optimization problems are an extension of convex functions with the MC (minimax-concave) separable penalty previously introduced by Ivan Selesnick. These functions are not convex; therefore, convex optimization methods cannot be applied to prove the existence of an optimal minimum point for them. For our purpose, we use convex analysis tools to first construct a class of convex majorizers, which approximate the value of the non-convex cost function locally, and then use the MM algorithm to prove the existence of a local minimum. Convergence of the algorithm is guaranteed when the iterative points $x^{(k)}$ are obtained in a ball of small radius centred at $x^{(k-1)}$. We prove that the algorithm converges to a stationary point (local minimum) of the cost function when the surrogates are strongly convex.
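As a generic illustration of the MM template (not the paper's specific MC-penalty surrogates), the sketch below majorizes an L-smooth cost by a quadratic upper bound at the current iterate, minimizes the majorizer, and restricts each step to a small ball around the previous point, mirroring the ball condition in the convergence statement; all names and constants are illustrative.

```python
# Generic majorization-minimization (MM) loop, a schematic sketch only.
import numpy as np

def mm(f, grad, L, x0, radius=1.0, iters=100):
    """Minimize f via MM with the quadratic majorizer
       g(y | x) = f(x) + grad(x)^T (y - x) + (L/2) ||y - x||^2,
       restricting each step to a ball of given radius around the iterate."""
    x = x0.copy()
    for _ in range(iters):
        step = -grad(x) / L          # unconstrained minimizer of the majorizer
        n = np.linalg.norm(step)
        if n > radius:               # keep x^{(k)} in a small ball around x^{(k-1)}
            step *= radius / n
        x = x + step
    return x
```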


Symmetry ◽  
2021 ◽  
Vol 13 (2) ◽  
pp. 182
Author(s):  
Kanikar Muangchoo ◽  
Nasser Aedh Alreshidi ◽  
Ioannis K. Argyros

In this paper, we introduce two novel extragradient-like methods for solving variational inequalities in a real Hilbert space. The variational inequality problem is a general mathematical problem in the sense that it unifies several mathematical models, such as optimization problems, Nash equilibrium models, fixed point problems, and saddle point problems. The designed methods are analogous to the previously established two-step extragradient method for solving variational inequality problems in real Hilbert spaces. The proposed iterative methods use a step size rule based on local operator information rather than the operator's Lipschitz constant or any line search procedure. Under mild conditions, such as Lipschitz continuity and monotonicity of the bi-function (including pseudo-monotonicity), strong convergence results are established for the described methods. Finally, we provide numerical experiments to demonstrate the performance and superiority of the designed methods.
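The flavor of such a step size rule can be sketched as follows: a classical two-step (prediction-correction) extragradient iteration whose step size is shrunk using only locally observed operator values. This is a generic sketch of the idea, not the paper's exact rule; the projection operator `proj`, the constant `mu`, and the update formula are assumptions.

```python
# Extragradient step with a local, Lipschitz-free step-size rule (sketch).
import numpy as np

def adaptive_extragradient(F, proj, x0, lam0=1.0, mu=0.4, iters=500):
    x, lam = x0.copy(), lam0
    for _ in range(iters):
        y = proj(x - lam * F(x))       # prediction (first projection) step
        x_new = proj(x - lam * F(y))   # correction (second projection) step
        # Update the step size from local operator information only:
        # lam <= mu * ||x - y|| / ||F(x) - F(y)|| keeps the step admissible
        # without knowing the Lipschitz constant of F.
        d = np.linalg.norm(F(x) - F(y))
        if d > 0:
            lam = min(lam, mu * np.linalg.norm(x - y) / d)
        x = x_new
    return x
```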


10.5772/6235 ◽  
2008 ◽  
Vol 5 (4) ◽  
pp. 39 ◽  
Author(s):  
Bui Trung Thanh ◽  
Manukid Parnichkun

In this paper, a structure-specified mixed H2/H∞ controller design using particle swarm optimization (PSO) is proposed for the balancing control of Bicyrobo, an unstable system subject to many sources of uncertainty: unmodeled dynamics, parameter variations, and external disturbances. Structure-specified mixed H2/H∞ control is a robust and optimal control technique. However, the design process normally leads to a complex, non-convex optimization problem that is difficult to solve with conventional optimization methods. PSO is a useful metaheuristic search method for solving multi-objective and non-convex optimization problems. In the proposed method, PSO searches for the parameters of a structure-specified controller that satisfies the mixed H2/H∞ performance index. Simulation and experimental results show the robustness of the proposed controller compared with a conventional proportional plus derivative (PD) controller, and the efficiency of the proposed algorithm compared with a genetic algorithm (GA).
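A bare-bones PSO loop of the kind used for such controller tuning is sketched below; the controller parameterization and the mixed H2/H∞ cost are abstracted into a generic `cost` function, and all hyperparameters (inertia `w`, acceleration coefficients `c1`, `c2`) are illustrative defaults, not the paper's settings.

```python
# Bare-bones particle swarm optimization (generic sketch).
import numpy as np

def pso(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-10, 10), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))        # particle positions
    v = np.zeros((n, dim))                   # particle velocities
    pbest = x.copy()                         # per-particle best positions
    pbest_f = np.array([cost(p) for p in x])
    g = pbest[np.argmin(pbest_f)]            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        # Velocity blends inertia, attraction to personal best, and to global best.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()
```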


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Nazarii Tupitsa ◽  
Pavel Dvurechensky ◽  
Alexander Gasnikov ◽  
Sergey Guminov

We consider alternating minimization procedures for convex and non-convex optimization problems in which the vector of variables is divided into several blocks, each block being amenable to minimization with respect to its own variables while the other blocks are held constant. In the case of two blocks, we prove a linear convergence rate for an alternating minimization procedure under the Polyak–Łojasiewicz (PL) condition, which can be seen as a relaxation of the strong convexity assumption. Under the strong convexity assumption in the many-blocks setting, we provide an accelerated alternating minimization procedure whose linear convergence rate depends on the square root of the condition number, as opposed to the condition number itself for the non-accelerated method. We also consider the problem of finding an approximate non-negative solution to a linear system of equations Ax = y by alternating minimization of the Kullback–Leibler (KL) divergence between Ax and y.
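For the non-negative linear system, alternating minimization of a KL divergence is classically known to reduce to a multiplicative (EM-type) update. The sketch below shows that classical update under standard assumptions (A and y entrywise non-negative, Ax staying positive); it illustrates the problem being solved, not the paper's accelerated scheme.

```python
# Multiplicative (EM-type) update for a non-negative solution of Ax = y (sketch).
import numpy as np

def kl_nonneg_solve(A, y, iters=500):
    x = np.ones(A.shape[1])               # positive initialization
    col_sums = A.sum(axis=0)              # A^T 1, assumed entrywise positive
    for _ in range(iters):
        Ax = A @ x
        x *= (A.T @ (y / Ax)) / col_sums  # multiplicative KL update
    return x
```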

