Bregman primal–dual first-order method and application to sparse semidefinite programming

Author(s):
Xin Jiang
Lieven Vandenberghe

Abstract: We present a new variant of the Chambolle–Pock primal–dual algorithm with Bregman distances, analyze its convergence, and apply it to the centering problem in sparse semidefinite programming. The novelty in the method is a line search procedure for selecting suitable step sizes. The line search obviates the need for estimating the norm of the constraint matrix and the strong convexity constant of the Bregman kernel. As an application, we discuss the centering problem in large-scale semidefinite programming with sparse coefficient matrices. The logarithmic barrier function for the cone of positive semidefinite completable sparse matrices is used as the distance-generating kernel. For this distance, the complexity of evaluating the Bregman proximal operator is shown to be roughly proportional to the cost of a sparse Cholesky factorization. This is much cheaper than the standard proximal operator with Euclidean distances, which requires an eigenvalue decomposition.
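For orientation, the sketch below shows the classical (Euclidean) Chambolle–Pock iteration on a basis-pursuit toy problem, minimize ||x||_1 subject to Ax = b. It is only a reference point for the abstract: the paper replaces the Euclidean proximal operators with Bregman proximal operators and selects the step sizes by line search, whereas the fixed steps below depend on ||A||, which is precisely the quantity the paper's line search avoids estimating. The problem data are synthetic assumptions.

```python
# Classical Chambolle-Pock for basis pursuit:  min ||x||_1  s.t.  Ax = b.
# The Bregman variant and line search from the paper are NOT reproduced here;
# fixed step sizes with tau*sigma*||A||^2 < 1 are used instead.
import numpy as np

def soft_threshold(v, t):
    # Euclidean prox of t*||.||_1; the paper would use a Bregman prox instead.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def chambolle_pock_basis_pursuit(A, b, n_iter=500):
    m, n = A.shape
    L = np.linalg.norm(A, 2)           # ||A||, needed only because steps are fixed
    tau = sigma = 0.99 / L             # tau * sigma * L^2 < 1
    x = np.zeros(n)
    x_bar = x.copy()
    y = np.zeros(m)
    for _ in range(n_iter):
        y = y + sigma * (A @ x_bar - b)                    # prox of sigma*g*, g*(y) = b^T y
        x_new = soft_threshold(x - tau * (A.T @ y), tau)   # prox of tau*||.||_1
        x_bar = 2.0 * x_new - x                            # extrapolation
        x = x_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true
x_hat = chambolle_pock_basis_pursuit(A, b)
print("primal residual:", np.linalg.norm(A @ x_hat - b))
```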

2019
Vol 20 (2)
pp. 359
Author(s):
Manolo Rodriguez Heredia
Cecilia Orellana Castro
Aurélio Ribeiro Leite Oliveira

This study aims to improve the computation of the search direction in the primal-dual interior point method through preconditioned iterative methods. It uses a hybrid approach that combines the Controlled Cholesky Factorization preconditioner and the Splitting preconditioner. This approach has shown good results; however, both preconditioners have features that reduce their efficiency, such as faults on the diagonal during the Cholesky factorization and excessive memory demands, among others. Thus, modifications to these preconditioners, as well as a new phase change, are proposed in order to improve the performance of the hybrid preconditioner. In the Controlled Cholesky Factorization, the parameters that control the fill-in and the correction of the faults occurring on the diagonal are modified, taking into account the relationship between the factorization components obtained before and after a fault on the diagonal. In the Splitting preconditioner, in turn, a sparse basis is constructed through an appropriate ordering of the columns of the constraint matrix of the optimization problem. In addition, a theoretical result is presented which shows that, with the proposed ordering, the condition number of the normal equations matrix preconditioned with the Splitting preconditioner is uniformly bounded by a quantity that depends only on the original data of the problem and not on the interior point iteration. Numerical experiments with large-scale problems corroborate the robustness and computational efficiency of this approach.
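In this setting the search direction is typically computed by a preconditioned conjugate gradient method applied to the normal equations (A D A^T) dy = r, where D is the diagonal scaling of the current interior point iterate. The sketch below is a generic preconditioned CG loop: the paper's hybrid Controlled Cholesky Factorization / Splitting preconditioner would be passed through the apply_prec argument, while the Jacobi preconditioner and the synthetic data used here are stand-in assumptions.

```python
# Generic preconditioned conjugate gradient for the IPM normal equations (A D A^T) dy = r.
# A Jacobi (diagonal) preconditioner stands in for the paper's hybrid CCF/Splitting
# preconditioner, which would be supplied through `apply_prec`.
import numpy as np

def pcg(matvec, b, apply_prec, tol=1e-8, max_iter=200):
    x = np.zeros_like(b)
    r = b - matvec(x)
    z = apply_prec(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = apply_prec(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 80))
d = rng.uniform(0.1, 10.0, size=80)      # IPM scaling; grows ill-conditioned near the optimum
N = A @ np.diag(d) @ A.T                 # normal equations matrix A D A^T
rhs = rng.standard_normal(30)
jacobi = 1.0 / np.diag(N)
dy = pcg(lambda v: N @ v, rhs, lambda r: jacobi * r)
print("residual:", np.linalg.norm(N @ dy - rhs))
```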


2019
Vol 485 (1)
pp. 15-18
Author(s):
S. V. Guminov
Yu. E. Nesterov
P. E. Dvurechensky
A. V. Gasnikov

In this paper a new variant of accelerated gradient descent is proposed. The proposed method does not require any a priori information about the objective function (such as a Lipschitz constant of its gradient), uses exact line search to accelerate convergence in practice, converges according to the well-known lower bounds for both convex and non-convex objective functions, and possesses primal-dual properties. We also provide a universal version of the method, which converges according to the known lower bounds for both smooth and non-smooth problems.
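Only the exact line-search ingredient mentioned in the abstract is illustrated below, applied to plain (non-accelerated) steepest descent on a synthetic convex quadratic; the accelerated scheme itself, its primal-dual properties, and the universal variant are not reproduced. The point of the sketch is that the step size comes from a one-dimensional minimization rather than from knowledge of smoothness parameters such as a Lipschitz constant.

```python
# Steepest descent with an exact one-dimensional line search (scipy's bounded scalar
# minimizer).  The accelerated method of the paper combines such line searches with
# an extrapolation step, which is omitted here; the data are synthetic.
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent_exact_ls(f, grad, x0, n_iter=50):
    x = x0.astype(float)
    for _ in range(n_iter):
        g = grad(x)
        phi = lambda t: f(x - t * g)                      # objective along -grad
        t_star = minimize_scalar(phi, bounds=(0.0, 10.0), method="bounded").x
        x = x - t_star * g
    return x

# Toy convex quadratic f(x) = 0.5 x^T Q x - c^T x.
rng = np.random.default_rng(2)
M = rng.standard_normal((20, 20))
Q = M @ M.T + np.eye(20)
c = rng.standard_normal(20)
f = lambda x: 0.5 * x @ Q @ x - c @ x
grad = lambda x: Q @ x - c
x_hat = steepest_descent_exact_ls(f, grad, np.zeros(20))
print("gradient norm:", np.linalg.norm(grad(x_hat)))
```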


Author(s):
Jie Guo
Zhong Wan

A new spectral three-term conjugate gradient algorithm based on the quasi-Newton equation is developed for solving large-scale unconstrained optimization problems. It is proved that the search directions in this algorithm always satisfy a sufficient descent condition independent of any line search. Global convergence is established for general objective functions when the strong Wolfe line search is used. Numerical experiments are employed to show its high numerical performance in solving large-scale optimization problems. In particular, the developed algorithm is implemented to solve 100 benchmark test problems from CUTE with dimensions ranging from 1000 to 10,000, in comparison with similar algorithms in the literature. The numerical results demonstrate that our algorithm outperforms the state-of-the-art ones in terms of less CPU time, fewer iterations, and fewer function evaluations.
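The general shape of a three-term conjugate gradient iteration with a Wolfe line search is sketched below. The paper's spectral and quasi-Newton-based coefficient formulas are not reproduced; the Hestenes–Stiefel-type beta and the companion third-term coefficient used here are standard placeholder choices (together they give g^T d = -||g||^2, a sufficient descent condition, whenever the denominator is not clamped), so the code only illustrates the method class.

```python
# Generic three-term conjugate gradient with a Wolfe line search (scipy's line_search).
# The coefficients below are placeholder choices, not the spectral/quasi-Newton
# formulas of the paper.
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def three_term_cg(f, grad, x0, n_iter=200, tol=1e-6):
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(n_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d)[0]   # Wolfe conditions
        if alpha is None:                       # fallback if the line search fails
            alpha = 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y = g_new - g
        denom = d @ y if abs(d @ y) > 1e-12 else 1e-12
        beta = (g_new @ y) / denom              # Hestenes-Stiefel-type coefficient
        gamma = -(g_new @ d) / denom            # third term enforcing descent
        d = -g_new + beta * d + gamma * y
        if g_new @ d >= 0.0:                    # safeguard: restart with steepest descent
            d = -g_new
        x, g = x_new, g_new
    return x

x_hat = three_term_cg(rosen, rosen_der, np.full(10, 1.5))
print("final objective:", rosen(x_hat))
```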


2018
Vol 63 (4)
pp. 1045-1058
Author(s):
Sina Khoshfetrat Pakazad
Anders Hansson
Martin S. Andersen
Anders Rantzer

2009
Author(s):
M. Fernanda P. Costa
Edite M. G. P. Fernandes
Theodore E. Simos
George Psihoyios
Ch. Tsitouras

2011
Vol 1 (3)
pp. 32-46
Author(s):
Minghuang Li
Fusheng Yu

Building a linear fitting model for a given interval-valued data set is challenging, since minimizing the residue function leads to a huge combinatorial problem. To overcome this difficulty, this article proposes a new semidefinite programming-based method for linear fitting of interval-valued data. First, the fitting model is cast as a quadratically constrained quadratic program (QCQP), and then two formulations are derived to obtain a lower bound on the optimal value of the nonconvex QCQP by semidefinite relaxation and Lagrangian relaxation. In many cases, this method solves the fitting problem by giving the exact optimal solution. Even when the lower bound is not the optimal value, it is still a good approximation of the global optimum. Experimental studies on fitting problems of different scales demonstrate the good performance and stability of the method. Furthermore, the proposed method performs very well on relatively large-scale interval-fitting problems.
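The semidefinite relaxation step can be illustrated on a generic nonconvex QCQP: lift x to a matrix block standing for x x^T and relax the rank-one coupling to a positive semidefinite constraint (Shor relaxation). The sketch below uses cvxpy with synthetic data; the interval-fitting model and the Lagrangian relaxation formula of the article are not reproduced.

```python
# Shor semidefinite relaxation of a generic nonconvex QCQP:
#   minimize    x^T P0 x + q0^T x
#   subject to  x^T P1 x + q1^T x + r1 <= 0
# Lift Z ~ [x; 1][x; 1]^T and replace the quadratic terms by traces.  Data are synthetic.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
n = 5
P0 = rng.standard_normal((n, n))
P0 = (P0 + P0.T) / 2                                      # indefinite -> nonconvex objective
q0 = rng.standard_normal(n)
P1, q1, r1 = np.eye(n), np.zeros(n), -1.0                 # constraint ||x||^2 <= 1

Z = cp.Variable((n + 1, n + 1), symmetric=True)           # Z approximates [x; 1][x; 1]^T
X, x = Z[:n, :n], Z[:n, n]
constraints = [Z >> 0,
               Z[n, n] == 1,
               cp.trace(P1 @ X) + q1 @ x + r1 <= 0]
prob = cp.Problem(cp.Minimize(cp.trace(P0 @ X) + q0 @ x), constraints)
prob.solve(solver=cp.SCS)
print("SDP lower bound on the nonconvex QCQP:", prob.value)
```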

