Sum-of-squares chordal decomposition of polynomial matrix inequalities

Author(s): Yang Zheng, Giovanni Fantuzzi

Abstract: We prove decomposition theorems for sparse positive (semi)definite polynomial matrices that can be viewed as sparsity-exploiting versions of the Hilbert–Artin, Reznick, Putinar, and Putinar–Vasilescu Positivstellensätze. First, we establish that a polynomial matrix $P(x)$ with chordal sparsity is positive semidefinite for all $x \in \mathbb{R}^n$ if and only if there exists a sum-of-squares (SOS) polynomial $\sigma(x)$ such that $\sigma P$ is a sum of sparse SOS matrices. Second, we show that setting $\sigma(x) = (x_1^2 + \cdots + x_n^2)^\nu$ for some integer $\nu$ suffices if $P$ is homogeneous and positive definite globally. Third, we prove that if $P$ is positive definite on a compact semialgebraic set $\mathcal{K} = \{x : g_1(x) \ge 0, \ldots, g_m(x) \ge 0\}$ satisfying the Archimedean condition, then $P(x) = S_0(x) + g_1(x)S_1(x) + \cdots + g_m(x)S_m(x)$ for matrices $S_i(x)$ that are sums of sparse SOS matrices. Finally, if $\mathcal{K}$ is not compact or does not satisfy the Archimedean condition, we obtain a similar decomposition for $(x_1^2 + \cdots + x_n^2)^\nu P(x)$ with some integer $\nu \ge 0$ when $P$ and $g_1, \ldots, g_m$ are homogeneous of even degree. Using these results, we find sparse SOS representation theorems for polynomials that are quadratic and correlatively sparse in a subset of variables, and we construct new convergent hierarchies of sparsity-exploiting SOS reformulations for convex optimization problems with large and sparse polynomial matrix inequalities. Numerical examples demonstrate that these hierarchies can have a significantly lower computational complexity than traditional ones.
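
A minimal computational illustration of the decomposition idea at the constant-matrix (degree-zero) level: a positive semidefinite matrix with chordal sparsity splits into a sum of PSD matrices supported on the cliques of its sparsity graph, which is what the sparse SOS summands above generalize to the polynomial setting. The CVXPY sketch below checks this for a hypothetical 3×3 arrow pattern with cliques {0, 1} and {0, 2}; the matrix and pattern are invented for illustration, and this is not the paper's SOS hierarchy.

```python
import numpy as np
import cvxpy as cp

# Arrow sparsity pattern: entries (1,2) and (2,1) are zero.
# The pattern graph is chordal with maximal cliques {0,1} and {0,2}.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])   # PSD toy example

S1 = cp.Variable((3, 3), PSD=True)  # summand supported on clique {0,1}
S2 = cp.Variable((3, 3), PSD=True)  # summand supported on clique {0,2}

constraints = [S1 + S2 == A]
# Zero out every entry of each summand outside its clique.
for i, j in [(0, 2), (2, 0), (1, 2), (2, 1), (2, 2)]:
    constraints.append(S1[i, j] == 0)
for i, j in [(0, 1), (1, 0), (1, 2), (2, 1), (1, 1)]:
    constraints.append(S2[i, j] == 0)

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(prob.status)  # "optimal": a clique-based decomposition exists
print(np.round(S1.value, 3), np.round(S2.value, 3), sep="\n")
```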

1997, Vol. 119 (3), pp. 513–520
Author(s): Tetsuya Iwasaki, Mario A. Rotea

This paper gives a solution to the scaled ℋ∞ optimization problem with constant scalings and output feedback. It is shown that, when the nominal plant transfer matrix has a special rank-one property, this optimization problem is equivalent to a sequence of convex optimization problems involving linear matrix inequalities. These results are demonstrated with a flight control example. The primary contribution of the example is a method for weight selection that is applicable to problems in which ℋ∞ optimization is used as the design tool.
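
For readers unfamiliar with the LMI machinery, the snippet below solves one representative convex problem of this kind: certifying an ℋ∞ norm bound for a fixed system via the bounded real lemma. The system data and the bound γ are invented for illustration; this is a generic LMI feasibility check, not the paper's scaled ℋ∞ synthesis or its flight-control example.

```python
import numpy as np
import cvxpy as cp

# Hypothetical stable system (not the paper's flight-control example).
A = np.array([[-1.0, 1.0], [0.0, -2.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m, p = 2, 1, 1
gamma = 1.0            # candidate H-infinity bound to certify

P = cp.Variable((n, n), symmetric=True)
M = cp.bmat([[A.T @ P + P @ A, P @ B,              C.T],
             [B.T @ P,         -gamma * np.eye(m), D.T],
             [C,               D,                  -gamma * np.eye(p)]])
# Route the LMI through a symmetric variable so the PSD constraint is
# well posed regardless of how CVXPY tracks symmetry of block expressions.
X = cp.Variable((n + m + p, n + m + p), symmetric=True)
eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [X == M,
                   P >> eps * np.eye(n),
                   X << -eps * np.eye(n + m + p)])
prob.solve()
print(prob.status)     # "optimal" certifies ||C(sI - A)^{-1}B + D||_inf < gamma
```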


2021, Vol. 0 (0)
Author(s): Darina Dvinskikh, Alexander Gasnikov

Abstract: We introduce primal and dual stochastic gradient oracle methods for decentralized convex optimization problems. For both the primal and the dual oracle, the proposed methods are optimal in terms of the number of communication steps. However, for all classes of objective, optimality in terms of the number of oracle calls per node holds only up to a logarithmic factor and up to the notion of smoothness. Using a mini-batching technique, we show that the proposed methods with a stochastic oracle can additionally be parallelized at each node. The considered algorithms can be applied to many data science problems and inverse problems.
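
A toy simulation may help fix ideas about the primal setting: each node holds a private objective, alternates stochastic gradient steps with gossip communication through a mixing matrix, and the local iterates reach consensus near the global optimum. The sketch below uses four nodes on a ring with quadratic objectives; the step size, noise level, and mixing matrix are hypothetical choices, and this plain decentralized SGD is only illustrative, not the paper's optimal method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four nodes on a ring; node i privately holds f_i(x) = 0.5 * ||x - b_i||^2,
# so the minimizer of the average objective is the mean of the b_i.
n_nodes, dim, n_iters, lr, noise = 4, 3, 300, 0.1, 0.01
b = rng.normal(size=(n_nodes, dim))        # private data per node
W = np.array([[0.50, 0.25, 0.00, 0.25],    # symmetric, doubly stochastic
              [0.25, 0.50, 0.25, 0.00],    # gossip (mixing) matrix
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.zeros((n_nodes, dim))               # local iterates, one row per node

for _ in range(n_iters):
    g = x - b + noise * rng.normal(size=x.shape)  # stochastic gradient oracle
    x = W @ (x - lr * g)                          # local step + one gossip round

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
print("optimality gap: ", np.linalg.norm(x.mean(axis=0) - b.mean(axis=0)))
```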


Author(s): Stefano Massei

Abstract: Various applications in numerical linear algebra and computer science are related to selecting the $r \times r$ submatrix of maximum volume contained in a given matrix $A \in \mathbb{R}^{n \times n}$. We propose a new greedy algorithm of cost $\mathcal{O}(n)$ for the case where $A$ is symmetric positive semidefinite (SPSD), and we discuss its extension to related optimization problems such as the maximum ratio of volumes. In the second part of the paper, we prove that any SPSD matrix admits a cross approximation built on a principal submatrix whose approximation error is bounded by $(r+1)$ times the error of the best rank-$r$ approximation in the nuclear norm. In the spirit of recent work by Cortinovis and Kressner, we derive deterministic algorithms capable of retrieving a quasi-optimal cross approximation with cost $\mathcal{O}(n^3)$.
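
The greedy selection for SPSD matrices can be sketched as a pivoted-Cholesky-style loop: repeatedly pick the largest diagonal entry of the current Schur complement, the classical greedy heuristic for volume maximization in the SPSD case, and the selected pivots directly yield a cross approximation built on a principal submatrix. The code below is an illustrative reconstruction under these assumptions, not the paper's algorithm verbatim.

```python
import numpy as np

def greedy_spsd_cross(A, r):
    """Greedy pivot selection for an SPSD matrix A: at each step pick the
    largest diagonal entry of the current Schur complement (pivoted-Cholesky
    style) and return the pivots plus the induced cross approximation
    A[:, I] @ inv(A[I, I]) @ A[I, :] in factored form L @ L.T."""
    n = A.shape[0]
    d = np.diag(A).astype(float)      # Schur-complement diagonal
    L = np.zeros((n, r))
    idx = []
    for k in range(r):
        p = int(np.argmax(d))         # greedy pivot
        idx.append(p)
        L[:, k] = (A[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2             # update remaining diagonal
        d[p] = 0.0                    # guard against re-selection
    return idx, L @ L.T

rng = np.random.default_rng(1)
G = rng.normal(size=(50, 5))
A = G @ G.T                           # rank-5 SPSD test matrix
idx, A_hat = greedy_spsd_cross(A, 5)
print("pivots:", idx)
print("error:", np.linalg.norm(A - A_hat))  # near zero at exact rank 5
```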


2021, Vol. 15 (6), pp. 1–20
Author(s): Dongsheng Li, Haodong Liu, Chao Chen, Yingying Zhao, Stephen M. Chu, ...

In collaborative filtering (CF) algorithms, the optimal models are usually learned by globally minimizing the empirical risk averaged over all observed data. However, the global models are often obtained via a performance tradeoff among users/items, i.e., not all users/items are fitted equally well by the global models, owing to the hard non-convex optimization problems in CF algorithms. Ensemble learning can address this issue by learning multiple diverse models, but it usually suffers from efficiency issues on large datasets or with complex algorithms. In this article, we keep the intermediate models obtained during global model learning as snapshot models, and then adaptively combine the snapshot models for individual user-item pairs using a memory-network-based method. Empirical studies on three real-world datasets show that the proposed method can significantly improve accuracy (by up to 15.9% in relative terms) when applied to a variety of existing collaborative filtering methods.
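
A stripped-down sketch of the snapshot idea: train a matrix-factorization model, keep intermediate models as snapshots, and combine their predictions using weights fitted on held-out entries. The synthetic data, the full-gradient training loop, and the global least-squares combination (a stand-in for the paper's per-pair memory network) are all simplifications for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank "ratings" and a train/validation/test split of entries.
n_users, n_items, k = 30, 40, 4
R = rng.normal(size=(n_users, k)) @ rng.normal(size=(k, n_items))
u = rng.random((n_users, n_items))
train, val, test = u < 0.4, (u >= 0.4) & (u < 0.5), u >= 0.5

# Matrix-factorization training; intermediate models are kept as snapshots.
U = 0.1 * rng.normal(size=(n_users, k))
V = 0.1 * rng.normal(size=(n_items, k))
snapshots = []
for epoch in range(200):
    E = train * (U @ V.T - R)                        # error on training entries
    U, V = U - 0.02 * (E @ V), V - 0.02 * (E.T @ U)  # gradient step
    if epoch % 50 == 49:
        snapshots.append(U @ V.T)                    # snapshot model

# Combine snapshots with global weights fitted on validation entries.
Pv = np.stack([S[val] for S in snapshots], axis=1)
w, *_ = np.linalg.lstsq(Pv, R[val], rcond=None)
blend = sum(wi * S for wi, S in zip(w, snapshots))

def rmse(X):
    return np.sqrt(np.mean((X - R)[test] ** 2))

print("last snapshot RMSE:", rmse(snapshots[-1]))
print("blended RMSE:      ", rmse(blend))
```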


Author(s): T. E. Potter, K. D. Willmert, M. Sathyamoorthy

Abstract: Mechanism path-generation problems that use link deformations to improve the design lead to optimization problems involving a nonlinear sum-of-squares objective function subject to a set of linear and nonlinear constraints. Including the deformation analysis makes each objective function evaluation computationally expensive. An optimization method is presented that requires relatively few objective function evaluations. The algorithm, based on the Gauss method for unconstrained problems, is developed as an extension of the Gauss constrained technique for linear constraints, and it revises the Gauss nonlinearly constrained method for quadratic constraints. The derivation of the algorithm, using a Lagrange multiplier approach, is based on the Kuhn–Tucker conditions, so that when the iteration process terminates these conditions are automatically satisfied. Although the technique was developed for mechanism problems, it is applicable to any optimization problem having the form of a sum-of-squares objective function subject to nonlinear constraints.
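
To make the structure concrete, here is a minimal Gauss-Newton iteration for a sum-of-squares objective with a single linear equality constraint, where each step solves the KKT (Lagrange multiplier) system of the linearized subproblem, so the Kuhn-Tucker conditions hold at termination by construction. The residuals and constraint are toy stand-ins, not a mechanism deformation model, and the sketch omits the paper's handling of nonlinear and quadratic constraints.

```python
import numpy as np

# Toy residuals for a sum-of-squares objective ||r(x)||^2 (hypothetical,
# not a mechanism deformation model).
def residuals(x):
    return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

def jacobian(x):
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

a, b = np.array([1.0, 1.0]), 3.0   # single linear constraint: a^T x = b
x = np.array([0.0, 1.0])           # starting point

for _ in range(20):
    r, J = residuals(x), jacobian(x)
    # KKT system of the linearized subproblem:
    #   min ||r + J dx||^2  s.t.  a^T (x + dx) = b
    K = np.block([[J.T @ J, a[:, None]],
                  [a[None, :], np.zeros((1, 1))]])
    rhs = np.concatenate([-J.T @ r, [b - a @ x]])
    sol, *_ = np.linalg.lstsq(K, rhs, rcond=None)
    x = x + sol[:2]                # discard the multiplier sol[2]

print("x =", x)                    # converges to (1, 2) for this toy problem
print("a@x =", a @ x, " ||r||^2 =", residuals(x) @ residuals(x))
```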

