Recovering a perturbation of a matrix polynomial from a perturbation of its first companion linearization

Author(s):  
Andrii Dmytryshyn

Abstract A number of theoretical and computational problems for matrix polynomials are solved by passing to linearizations. Therefore, a perturbation theory that relates perturbations in the linearization to equivalent perturbations in the corresponding matrix polynomial is needed. In this paper, we develop an algorithm that finds which perturbation of the matrix coefficients of a matrix polynomial corresponds to a given perturbation of the entire linearization pencil. Moreover, we find transformation matrices that, via strict equivalence, transform a perturbation of the linearization into the linearization of a perturbed polynomial. For simplicity, we present the results for the first companion linearization, but they can be generalized to a broader class of linearizations.
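As a hedged illustration (a minimal sketch, not the paper's recovery algorithm), the first companion pencil $L(z)=zC_1-C_0$ can be built in NumPy, and one can observe that a perturbation of the coefficients of the polynomial produces a pencil perturbation confined to the first block row, which is the structure the recovery problem exploits. The helper `first_companion` and the block layout below are one common convention, stated here as an assumption:

```python
import numpy as np

def first_companion(P):
    """First companion pencil L(z) = z*C1 - C0 of P(z) = sum P[i] z^i.
    P is a list [P_0, ..., P_s] of n x n arrays (one common convention)."""
    s = len(P) - 1
    n = P[0].shape[0]
    C1 = np.eye(s * n)
    C1[:n, :n] = P[s]                      # leading coefficient in the (1,1) block
    C0 = np.zeros((s * n, s * n))
    for i in range(s):                     # first block row: -P_{s-1}, ..., -P_0
        C0[:n, i*n:(i+1)*n] = -P[s-1-i]
    C0[n:, :-n] = np.eye((s-1) * n)        # subdiagonal identity blocks
    return C1, C0

rng = np.random.default_rng(0)
n, s = 3, 4
P  = [rng.standard_normal((n, n)) for _ in range(s + 1)]
dP = [1e-3 * rng.standard_normal((n, n)) for _ in range(s + 1)]

C1, C0 = first_companion(P)
C1p, C0p = first_companion([Pi + dPi for Pi, dPi in zip(P, dP)])

# The induced pencil perturbation is structured: it lives only in the first
# block row of C0 and the (1,1) block of C1 -- exactly where the coefficients sit.
assert np.allclose((C1p - C1)[n:], 0)
assert np.allclose((C0p - C0)[n:], 0)
```

A general (unstructured) perturbation of the whole pencil does not have this form, which is why mapping it back to coefficient perturbations requires the strict-equivalence transformations the paper constructs.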

2021, Vol. 71 (2), pp. 301-316
Author(s):  
Reshma Sanjhira

Abstract We propose a matrix analogue of a general inverse series relation, with the objective of introducing the generalized Humbert matrix polynomial, the Wilson matrix polynomial, and the Racah matrix polynomial together with their inverse series representations. The matrix polynomials of Kinney, Pincherle, Gegenbauer, Hahn, Meixner-Pollaczek, etc. occur as special cases. It is also shown that the general inverse matrix pair extends several inverse pairs due to John Riordan [An Introduction to Combinatorial Identities, Wiley, 1968].


1990, Vol. 33 (3), pp. 337-366
Author(s):
Harry Dym
Nicholas Young

Let N(λ) be a square matrix polynomial, and suppose det N is a polynomial of degree d. Subject to a certain non-singularity condition, we construct a d × d Hermitian matrix whose signature determines the numbers of zeros of N inside and outside the unit circle. The result generalises a well-known theorem of Schur and Cohn for scalar polynomials. The Hermitian “test matrix” is obtained as the inverse of the Gram matrix of a natural basis in a certain Krein space of rational vector functions associated with N. More complete results, in a somewhat different formulation, have been obtained by Lerer and Tismenetsky by other methods.


Author(s):  
Nikta Shayanfar
Heike Fassbender

The polynomial eigenvalue problem is to find an eigenpair $(\lambda,x) \in (\mathbb{C}\cup \{\infty\}) \times \mathbb{C}^n \backslash \{0\}$ that satisfies $P(\lambda)x=0$, where $P(\lambda)=\sum_{i=0}^s P_i \lambda ^i$ is an $n\times n$ so-called matrix polynomial of degree $s$, the coefficients $P_i$, $i=0,\ldots,s$, are $n\times n$ constant matrices, and $P_s$ is supposed to be nonzero. These eigenvalue problems arise from a variety of physical applications, including acoustic structural coupled systems, fluid mechanics, multiple-input multiple-output systems in control theory, signal processing, and constrained least squares problems. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Such methods convert the eigenvalue problem into a well-studied linear eigenvalue problem while exploiting and preserving the structure and properties of the original eigenvalue problem. Linearizations have been extensively studied with respect to the basis in which the matrix polynomial is expressed. If the matrix polynomial is expressed in a special basis, then it is desirable that its linearization also be expressed in the same basis, because changing the given basis ought to be avoided \cite{H1}. The authors in \cite{ACL} have constructed linearizations for different bases, such as degree-graded bases (including the monomial, Newton and Pochhammer bases), the Bernstein basis and Lagrange bases. This contribution is concerned with polynomial eigenvalue problems in which the matrix polynomial is expressed in the Hermite basis. The Hermite basis is used to represent matrix polynomials that match a series of points and function derivatives at prescribed nodes. In the literature, the linearizations of matrix polynomials of degree $s$ expressed in the Hermite basis consist of matrix pencils with $s+2$ blocks of size $n \times n$.
In other words, additional eigenvalues at infinity had to be introduced, see e.g. \cite{CSAG}. In this research, we try to overcome this difficulty by reducing the size of the linearization. The reduction scheme presented gradually reduces the linearization to its minimal size, making use of ideas from \cite{VMM1}. More precisely, for $n \times n$ matrix polynomials of degree $s$, we present linearizations of smaller size, consisting of $s+1$ and $s$ blocks of $n \times n$ matrices. The structure of the eigenvectors is also discussed.
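The standard linearize-then-solve workflow the abstract describes can be sketched numerically. The example below uses the monomial basis and the first companion pencil (not the Hermite-basis construction of this work) and checks that a computed pencil eigenvalue makes $P(\lambda)$ singular:

```python
import numpy as np

rng = np.random.default_rng(1)
n, s = 3, 2
P = [rng.standard_normal((n, n)) for _ in range(s + 1)]  # P_0, ..., P_s

# First companion pencil z*C1 - C0 (monomial basis), size s*n.
C1 = np.eye(s * n)
C1[:n, :n] = P[s]
C0 = np.zeros((s * n, s * n))
for i in range(s):
    C0[:n, i*n:(i+1)*n] = -P[s-1-i]
C0[n:, :-n] = np.eye((s-1) * n)

# Generalized eigenproblem C0 w = z C1 w; here P_s is (generically) invertible,
# so we may reduce to a standard eigenvalue problem.
z = np.linalg.eigvals(np.linalg.solve(C1, C0))

# Each pencil eigenvalue should (numerically) make P(z) singular.
Pz = sum(P[i] * z[0]**i for i in range(s + 1))
assert np.linalg.svd(Pz, compute_uv=False)[-1] < 1e-6
```

When $P_s$ is singular, the pencil has eigenvalues at infinity, which is exactly the size/infinite-eigenvalue issue the abstract's reduction scheme addresses.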


2021, Vol. 37, pp. 640-658
Author(s):
Eunice Y.S. Chan
Robert M. Corless
Leili Rafiee Sevyeri

We define generalized standard triples $\boldsymbol{X}$, $\boldsymbol{Y}$, and $L(z) = z\boldsymbol{C}_{1} - \boldsymbol{C}_{0}$, where $L(z)$ is a linearization of a regular matrix polynomial $\boldsymbol{P}(z) \in \mathbb{C}^{n \times n}[z]$, in order to use the representation $\boldsymbol{X}(z \boldsymbol{C}_{1}~-~\boldsymbol{C}_{0})^{-1}\boldsymbol{Y}~=~\boldsymbol{P}^{-1}(z)$ which holds except when $z$ is an eigenvalue of $\boldsymbol{P}$. This representation can be used in constructing so-called  algebraic linearizations for matrix polynomials of the form $\boldsymbol{H}(z) = z \boldsymbol{A}(z)\boldsymbol{B}(z) + \boldsymbol{C} \in \mathbb{C}^{n \times n}[z]$ from generalized standard triples of $\boldsymbol{A}(z)$ and $\boldsymbol{B}(z)$. This can be done even if $\boldsymbol{A}(z)$ and $\boldsymbol{B}(z)$ are expressed in differing polynomial bases. Our main theorem is that $\boldsymbol{X}$ can be expressed using the coefficients of the expression $1 = \sum_{k=0}^\ell e_k \phi_k(z)$ in terms of the relevant polynomial basis. For convenience, we tabulate generalized standard triples for orthogonal polynomial bases, the monomial basis, and Newton interpolational bases; for the Bernstein basis; for Lagrange interpolational bases; and for Hermite interpolational bases.
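The resolvent representation $\boldsymbol{X}(z\boldsymbol{C}_1-\boldsymbol{C}_0)^{-1}\boldsymbol{Y}=\boldsymbol{P}^{-1}(z)$ can be checked numerically for the monomial basis. The block selectors below ($X = e_s^{T} \otimes I_n$, $Y = e_1 \otimes I_n$ for the companion layout used here) are one conventional choice, stated as an assumption rather than the paper's tabulated triple:

```python
import numpy as np

rng = np.random.default_rng(2)
n, s = 2, 3
P = [rng.standard_normal((n, n)) for _ in range(s + 1)]  # P_0, ..., P_s

# First companion pencil L(z) = z*C1 - C0 (monomial basis).
C1 = np.eye(s * n)
C1[:n, :n] = P[s]
C0 = np.zeros((s * n, s * n))
for i in range(s):
    C0[:n, i*n:(i+1)*n] = -P[s-1-i]
C0[n:, :-n] = np.eye((s-1) * n)

# Candidate generalized standard triple for this layout (assumed convention):
X = np.zeros((n, s * n)); X[:, -n:] = np.eye(n)   # selects the last block row
Y = np.zeros((s * n, n)); Y[:n, :] = np.eye(n)    # selects the first block column

z = 0.7 + 0.3j  # any z that is not an eigenvalue of P
Pz = sum(P[i] * z**i for i in range(s + 1))
lhs = X @ np.linalg.inv(z * C1 - C0) @ Y
assert np.allclose(lhs, np.linalg.inv(Pz))
```

The identity follows from $L(z)(\Lambda(z)\otimes I)= (e_1\otimes I)P(z)$ with $\Lambda(z)=(z^{s-1},\ldots,z,1)^{T}$: solving for the last block row of $L(z)^{-1}(e_1\otimes I)$ yields $P^{-1}(z)$.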


2019, Vol. 35, pp. 116-155
Author(s):
Biswajit Das
Shreemayee Bora

The complete eigenvalue problem associated with a rectangular matrix polynomial is typically solved via the technique of linearization. This work introduces the concept of generalized linearizations of rectangular matrix polynomials. For a given rectangular matrix polynomial, it also proposes vector spaces of rectangular matrix pencils with the property that almost every pencil is a generalized linearization of the matrix polynomial which can then be used to solve the complete eigenvalue problem associated with the polynomial. The properties of these vector spaces are similar to those introduced in the literature for square matrix polynomials and in fact coincide with them when the matrix polynomial is square. Further, almost every pencil in these spaces can be `trimmed' to form many smaller pencils that are strong linearizations of the matrix polynomial which readily yield solutions of the complete eigenvalue problem for the polynomial. These linearizations are easier to construct and are often smaller than the Fiedler linearizations introduced in the literature for rectangular matrix polynomials. Additionally, a global backward error analysis applied to these linearizations shows that they provide a wide choice of linearizations with respect to which the complete polynomial eigenvalue problem can be solved in a globally backward stable manner.


2021, Vol. 37, pp. 211-246
Author(s):
Peter Lancaster
Ion Zaballa

Many physical problems require the spectral analysis of quadratic matrix polynomials $M\lambda^2+D\lambda +K$, $\lambda \in \mathbb{C}$, with $n \times n$ Hermitian matrix coefficients, $M,\;D,\;K$. In this largely expository paper, we present and discuss canonical forms for these polynomials under the action of both congruence and similarity transformations of a linearization and also $\lambda$-dependent unitary similarity transformations of the polynomial itself. Canonical structures for these processes are clarified, with no restrictions on eigenvalue multiplicities. Thus, we bring together two lines of attack: (a) analytic via direct reduction of the $n \times n$ system itself by $\lambda$-dependent unitary similarity and (b) algebraic via reduction of $2n \times 2n$ symmetric linearizations of the system by either congruence (Section 4) or similarity (Sections 5 and 6) transformations which are independent of the parameter $\lambda$. Some new results are brought to light in the process. Complete descriptions of associated canonical structures (over $\mathbb{R}$ and over $\mathbb{C}$) are provided -- including the two cases of real symmetric coefficients and complex Hermitian coefficients. These canonical structures include the so-called sign characteristic. This notion appears in the literature with different meanings depending on the choice of canonical form. These sign characteristics are studied here and connections between them are clarified. In particular, we consider which of the linearizations reproduce the (intrinsic) signs associated with the analytic (Rellich) theory (Sections 7 and 9).


2018, Vol. 34, pp. 472-499
Author(s):
M. I. Bueno
Madeline Martin
Javier Perez
Alexander Song
Irina Viviano

In the last decade, there has been a continued effort to produce families of strong linearizations of a matrix polynomial $P(\lambda)$, regular and singular, with good properties, such as being companion forms, allowing the recovery of eigenvectors of a regular $P(\lambda)$ in an easy way, allowing the computation of the minimal indices of a singular $P(\lambda)$ in an easy way, etc. As a consequence of this research, families such as the family of Fiedler pencils, the family of generalized Fiedler pencils (GFP), the family of Fiedler pencils with repetition, and the family of generalized Fiedler pencils with repetition (GFPR) were constructed. In particular, one of the goals was to find in these families structured linearizations of structured matrix polynomials. For example, if a matrix polynomial $P(\lambda)$ is symmetric (Hermitian), it is convenient to use linearizations of $P(\lambda)$ that are also symmetric (Hermitian). Both the family of GFP and the family of GFPR contain block-symmetric linearizations of $P(\lambda)$, which are symmetric (Hermitian) when $P(\lambda)$ is. The objective now is to determine which of those structured linearizations have the best numerical properties. The main obstacle for this study is the fact that these pencils are defined implicitly as products of so-called elementary matrices. Recent papers in the literature aimed to solve this problem by providing an explicit block structure for the pencils belonging to the family of Fiedler pencils and any of its further generalizations. In particular, it was shown that all GFP and GFPR, after permuting some block rows and block columns, belong to the family of extended block Kronecker pencils, which are defined explicitly in terms of their block structure. Unfortunately, the permutations that transform a GFP or a GFPR into an extended block Kronecker pencil do not preserve the block-symmetric structure.
Thus, in this paper, the family of block-minimal bases pencils, which is closely related to the family of extended block Kronecker pencils and whose pencils are also defined in terms of their block structure, is considered as a source of canonical forms for block-symmetric pencils. More precisely, four families of block-symmetric pencils are presented which, under some generic nonsingularity conditions, are block-minimal bases pencils and strong linearizations of a matrix polynomial. It is shown that the block-symmetric GFP and GFPR, after some row and column permutations, belong to the union of these four families. Furthermore, it is shown that, when $P(\lambda)$ is a complex matrix polynomial, any block-symmetric GFP and GFPR is permutationally congruent to a pencil in some of these four families. Hence, these four families of pencils provide an alternative but explicit approach to the block-symmetric Fiedler-like pencils existing in the literature.


2016, Vol. 31, pp. 71-86
Author(s):
E. Kokabifar
G.B. Loghmani
Panayiotis Psarrakos

Consider an $n\times n$ matrix polynomial $P(\lambda)$. An upper bound for a spectral norm distance from $P(\lambda)$ to the set of $n \times n$ matrix polynomials that have a given scalar $\mu \in \mathbb{C}$ as a multiple eigenvalue was obtained by Papathanasiou and Psarrakos (2008). This paper concerns a refinement of this result for the case of weakly normal matrix polynomials. A modified method is developed and its efficiency is verified by two illustrative examples. The proposed methodology can also be applied to general matrix polynomials.


Algorithms, 2019, Vol. 12 (9), pp. 187
Author(s):
Hristo N. Djidjev
Georg Hahn
Susan M. Mniszewski
Christian F. A. Negre
Anders M. N. Niklasson

The simulation of the physical movement of multi-body systems at an atomistic level, with forces calculated from a quantum mechanical description of the electrons, motivates a graph partitioning problem studied in this article. Several advanced algorithms relying on evaluations of matrix polynomials have been published in the literature for such simulations. We aim to use a special type of graph partitioning to efficiently parallelize these computations. For this, we create a graph representing the zero–nonzero structure of a thresholded density matrix, and partition that graph into several components. Each separate submatrix (corresponding to each subgraph) is then substituted into the matrix polynomial, and the result for the full matrix polynomial is reassembled at the end from the individual polynomials. This paper starts by introducing a rigorous definition as well as a mathematical justification of this partitioning problem. We assess the performance of several methods to compute graph partitions with respect to both the quality of the partitioning and their runtime.
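The core identity behind the reassembly step (a hedged sketch that ignores the thresholding and partition-boundary details of the actual algorithms): when the sparsity graph splits into disconnected components, the matrix is block diagonal, and a matrix polynomial of it can be evaluated block by block and reassembled exactly:

```python
import numpy as np

def matpoly(A, coeffs):
    """Evaluate p(A) = sum_k coeffs[k] * A^k by Horner's scheme."""
    out = np.zeros_like(A)
    for c in reversed(coeffs):
        out = out @ A + c * np.eye(A.shape[0])
    return out

rng = np.random.default_rng(3)
blocks = [rng.standard_normal((k, k)) for k in (3, 4)]
A = np.zeros((7, 7))
A[:3, :3], A[3:, 3:] = blocks            # two disconnected graph components
coeffs = [1.0, -0.5, 0.25, 0.1]          # p(x) = 1 - 0.5x + 0.25x^2 + 0.1x^3

# Evaluate on the full matrix, and per block, then reassemble.
full = matpoly(A, coeffs)
parts = [matpoly(B, coeffs) for B in blocks]
reassembled = np.zeros_like(A)
reassembled[:3, :3], reassembled[3:, 3:] = parts
assert np.allclose(full, reassembled)
```

In practice the components are only approximately disconnected after thresholding, which is why the quality of the graph partition matters for both accuracy and parallel efficiency.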


2015, Vol. 30, pp. 585-591
Author(s):  
Thomas Cameron

It is well known that the eigenvalues of any unitary matrix lie on the unit circle. The purpose of this paper is to prove that the eigenvalues of any matrix polynomial with unitary coefficients lie inside the annulus A_{1/2,2} := {z ∈ C | 1/2 < |z| < 2}. The foundations of this result rely on an operator version of Rouché's theorem and the intermediate value theorem.
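The annulus bound can be probed numerically. The sketch below (random unitary coefficients via QR, eigenvalues via a monic block companion matrix; the construction is a standard convention, not taken from this paper) checks that all finite eigenvalues satisfy 1/2 < |z| < 2:

```python
import numpy as np

rng = np.random.default_rng(4)
n, s = 4, 3
# Random unitary coefficients U_0, ..., U_s via QR factorization.
U = [np.linalg.qr(rng.standard_normal((n, n))
                  + 1j * rng.standard_normal((n, n)))[0] for _ in range(s + 1)]

# Monic block companion matrix of U_s^{-1} P(z); its eigenvalues are those of P.
# (U_s is unitary, hence invertible, so all s*n eigenvalues are finite.)
B = [-np.linalg.inv(U[s]) @ U[i] for i in range(s)]
C = np.zeros((s * n, s * n), dtype=complex)
for i in range(s):
    C[:n, i*n:(i+1)*n] = B[s-1-i]        # first block row: -U_s^{-1}U_{s-1}, ...
C[n:, :-n] = np.eye((s-1) * n)           # subdiagonal identity blocks

z = np.linalg.eigvals(C)
assert np.all((np.abs(z) > 0.5) & (np.abs(z) < 2.0))
```

One random instance is of course no proof; it merely illustrates the theorem's claim that the spectrum stays in the open annulus.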

