Linearizations for Interpolatory Bases - a Comparison: New Families of Linearizations

2020, Vol 36 (36), pp. 799-833
Author(s): Maria Isabel Bueno Cachadina, Javier Perez, Anthony Akshar, Daria Mileeva, Remy Kassem

One strategy for solving a nonlinear eigenvalue problem $T(\lambda)x=0$ is to solve a polynomial eigenvalue problem (PEP) $P(\lambda)x=0$ that approximates the original problem through interpolation. This PEP is then usually solved by linearization. Because of the polynomial approximation techniques involved, in this context $P(\lambda)$ is expressed in a non-monomial basis; the most frequently used bases are the Chebyshev, Newton, and Lagrange bases. Although a number of linearizations are already available in the literature for matrix polynomials expressed in these bases, new families of linearizations are introduced here because they present the following advantages: 1) they are easy to construct from the matrix coefficients of $P(\lambda)$ when this polynomial is expressed in any of those three bases; 2) their block-structure is given explicitly; 3) it is possible to provide equivalent formulations for all three bases, which allows a natural framework for comparison. Recovery formulas for eigenvectors (when $P(\lambda)$ is regular) and for minimal bases and minimal indices (when $P(\lambda)$ is singular) are also provided. The ultimate goal is to use these families to compare the numerical behavior of the linearizations associated with the same basis (to select the best one) and with the linearizations associated with the other two bases, in order to provide recommendations on which basis to use in each context. This comparison will appear in a subsequent paper.
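For context, one colleague-type pencil already available in the literature for the Chebyshev basis (not one of the new families introduced here) can be sketched as follows. For $P(\lambda)=\sum_{j=0}^{k}P_j T_j(\lambda)$ with $k\geq 2$, where $T_j$ denotes the Chebyshev polynomials of the first kind, one such pencil is, up to scaling conventions,
$$L(\lambda)=\lambda \begin{bmatrix} 2P_k & & & & \\ & 2I_n & & & \\ & & \ddots & & \\ & & & 2I_n & \\ & & & & I_n \end{bmatrix} + \begin{bmatrix} P_{k-1} & P_{k-2}-P_k & P_{k-3} & \cdots & P_0 \\ -I_n & 0 & -I_n & & \\ & \ddots & \ddots & \ddots & \\ & & -I_n & 0 & -I_n \\ & & & -I_n & 0 \end{bmatrix},$$
which satisfies $L(\lambda)\big([T_{k-1}(\lambda),\ldots,T_0(\lambda)]^T \otimes x\big) = e_1 \otimes P(\lambda)x$: the first block row reproduces $P(\lambda)x$ via $2\lambda T_{k-1}=T_k+T_{k-2}$, and the remaining rows encode the three-term recurrence $T_{j+1}(\lambda)=2\lambda T_j(\lambda)-T_{j-1}(\lambda)$.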

Author(s): Nikta Shayanfar, Heike Fassbender

The polynomial eigenvalue problem is to find eigenpairs $(\lambda,x) \in (\mathbb{C}\cup\{\infty\}) \times (\mathbb{C}^n \setminus \{0\})$ satisfying $P(\lambda)x=0$, where $P(\lambda)=\sum_{i=0}^s P_i \lambda^i$ is an $n\times n$ matrix polynomial of degree $s$; the coefficients $P_i$, $i=0,\ldots,s$, are constant $n\times n$ matrices, and $P_s$ is assumed to be nonzero. These eigenvalue problems arise in a variety of physical applications, including acoustic structural coupled systems, fluid mechanics, multiple-input multiple-output systems in control theory, signal processing, and constrained least squares problems. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Such methods convert the eigenvalue problem into a well-studied linear eigenvalue problem while exploiting and preserving the structure and properties of the original problem. Linearizations have been extensively studied with respect to the basis in which the matrix polynomial is expressed. If the matrix polynomial is expressed in a special basis, it is desirable that its linearization be expressed in the same basis, since changing the given basis ought to be avoided \cite{H1}. The authors in \cite{ACL} constructed linearizations for different bases, such as degree-graded bases (including the monomial, Newton, and Pochhammer bases) and the Bernstein and Lagrange bases. This contribution is concerned with polynomial eigenvalue problems in which the matrix polynomial is expressed in the Hermite basis. The Hermite basis is used to represent matrix polynomials that match a series of function values and derivatives at prescribed nodes. In the literature, linearizations of matrix polynomials of degree $s$ expressed in the Hermite basis consist of matrix pencils with $s+2$ blocks of size $n \times n$; in other words, additional eigenvalues at infinity had to be introduced, see e.g. \cite{CSAG}. In this work, we overcome this difficulty by reducing the size of the linearization. The presented reduction scheme gradually reduces the linearization to its minimal size, making use of ideas from \cite{VMM1}. More precisely, for $n \times n$ matrix polynomials of degree $s$, we present linearizations of smaller size, consisting of $s+1$ and $s$ blocks of $n \times n$ matrices. The structure of the eigenvectors is also discussed.
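To make the linearization step concrete, here is a short sketch (ours, not taken from the paper) that builds the standard first companion pencil of $P(\lambda)=\sum_{i=0}^s P_i \lambda^i$ in the monomial basis and solves the resulting generalized eigenvalue problem with SciPy; the helper name companion_pencil is illustrative.

    import numpy as np
    from scipy.linalg import eig

    def companion_pencil(P):
        """First companion linearization of P(lam) = sum_i P[i] * lam**i.

        P is a list [P_0, ..., P_s] of n x n arrays. Returns (A, B) such that
        A v = lam B v, i.e. det(lam*B - A) = 0, exactly when det(P(lam)) = 0
        (plus eigenvalues at infinity when P_s is singular).
        """
        s, n = len(P) - 1, P[0].shape[0]
        A = np.zeros((s * n, s * n))
        B = np.eye(s * n)
        B[:n, :n] = P[s]                      # B = diag(P_s, I, ..., I)
        for i in range(s):                    # first block row: -P_{s-1}, ..., -P_0
            A[:n, i * n:(i + 1) * n] = -P[s - 1 - i]
        A[n:, :(s - 1) * n] += np.eye((s - 1) * n)  # block subdiagonal identities
        return A, B

    # Usage on a random quadratic PEP (s = 2, n = 3):
    rng = np.random.default_rng(0)
    P = [rng.standard_normal((3, 3)) for _ in range(3)]
    A, B = companion_pencil(P)
    lams, V = eig(A, B)
    # A finite eigenvalue lam has pencil eigenvector [lam*x; x], so the
    # eigenvector x of P is recovered from the bottom n entries.
    lam, x = lams[0], V[-3:, 0]
    print(np.linalg.norm((P[0] + lam * P[1] + lam**2 * P[2]) @ x))  # small residual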


2019, Vol 35, pp. 116-155
Author(s): Biswajit Das, Shreemayee Bora

The complete eigenvalue problem associated with a rectangular matrix polynomial is typically solved via the technique of linearization. This work introduces the concept of generalized linearizations of rectangular matrix polynomials. For a given rectangular matrix polynomial, it also proposes vector spaces of rectangular matrix pencils with the property that almost every pencil is a generalized linearization of the matrix polynomial, which can then be used to solve the complete eigenvalue problem associated with the polynomial. The properties of these vector spaces are similar to those introduced in the literature for square matrix polynomials and in fact coincide with them when the matrix polynomial is square. Further, almost every pencil in these spaces can be 'trimmed' to form many smaller pencils that are strong linearizations of the matrix polynomial and readily yield solutions of the complete eigenvalue problem for the polynomial. These linearizations are easier to construct and are often smaller than the Fiedler linearizations introduced in the literature for rectangular matrix polynomials. Additionally, a global backward error analysis shows that these pencils provide a wide choice of linearizations with respect to which the complete polynomial eigenvalue problem can be solved in a globally backward stable manner.


Author(s): Levente Hunyadi, István Vajk

We present a model construction method based on locally fitting polynomial functions to noisy data and building the entire model as a union of regions explained by such polynomial functions. Local fitting is shown to reduce to solving a polynomial eigenvalue problem in which the matrix coefficients are data covariance and approximated noise covariance matrices that capture the distorting effects of noise. By defining the asymmetric distance between two points as the projection of one onto the function fitted to the neighborhood of the other, we use a best weighted cut method to find a proper partitioning of the entire data set into feasible regions. Finally, the partitions are refined using a modified version of the k-planes algorithm.
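For intuition only: in the simplest quadratic case, a local fit of this kind leads to a problem of the form $(A_0 + \mu A_1 + \mu^2 A_2)\theta = 0$, where $A_0$ plays the role of the data covariance and $A_1$, $A_2$ collect the noise covariance terms. The sketch below is a hypothetical instance of that structure (the specific matrices are stand-ins, not the authors' construction), solved through the standard companion linearization.

    import numpy as np
    from scipy.linalg import eig

    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 4))     # noisy observations, 200 samples in R^4
    n = X.shape[1]
    A0 = X.T @ X / len(X)                 # data covariance
    A1 = -np.eye(n)                       # first-order noise term (assumed form)
    A2 = 0.1 * np.eye(n)                  # second-order noise term (assumed form)

    # Linearize: (mu*B - A) v = 0 with B = diag(A2, I), A = [[-A1, -A0], [I, 0]];
    # for v = [mu*theta; theta] the first block row gives (A0 + mu*A1 + mu^2*A2) theta.
    Z, I = np.zeros((n, n)), np.eye(n)
    B = np.block([[A2, Z], [Z, I]])
    A = np.block([[-A1, -A0], [I, Z]])
    mus, V = eig(A, B)
    mu, theta = mus[0], V[n:, 0]          # parameter vector: bottom block
    print(np.linalg.norm((A0 + mu * A1 + mu**2 * A2) @ theta))  # small residual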


2021, Vol 37 (37), pp. 35-71
Author(s): Fernando De Terán, Carla Hernando, Javier Pérez

In the framework of Polynomial Eigenvalue Problems (PEPs), most of the matrix polynomials arising in applications are structured polynomials (namely, (skew-)symmetric, (skew-)Hermitian, (anti-)palindromic, or alternating). The standard way to solve PEPs is by means of linearizations. The most frequently used linearizations belong to general constructions, valid for all matrix polynomials of a fixed degree, known as companion linearizations. It is well known, however, that it is not possible to construct companion linearizations that preserve any of the previous structures for matrix polynomials of even degree. This motivates the search for more general companion forms, in particular companion $\ell$-ifications. In this paper, we present, for the first time, a family of (generalized) companion $\ell$-ifications that preserve any of these structures, for matrix polynomials of degree $k=(2d+1)\ell$. We also show how to construct sparse $\ell$-ifications within this family. Finally, we prove that there are no structured companion quadratifications for quartic matrix polynomials.
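For intuition on the even-degree obstruction, consider the classical symmetric pencil for a symmetric quadratic $P(\lambda)=\lambda^2 M+\lambda C+K$ (a standard example from the literature on structured linearizations, not a construction from this paper):
$$L(\lambda)=\lambda\begin{bmatrix} M & 0 \\ 0 & -K \end{bmatrix}+\begin{bmatrix} C & K \\ K & 0 \end{bmatrix}, \qquad L(\lambda)\begin{bmatrix}\lambda x\\ x\end{bmatrix}=\begin{bmatrix}P(\lambda)x\\ 0\end{bmatrix}.$$
Both coefficients are symmetric whenever $M$, $C$, $K$ are, but this pencil is a linearization only when $0$ is not an eigenvalue of $P$, i.e., when $K$ is nonsingular. Since a companion form must linearize every polynomial of the given degree, such structured pencils do not qualify; this is the obstruction that the companion $\ell$-ifications of this paper circumvent.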


2021, Vol 71 (2), pp. 301-316
Author(s): Reshma Sanjhira

We propose a matrix analogue of a general inverse series relation with the objective of introducing the generalized Humbert matrix polynomial, the Wilson matrix polynomial, and the Racah matrix polynomial together with their inverse series representations. The matrix polynomials of Kinney, Pincherle, Gegenbauer, Hahn, Meixner-Pollaczek, etc. occur as special cases. It is also shown that the general inverse matrix pair extends several inverse pairs due to John Riordan [An Introduction to Combinatorial Identities, Wiley, 1968].
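As a scalar illustration of an inverse series relation (not taken from the paper), the classical binomial inversion pair from Riordan's book reads
$$a_n=\sum_{k=0}^{n}\binom{n}{k}b_k \quad\Longleftrightarrow\quad b_n=\sum_{k=0}^{n}(-1)^{n-k}\binom{n}{k}a_k;$$
the inverse matrix pairs proposed in the paper generalize relations of this type to matrix coefficients.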

