Linearization schemes for Hermite matrix polynomials

Author(s):  
Nikta Shayanfar ◽  
Heike Fassbender

The polynomial eigenvalue problem is to find an eigenpair $(\lambda,x) \in (\mathbb{C}\cup \{\infty\}) \times (\mathbb{C}^n \setminus \{0\})$ satisfying $P(\lambda)x=0$, where $P(\lambda)=\sum_{i=0}^s P_i \lambda ^i$ is an $n\times n$ so-called matrix polynomial of degree $s$, i.e., the coefficients $P_i$, $i=0,\dots,s$, are constant $n\times n$ matrices and $P_s$ is assumed to be nonzero. These eigenvalue problems arise from a variety of physical applications, including acoustic structural coupled systems, fluid mechanics, multiple-input multiple-output systems in control theory, signal processing, and constrained least squares problems. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Such methods convert the eigenvalue problem into a well-studied linear eigenvalue problem while exploiting and preserving the structure and properties of the original problem. Linearizations have been studied extensively with respect to the basis in which the matrix polynomial is expressed. If the matrix polynomial is expressed in a special basis, it is desirable that its linearization be expressed in the same basis, since changing the given basis ought to be avoided \cite{H1}. The authors in \cite{ACL} have constructed linearizations for different bases such as degree-graded ones (including the monomial, Newton, and Pochhammer bases), the Bernstein basis, and the Lagrange basis. This contribution is concerned with polynomial eigenvalue problems in which the matrix polynomial is expressed in the Hermite basis. The Hermite basis is used to represent matrix polynomials designed to match a series of function values and derivatives at prescribed nodes. In the literature, linearizations of matrix polynomials of degree $s$ expressed in the Hermite basis consist of matrix pencils with $s+2$ blocks of size $n \times n$.
In other words, additional eigenvalues at infinity had to be introduced; see, e.g., \cite{CSAG}. In this research, we overcome this difficulty by reducing the size of the linearization. The reduction scheme presented gradually reduces the linearization to its minimal size, making use of ideas from \cite{VMM1}. More precisely, for $n \times n$ matrix polynomials of degree $s$, we present linearizations of smaller size, consisting of $s+1$ and $s$ blocks of $n \times n$ matrices. The structure of the eigenvectors is also discussed.
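For orientation, the standard monomial-basis companion linearization (not the Hermite-basis scheme of this work) can be sketched in a few lines: the pencil $A-\lambda B$ below has $s$ block rows and columns of size $n\times n$ and shares its finite eigenvalues with $P(\lambda)$. The quadratic test problem is an illustrative assumption.

```python
import numpy as np
from scipy.linalg import eig

def companion_pencil(coeffs):
    """Build the first companion pencil (A, B) of
    P(lam) = sum_k coeffs[k] * lam**k, a matrix polynomial of degree
    s = len(coeffs) - 1 with n x n coefficient matrices, so that
    det(A - lam*B) = 0 exactly when det(P(lam)) = 0."""
    s = len(coeffs) - 1
    n = coeffs[0].shape[0]
    A = np.zeros((s * n, s * n), dtype=complex)
    B = np.eye(s * n, dtype=complex)
    for k in range(s - 1):                      # superdiagonal identity blocks
        A[k * n:(k + 1) * n, (k + 1) * n:(k + 2) * n] = np.eye(n)
    for k in range(s):                          # last block row: -P_0, ..., -P_{s-1}
        A[(s - 1) * n:, k * n:(k + 1) * n] = -coeffs[k]
    B[(s - 1) * n:, (s - 1) * n:] = coeffs[s]   # leading coefficient goes into B
    return A, B

# quadratic example: P(lam) = lam^2 * I + diag(2, 3)
P0, P1, P2 = np.diag([2.0, 3.0]), np.zeros((2, 2)), np.eye(2)
A, B = companion_pencil([P0, P1, P2])
evals = eig(A, B, right=False)                  # eigenvalues of P via the pencil
```

An eigenvector of the pencil has the Vandermonde-like block structure $[x;\ \lambda x;\ \dots;\ \lambda^{s-1}x]$, which is why eigenvectors of $P$ can be recovered from a single block of the pencil's eigenvectors.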

2019 ◽  
Vol 35 ◽  
pp. 116-155
Author(s):  
Biswajit Das ◽  
Shreemayee Bora

The complete eigenvalue problem associated with a rectangular matrix polynomial is typically solved via the technique of linearization. This work introduces the concept of generalized linearizations of rectangular matrix polynomials. For a given rectangular matrix polynomial, it also proposes vector spaces of rectangular matrix pencils with the property that almost every pencil is a generalized linearization of the matrix polynomial which can then be used to solve the complete eigenvalue problem associated with the polynomial. The properties of these vector spaces are similar to those introduced in the literature for square matrix polynomials and in fact coincide with them when the matrix polynomial is square. Further, almost every pencil in these spaces can be `trimmed' to form many smaller pencils that are strong linearizations of the matrix polynomial which readily yield solutions of the complete eigenvalue problem for the polynomial. These linearizations are easier to construct and are often smaller than the Fiedler linearizations introduced in the literature for rectangular matrix polynomials. Additionally, a global backward error analysis applied to these linearizations shows that they provide a wide choice of linearizations with respect to which the complete polynomial eigenvalue problem can be solved in a globally backward stable manner.


2020 ◽  
Vol 36 (36) ◽  
pp. 799-833
Author(s):  
Maria Isabel Bueno Cachadina ◽  
Javier Perez ◽  
Anthony Akshar ◽  
Daria Mileeva ◽  
Remy Kassem

One strategy to solve a nonlinear eigenvalue problem $T(\lambda)x=0$ is to solve a polynomial eigenvalue problem (PEP) $P(\lambda)x=0$ that approximates the original problem through interpolation. This PEP is then usually solved by linearization. Because of the polynomial approximation techniques, in this context $P(\lambda)$ is expressed in a non-monomial basis. The bases used most frequently are the Chebyshev basis, the Newton basis and the Lagrange basis. Although a number of linearizations for matrix polynomials expressed in these bases are already available in the literature, new families of linearizations are introduced because they present the following advantages: 1) they are easy to construct from the matrix coefficients of $P(\lambda)$ when this polynomial is expressed in any of those three bases; 2) their block-structure is given explicitly; 3) it is possible to provide equivalent formulations for all three bases, which allows a natural framework for comparison. Also, recovery formulas for eigenvectors (when $P(\lambda)$ is regular) and for minimal bases and minimal indices (when $P(\lambda)$ is singular) are provided. The ultimate goal is to use these families to compare the numerical behavior of the linearizations associated with the same basis (to select the best one) and with the linearizations associated with the other two bases, to provide recommendations on which basis to use in each context. This comparison will appear in a subsequent paper.
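For the scalar case, NumPy already ships the Chebyshev-basis analogue of a companion matrix (the colleague matrix), which illustrates the basic idea behind such linearizations; the block versions for matrix polynomials are what the families above generalize. The polynomial below is an arbitrary illustration.

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

# p(x) = T_3(x) - 0.5 * T_1(x), expressed in the Chebyshev basis
coeffs = np.array([0.0, -0.5, 0.0, 1.0])

# chebcompanion returns the (scaled) colleague matrix: its eigenvalues
# are the roots of p, just as a companion matrix linearizes a
# monomial-basis polynomial.
colleague = Ch.chebcompanion(coeffs)
roots_via_pencil = np.sort(np.linalg.eigvals(colleague))

# analytic roots of the same polynomial, 4x^3 - 3.5x, for comparison
analytic = np.array([-np.sqrt(0.875), 0.0, np.sqrt(0.875)])
```

Working directly in the Chebyshev basis avoids the change of basis to monomials, which can be severely ill-conditioned for high degrees.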


2021 ◽  
Vol 71 (2) ◽  
pp. 301-316
Author(s):  
Reshma Sanjhira

Abstract We propose a matrix analogue of a general inverse series relation with the objective of introducing the generalized Humbert matrix polynomial, the Wilson matrix polynomial, and the Racah matrix polynomial together with their inverse series representations. The matrix polynomials of Kinney, Pincherle, Gegenbauer, Hahn, Meixner-Pollaczek, etc. occur as special cases. It is also shown that the general inverse matrix pair provides the extension to several inverse pairs due to John Riordan [An Introduction to Combinatorial Identities, Wiley, 1968].


Algorithms ◽  
2019 ◽  
Vol 12 (9) ◽  
pp. 187 ◽  
Author(s):  
Hristo N. Djidjev ◽  
Georg Hahn ◽  
Susan M. Mniszewski ◽  
Christian F. A. Negre ◽  
Anders M. N. Niklasson

The simulation of the physical movement of multi-body systems at an atomistic level, with forces calculated from a quantum mechanical description of the electrons, motivates a graph partitioning problem studied in this article. Several advanced algorithms relying on evaluations of matrix polynomials have been published in the literature for such simulations. We aim to use a special type of graph partitioning to efficiently parallelize these computations. For this, we create a graph representing the zero–nonzero structure of a thresholded density matrix, and partition that graph into several components. Each separate submatrix (corresponding to each subgraph) is then substituted into the matrix polynomial, and the result for the full matrix polynomial is reassembled at the end from the individual polynomials. This paper starts by introducing a rigorous definition as well as a mathematical justification of this partitioning problem. We assess the performance of several methods to compute graph partitions with respect to both the quality of the partitioning and their runtime.
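The substitute-and-reassemble step can be illustrated with a toy sketch (hypothetical function names; the method in the paper additionally handles the graph edges cut by the partition, which this sketch avoids by choosing an exactly block-diagonal matrix):

```python
import numpy as np

def eval_poly(H, coeffs):
    """Horner evaluation of p(H) = sum_k coeffs[k] * H**k."""
    I = np.eye(H.shape[0])
    p = coeffs[-1] * I
    for c in reversed(coeffs[:-1]):
        p = H @ p + c * I
    return p

def eval_poly_partitioned(H, coeffs, parts):
    """Evaluate p(H) one subgraph at a time.  `parts` lists index arrays
    partitioning {0, ..., n-1}; each submatrix is substituted into the
    polynomial and the results are reassembled.  Exact only when H has
    no nonzeros between different parts."""
    out = np.zeros_like(H)
    for idx in parts:
        block = np.ix_(idx, idx)
        out[block] = eval_poly(H[block], coeffs)
    return out

# two decoupled components -> the partitioned result matches the full one
H = np.zeros((4, 4))
H[:2, :2] = [[1.0, 0.5], [0.5, 1.0]]
H[2:, 2:] = [[2.0, 0.0], [0.0, 3.0]]
coeffs = [1.0, -2.0, 0.5]                      # p(x) = 1 - 2x + 0.5 x^2
full = eval_poly(H, coeffs)
parted = eval_poly_partitioned(H, coeffs, [np.arange(2), np.arange(2, 4)])
```

The payoff is parallelism: each per-part evaluation is independent, and its cost scales with the (much smaller) subgraph size rather than the full matrix dimension.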


Author(s):  
A. T. Mithun ◽  
M. C. Lineesh

Construction of multiwavelets begins with finding a solution to the multiscaling equation. The solution is known as the multiscaling function. A multiwavelet basis is then constructed from the multiscaling function. Symmetric multiscaling functions make the wavelet basis symmetric. The existence and properties of the multiscaling function depend on the symbol function. Symbol functions are trigonometric matrix polynomials. A trigonometric matrix polynomial can be constructed from a pair of matrices known as a standard pair. The square matrix in the pair and the matrix polynomial have the same spectrum. Our objective is to find necessary and sufficient conditions on standard pairs for the existence of compactly supported, symmetric multiscaling functions. First, necessary as well as sufficient conditions on standard pairs for the existence of symbol functions corresponding to compactly supported multiscaling functions are found. Then, the necessary and sufficient conditions on the class of standard pairs which make the multiscaling function symmetric are derived. Finally, a method to construct a symbol function corresponding to a compactly supported, symmetric multiscaling function from an appropriate standard pair is developed.


The objective of this study is to efficiently resolve a perturbed symmetric eigenvalue problem without solving a completely new eigenvalue problem. When the size of an initial eigenvalue problem is large, solving it multiple times, once for each set of perturbations, can be computationally expensive and undesirable. This type of problem is frequently encountered in the dynamic analysis of mechanical structures. This study deals with a perturbed symmetric eigenvalue problem. It proposes a technique that transforms the perturbed symmetric eigenvalue problem, of a large size, into a symmetric polynomial eigenvalue problem of a much reduced size. To accomplish this, we only need the introduced perturbations, the symmetric positive-definite matrices representing the unperturbed system, and its first eigensolutions. The originality lies in the structure of the obtained formulation, where the contribution of the unknown eigensolutions of the unperturbed system is included. The effectiveness of the proposed method is illustrated with numerical tests. High-quality results, compared to other existing methods that use exact reanalysis, can be obtained in a reduced calculation time, even if the introduced perturbations are very significant.


Mathematics ◽  
2021 ◽  
Vol 9 (14) ◽  
pp. 1600
Author(s):  
Jorge Sastre ◽  
Javier Ibáñez

Recently, two general methods for evaluating matrix polynomials requiring one matrix product less than the Paterson–Stockmeyer method were proposed, where the cost of evaluating a matrix polynomial is given asymptotically by the total number of matrix product evaluations. An analysis of the stability of those methods was given, and the methods have been applied to Taylor-based implementations for computing the exponential, the cosine and the hyperbolic tangent matrix functions. Moreover, a particular example for the evaluation of the matrix exponential Taylor approximation of degree 15 requiring four matrix products was given, whereas the maximum polynomial degree available using the Paterson–Stockmeyer method with four matrix products is 9. Based on this example, a new family of methods for evaluating matrix polynomials more efficiently than the Paterson–Stockmeyer method was proposed, having the potential to achieve a much higher efficiency, i.e., requiring fewer matrix products for evaluating a matrix polynomial of a certain degree, or increasing the available degree for the same cost. However, the difficulty of this family of methods lies in the calculation of the coefficients involved in the evaluation of general matrix polynomials and approximations. In this paper, we provide a general method for evaluating matrix polynomials requiring two matrix products less than the Paterson–Stockmeyer method for degrees higher than 30. Moreover, we provide general methods for evaluating matrix polynomial approximations of degrees 15 and 21 with four and five matrix product evaluations, respectively, whereas the maximum available degrees for the same cost with the Paterson–Stockmeyer method are 9 and 12, respectively. Finally, practical examples for evaluating Taylor approximations of the matrix cosine and the matrix logarithm accurately and efficiently with these new methods are given.
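As context for the costs quoted above, the Paterson–Stockmeyer baseline itself is short to state: split the coefficients into chunks of length $s \approx \sqrt{m}$, precompute $A^2,\dots,A^s$, and run Horner's rule in $A^s$, for roughly $2\sqrt{m}$ matrix products in total. The sketch below is a plain NumPy illustration, not the authors' implementation, and the test matrix and coefficients are arbitrary.

```python
import numpy as np

def paterson_stockmeyer(A, c):
    """Evaluate p(A) = sum_k c[k] * A**k with O(sqrt(m)) matrix products,
    where m = len(c) - 1 is the polynomial degree."""
    m = len(c) - 1
    n = A.shape[0]
    s = max(1, int(np.ceil(np.sqrt(m + 1))))    # chunk length ~ sqrt(m)
    pows = [np.eye(n), A]                       # I, A, A^2, ..., A^s
    for _ in range(2, s + 1):
        pows.append(pows[-1] @ A)
    As = pows[s]

    def chunk(j):                               # sum_i c[j*s + i] * A^i
        Bj = np.zeros_like(A, dtype=float)
        for i in range(s):
            if j * s + i <= m:
                Bj = Bj + c[j * s + i] * pows[i]
        return Bj

    r = m // s                                  # index of the top chunk
    P = chunk(r)
    for j in range(r - 1, -1, -1):              # Horner's rule in A^s
        P = P @ As + chunk(j)
    return P

A = np.array([[0.1, 0.2], [0.3, 0.4]])
c = [1.0, 2.0, -1.0, 0.5, 0.25, -0.3, 0.1]     # arbitrary degree-6 example
P = paterson_stockmeyer(A, c)
```

Only the matrix-matrix products (building the power table and the Horner steps in $A^s$) count toward the asymptotic cost; the chunk sums are cheap scalar-times-matrix operations.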


2020 ◽  
Vol 54 (5) ◽  
pp. 1751-1776
Author(s):  
Robert Altmann ◽  
Marine Froidevaux

We consider PDE eigenvalue problems as they occur in two-dimensional photonic crystal modeling. If the permittivity of the material is frequency-dependent, then the eigenvalue problem becomes nonlinear. In the lossless case, linearization techniques allow an equivalent reformulation as an extended but linear and Hermitian eigenvalue problem, which satisfies a Gårding inequality. For this, known iterative schemes for the matrix case such as the inverse power or the Arnoldi method are extended to the infinite-dimensional case. We prove convergence of the inverse power method on operator level and consider its combination with adaptive mesh refinement, leading to substantial computational speed-ups. For more general photonic crystals, which are described by the Drude–Lorentz model, we propose the direct application of a Newton-type iteration. Assuming some a priori knowledge on the eigenpair of interest, we prove local quadratic convergence of the method. Finally, numerical experiments confirm the theoretical findings of the paper.
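In the matrix case, the inverse power method whose operator-level convergence is proved here reduces to a familiar iteration. A minimal sketch, assuming the eigenvalue of interest is the one closest to zero and using an illustrative diagonal test matrix:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def inverse_power(A, tol=1e-12, maxit=500):
    """Inverse power iteration: converges to the eigenpair of A whose
    eigenvalue is smallest in magnitude (A assumed invertible)."""
    n = A.shape[0]
    lu = lu_factor(A)              # factor once, reuse every iteration
    x = np.ones(n) / np.sqrt(n)
    lam = x @ A @ x
    for _ in range(maxit):
        y = lu_solve(lu, x)        # y = A^{-1} x
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x        # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol * abs(lam_new):
            return lam_new, x
        lam = lam_new
    return lam, x

A = np.diag([1.0, 2.0, 5.0])
lam, x = inverse_power(A)          # converges to the eigenvalue 1
```

In the PDE setting of the paper the linear solve `y = A^{-1} x` becomes the solution of a boundary value problem on each mesh, which is what makes the combination with adaptive mesh refinement attractive.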


2021 ◽  
Vol 37 (37) ◽  
pp. 35-71
Author(s):  
Fernando De Terán ◽  
Carla Hernando ◽  
Javier Pérez

In the framework of Polynomial Eigenvalue Problems (PEPs), most of the matrix polynomials arising in applications are structured polynomials (namely, (skew-)symmetric, (skew-)Hermitian, (anti-)palindromic, or alternating). The standard way to solve PEPs is by means of linearizations. The most frequently used linearizations belong to general constructions, valid for all matrix polynomials of a fixed degree, known as  companion linearizations. It is well known, however, that it is not possible to construct companion linearizations that preserve any of the previous structures for matrix polynomials of even degree. This motivates the search for more general companion forms, in particular companion $\ell$-ifications. In this paper, we present, for the first time, a family of (generalized) companion $\ell$-ifications that preserve any of these structures, for matrix polynomials of degree $k=(2d+1)\ell$. We also show how to construct sparse $\ell$-ifications within this family. Finally, we prove that there are no structured companion quadratifications for quartic matrix polynomials.


Author(s):  
Giovanni Barbarino ◽  
Vanni Noferini

We study the empirical spectral distribution (ESD) for complex [Formula: see text] matrix polynomials of degree [Formula: see text] under relatively mild assumptions on the underlying distributions, thus highlighting universality phenomena. In particular, we assume that the entries of each matrix coefficient of the matrix polynomial have mean zero and finite variance, potentially allowing for distinct distributions for entries of distinct coefficients. We derive the almost sure limit of the ESD in two distinct scenarios: (1) [Formula: see text] with [Formula: see text] constant and (2) [Formula: see text] with [Formula: see text] bounded by [Formula: see text] for some [Formula: see text]; the second result additionally requires that the underlying distributions are continuous and uniformly bounded. Our results are universal in the sense that they depend on the choice of the variances and possibly on [Formula: see text] (if it is kept constant), but not on the underlying distributions. The results can be specialized to specific models by fixing the variances, thus obtaining matrix polynomial analogues of results known for special classes of scalar polynomials, such as Kac, Weyl, elliptic and hyperbolic polynomials.

