matrix polynomial
Recently Published Documents


TOTAL DOCUMENTS

172
(FIVE YEARS 35)

H-INDEX

13
(FIVE YEARS 2)

2021 ◽  
Vol 37 ◽  
pp. 640-658
Author(s):  
Eunice Y.S. Chan ◽  
Robert M. Corless ◽  
Leili Rafiee Sevyeri

We define generalized standard triples $\boldsymbol{X}$, $\boldsymbol{Y}$, and $L(z) = z\boldsymbol{C}_{1} - \boldsymbol{C}_{0}$, where $L(z)$ is a linearization of a regular matrix polynomial $\boldsymbol{P}(z) \in \mathbb{C}^{n \times n}[z]$, in order to use the representation $\boldsymbol{X}(z \boldsymbol{C}_{1}~-~\boldsymbol{C}_{0})^{-1}\boldsymbol{Y}~=~\boldsymbol{P}^{-1}(z)$ which holds except when $z$ is an eigenvalue of $\boldsymbol{P}$. This representation can be used in constructing so-called  algebraic linearizations for matrix polynomials of the form $\boldsymbol{H}(z) = z \boldsymbol{A}(z)\boldsymbol{B}(z) + \boldsymbol{C} \in \mathbb{C}^{n \times n}[z]$ from generalized standard triples of $\boldsymbol{A}(z)$ and $\boldsymbol{B}(z)$. This can be done even if $\boldsymbol{A}(z)$ and $\boldsymbol{B}(z)$ are expressed in differing polynomial bases. Our main theorem is that $\boldsymbol{X}$ can be expressed using the coefficients of the expression $1 = \sum_{k=0}^\ell e_k \phi_k(z)$ in terms of the relevant polynomial basis. For convenience, we tabulate generalized standard triples for orthogonal polynomial bases, the monomial basis, and Newton interpolational bases; for the Bernstein basis; for Lagrange interpolational bases; and for Hermite interpolational bases.
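The tabulated triples themselves are in the paper; as a minimal numerical sketch, the representation $\boldsymbol{X}(z\boldsymbol{C}_{1} - \boldsymbol{C}_{0})^{-1}\boldsymbol{Y} = \boldsymbol{P}^{-1}(z)$ can be checked for the familiar first companion linearization in the monomial basis. The block positions of $\boldsymbol{X}$ and $\boldsymbol{Y}$ below follow one common convention and are an assumption, not the paper's tables:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2  # block size

# Matrix polynomial P(z) = A2 z^2 + A1 z + A0 in the monomial basis.
A0, A1, A2 = (rng.standard_normal((n, n)) for _ in range(3))

# First companion linearization L(z) = z*C1 - C0.
I, Z = np.eye(n), np.zeros((n, n))
C1 = np.block([[A2, Z], [Z, I]])
C0 = np.block([[-A1, -A0], [I, Z]])

# Standard triple for this convention: X picks the last block row,
# Y injects into the first block column.
X = np.block([Z, I])
Y = np.vstack([I, Z])

z = 0.7 + 0.3j  # any point that is not an eigenvalue of P
P = A2 * z**2 + A1 * z + A0
lhs = X @ np.linalg.solve(z * C1 - C0, Y)
print(np.allclose(lhs, np.linalg.inv(P)))  # True
```

Solving $L(z)v = \boldsymbol{Y}$ gives $v = [zP^{-1}; P^{-1}]$ blockwise, so the last block row recovers $\boldsymbol{P}^{-1}(z)$.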


Author(s):  
Giovanni Barbarino ◽  
Vanni Noferini

We study the empirical spectral distribution (ESD) for complex [Formula: see text] matrix polynomials of degree [Formula: see text] under relatively mild assumptions on the underlying distributions, thus highlighting universality phenomena. In particular, we assume that the entries of each matrix coefficient of the matrix polynomial have mean zero and finite variance, potentially allowing for distinct distributions for entries of distinct coefficients. We derive the almost sure limit of the ESD in two distinct scenarios: (1) [Formula: see text] with [Formula: see text] constant and (2) [Formula: see text] with [Formula: see text] bounded by [Formula: see text] for some [Formula: see text]; the second result additionally requires that the underlying distributions are continuous and uniformly bounded. Our results are universal in the sense that they depend on the choice of the variances and possibly on [Formula: see text] (if it is kept constant), but not on the underlying distributions. The results can be specialized to specific models by fixing the variances, thus obtaining matrix polynomial analogues of results known for special classes of scalar polynomials, such as Kac, Weyl, elliptic and hyperbolic polynomials.
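As an illustrative simulation (not the paper's analysis), the ESD in scenario (1) can be sampled by drawing one random matrix polynomial and computing its nk eigenvalues through a companion pencil; the sizes below are hypothetical:

```python
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(1)
n, k = 30, 3  # matrix size and polynomial degree (illustrative)

# i.i.d. mean-zero, unit-variance entries; in the universality setting
# each coefficient could use its own variance instead.
A = [rng.standard_normal((n, n)) for _ in range(k + 1)]  # A[0]..A[k]

# Companion pencil (C0, C1): the n*k eigenvalues of
# P(z) = sum_j A[j] z^j are the generalized eigenvalues of the pencil.
C1 = sla.block_diag(A[k], *[np.eye(n)] * (k - 1))
top = np.hstack([-A[j] for j in range(k - 1, -1, -1)])
C0 = np.vstack([top, np.hstack([np.eye(n * (k - 1)),
                                np.zeros((n * (k - 1), n))])])

eigs = sla.eig(C0, C1, right=False)  # one sample of the empirical spectrum
print(eigs.shape)  # (90,) = n*k eigenvalues
```

Histogramming `eigs` over many draws approximates the limiting ESD.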


Author(s):  
Mykola Nedashkovskyy

A new general approach, based on nested continued fractions, is proposed for solving matrix polynomial equations of arbitrary order with matrix or vector unknowns.
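The abstract gives no algorithmic details; purely as a loose illustration of the nested-continued-fraction idea (not the author's method), a quadratic matrix equation $AX^2 + BX + C = 0$ can be rewritten as $X = -(B + AX)^{-1}C$ and iterated, which unrolls into a nested continued fraction. The coefficients below are chosen so the iteration converges:

```python
import numpy as np

# Illustrative quadratic matrix equation A X^2 + B X + C = 0.
n = 3
A = np.eye(n)
B = -3.0 * np.eye(n)
C = 2.0 * np.eye(n)

# Iterating X = -(B + A X)^{-1} C unrolls into the nested continued
# fraction X = -C / (B + A * (-C / (B + ...))).
X = np.zeros((n, n))
for _ in range(60):
    X = -np.linalg.solve(B + A @ X, C)

residual = np.linalg.norm(A @ X @ X + B @ X + C)
print(residual < 1e-10)  # True: converged to the solvent X = I
```

For this example the iteration converges linearly to the smaller solvent, here the identity matrix.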


Mathematics ◽  
2021 ◽  
Vol 9 (17) ◽  
pp. 2018
Author(s):  
Javier Ibáñez ◽  
Jorge Sastre ◽  
Pedro Ruiz ◽  
José M. Alonso ◽  
Emilio Defez

The most popular method for computing the matrix logarithm combines the inverse scaling and squaring method with a Padé approximation, sometimes accompanied by the Schur decomposition. In this work, we present a Taylor series algorithm, based on the transformation-free approach of the inverse scaling and squaring technique, that uses recent matrix polynomial formulas to evaluate the Taylor approximation of the matrix logarithm more efficiently than the Paterson–Stockmeyer method. Two MATLAB implementations of this algorithm, based on relative forward and backward error analyses respectively, were developed and compared with different state-of-the-art MATLAB functions. Numerical tests showed that the new implementations are generally more accurate than the previously available codes, with an intermediate execution time among all the codes compared.
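A bare-bones sketch of the inverse scaling and squaring idea with a truncated Taylor series, evaluated here by plain Horner rather than by the paper's optimized matrix polynomial formulas, and without its error analysis:

```python
import numpy as np
from scipy.linalg import sqrtm

def logm_iss_taylor(A, m=16, tol=0.25):
    """Sketch of inverse scaling and squaring: take square roots until
    A^(1/2^s) is close to I, apply a truncated Taylor series of
    log(I + E), then undo the scaling by multiplying by 2^s."""
    n = A.shape[0]
    s = 0
    while np.linalg.norm(A - np.eye(n), 1) > tol:
        A = sqrtm(A)
        s += 1
    E = A - np.eye(n)
    # log(I + E) = E - E^2/2 + E^3/3 - ... evaluated by Horner's rule
    L = ((-1) ** (m + 1) / m) * np.eye(n)
    for k in range(m - 1, 0, -1):
        L = ((-1) ** (k + 1) / k) * np.eye(n) + E @ L
    return 2 ** s * (E @ L)

A = np.diag([2.0, 3.0])
print(np.allclose(logm_iss_taylor(A), np.diag(np.log([2.0, 3.0]))))  # True
```

A production code would choose `s` and `m` jointly from an error bound rather than a fixed tolerance.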


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Miloud Sadkane

An inexact variant of inverse subspace iteration is used to find a small invariant pair of a large quadratic matrix polynomial. It is shown that linear convergence is preserved provided the inner iteration is performed with increasing accuracy. A preconditioned block GMRES solver is employed as the inner iteration. The preconditioner uses the strategy of “tuning”, which prevents the number of inner iterations from growing and therefore results in substantial cost savings. The accuracy of the computed invariant pair can be improved by adding a post-processing step involving very few iterations of Newton’s method. The effectiveness of the proposed approach is demonstrated by numerical experiments.
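As a much-simplified sketch (exact solves instead of the paper's inexact, tuned-preconditioner block GMRES, and a single vector instead of a subspace), inverse iteration on a linearized quadratic matrix polynomial looks like:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
# Illustrative quadratic matrix polynomial Q(z) = z^2 M + z C + K.
M = np.eye(n)
C = 0.02 * np.eye(n)
K = np.diag(np.arange(1.0, n + 1.0) ** 2)

# First companion linearization: Q's eigenvalues are the generalized
# eigenvalues of the pencil (A, B).
I, Z = np.eye(n), np.zeros((n, n))
A = np.block([[-C, -K], [I, Z]])
B = np.block([[M, Z], [Z, I]])

# Inverse iteration with shift sigma; the paper performs this inner
# solve inexactly with a tuned, preconditioned block GMRES instead.
sigma = 0.9j
F = A - sigma * B
v = rng.standard_normal(2 * n) + 0j
for _ in range(40):
    v = np.linalg.solve(F, B @ v)
    v /= np.linalg.norm(v)

lam = (v.conj() @ A @ v) / (v.conj() @ B @ v)  # Rayleigh quotient
x = v[n:]  # eigenvector part; v approximates [lam*x; x]
print(np.linalg.norm((lam**2 * M + lam * C + K) @ x))  # small residual
```

The eigenvalue nearest the shift is found; accuracy could then be polished by a few Newton steps, as in the paper.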


2021 ◽  
Vol 2021 ◽  
pp. 1-4
Author(s):  
Yunbo Tian ◽  
Chao Xia

We study the low-degree solutions of the Sylvester matrix equation $(A_1\lambda + A_0)X(\lambda) + Y(\lambda)(B_1\lambda + B_0) = C_0$, where the pencils $A_1\lambda + A_0$ and $B_1\lambda + B_0$ are regular. Using a substitution of the parameter variable $\lambda$, we may assume that the matrices $A_0$ and $B_0$ are invertible. We then prove that if the equation is solvable, it has a low-degree solution $(L(\lambda), M(\lambda))$ satisfying the degree conditions $\delta(L(\lambda)) < \operatorname{Ind}(A_0^{-1}A_1)$ and $\delta(M(\lambda)) < \operatorname{Ind}(B_1 B_0^{-1})$.
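For orientation only, the classical one-unknown constant case $AX + XB = C$ of such Sylvester equations can be solved directly, e.g. with SciPy; this is not the paper's two-unknown parametric construction:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
n = 4
# Constant Sylvester equation A X + X B = C; a unique solution exists
# when spec(A) and spec(-B) are disjoint, ensured here by shifting.
A = rng.standard_normal((n, n)) + 5 * np.eye(n)
B = rng.standard_normal((n, n)) + 5 * np.eye(n)
C = rng.standard_normal((n, n))

X = solve_sylvester(A, B, C)
print(np.linalg.norm(A @ X + X @ B - C))  # residual near machine precision
```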


2021 ◽  
Vol 25 (22) ◽  
pp. 644-678
Author(s):  
Maxim Gurevich ◽  
Erez Lapid

We construct new “standard modules” for the representations of general linear groups over a local non-archimedean field. The construction uses a modified Robinson–Schensted–Knuth correspondence for Zelevinsky’s multisegments. Typically, the new class categorifies the basis of Doubilet, Rota, and Stein (DRS) for matrix polynomial rings, indexed by bitableaux. Hence, our main result provides a link between the dual canonical basis (coming from quantum groups) and the DRS basis.


Mathematics ◽  
2021 ◽  
Vol 9 (15) ◽  
pp. 1729
Author(s):  
Georgios Katsouleas ◽  
Vasiliki Panagakou ◽  
Panayiotis Psarrakos

In this note, given a matrix $A \in \mathbb{C}^{n \times n}$ (or a general matrix polynomial $P(z)$, $z \in \mathbb{C}$) and an arbitrary scalar $\lambda_0 \in \mathbb{C}$, we show how to define a sequence $\{\mu_k\}_{k \in \mathbb{N}}$ which converges to some element of its spectrum. The scalar $\lambda_0$ serves as initial term ($\mu_0 = \lambda_0$), while additional terms are constructed through a recursive procedure, exploiting the fact that each term $\mu_k$ of this sequence is in fact a point lying on the boundary curve of some pseudospectral set of $A$ (or $P(z)$). Then, the next term in the sequence is detected in the direction which is normal to this curve at the point $\mu_k$. Repeating the construction for additional initial points, it is possible to approximate peripheral eigenvalues, localize the spectrum and even obtain spectral enclosures. Hence, as a by-product of our method, a computationally cheap procedure for approximate pseudospectra computations emerges. An advantage of the proposed approach is that it does not make any assumptions on the location of the spectrum. The fact that all computations are performed on some dynamically chosen locations on the complex plane which converge to the eigenvalues, rather than on a large number of predefined points on a rigid grid, can be used to accelerate conventional grid algorithms. Parallel implementation of the method or use in conjunction with randomization techniques can lead to further computational savings when applied to large-scale matrices.
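A simplified Newton-type sketch in the spirit of this idea (not the authors' algorithm): since $\sigma_{\min}(A - \mu I) = \varepsilon$ exactly when $\mu$ lies on the boundary of the $\varepsilon$-pseudospectrum, stepping along the normal direction drives $\mu$ toward the spectrum. For the Hermitian toy matrix below the step is exact; for nonnormal matrices several steps are needed:

```python
import numpy as np

def walk_to_spectrum(A, mu0, steps=20, tol=1e-12):
    """Follow normals of pseudospectral boundary curves from mu0
    toward an eigenvalue of A (Newton-type step on sigma_min)."""
    mu, n = mu0, A.shape[0]
    for _ in range(steps):
        U, s, Vh = np.linalg.svd(A - mu * np.eye(n))
        smin = s[-1]
        if smin < tol:                     # mu is numerically an eigenvalue
            break
        u, v = U[:, -1], Vh[-1, :].conj()  # singular pair for smin
        mu = mu + smin / (u.conj() @ v)    # step normal to the boundary
    return mu

A = np.array([[2.0, 1.0], [1.0, 2.0]])     # eigenvalues 1 and 3
mu = walk_to_spectrum(A, mu0=0.5 + 0.5j)
print(abs(mu - 1.0) < 1e-8)  # True: converged to the nearest eigenvalue
```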


Mathematics ◽  
2021 ◽  
Vol 9 (14) ◽  
pp. 1600
Author(s):  
Jorge Sastre ◽  
Javier Ibáñez

Recently, two general methods for evaluating matrix polynomials requiring one matrix product less than the Paterson–Stockmeyer method were proposed, where the cost of evaluating a matrix polynomial is given asymptotically by the total number of matrix product evaluations. An analysis of the stability of those methods was given, and the methods have been applied to Taylor-based implementations for computing the exponential, the cosine and the hyperbolic tangent matrix functions. Moreover, a particular example for the evaluation of the matrix exponential Taylor approximation of degree 15 requiring four matrix products was given, whereas the maximum polynomial degree available using the Paterson–Stockmeyer method with four matrix products is 9. Based on this example, a new family of methods for evaluating matrix polynomials more efficiently than the Paterson–Stockmeyer method was proposed, with the potential to achieve much higher efficiency, i.e., requiring fewer matrix products to evaluate a matrix polynomial of a certain degree, or increasing the available degree for the same cost. However, the difficulty of this family of methods lies in calculating the coefficients involved in the evaluation of general matrix polynomials and approximations. In this paper, we provide a general method for evaluating matrix polynomials that requires two matrix products fewer than the Paterson–Stockmeyer method for degrees higher than 30. Moreover, we provide general methods for evaluating matrix polynomial approximations of degrees 15 and 21 with four and five matrix product evaluations, respectively, whereas the maximum available degrees for the same cost with the Paterson–Stockmeyer method are 9 and 12, respectively. Finally, practical examples for evaluating Taylor approximations of the matrix cosine and the matrix logarithm accurately and efficiently with these new methods are given.
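For comparison, a compact sketch of the baseline Paterson–Stockmeyer scheme, which evaluates a degree-$d$ matrix polynomial with roughly $2\sqrt{d}$ matrix products (function and parameter names below are illustrative):

```python
import numpy as np

def paterson_stockmeyer(coeffs, A, s=None):
    """Evaluate p(A) = sum_k c_k A^k with ~2*sqrt(deg) matrix products:
    precompute A^2..A^s, then run a block Horner scheme in A^s."""
    d = len(coeffs) - 1
    n = A.shape[0]
    if s is None:
        s = max(1, int(round(np.sqrt(d))))
    powers = [np.eye(n), A]              # powers[j] = A^j, j = 0..s
    for _ in range(s - 1):
        powers.append(powers[-1] @ A)
    # block Horner: p(A) = sum_i (sum_{j<s} c_{i*s+j} A^j) (A^s)^i
    r = d // s
    P = sum(coeffs[r * s + j] * powers[j] for j in range(d - r * s + 1))
    for i in range(r - 1, -1, -1):
        block = sum(coeffs[i * s + j] * powers[j] for j in range(s))
        P = P @ powers[s] + block
    return P

# check against a naive Horner evaluation
rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5)) / 5
c = rng.standard_normal(13)  # degree 12
ref = np.zeros((5, 5))
for ck in c[::-1]:
    ref = ref @ A + ck * np.eye(5)
print(np.allclose(paterson_stockmeyer(list(c), A), ref))  # True
```

The methods discussed above beat this cost by one or two matrix products at the same degree.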


Author(s):  
Andrii Dmytryshyn

A number of theoretical and computational problems for matrix polynomials are solved by passing to linearizations. Therefore a perturbation theory that relates perturbations in the linearization to equivalent perturbations in the corresponding matrix polynomial is needed. In this paper we develop an algorithm that finds which perturbation of the matrix coefficients of a matrix polynomial corresponds to a given perturbation of the entire linearization pencil. Moreover, we find transformation matrices that, via strict equivalence, transform a perturbation of the linearization to the linearization of a perturbed polynomial. For simplicity, we present the results for the first companion linearization, but they can be generalized to a broader class of linearizations.
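A toy illustration for the first companion form only (the paper's algorithm handles general perturbations of the whole pencil): a perturbation of the polynomial coefficients lands in fixed blocks of the pencil, so it can be read back off directly.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 3  # 3x3 quadratic polynomial P(z) = A2 z^2 + A1 z + A0

def first_companion(A0, A1, A2):
    """First companion linearization L(z) = z*C1 - C0 of P."""
    I, Z = np.eye(n), np.zeros((n, n))
    return (np.block([[A2, Z], [Z, I]]),      # C1
            np.block([[-A1, -A0], [I, Z]]))   # C0

A = [rng.standard_normal((n, n)) for _ in range(3)]         # A0, A1, A2
dA = [1e-3 * rng.standard_normal((n, n)) for _ in range(3)]  # perturbation

C1, C0 = first_companion(*A)
C1p, C0p = first_companion(*(Ai + dAi for Ai, dAi in zip(A, dA)))

# The perturbation appears only in the leading block of C1 and the
# first block row of C0; the identity blocks are untouched.
rec_dA2 = (C1p - C1)[:n, :n]
rec_dA1 = -(C0p - C0)[:n, :n]
rec_dA0 = -(C0p - C0)[:n, n:]
print(all(np.allclose(r, d) for r, d in
          zip([rec_dA0, rec_dA1, rec_dA2], dA)))  # True
```

The harder, general question treated in the paper is the converse: which perturbations of an *arbitrary* pencil correspond, up to strict equivalence, to coefficient perturbations of the polynomial.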

