MODELING BY FITTING A UNION OF POLYNOMIAL FUNCTIONS TO DATA IN AN ERRORS-IN-VARIABLES CONTEXT

Author(s):  
LEVENTE HUNYADI ◽  
ISTVÁN VAJK

We present a model construction method based on locally fitting polynomial functions to noisy data and building the entire model as a union of regions, each explained by such a polynomial function. Local fitting is shown to reduce to solving a polynomial eigenvalue problem in which the matrix coefficients are data covariance and approximated noise covariance matrices that capture the distortion effects caused by noise. By defining the asymmetric distance between two points as the projection of one onto the function fitted to the neighborhood of the other, we use a best weighted cut method to partition the entire data set into feasible regions. Finally, the partitions are refined using a modified version of the k-planes algorithm.
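The computational core here is a polynomial eigenvalue problem $(A_0 + \lambda A_1 + \dots + \lambda^s A_s)x = 0$. A minimal sketch of solving such a PEP by block companion linearization follows; the helper name `solve_pep` is chosen here for illustration and is not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eig

def solve_pep(coeffs):
    """Solve (A0 + lam*A1 + ... + lam^s*As) x = 0 via block companion linearization.

    coeffs: list of s+1 square (n x n) matrices [A0, ..., As].
    Returns the eigenvalues and the corresponding PEP eigenvectors.
    """
    s = len(coeffs) - 1
    n = coeffs[0].shape[0]
    # Build the pencil (L0, L1) so that L0 v = lam * L1 v solves the PEP,
    # with v = [x; lam*x; ...; lam^(s-1)*x].
    L0 = np.zeros((s * n, s * n), dtype=complex)
    L1 = np.eye(s * n, dtype=complex)
    L0[:-n, n:] = np.eye((s - 1) * n)          # shift blocks: v_{i+1} = lam * v_i
    for i in range(s):
        L0[-n:, i * n:(i + 1) * n] = -coeffs[i]  # last block row encodes the PEP
    L1[-n:, -n:] = coeffs[s]
    lam, V = eig(L0, L1)
    # The PEP eigenvector is the leading n-block of the pencil eigenvector.
    return lam, V[:n, :]
```

In the scalar case (n = 1) this reduces to the ordinary companion matrix of a polynomial, which makes the construction easy to sanity-check.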

2020 ◽  
Vol 36 (36) ◽  
pp. 799-833
Author(s):  
Maria Isabel Bueno Cachadina ◽  
Javier Perez ◽  
Anthony Akshar ◽  
Daria Mileeva ◽  
Remy Kassem

One strategy to solve a nonlinear eigenvalue problem $T(\lambda)x=0$ is to solve a polynomial eigenvalue problem (PEP) $P(\lambda)x=0$ that approximates the original problem through interpolation. This PEP is then usually solved by linearization. Because of the polynomial approximation techniques, in this context $P(\lambda)$ is expressed in a non-monomial basis; the most frequently used bases are the Chebyshev, Newton, and Lagrange bases. Although a number of linearizations are already available in the literature for matrix polynomials expressed in these bases, new families of linearizations are introduced because they offer the following advantages: 1) they are easy to construct from the matrix coefficients of $P(\lambda)$ when the polynomial is expressed in any of these three bases; 2) their block structure is given explicitly; 3) equivalent formulations can be provided for all three bases, which allows a natural framework for comparison. Recovery formulas for eigenvectors (when $P(\lambda)$ is regular) and for minimal bases and minimal indices (when $P(\lambda)$ is singular) are also provided. The ultimate goal is to use these families to compare the numerical behavior of the linearizations associated with the same basis (to select the best one) and with the linearizations associated with the other two bases, in order to provide recommendations on which basis to use in each context. This comparison will appear in a subsequent paper.
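As a minimal scalar illustration (n = 1, not the matrix setting of the paper) of the interpolate-then-linearize strategy: one can interpolate $T(\lambda)$ in the Chebyshev basis and find the roots of the interpolant via its colleague matrix, which is precisely a Chebyshev-basis linearization. NumPy's `chebroots` builds this companion matrix internally.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Scalar nonlinear eigenvalue function with one root in [-1, 1] (near 0.739).
T = lambda lam: lam - np.cos(lam)

deg = 12
# Chebyshev points of the first kind on [-1, 1].
nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
coeffs = C.chebfit(nodes, T(nodes), deg)   # interpolate T in the Chebyshev basis
roots = C.chebroots(coeffs)                # roots via the colleague matrix
real_roots = roots[np.abs(np.imag(roots)) < 1e-8].real
approx = real_roots[np.argmin(np.abs(T(real_roots)))]  # keep the genuine root of T
```

Because the interpolation error of a smooth function at Chebyshev points decays rapidly with the degree, the recovered root agrees with the true solution of $\lambda = \cos\lambda$ to high accuracy.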


Author(s):  
Nikta Shayanfar ◽  
Heike Fassbender

The polynomial eigenvalue problem is to find an eigenpair $(\lambda,x) \in (\mathbb{C}\cup \{\infty\}) \times (\mathbb{C}^n \setminus \{0\})$ satisfying $P(\lambda)x=0$, where $P(\lambda)=\sum_{i=0}^s P_i \lambda^i$ is an $n\times n$ matrix polynomial of degree $s$; the coefficients $P_i$, $i=0,\ldots,s$, are constant $n\times n$ matrices, and $P_s$ is assumed to be nonzero. Such eigenvalue problems arise in a variety of physical applications, including acoustic structural coupled systems, fluid mechanics, multiple-input multiple-output systems in control theory, signal processing, and constrained least squares problems. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Such methods convert the problem into a well-studied linear eigenvalue problem while exploiting and preserving the structure and properties of the original problem. Linearizations have been extensively studied with respect to the basis in which the matrix polynomial is expressed. If the matrix polynomial is expressed in a special basis, it is desirable that its linearization be expressed in the same basis, because changing the given basis ought to be avoided \cite{H1}. The authors in \cite{ACL} have constructed linearizations for different bases, such as degree-graded bases (including the monomial, Newton, and Pochhammer bases), the Bernstein basis, and the Lagrange basis. This contribution is concerned with polynomial eigenvalue problems in which the matrix polynomial is expressed in the Hermite basis. The Hermite basis is used to represent matrix polynomials designed to match a series of points and function derivatives at prescribed nodes. In the literature, linearizations of matrix polynomials of degree $s$ expressed in the Hermite basis consist of matrix pencils with $s+2$ blocks of size $n \times n$.
In other words, additional eigenvalues at infinity had to be introduced; see, e.g., \cite{CSAG}. In this research, we try to overcome this difficulty by reducing the size of the linearization. The reduction scheme presented gradually reduces the linearization to its minimal size, making use of ideas from \cite{VMM1}. More precisely, for $n \times n$ matrix polynomials of degree $s$, we present linearizations of smaller size, consisting of $s+1$ and $s$ blocks of $n \times n$ matrices. The structure of the eigenvectors is also discussed.
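As a small scalar illustration of the kind of data the Hermite basis encodes (values and derivatives at prescribed nodes), one can build an interpolant that matches both using SciPy's `BPoly.from_derivatives`; this is only an illustration of Hermite-type interpolation, not the paper's matrix linearization.

```python
import numpy as np
from scipy.interpolate import BPoly

# Hermite-type data: function values and first derivatives at each node.
x = np.array([0.0, 1.0, 2.0])
values = np.sin(x)
derivs = np.cos(x)

# Piecewise polynomial matching both the values and the derivatives at the nodes.
p = BPoly.from_derivatives(x, np.column_stack([values, derivs]))
```

At every node the interpolant reproduces the prescribed value and derivative exactly, which is the defining property of Hermite interpolation.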


Author(s):  
Cailu Wang ◽  
Yuegang Tao

This paper proposes a matrix representation of formal polynomials over max-plus algebra and obtains the maximum and minimum canonical forms of a polynomial function by standardizing this representation into a canonical form. A necessary and sufficient condition for two formal polynomials to correspond to the same polynomial function is derived. The matrix method is constructive and intuitive, and it leads to a polynomial-time algorithm for factorizing polynomial functions. Illustrative examples are presented to demonstrate the results.
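As a numerical sketch (not the paper's constructive matrix method): a formal max-plus polynomial $a_0 \oplus a_1 \otimes x \oplus \dots \oplus a_s \otimes x^{\otimes s}$ evaluates as the function $p(x) = \max_i (a_i + i x)$, so two formal polynomials can be compared as functions on a sample grid. The helper names below are chosen for illustration.

```python
import numpy as np

def mp_eval(coeffs, x):
    """Max-plus polynomial function p(x) = max_i (a_i + i*x), coeffs = [a_0, ..., a_s]."""
    return np.max(np.asarray(coeffs, float) + np.arange(len(coeffs)) * x)

def same_function(a, b, xs):
    """Numerically check (on sample points xs) whether two formal polynomials
    define the same polynomial function."""
    return all(np.isclose(mp_eval(a, x), mp_eval(b, x)) for x in xs)
```

For example, `[0, -10, 0]` and `[0, 0, 0]` define the same function, because the middle term $-10 + x$ never attains the maximum: distinct formal polynomials can correspond to one polynomial function, which is exactly the ambiguity the canonical forms resolve.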


2015 ◽  
Vol 27 (02) ◽  
pp. 1550004 ◽  
Author(s):  
Andrey Mudrov

Let U be either the classical or the quantized universal enveloping algebra of the Lie algebra [Formula: see text] extended over the field of fractions of the Cartan subalgebra. We suggest a PBW basis in U over the extended Cartan subalgebra that diagonalizes the contravariant Shapovalov form on a generic Verma module. The matrix coefficients of the form are calculated, and the inverse form is constructed explicitly.


Symmetry ◽  
2019 ◽  
Vol 11 (9) ◽  
pp. 1077 ◽  
Author(s):  
Negrean ◽  
Crișan

The objective of the present paper is to highlight some new developments by the main author in the field of advanced dynamics of systems and higher-order dynamic equations. These equations have been developed on the basis of matrix exponentials, which prove to have undeniable advantages in the matrix study of any complex mechanical system. The paper proposes some new approaches based on differential principles of analytical mechanics, using important notions of dynamics regarding the acceleration energies of the first, second, and third order. The study extends the higher-order equations, which make it possible to apply the initial motion conditions to positions, velocities, and accelerations of the first and second order. To determine the time-variation laws of the generalized variables, the driving forces, and the higher-order acceleration energies, fifth-order polynomial functions of time are applied. In inverse kinematics, also called the control kinematics of robots, applying polynomial functions leads to kinematic control functions of mechanical motions, especially transitory motions. These functions influence the dynamic behavior of multibody systems, which include robot structures.
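The fifth-order time polynomials mentioned above can be sketched as follows: a quintic $q(t)=\sum_{k=0}^5 c_k t^k$ has exactly six coefficients, so it can match position, velocity, and acceleration at both $t=0$ and $t=T$. This is a generic sketch of the standard construction, not the authors' specific higher-order equations.

```python
import numpy as np

def quintic_coeffs(T, q0, v0, a0, qT, vT, aT):
    """Coefficients c0..c5 of q(t) = sum c_k t^k matching position, velocity,
    and acceleration at t = 0 and t = T (six conditions, six unknowns)."""
    M = np.array([
        [1, 0,   0,     0,      0,       0],       # q(0)
        [0, 1,   0,     0,      0,       0],       # q'(0)
        [0, 0,   2,     0,      0,       0],       # q''(0)
        [1, T,   T**2,  T**3,   T**4,    T**5],    # q(T)
        [0, 1,   2*T,   3*T**2, 4*T**3,  5*T**4],  # q'(T)
        [0, 0,   2,     6*T,    12*T**2, 20*T**3], # q''(T)
    ], dtype=float)
    return np.linalg.solve(M, np.array([q0, v0, a0, qT, vT, aT], dtype=float))
```

Choosing zero boundary velocities and accelerations yields the smooth rest-to-rest transitory motions typically used in robot control.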

