III.—The Eigenvalue Problem for Boolean Matrices

Author(s):  
D. E. Rutherford

Synopsis: In the case of Boolean matrices a given eigenvector may have a variety of eigenvalues. These eigenvalues form a sublattice of the basic Boolean algebra, and the structure of this sublattice is investigated. Likewise, a given eigenvalue has a variety of eigenvectors, which form a module of the Boolean vector space. The structure of this module is examined. It is also shown that if a vector has a unique eigenvalue λ, then λ satisfies the characteristic equation of the matrix.
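As a concrete illustration (not taken from the paper, and with helper names of our choosing), the Boolean eigenvalue condition Av = λ ∧ v can be checked by brute force over a small Boolean algebra. Here B is the four-element algebra of subsets of a two-element set, encoded as 2-bit masks 0..3, with join `|` and meet `&`; a single nonzero eigenvector can indeed carry several eigenvalues, and the set of its eigenvalues is closed under join and meet, as the synopsis states:

```python
from itertools import product

# Boolean algebra B = subsets of {a, b}, encoded as 2-bit masks 0..3.
# join = |, meet = &. Matrix "multiplication" uses meet for *, join for +.
ELEMS = range(4)

def bool_matvec(A, v):
    """Boolean matrix-vector product: (Av)_i = OR_j (A_ij AND v_j)."""
    n = len(A)
    out = []
    for i in range(n):
        acc = 0
        for j in range(n):
            acc |= A[i][j] & v[j]
        out.append(acc)
    return out

def eigenvalues_of(A, v):
    """All lambda in B with A v = lambda AND v, componentwise."""
    Av = bool_matvec(A, v)
    return [lam for lam in ELEMS
            if all(Av[i] == (lam & v[i]) for i in range(len(v)))]

A = [[3, 0],
     [0, 1]]                     # a small diagonal Boolean matrix over B
for v in product(ELEMS, repeat=2):
    if any(v):
        lams = eigenvalues_of(A, list(v))
        if len(lams) > 1:
            print(v, "has eigenvalues", lams)
```

For instance the vector (1, 0) satisfies the condition for both λ = 1 and λ = 3, and {1, 3} is a sublattice of B.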

2010
Vol 37
pp. 141-188
Author(s):  
P. D. Turney ◽  
P. Pantel

Computers understand very little of the meaning of human language. This profoundly limits our ability to give instructions to computers, the ability of computers to explain their actions to us, and the ability of computers to analyse and process text. Vector space models (VSMs) of semantics are beginning to address these limits. This paper surveys the use of VSMs for semantic processing of text. We organize the literature on VSMs according to the structure of the matrix in a VSM. There are currently three broad classes of VSMs, based on term-document, word-context, and pair-pattern matrices, yielding three classes of applications. We survey a broad range of applications in these three categories and we take a detailed look at a specific open source project in each category. Our goal in this survey is to show the breadth of applications of VSMs for semantics, to provide a new perspective on VSMs for those who are already familiar with the area, and to provide pointers into the literature for those who are less familiar with the field.
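A minimal sketch of the first class, a term-document matrix queried with cosine similarity (toy data and plain Python, not code from the surveyed open source projects):

```python
import math
from collections import Counter

docs = {
    "d1": "the cat sat on the mat",
    "d2": "the dog sat on the log",
    "d3": "stock markets fell sharply today",
}

# Term-document matrix: rows are terms, columns are documents,
# entries are raw term frequencies.
vocab = sorted({w for text in docs.values() for w in text.split()})
matrix = {d: Counter(text.split()) for d, text in docs.items()}

def doc_vector(d):
    """Column of the term-document matrix for document d."""
    return [matrix[d][t] for t in vocab]

def cosine(u, v):
    """Cosine similarity between two frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

print(cosine(doc_vector("d1"), doc_vector("d2")))  # high: shared terms
print(cosine(doc_vector("d1"), doc_vector("d3")))  # 0.0: no term overlap
```

Real systems add weighting (e.g. tf-idf) and dimensionality reduction on top of this raw matrix, but the document-as-column view is the common core of the term-document class.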


Author(s):  
Nikta Shayanfar ◽  
Heike Fassbender

The polynomial eigenvalue problem is to find an eigenpair $(\lambda,x) \in (\mathbb{C}\cup \{\infty\}) \times \mathbb{C}^n \backslash \{0\}$ satisfying $P(\lambda)x=0$, where $P(\lambda)=\sum_{i=0}^s P_i \lambda^i$ is an $n\times n$ so-called matrix polynomial of degree $s$; the coefficients $P_i$, $i=0,\ldots,s$, are constant $n\times n$ matrices, and $P_s$ is assumed to be nonzero. Such eigenvalue problems arise in a variety of physical applications, including acoustic structural coupled systems, fluid mechanics, multiple-input multiple-output systems in control theory, signal processing, and constrained least squares problems. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Such methods convert the eigenvalue problem into a well-studied linear eigenvalue problem while exploiting and preserving the structure and properties of the original problem. Linearizations have been studied extensively with respect to the basis in which the matrix polynomial is expressed. If the matrix polynomial is expressed in a special basis, it is desirable that its linearization be expressed in the same basis, since changing the given basis ought to be avoided \cite{H1}. The authors in \cite{ACL} have constructed linearizations for various bases, such as degree-graded bases (including the monomial, Newton and Pochhammer bases) and the Bernstein and Lagrange bases. This contribution is concerned with polynomial eigenvalue problems in which the matrix polynomial is expressed in the Hermite basis. The Hermite basis is used to represent matrix polynomials that match a series of points and function derivatives at prescribed nodes. In the literature, the linearizations of matrix polynomials of degree $s$ expressed in the Hermite basis consist of matrix pencils with $s+2$ blocks of size $n \times n$; in other words, additional eigenvalues at infinity had to be introduced, see e.g. \cite{CSAG}. In this research, we try to overcome this difficulty by reducing the size of the linearization. The reduction scheme presented gradually reduces the linearization to its minimal size, making use of ideas from \cite{VMM1}. More precisely, for $n \times n$ matrix polynomials of degree $s$, we present linearizations of smaller size, consisting of $s+1$ and $s$ blocks of $n \times n$ matrices. The structure of the eigenvectors is also discussed.
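For orientation, a sketch of the standard monomial-basis block-companion linearization that such reduction schemes improve upon. This is the generic textbook construction, not the Hermite-basis linearization of the paper, and it assumes the leading coefficient is invertible:

```python
import numpy as np

def companion_eigs(P):
    """Eigenvalues of P(lam) = sum_i P[i] lam^i via the monomial-basis
    block-companion linearization, assuming P[-1] (the leading
    coefficient) is invertible. P is a list of n x n arrays."""
    s = len(P) - 1                         # degree of the matrix polynomial
    n = P[0].shape[0]
    Ps_inv = np.linalg.inv(P[-1])
    C = np.zeros((s * n, s * n))
    # identity blocks on the block super-diagonal
    for k in range(s - 1):
        C[k*n:(k+1)*n, (k+1)*n:(k+2)*n] = np.eye(n)
    # last block row: -Ps^{-1} P_k, k = 0, ..., s-1
    for k in range(s):
        C[(s-1)*n:s*n, k*n:(k+1)*n] = -Ps_inv @ P[k]
    # C acts on the stacked vector (x, lam*x, ..., lam^{s-1} x)
    return np.linalg.eigvals(C)

# P(lam) = lam^2 I - A with A = diag(1, 4): eigenvalues are +-1, +-2.
A = np.diag([1.0, 4.0])
eigs = companion_eigs([-A, np.zeros((2, 2)), np.eye(2)])
print(np.sort(eigs.real))                  # -2, -1, 1, 2
```

The pencil here has $s$ blocks of size $n \times n$ but lives in the monomial basis; the point of the paper is to reach comparably small sizes ($s+1$ and $s$ blocks) while staying in the Hermite basis.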


1993
Vol 114 (1)
pp. 111-130
Author(s):  
A. Sudbery

Abstract: We construct a non-commutative analogue of the algebra of differential forms on the space of endomorphisms of a vector space, given a non-commutative algebra of functions and differential forms on the vector space. The construction yields a differential bialgebra which is a skew product of an algebra of functions and an algebra of differential forms with constant coefficients. We give necessary and sufficient conditions for such an algebra to exist, show that it is uniquely determined by the differential algebra on the vector space, and show that it is a non-commutative superpolynomial algebra in the matrix elements and their differentials (i.e. that it has the same dimensions of homogeneous components as in the classical case).


1983
Vol 66
pp. 331-341
Author(s):  
M. Knölker ◽  
M. Stix

Abstract: The differential equations describing stellar oscillations are transformed into an algebraic eigenvalue problem. Frequencies of adiabatic oscillations are obtained as the eigenvalues of a banded real symmetric matrix. We employ the Cowling approximation, i.e. we neglect the Eulerian perturbation of the gravitational potential, and, in order to preserve self-adjointness, we require that the Eulerian pressure perturbation vanish at the outer boundary. For a solar model, comparison of first results with those obtained from a Henyey method shows that the matrix method is convenient, accurate, and fast.
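A toy analogue of the method (not the stellar oscillation equations themselves): discretizing the model problem $-u'' = \omega^2 u$ on $[0,1]$ with $u(0)=u(1)=0$ by central differences produces a banded (tridiagonal) real symmetric matrix whose eigenvalues approximate the squared frequencies $(k\pi)^2$, mirroring how the oscillation equations become an algebraic eigenvalue problem:

```python
import numpy as np

# Central-difference discretization of -u'' = omega^2 u, u(0)=u(1)=0.
# For brevity we build the full matrix; a production code would exploit
# the band structure (e.g. scipy.linalg.eig_banded).
N = 200
h = 1.0 / (N + 1)
main = np.full(N, 2.0 / h**2)          # diagonal of the discrete -d^2/dx^2
off = np.full(N - 1, -1.0 / h**2)      # sub/super-diagonal

T = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

omega2 = np.linalg.eigvalsh(T)         # eigenvalues in ascending order
exact = (np.pi * np.arange(1, 4))**2   # (k*pi)^2 for k = 1, 2, 3
print(omega2[:3])
print(exact)
```

The lowest few numerical eigenvalues agree with $(k\pi)^2$ to a relative error of order $h^2$, which is the sense in which a banded symmetric matrix method is "convenient, accurate, and fast" for this class of problem.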


2000
Vol 15 (20)
pp. 3221-3235
Author(s):  
WOLFGANG LUCHA ◽  
FRANZ F. SCHÖBERL

Besides perturbation theory, which requires knowledge of the exact unperturbed solution, variational techniques represent the main tool for investigating the eigenvalue problem of a semibounded operator H in quantum theory. For a reasonable choice of the trial subspace of the domain of H, the lowest eigenvalues of H can be located with acceptable precision, whereas the trial-subspace vectors corresponding to these eigenvalues approximate, in general, the exact eigenstates of H with much less accuracy. Accordingly, various measures for the accuracy of approximate eigenstates derived by variational techniques are scrutinized. In particular, the matrix elements of the commutator of the operator H and (suitably chosen) other operators, taken with respect to degenerate approximate eigenstates of H obtained by the variational methods, are proposed as new criteria for the accuracy of variational eigenstates. These considerations are applied to the Hamiltonian whose eigenvalue problem defines the spinless Salpeter equation. This bound-state wave equation may be regarded as the most straightforward relativistic generalization of the usual nonrelativistic Schrödinger formalism and is frequently used to describe, e.g., the spin-averaged mass spectra of bound states of quarks.
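A minimal Rayleigh-Ritz sketch on a generic symmetric "Hamiltonian" matrix (everything here, including the matrix, the Krylov trial subspace, and the residual check, is an assumption for illustration; the residual norm merely stands in for the commutator-based accuracy criteria discussed above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
# A semibounded toy "Hamiltonian": a positive-semidefinite part plus a
# diagonal with increasing entries.
M = rng.standard_normal((n, n))
H = M @ M.T / n + np.diag(np.arange(n, dtype=float))

# Trial subspace: a few Krylov vectors of H from a random start.
# (A practical code would reorthogonalize, i.e. run Lanczos.)
k = 8
V = np.empty((n, k))
v = rng.standard_normal(n)
for j in range(k):
    v /= np.linalg.norm(v)
    V[:, j] = v
    v = H @ v
Q, _ = np.linalg.qr(V)            # orthonormal basis of the trial subspace

# Rayleigh-Ritz: project H onto the subspace, solve the small problem.
Hk = Q.T @ H @ Q
ritz_vals, Y = np.linalg.eigh(Hk)
ritz_vecs = Q @ Y

exact = np.linalg.eigvalsh(H)
print(ritz_vals[0], exact[0])     # Ritz value bounds the ground energy above

# Residual norm: one (crude) accuracy measure for the approximate eigenstate.
r = H @ ritz_vecs[:, 0] - ritz_vals[0] * ritz_vecs[:, 0]
print(np.linalg.norm(r))
```

The lowest Ritz value is guaranteed to lie above the exact ground eigenvalue, which is the variational bound the abstract alludes to; the residual shows that an accurate eigenvalue does not automatically imply an equally accurate eigenvector.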


2004
Vol 03 (04)
pp. 411-426
Author(s):  
PAUL TERWILLIGER ◽  
RAIMUNDAS VIDUNAS

Let K denote a field and let V denote a vector space over K of finite positive dimension. We consider an ordered pair of linear transformations A:V→V and A*:V→V which satisfy the following two properties: (i) there exists a basis for V with respect to which the matrix representing A is irreducible tridiagonal and the matrix representing A* is diagonal; (ii) there exists a basis for V with respect to which the matrix representing A* is irreducible tridiagonal and the matrix representing A is diagonal. We call such a pair a Leonard pair on V. Referring to the above Leonard pair, we show that there exists a sequence of scalars β,γ,γ*,ϱ,ϱ*,ω,η,η* taken from K such that both

A^2A* − βAA*A + A*A^2 − γ(AA* + A*A) − ϱA* = γ*A^2 + ωA + ηI,
A*^2A − βA*AA* + AA*^2 − γ*(A*A + AA*) − ϱ*A = γA*^2 + ωA* + η*I.

The sequence is uniquely determined by the Leonard pair provided the dimension of V is at least 4. The equations above are called the Askey–Wilson relations.
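A small numerical sketch of the defining property, using the classical Krawtchouk (su(2)) example of a Leonard pair; this particular pair is a standard example chosen by us for illustration, not taken from the paper. In the standard basis A* is diagonal and A is irreducible tridiagonal; passing to an eigenbasis of A swaps the roles:

```python
import numpy as np

d = 5                                     # dim V = d + 1
# Astar diagonal with eigenvalues d, d-2, ..., -d; A symmetric irreducible
# tridiagonal with off-diagonal entries sqrt((i+1)(d-i)).
Astar = np.diag(np.arange(d, -d - 1, -2, dtype=float))
off = np.sqrt([(i + 1) * (d - i) for i in range(d)])
A = np.diag(off, 1) + np.diag(off, -1)

# In an eigenbasis of A (eigenvalues again d, d-2, ..., -d),
# Astar becomes tridiagonal: property (ii) of a Leonard pair.
w, U = np.linalg.eigh(A)
B = U.T @ Astar @ U                       # Astar in the eigenbasis of A

def is_tridiagonal(M, tol=1e-8):
    return all(abs(M[i, j]) < tol
               for i in range(M.shape[0]) for j in range(M.shape[0])
               if abs(i - j) > 1)

print(np.round(w, 8))                     # -d, -d+2, ..., d
print(is_tridiagonal(B))                  # True
```

Checking the Askey–Wilson relations themselves for this pair would amount to verifying the two displayed identities for a specific scalar sequence, which the theorem guarantees exists and is unique once dim V ≥ 4.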

