On the Distance of a Large Toeplitz Band Matrix to the Nearest Singular Matrix

Author(s):  
A. Böttcher ◽  
S. Grudsky ◽  
A. Kozak
Author(s):  
Michele Benzi ◽  
Igor Simunec

In this paper we propose a method to compute the solution to the fractional diffusion equation on directed networks, which can be expressed in terms of the graph Laplacian L as a product $f(L^T)\mathbf{b}$, where f is a non-analytic function involving fractional powers and $\mathbf{b}$ is a given vector. The graph Laplacian is a singular matrix, causing Krylov methods for $f(L^T)\mathbf{b}$ to converge more slowly. In order to overcome this difficulty and achieve faster convergence, we use rational Krylov methods applied to a desingularized version of the graph Laplacian, obtained with either a rank-one shift or a projection on a subspace.
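
The rank-one desingularization can be illustrated with a small dense computation. The following is a minimal sketch (not the authors' implementation), assuming the zero eigenvalue of $L^T$ is simple, so that shifting it by θ along its spectral projector changes $f(L^T)\mathbf{b}$ only by an explicit rank-one correction; the graph, the choice $f(z) = e^{-t z^{1/2}}$, and θ = 1 are illustrative assumptions.

```python
# Hedged sketch: rank-one desingularization of a singular directed-graph Laplacian
# before evaluating a matrix function times a vector.  The correction formula below
# holds when the zero eigenvalue of L^T is simple; graph, f and theta are illustrative.
import numpy as np
from scipy.linalg import funm, null_space

rng = np.random.default_rng(0)

# Small directed graph: sparse random weights plus a cycle to keep it strongly connected.
n = 8
A = rng.random((n, n)) * (rng.random((n, n)) < 0.3)
np.fill_diagonal(A, 0.0)
A += np.diag(np.ones(n - 1), 1)            # edges i -> i+1
A[n - 1, 0] += 1.0                          # edge n-1 -> 0 closes the cycle
L = np.diag(A.sum(axis=1)) - A              # out-degree Laplacian, L @ 1 = 0

LT = L.T
b = rng.random(n)
ones = np.ones(n)

# Right null vector q of L^T, normalized so that 1^T q = 1
# (1 is a left null vector of L^T because the rows of L sum to zero).
q = null_space(LT)[:, 0]
q = q / (ones @ q)

# Illustrative non-analytic f from a fractional diffusion problem (branch point at 0).
t, alpha = 0.5, 0.5
def f(z):
    z = np.asarray(z, dtype=complex)
    return np.exp(-t * z ** alpha)

theta = 1.0                                 # shift that moves the zero eigenvalue to theta
M = LT + theta * np.outer(q, ones)          # rank-one shifted, nonsingular Laplacian

# Identity for a simple zero eigenvalue:
#   f(L^T) b = f(M) b + (f(0) - f(theta)) * q * (1^T b)
lhs = funm(LT, f) @ b
rhs = funm(M, f) @ b + (f(0.0) - f(theta)) * q * (ones @ b)
print(np.linalg.norm(lhs - rhs))            # ~ machine precision
```

In the sketch both sides are formed densely with funm only to verify the identity; the point of desingularizing is that a (rational) Krylov method is then applied to the nonsingular M instead of the singular Laplacian.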


2005 ◽  
Author(s):  
S.C. Verma ◽  
K. Nakamura ◽  
K. Naito ◽  
Y. Minami ◽  
M. Sone ◽  
...  

2007 ◽  
Vol 55 (5) ◽  
pp. 417-428 ◽  
Author(s):  
H. Radjavi ◽  
A. R. Sourour

1970 ◽  
Vol 11 (1) ◽  
pp. 81-83 ◽  
Author(s):  
Yik-Hoi Au-Yeung

We denote by F the field R of real numbers, the field C of complex numbers, or the skew field H of real quaternions, and by Fn an n-dimensional left vector space over F. If A is a matrix with elements in F, we denote by A* its conjugate transpose. In all three cases of F, an n × n matrix A is said to be hermitian if A = A*, and we say that two n × n hermitian matrices A and B with elements in F can be diagonalized simultaneously if there exists a non-singular matrix U with elements in F such that UAU* and UBU* are diagonal matrices. We shall regard a vector u ∈ Fn as a 1 × n matrix and identify a 1 × 1 matrix with its single element, and we shall denote by diag {A1, …, Am} a diagonal block matrix with the square matrices A1, …, Am lying on its diagonal.
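
For F = C, the classical special case in which one of the two hermitian matrices is positive definite can be checked numerically; the sketch below is only illustrative and does not reproduce the paper's general setting over R, C and H. It builds a non-singular U from the generalized eigenproblem B v = w A v, whose eigenvectors scipy normalizes so that V* A V = I.

```python
# Hedged sketch of the classical special case over F = C: if A is hermitian positive
# definite and B is hermitian, a non-singular U with U A U* and U B U* diagonal can be
# built from the generalized eigenproblem B v = w A v.  The matrices are illustrative.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 4

X = rng.random((n, n)) + 1j * rng.random((n, n))
A = X @ X.conj().T + n * np.eye(n)        # hermitian positive definite
Y = rng.random((n, n)) + 1j * rng.random((n, n))
B = (Y + Y.conj().T) / 2                  # hermitian, possibly indefinite

# eigh(B, A) solves B v = w A v and normalizes the eigenvectors V so that
# V* A V = I and V* B V = diag(w); hence U = V* diagonalizes both by congruence.
w, V = eigh(B, A)
U = V.conj().T

assert np.allclose(U @ A @ U.conj().T, np.eye(n))   # identity matrix
assert np.allclose(U @ B @ U.conj().T, np.diag(w))  # diagonal matrix
```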


Author(s):  
R. Penrose

This paper describes a generalization of the inverse of a non-singular matrix, as the unique solution of a certain set of equations. This generalized inverse exists for any (possibly rectangular) matrix whatsoever with complex elements. It is used here for solving linear matrix equations, and among other applications for finding an expression for the principal idempotent elements of a matrix. Also a new type of spectral decomposition is given.
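
A minimal numerical sketch of these statements, using numpy.linalg.pinv for the generalized inverse (the matrices and sizes are illustrative assumptions): it checks the four defining equations and solves a linear matrix equation A X B = C via X = A⁺ C B⁺ under the consistency condition A A⁺ C B⁺ B = C.

```python
# Hedged sketch: the Moore-Penrose generalized inverse A+ as the unique solution of
# the four defining equations, and its use for a linear matrix equation A X B = C.
# numpy.linalg.pinv computes A+; the (rectangular, complex) matrices are illustrative.
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((4, 3)) + 1j * rng.random((4, 3))
B = rng.random((2, 5)) + 1j * rng.random((2, 5))
Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# The four defining equations: A A+ A = A, A+ A A+ = A+, (A A+)* = A A+, (A+ A)* = A+ A.
for lhs, rhs in [(A @ Ap @ A, A),
                 (Ap @ A @ Ap, Ap),
                 ((A @ Ap).conj().T, A @ Ap),
                 ((Ap @ A).conj().T, Ap @ A)]:
    assert np.allclose(lhs, rhs)

# A X B = C is solvable iff A A+ C B+ B = C; then X = A+ C B+ is a particular solution.
X_true = rng.random((3, 2))
C = A @ X_true @ B                        # consistent right-hand side by construction
assert np.allclose(A @ Ap @ C @ Bp @ B, C)
X = Ap @ C @ Bp
assert np.allclose(A @ X @ B, C)
```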


1979 ◽  
Vol 31 (2) ◽  
pp. 392-395 ◽  
Author(s):  
J. A. Lester

1. Introduction. Our interest here lies in the following theorem:

THEOREM 1. Assume there is defined on Rn (n ≧ 3) a "square-distance" of the form
$$d(x, y) = \sum_{i,j=1}^{n} g_{ij}(x_i - y_i)(x_j - y_j),$$
where (gij) is a given symmetric non-singular matrix over the reals and x = (x1, …, xn), y = (y1, …, yn) ∈ Rn. Assume further that f is a bijection of Rn which preserves a given fixed square-distance ρ, i.e. d(x, y) = ρ if and only if d(f(x), f(y)) = ρ. Then (unless ρ = 0 and (gij) is positive or negative definite) f(x) = Lx + f(0), where L is a linear bijection of Rn satisfying d(Lx, Ly) = ±d(x, y) for all x, y ∈ Rn (the − sign is possible if and only if ρ = 0 and (gij) has signature 0).
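
A minimal numeric check of the easy direction of the conclusion, with an illustrative indefinite (gij): an affine map f(x) = Lx + c whose linear part satisfies Lᵀ G L = G preserves the square-distance, and hence any fixed value ρ of it. The choice of G, the boost parameter s, and the translation c are assumptions for illustration; the theorem itself establishes the converse, that preserving a single value ρ already forces this affine form.

```python
# Hedged numeric check of the form of maps in the conclusion: for an illustrative
# indefinite (g_ij) on R^3, an affine map f(x) = L x + c with L^T G L = G preserves
# the square-distance d(x, y) = (x - y)^T G (x - y), hence any fixed value rho of it.
import numpy as np

G = np.diag([1.0, 1.0, -1.0])              # symmetric, non-singular, signature 1

def d(x, y):
    return (x - y) @ G @ (x - y)

s = 0.7                                     # hyperbolic rotation mixing the last two axes
L = np.array([[1.0, 0.0,         0.0       ],
              [0.0, np.cosh(s),  np.sinh(s)],
              [0.0, np.sinh(s),  np.cosh(s)]])
c = np.array([2.0, -1.0, 0.5])              # translation part, f(0) = c

assert np.allclose(L.T @ G @ L, G)          # L is a linear bijection with d(Lx, Ly) = d(x, y)

rng = np.random.default_rng(3)
x, y = rng.random(3), rng.random(3)
f = lambda v: L @ v + c
assert np.isclose(d(f(x), f(y)), d(x, y))
```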

