Cayley-Hamilton theorem for Drazin inverse matrix and standard inverse matrices

2016 ◽  
Vol 64 (4) ◽  
pp. 793-797
Author(s):  
T. Kaczorek

The classical Cayley-Hamilton theorem is extended to Drazin inverse matrices and to standard inverse matrices. It is shown that, knowing the characteristic polynomial of a singular or nonsingular matrix, it is possible to write analogous Cayley-Hamilton equations for the Drazin inverse matrix and for the standard inverse matrix.
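
A minimal sketch of the classical nonsingular case that this extension builds on: for an invertible A with characteristic polynomial p(λ) = λ^n + c_{n-1}λ^{n-1} + … + c_0, the Cayley-Hamilton identity p(A) = 0 can be rearranged to express A^{-1} as a polynomial in A. The Python/NumPy code below is an illustrative verification of that classical identity, not of Kaczorek's Drazin-inverse extension.

```python
import numpy as np

# Cayley-Hamilton: p(A) = A^n + c_{n-1} A^{n-1} + ... + c_1 A + c_0 I = 0.
# For nonsingular A (so c_0 != 0) this rearranges to
#   A^{-1} = -(A^{n-1} + c_{n-1} A^{n-2} + ... + c_1 I) / c_0.

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])
n = A.shape[0]

# np.poly(A) returns [1, c_{n-1}, ..., c_0], the characteristic polynomial of A.
c = np.poly(A)

# Horner-style accumulation of A^{n-1} + c_{n-1} A^{n-2} + ... + c_1 I.
B = np.eye(n)
for coeff in c[1:-1]:
    B = B @ A + coeff * np.eye(n)

A_inv = -B / c[-1]
print(np.allclose(A_inv @ A, np.eye(n)))  # True
```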

Mathematics ◽  
2019 ◽  
Vol 8 (1) ◽  
pp. 2
Author(s):  
Santiago Artidiello ◽  
Alicia Cordero ◽  
Juan R. Torregrosa ◽  
María P. Vassileva

A secant-type method is designed for approximating the inverse and some generalized inverses of a complex matrix A. For a nonsingular matrix, the proposed method gives us an approximation of the inverse and, when the matrix is singular, approximations of the Moore–Penrose inverse and the Drazin inverse are obtained. The convergence and the order of convergence are presented in each case. Some numerical tests allowed us to confirm the theoretical results and to compare the performance of our method with that of other known methods. With these results, iterative methods with memory appear for the first time for estimating the solution of nonlinear matrix equations.
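
For context, here is a minimal sketch of the classical Newton–Schulz iteration X ← X(2I − AX), the standard memoryless prototype of iterative matrix-inverse approximation; it is assumed here only as a baseline and is not the authors' secant-type method with memory.

```python
import numpy as np

def newton_schulz_inverse(A, tol=1e-12, max_iter=100):
    """Approximate A^{-1} by the Newton-Schulz iteration X <- X(2I - AX)."""
    n = A.shape[0]
    # Standard convergent starting guess: X0 = A^T / (||A||_1 * ||A||_inf).
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(max_iter):
        X = X @ (2 * I - A @ X)
        if np.linalg.norm(A @ X - I) < tol:
            break
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = newton_schulz_inverse(A)
print(np.allclose(X, np.linalg.inv(A)))  # True
```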


1973 ◽  
Vol 16 (1) ◽  
pp. 1-4 ◽  
Author(s):  
M. Ahsanullah ◽  
M. Rahman

Edelblute [1] has given a method of finding the inverse of a nonsingular matrix by rank annihilation. The purpose of this paper is to show that the method can be extended to the case of a singular matrix. The extended method produces an inverse of the singular matrix satisfying condition (3) of Penrose [3].
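
As a reminder of what "condition (3) of Penrose" refers to, the four Penrose conditions for a generalized inverse X of A are (1) AXA = A, (2) XAX = X, (3) (AX)* = AX, (4) (XA)* = XA. The sketch below is an assumed illustration only (not the rank-annihilation construction of the paper): it checks the four conditions for NumPy's Moore–Penrose inverse of a singular real matrix.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # rank-1, hence singular
X = np.linalg.pinv(A)               # Moore-Penrose inverse

conditions = {
    "(1) AXA = A":     np.allclose(A @ X @ A, A),
    "(2) XAX = X":     np.allclose(X @ A @ X, X),
    "(3) (AX)^T = AX": np.allclose((A @ X).T, A @ X),
    "(4) (XA)^T = XA": np.allclose((X @ A).T, X @ A),
}
for name, ok in conditions.items():
    print(name, ok)                 # all True for the Moore-Penrose inverse
```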


2021 ◽  
Vol 13 (9) ◽  
pp. 1751
Author(s):  
Bokun Tian ◽  
Xiaoling Zhang ◽  
Liang Li ◽  
Ling Pu ◽  
Liming Pu ◽  
...  

Because of the three-dimensional (3D) imaging scene's sparsity, compressed sensing (CS) algorithms can be used for linear array synthetic aperture radar (LASAR) 3D sparse imaging. CS algorithms usually achieve high-quality sparse imaging at the expense of computational efficiency. To solve this problem, a fast Bayesian compressed sensing algorithm via relevance vector machine (FBCS-RVM) is proposed in this paper. The proposed method calculates the maximum marginal likelihood function under the framework of the RVM to obtain the optimal hyper-parameters; the scattering units corresponding to the non-zero optimal hyper-parameters are extracted as the target areas in the imaging scene. Then, based on the target areas, we simplify the measurement matrix and conduct sparse imaging. In addition, under a low signal-to-noise ratio (SNR), a low sampling rate, or high sparsity, the target areas cannot always be extracted accurately and may contain elements whose scattering coefficients are very small, close to zero compared with the other elements. Such elements can make the diagonal matrix singular and non-invertible, so the scattering coefficients cannot be estimated correctly. To solve this problem, the inverse of the singular matrix is replaced with the generalized inverse obtained by the truncated singular value decomposition (TSVD) algorithm, so that the scattering coefficients can be estimated correctly. Based on the rank of the singular matrix, the elements with small scattering coefficients are extracted and eliminated to obtain more accurate target areas. Both simulation and experimental results show that the proposed method improves the computational efficiency and imaging quality of LASAR 3D imaging compared with state-of-the-art CS-based methods.
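
A minimal sketch of the TSVD generalized-inverse step described above: singular values below a threshold tied to the numerical rank are discarded before inverting, which is the standard way to obtain a stable generalized inverse of a nearly singular matrix. The threshold, matrix, and data below are illustrative assumptions, not the paper's LASAR processing chain.

```python
import numpy as np

def tsvd_pinv(M, rank=None, rtol=1e-10):
    """Generalized inverse of M via truncated SVD.

    Singular values below rtol * sigma_max (or beyond `rank`) are dropped,
    so near-singular directions do not blow up the solution."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    if rank is None:
        rank = int(np.sum(s > rtol * s[0]))   # numerical rank
    s_inv = np.zeros_like(s)
    s_inv[:rank] = 1.0 / s[:rank]
    return Vt.T @ np.diag(s_inv) @ U.T

# Nearly singular diagonal matrix: one coefficient is almost zero.
M = np.diag([5.0, 2.0, 1e-14])
y = np.array([10.0, 4.0, 0.0])
x = tsvd_pinv(M) @ y                          # stable estimate
print(x)                                      # approx [2.0, 2.0, 0.0]
```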


Author(s):  
Klaus Röbenack ◽  
Kurt Reinschke

On generalized inverses of singular matrix pencils
Linear time-invariant networks are modelled by linear differential-algebraic equations with constant coefficients. These equations can be represented by a matrix pencil. Many publications on this subject are restricted to regular matrix pencils. In particular, the influence of the Weierstrass structure of a regular pencil on the poles of its inverse is well known. In this paper we investigate singular matrix pencils. The relations between the Kronecker structure of a singular matrix pencil and the multiplicity of poles at zero of the Moore-Penrose inverse and the Drazin inverse of the rational matrix are investigated. We present example networks whose circuit equations yield singular matrix pencils.
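
For a concrete notion of the distinction used above: a pencil sE − A is called regular when det(sE − A) is not identically zero, and singular otherwise. The SymPy sketch below builds a toy example of each kind; the matrices are assumed for illustration and are not the circuit networks discussed in the paper.

```python
import sympy as sp

s = sp.symbols('s')

# Regular pencil: det(s*E - A) is not identically zero (here a nonzero constant),
# even though E itself is singular, as in an index-1 DAE.
E_reg = sp.Matrix([[1, 0], [0, 0]])
A_reg = sp.Matrix([[0, 1], [1, 0]])
print((s * E_reg - A_reg).det())        # -1, nonzero -> regular pencil

# Singular pencil: det(s*E - A) vanishes for every value of s.
E_sing = sp.Matrix([[1, 0], [0, 0]])
A_sing = sp.Matrix([[1, 0], [0, 0]])
print((s * E_sing - A_sing).det())      # 0 identically -> singular pencil
```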


Filomat ◽  
2021 ◽  
Vol 35 (8) ◽  
pp. 2605-2616
Author(s):  
Daochang Zhang ◽  
Dijana Mosic ◽  
Jianping Hu

Our motivation is to derive Drazin inverse matrix modification formulae utilizing the Drazin inverses of adequate Peirce corners under some special cases, and the Drazin inverse of a special matrix with an additive perturbation. As applications, several new results for the expressions of the Drazin inverses of the modified matrices A - CB and A - CD^dB are obtained, and some well-known results in the literature, such as the Sherman-Morrison-Woodbury formula and Jacobson's Lemma, are generalized.
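
As a point of reference for the classical case being generalized, the Sherman–Morrison–Woodbury formula gives (A - CB)^{-1} = A^{-1} + A^{-1}C(I - BA^{-1}C)^{-1}BA^{-1} whenever all displayed inverses exist. The sketch below numerically verifies this identity for random matrices; it illustrates only the nonsingular case, not the Drazin-inverse formulae of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned, invertible A
C = rng.standard_normal((n, k))
B = rng.standard_normal((k, n))

Ainv = np.linalg.inv(A)
# Sherman-Morrison-Woodbury for the modified matrix A - CB:
#   (A - CB)^{-1} = A^{-1} + A^{-1} C (I - B A^{-1} C)^{-1} B A^{-1}
smw = Ainv + Ainv @ C @ np.linalg.inv(np.eye(k) - B @ Ainv @ C) @ B @ Ainv

print(np.allclose(smw, np.linalg.inv(A - C @ B)))  # True
```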


2021 ◽  
Vol 1 (3) ◽  
pp. 403-411
Author(s):  
Ery Nurjayanto ◽  
Amrullah Amrullah ◽  
Arjudin Arjudin ◽  
Sudi Prayitno

The study aims to determine a set of singular 2×2 matrices that forms a group and to describe its properties. The research is exploratory. Using diagonalization of a singular matrix S, together with a generator matrix, a pseudo-identity, and pseudo-inverse methods, we obtained a group Gs of singular 2×2 matrices under standard matrix multiplication, satisfying the group axioms: (1) closure, (2) associativity, (3) existence of an identity element, and (4) existence of inverses, i.e. for each A there is A^{-1} such that A × A^{-1} = A^{-1} × A = Is. The group is abelian (commutative). In addition, Gs satisfies the cancellation laws: for all A, X, Y in Gs, if A × X = A × Y then X = Y, and if X × A = Y × A then X = Y, just as in a group of nonsingular matrices. This research opens further opportunities to construct singular matrix groups of order 3×3 or higher.
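
One concrete way such a group can arise (an assumed construction for illustration, not necessarily the authors' generator-matrix method): take a rank-1 idempotent P (P² = P) as the pseudo-identity Is and the set {aP : a ≠ 0}. Every element is a singular 2×2 matrix, yet the set is an abelian group under matrix multiplication, with pseudo-inverse (aP)^{-1} = (1/a)P.

```python
import numpy as np

# Rank-1 idempotent used as the pseudo-identity Is (P @ P == P, det(P) == 0).
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])

def elem(a):
    """Group element a*P, with a != 0; every such matrix is singular."""
    return a * P

def pseudo_inverse(a):
    """Inverse inside the group: (aP)^{-1} = (1/a) P."""
    return elem(1.0 / a)

A, B = elem(3.0), elem(-2.0)
print(np.allclose(A @ B, elem(-6.0)))                 # closure: (aP)(bP) = ab P
print(np.allclose(A @ P, A), np.allclose(P @ A, A))   # P acts as the identity Is
print(np.allclose(A @ pseudo_inverse(3.0), P))        # A x A^{-1} = Is
print(np.linalg.det(A))                               # 0.0 -> A is singular
```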


2004 ◽  
Vol 89 (516) ◽  
pp. 378-384 ◽  
Author(s):  
Kerry G. Brock

Just how many matrices have inverses? In elementary linear algebra courses, many of the matrices encountered are singular, but perhaps the reason is that such matrices provide rich and interesting examples. How many of them occur naturally? Many beginning students observe that a singular matrix can be made nonsingular by very minor tweaking – changing just one entry, for example, will make a matrix of rank n – 1 into a full rank (nonsingular) matrix. In fact, changing one entry just a tiny bit will do it. Looking at the question from that point of view, with a little experimentation students begin to discover that singular matrices are quite rare. Entire rows (or columns) have to be rigged exactly right in order to get one, while minor changes in individual entries undo all the work and give us another nonsingular matrix. If we were to reach into a hat full of all the numbers – each equally likely to be chosen – and draw enough to fill in a square matrix randomly, we would certainly expect to get a nonsingular one.
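
A quick numerical illustration of the two claims above (an assumed experiment, not taken from the article): perturbing a single entry of a rank-deficient matrix restores full rank, and randomly filled matrices essentially never come out singular.

```python
import numpy as np

rng = np.random.default_rng(1)

# A rank-2 (singular) 3x3 matrix: the third row is the sum of the first two.
S = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
print(np.linalg.matrix_rank(S))        # 2

# Tweak one entry by a tiny amount -> full rank again.
S[2, 2] += 1e-9
print(np.linalg.matrix_rank(S))        # 3

# Fill matrices with random entries: none of the samples is singular.
ranks = [np.linalg.matrix_rank(rng.standard_normal((4, 4))) for _ in range(10_000)]
print(min(ranks))                      # 4, i.e. every sample had full rank
```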

