Estimates for solutions of systems of linear equations with circulant matrices

2021 ◽ Vol 2099 (1) ◽ pp. 012019
Author(s): Yu S Volkov ◽ S I Novikov

Abstract In the present paper we consider the problem of estimating a solution of a system of equations with a circulant matrix in the uniform norm. We give an estimate for circulant matrices with diagonal dominance, and the estimate is sharp. Based on this result and on the idea of decomposing the matrix into a product of matrices associated with a factorization of the characteristic polynomial, we propose an estimate for an arbitrary circulant matrix.
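The abstract does not reproduce the estimate itself. As a hedged illustration of the setting, the sketch below (assumed example data) solves a diagonally dominant circulant system numerically and compares the uniform norm of the solution with the classical row-dominance bound $\|x\|_\infty \le \|b\|_\infty / (|c_0| - \sum_{k\ge 1}|c_k|)$, which is a standard estimate, not necessarily the sharp one proved in the paper.

```python
import numpy as np
from scipy.linalg import circulant

# First row/column of a diagonally dominant circulant matrix (assumed example data).
c = np.array([4.0, -1.0, 0.5, 0.0, 0.5, -1.0])      # |c_0| > |c_1| + ... + |c_5|
C = circulant(c)
b = np.array([1.0, -2.0, 0.5, 3.0, -1.0, 0.25])

x = np.linalg.solve(C, b)

gap = abs(c[0]) - np.sum(np.abs(c[1:]))              # diagonal-dominance gap of every row
classical_bound = np.linalg.norm(b, np.inf) / gap    # Varah-type bound, not the paper's sharp estimate
print(np.linalg.norm(x, np.inf), "<=", classical_bound)
```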

2021 ◽ Vol 26 (1) ◽ pp. 54-59
Author(s): Jurijs Lavendels

Abstract The paper considers an iterative method for solving systems of linear equations (SLEs) that repeatedly displaces the approximate solution point in the direction of the final solution while simultaneously reducing the overall residual of the system. The method relaxes the requirements on the SLE matrix. It exploits the following property: a point always lies farther from the solution of the system than its projection onto an individual equation does. In developing the approach, the main emphasis is placed on relaxing the requirements on the system matrix, at the cost of a larger volume of computation.
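The abstract describes the method only qualitatively; the property it relies on (a point is never closer to the solution than its projection onto a single equation) is the one underlying Kaczmarz-type projection schemes. The sketch below is a minimal projection iteration in that spirit, not the author's exact algorithm; the matrix, right-hand side and sweep order are illustrative assumptions.

```python
import numpy as np

def projection_sweeps(A, b, x0=None, sweeps=100):
    """Repeatedly project the current approximation onto each hyperplane
    a_i . x = b_i (Kaczmarz-style); each projection cannot increase the
    distance to the true solution of a consistent system."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(sweeps):
        for i in range(m):
            a_i = A[i]
            x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i   # orthogonal projection onto equation i
    return x

# Illustrative consistent system (assumed data); no symmetry or diagonal dominance required.
A = np.array([[3.0, 1.0], [1.0, 2.0], [1.0, -1.0]])
x_true = np.array([1.0, -2.0])
b = A @ x_true
print(projection_sweeps(A, b))   # converges toward x_true
```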


Author(s): A. I. Belousov

The main objective of this paper is to prove a theorem according to which the method of successive elimination of unknowns, applied to systems of linear equations in semi-rings with iteration, yields the genuinely least solution of the system. The proof is based on a graph interpretation of the system and establishes a relationship between the method of successive elimination of unknowns and the method of computing the cost matrix of a labeled oriented graph by successively calculating cost matrices along paths of increasing rank.

Along the way, and in preparation for the proof of the main theorem, we consider several important properties of closed semi-rings and semi-rings with iteration. We prove the properties of an infinite sum (the supremum of a sequence in the natural ordering of an idempotent semi-ring). In particular, the proof of the continuity of the addition operation is much simpler than in the known expositions; this continuity underlies the well-known algorithm for solving a linear equation in a semi-ring with iteration. Next, we prove a theorem on the closedness of semi-rings with iteration with respect to solutions of systems of linear equations. We also give a detailed proof of the theorem representing the cost matrix of an oriented graph labeled over a semi-ring as the iteration of the matrix of arc labels. The concept of an automaton over a semi-ring is introduced which, unlike an ordinary labeled oriented graph, has a distinguished "final" vertex with zero out-degree. All of the foregoing provides the basis for the proof of the main theorem, in which the concept of an automaton over a semi-ring plays the main role.

The article's results are of scientific and methodological value. The proposed proof of the main theorem relates two alternative methods for computing the cost matrix of a labeled oriented graph, and the proposed proofs of already known statements can be useful in presenting the elements of the theory of semi-rings, which plays an important role in the mathematical education of students majoring in software technologies and theoretical computer science.
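As a hedged illustration of the correspondence discussed above, the sketch below works in the tropical (min, +) semi-ring, where the iteration $a^*$ of a nonnegative arc cost is $0$; successive elimination of the unknowns reproduces the all-pairs cost matrix of the labeled graph (the Kleene/Floyd-Warshall scheme). The particular semi-ring, graph and elimination order are illustrative choices, not the notation of the paper.

```python
import math

INF = math.inf

def cost_matrix(w):
    """Cost matrix of a labeled digraph over the (min, +) semi-ring:
    eliminating unknown k corresponds to allowing paths through vertex k
    (Kleene / Floyd-Warshall).  w[i][j] is the arc label, INF if no arc."""
    n = len(w)
    d = [[0 if i == j else w[i][j] for j in range(n)] for i in range(n)]  # W "plus" identity
    for k in range(n):                       # eliminate unknown x_k
        for i in range(n):
            for j in range(n):
                d[i][j] = min(d[i][j], d[i][k] + d[k][j])
    return d

# Illustrative arc labels (assumed data).
w = [[INF, 3, 8],
     [INF, INF, 2],
     [1, INF, INF]]
print(cost_matrix(w))   # least solution of X = W X (+) I in the (min, +) semi-ring
```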


1858 ◽ Vol 148 ◽ pp. 17-37

The term matrix might be used in a more general sense, but in the present memoir I consider only square and rectangular matrices, and the term matrix used without qualification is to be understood as meaning a square matrix; in this restricted sense, a set of quantities arranged in the form of a square, e.g. $\begin{pmatrix} a & b & c \\ a' & b' & c' \\ a'' & b'' & c'' \end{pmatrix}$, is said to be a matrix. The notion of such a matrix arises naturally from an abbreviated notation for a set of linear equations, viz. the equations $X = ax + by + cz$, $Y = a'x + b'y + c'z$, $Z = a''x + b''y + c''z$ may be more simply represented by $(X, Y, Z) = \begin{pmatrix} a & b & c \\ a' & b' & c' \\ a'' & b'' & c'' \end{pmatrix}(x, y, z)$, and the consideration of such a system of equations leads to most of the fundamental notions in the theory of matrices. It will be seen that matrices (attending only to those of the same order) comport themselves as single quantities; they may be added, multiplied or compounded together, &c.: the law of the addition of matrices is precisely similar to that for the addition of ordinary algebraical quantities; as regards their multiplication (or composition), there is the peculiarity that matrices are not in general convertible; it is nevertheless possible to form the powers (positive or negative, integral or fractional) of a matrix, and thence to arrive at the notion of a rational and integral function, or generally of any algebraical function, of a matrix. I obtain the remarkable theorem that any matrix whatever satisfies an algebraical equation of its own order, the coefficient of the highest power being unity, and those of the other powers functions of the terms of the matrix, the last coefficient being in fact the determinant; the rule for the formation of this equation may be stated in the following condensed form, which will be intelligible after a perusal of the memoir, viz. the determinant, formed out of the matrix diminished by the matrix considered as a single quantity involving the matrix unity, will be equal to zero. The theorem shows that every rational and integral function (or indeed every rational function) of a matrix may be considered as a rational and integral function, the degree of which is at most equal to that of the matrix, less unity; it even shows that in a sense, the same is true with respect to any algebraical function whatever of a matrix. One of the applications of the theorem is the finding of the general expression of the matrices which are convertible with a given matrix. The theory of rectangular matrices appears much less important than that of square matrices, and I have not entered into it further than by showing how some of the notions applicable to these may be extended to rectangular matrices.
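The "remarkable theorem" announced here is what is now called the Cayley-Hamilton theorem; for a matrix of order two it reads, in modern notation rather than the memoir's,

```latex
M=\begin{pmatrix} a & b\\ c & d\end{pmatrix},\qquad
\det(M-\lambda I)=\lambda^{2}-(a+d)\lambda+(ad-bc),
\qquad\text{and indeed}\qquad
M^{2}-(a+d)\,M+(ad-bc)\,I=0 .
```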


Author(s): Beata Bylina ◽ Jarosław Bylina

Influence of Preconditioning and Blocking on Accuracy in Solving Markovian Models
The article considers the effectiveness of various methods used to solve systems of linear equations (which emerge while modeling computer networks and systems with Markov chains) and the practical influence of the methods applied on accuracy. The paper considers some hybrids of both direct and iterative methods. Two varieties of Gaussian elimination will be considered as examples of direct methods: the LU factorization method and the WZ factorization method. The Gauss-Seidel iterative method will be discussed. The paper also covers preconditioning (with the use of incomplete Gaussian elimination) and dividing the matrix into blocks, where the blocks are solved by direct methods. The motivation for such hybrids is the very high condition number (which is bad) of the coefficient matrices occurring in Markov chains and, thus, the slow convergence of traditional iterative methods. Blocking, preconditioning and the merging of both are also analysed. The paper presents the impact of the linked methods on both the time and the accuracy of finding the probability vector. The results of an experiment are given for two groups of matrices: those derived from some very abstract Markovian models, and those from a general 2D Markov chain.
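As a minimal illustration of the problem class (not of the hybrid WZ/preconditioned schemes studied in the article), the sketch below computes the stationary probability vector of a small continuous-time Markov chain both by a direct LU-based solve and by Gauss-Seidel sweeps; the generator matrix is an assumed toy example.

```python
import numpy as np

# Assumed toy generator matrix Q of a 3-state CTMC (rows sum to zero).
Q = np.array([[-0.6,  0.4,  0.2],
              [ 0.3, -0.8,  0.5],
              [ 0.1,  0.6, -0.7]])

# Direct method: solve pi Q = 0 with the normalisation sum(pi) = 1
# by replacing one equation of Q^T pi = 0 with the normalisation condition.
A = Q.T.copy()
A[-1, :] = 1.0
rhs = np.zeros(3)
rhs[-1] = 1.0
pi_direct = np.linalg.solve(A, rhs)            # LU factorisation under the hood

# Iterative method: Gauss-Seidel sweeps on the same regularised system.
x = np.full(3, 1 / 3)
for _ in range(200):
    for i in range(3):
        x[i] = (rhs[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]

print(pi_direct, x)    # both approximate the stationary distribution
```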


2015 ◽ Vol 30 ◽ pp. 871-888
Author(s): Marcos Travaglia

This paper has been motivated by the curiosity that the circulant matrix ${\rm Circ}(1/2, -1/4, 0, \dots, 0, -1/4)$ is the $n\times n$ positive semidefinite, tridiagonal matrix $A$ of smallest Euclidean norm having the property that $Ae = 0$ and $Af = f$, where $e$ and $f$ are, respectively, the vector of all $1$s and the vector of alternating $1$ and $-1$s. It then raises the following question (minimization problem): What should the matrix $A$ be if the tridiagonal restriction is replaced by a general bandwidth $2r + 1$ ($1\leq r \leq \tfrac{n}{2} - 1$)? It is first easily shown that the solution of this problem must still be a circulant matrix. The determination of the first row of this circulant matrix then consists in solving a least-squares problem with $\tfrac{n}{2} - 1$ nonnegative variables (nonnegative orthant) subject to $\tfrac{n}{2} - r$ linear equations. Alternatively, this problem can be viewed as the minimization of the norm of an even function vanishing at the points $|i|>r$ of the set $\left\{-\tfrac{n}{2} + 1, \dots, -1, 0, 1, \dots ,\tfrac{n}{2} \right\}$, and whose Fourier transform is nonnegative, vanishes at zero, and assumes the value one at $\tfrac{n}{2}$. Explicit solutions are given for the special cases of $r=\tfrac{n}{2}$, $r=\tfrac{n}{2} -1$, and $r=2$. The solution for the particular case of $r=2$ can be physically interpreted as the vibrational mode of a ring-like chain of masses and springs in which the springs link both the nearest neighbors (with positive stiffness) and the next-nearest neighbors (with negative stiffness). The paper ends with a numerical illustration of the six cases ($1\leq r \leq 6$) corresponding to $n=12$.
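The motivating facts are easy to check numerically, since the eigenvalues of ${\rm Circ}(1/2, -1/4, 0, \dots, 0, -1/4)$ are $\tfrac12 - \tfrac12\cos(2\pi k/n) \ge 0$. The sketch below (an illustration, not part of the paper) verifies $Ae = 0$, $Af = f$ and positive semidefiniteness for $n = 12$.

```python
import numpy as np
from scipy.linalg import circulant

n = 12
first_col = np.zeros(n)
first_col[[0, 1, n - 1]] = [0.5, -0.25, -0.25]    # Circ(1/2, -1/4, 0, ..., 0, -1/4)
A = circulant(first_col)                           # symmetric, so row/column convention is immaterial

e = np.ones(n)                                     # vector of all 1s
f = (-1.0) ** np.arange(n)                         # vector of alternating 1 and -1
print(np.allclose(A @ e, 0), np.allclose(A @ f, f))
print(np.min(np.linalg.eigvalsh(A)) >= -1e-12)     # positive semidefinite
```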


1859 ◽ Vol 9 ◽ pp. 100-101

The term matrix might be used in a more general sense, but in the present memoir I consider only square and rectangular matrices, and the term matrix used without qualification is to be understood as meaning a square matrix; in this restricted sense, a set of quantities arranged in the form of a square, e.g. $\begin{pmatrix} a & b & c \\ a' & b' & c' \\ a'' & b'' & c'' \end{pmatrix}$, is said to be a matrix. The notion of such a matrix arises naturally from an abbreviated notation for a set of linear equations, viz. the equations $X = ax + by + cz$, $Y = a'x + b'y + c'z$, $Z = a''x + b''y + c''z$ may be more simply represented by $(X, Y, Z) = \begin{pmatrix} a & b & c \\ a' & b' & c' \\ a'' & b'' & c'' \end{pmatrix}(x, y, z)$, and the consideration of such a system of equations leads to most of the fundamental notions in the theory of matrices. It will be seen that matrices (attending only to those of the same degree) comport themselves as single quantities; they may be added, multiplied, or compounded together, &c.: the law of the addition of matrices is precisely similar to that for the addition of ordinary algebraical quantities; as regards their multiplication (or composition), there is the peculiarity that matrices are not in general convertible; it is nevertheless possible to form the powers (positive or negative, integral or fractional) of a matrix, and thence to arrive at the notion of a rational and integral function, or generally of any algebraical function of a matrix. I obtain the remarkable theorem that any matrix whatever satisfies an algebraical equation of its own order, the coefficient of the highest power being unity, and those of the other powers functions of the terms of the matrix, the last coefficient being in fact the determinant. The rule for the formation of this equation may be stated in the following condensed form, which will be intelligible after a perusal of the memoir, viz. the determinant, formed out of the matrix diminished by the matrix considered as a single quantity involving the matrix unity, will be equal to zero. The theorem shows that every rational and integral function (or indeed every rational function) of a matrix may be considered as a rational and integral function, the degree of which is at most equal to that of the matrix, less unity; it even shows that in a sense, the same is true with respect to any algebraical function whatever of a matrix. One of the applications of the theorem is the finding of the general expression of the matrices which are convertible with a given matrix. The theory of rectangular matrices appears much less important than that of square matrices, and I have not entered into it further than by showing how some of the notions applicable to these may be extended to rectangular matrices.


2014 ◽ Vol 2014 ◽ pp. 1-9
Author(s): Zhaolin Jiang

Block circulant and circulant matrices have already become an ideal research area for solving various differential equations. In this paper, we give the definition and the basic properties of the FLSR-factor block circulant (retrocirculant) matrix over a field F. Fast algorithms for solving systems of linear equations involving these matrices are presented by means of the fast algorithm for computing matrix polynomials. The unique solution is obtained when such a matrix over a field F is nonsingular. Fast algorithms for finding the unique solution of the inverse problem of AX = b in the class of level-2 FLS(R, r)-circulant (retrocirculant) matrices of type (m, n) over a field F are given via the right largest common factor of the matrix polynomial. Numerical examples show the effectiveness of the algorithms.
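The level-2 FLS(R, r)-circulant construction is specialised, but the way structure enables fast solution can be illustrated on an ordinary circulant system, which the FFT diagonalises in $O(n\log n)$ operations. The sketch below shows that standard case only, not the algorithms of the paper.

```python
import numpy as np

def solve_circulant_fft(c, b):
    """Solve C x = b, where C is the circulant matrix with first column c,
    using the diagonalisation of C by the discrete Fourier transform."""
    lam = np.fft.fft(c)                 # eigenvalues of C
    if np.any(np.isclose(lam, 0)):
        raise ValueError("singular circulant matrix")
    return np.real(np.fft.ifft(np.fft.fft(b) / lam))

# Assumed example data.
c = np.array([5.0, 1.0, -1.0, 2.0])
b = np.array([1.0, 0.0, 3.0, -2.0])
x = solve_circulant_fft(c, b)

# Check against the explicitly assembled circulant matrix (column k is roll(c, k)).
C = np.array([np.roll(c, k) for k in range(len(c))]).T
print(np.allclose(C @ x, b))
```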


Author(s): R. Chen ◽ A.C. Ward

Abstract Interval arithmetic has been extensively applied to systems of linear equations by the interval matrix arithmetic community. This paper demonstrates through simple examples that some of this work can be viewed as particular instantiations of an abstract "design operation," the RANGE operation of the Labeled Interval Calculus formalism for inference about sets of possibilities in design. These particular operations promise to solve a variety of design problems that lie beyond the reach of the original Labeled Interval Calculus. However, the abstract view also leads to a new operation, apparently overlooked by the matrix mathematics community, that should also be useful in design; the paper provides an algorithm for computing it.
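As a minimal sketch of the underlying idea, namely interval arithmetic propagating sets of possibilities through linear relations, the code below encloses the values of a linear expression whose coefficients and variables are only known to lie in intervals; it illustrates plain interval evaluation, not the RANGE operation or the new operation proposed in the paper, and all data are assumed.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # Sum of two intervals: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product of two intervals: extremes among the four endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

# Enclosure of a*x + b*y when every quantity is only known to lie in an interval
# (assumed illustrative data).
a, b = Interval(1.0, 2.0), Interval(-1.0, 1.0)
x, y = Interval(0.5, 1.5), Interval(2.0, 3.0)
print(a * x + b * y)   # every consistent combination of values lies in this interval
```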


2010 ◽ Vol 51
Author(s): Stasys Rutkauskas ◽ Igor Saburov

A system of ordinary second-order linear equations with a singular point is considered. The distinguishing feature of this work is that the system of eigenvectors of the matrix coupling the equations is not complete. This raises the question of how to state a weighted boundary value problem for such a system. A well-posed boundary value problem is proposed in the article, and the existence and uniqueness of its solution are proved.

