Solving Systems of Linear Equations Based on Approximation Solution Projection Analysis

2021 ◽  
Vol 26 (1) ◽  
pp. 54-59
Author(s):  
Jurijs Lavendels

Abstract The paper considers an iterative method for solving systems of linear equations (SLE) that repeatedly displaces the approximate solution point in the direction of the final solution, simultaneously reducing the entire residual of the system of equations. The method relaxes the requirements on the SLE matrix. The following SLE property is used: a point lies farther from the system's solution than the projection of that point onto any equation of the system. In developing the approach, the main emphasis is placed on reducing the requirements on the matrix of the system of equations, at the cost of a higher volume of calculations.
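The projection property described above is closely related to the classical Kaczmarz method, in which the current approximation is repeatedly projected onto the hyperplane of each equation and each projection moves the point no farther from the true solution. A minimal sketch of that idea (illustrative only, not the author's exact algorithm):

```python
import numpy as np

def kaczmarz(A, b, iters=200):
    """Cyclically project the current point onto each equation's hyperplane."""
    x = np.zeros(A.shape[1])
    for k in range(iters):
        i = k % A.shape[0]
        a = A[i]
        # Orthogonal projection of x onto {y : a.y = b[i]} moves x closer
        # to the true solution -- the property the abstract exploits.
        x = x + (b[i] - a @ x) / (a @ a) * a
    return x

# Small consistent system with solution (1, 2).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([5.0, 5.0])
x = kaczmarz(A, b)
```

For consistent systems, this converges without any symmetry or definiteness requirement on the matrix, which is why projection-based schemes can relax the usual assumptions.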

2021 ◽  
Vol 2099 (1) ◽  
pp. 012019
Author(s):  
Yu S Volkov ◽  
S I Novikov

Abstract In the present paper we consider the problem of estimating the solution of a system of equations with a circulant matrix in the uniform norm. We give the estimate for circulant matrices with diagonal dominance. The estimate is sharp. Based on this result and the idea of decomposing the matrix into a product of matrices associated with a factorization of the characteristic polynomial, we propose an estimate for an arbitrary circulant matrix.
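For context, a circulant matrix is diagonalized by the discrete Fourier transform, so solving the system (and hence bounding its solution) reduces to dividing by the DFT of the first column. A small illustration of this standard fact, with a diagonally dominant example:

```python
import numpy as np

# First column of a diagonally dominant circulant matrix C
# (|4| > 1 + 0.5 + 1, so C is invertible).
c = np.array([4.0, 1.0, 0.5, 1.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

b = np.array([1.0, 2.0, 3.0, 4.0])

# Circulant matrices are diagonalized by the DFT:
# the eigenvalues of C are fft(c), so solve componentwise in Fourier space.
x = np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))
```

The eigenvalues `fft(c)` are exactly the values of the characteristic (associated) polynomial at the roots of unity, which is the quantity the norm estimates are built from.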


Author(s):  
A. I. Belousov

The main objective of this paper is to prove a theorem stating that the method of successive elimination of unknowns, applied to systems of linear equations over semi-rings with iteration, yields the genuinely smallest solution of the system. The proof is based on a graph interpretation of the system and establishes a relationship between the method of successive elimination of unknowns and the method of computing the cost matrix of a labeled oriented graph by sequentially calculating cost matrices along paths of increasing rank.

In preparation for the proof of the main theorem, we consider several important properties of closed semi-rings and semi-rings with iteration. We prove the properties of an infinite sum (the supremum of a sequence in the natural ordering of an idempotent semi-ring). In particular, the proof of the continuity of the addition operation is much simpler than in known expositions; this continuity underlies the well-known algorithm for solving a linear equation in a semi-ring with iteration.

Next, we prove a theorem on the closedness of semi-rings with iteration with respect to solutions of systems of linear equations. We also give a detailed proof of the theorem that the cost matrix of an oriented graph labeled over a semi-ring is the iteration of the matrix of arc labels. The concept of an automaton over a semi-ring is introduced, which, unlike an ordinary labeled oriented graph, has a distinguished "final" vertex with zero out-degree. All of the foregoing provides the basis for the proof of the main theorem, in which the concept of an automaton over a semi-ring plays the central role.

The article's results are of scientific and methodological value. The proposed proof of the main theorem relates two alternative methods for calculating the cost matrix of a labeled oriented graph, and the proposed proofs of already known statements can be useful in presenting the elements of semi-ring theory, which plays an important role in the mathematical training of students majoring in software technologies and theoretical computer science.
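To make the connection concrete: over the tropical (min, +) semi-ring, the iteration A* of the arc-label matrix A is exactly the shortest-path cost matrix, and computing it by successive elimination of unknowns takes the familiar Floyd-Warshall form. A hedged sketch (illustrative of the general setting, not the paper's proof apparatus):

```python
import math

INF = math.inf  # semi-ring zero for (min, +)

def star(A):
    """A* = I (+) A (+) A^2 (+) ... over the (min, +) semi-ring,
    computed by successively eliminating unknowns (Floyd-Warshall form)."""
    n = len(A)
    D = [row[:] for row in A]
    for i in range(n):
        D[i][i] = min(D[i][i], 0.0)   # semi-ring unity on the diagonal
    for k in range(n):                # eliminate unknown x_k
        for i in range(n):
            for j in range(n):
                D[i][j] = min(D[i][j], D[i][k] + D[k][j])
    return D

# Arc-label matrix of a small directed graph (INF = no arc).
A = [[INF, 1.0, 4.0],
     [INF, INF, 2.0],
     [INF, INF, INF]]
D = star(A)
```

Here `D` is the smallest solution of the matrix equation X = A ⊗ X ⊕ I in the semi-ring, matching the theorem's claim that elimination produces the least solution.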


1858 ◽  
Vol 148 ◽  
pp. 17-37 ◽  

The term matrix might be used in a more general sense, but in the present memoir I consider only square and rectangular matrices, and the term matrix used without qualification is to be understood as meaning a square matrix; in this restricted sense, a set of quantities arranged in the form of a square, e. g.

( a,  b,  c  )
( a′, b′, c′ )
( a″, b″, c″ )

is said to be a matrix. The notion of such a matrix arises naturally from an abbreviated notation for a set of linear equations, viz. the equations

X = ax + by + cz,
Y = a′x + b′y + c′z,
Z = a″x + b″y + c″z,

may be more simply represented by

( X, Y, Z ) = ( a,  b,  c  )( x, y, z ),
              ( a′, b′, c′ )
              ( a″, b″, c″ )

and the consideration of such a system of equations leads to most of the fundamental notions in the theory of matrices. It will be seen that matrices (attending only to those of the same order) comport themselves as single quantities; they may be added, multiplied or compounded together, &c.: the law of the addition of matrices is precisely similar to that for the addition of ordinary algebraical quantities; as regards their multiplication (or composition), there is the peculiarity that matrices are not in general convertible; it is nevertheless possible to form the powers (positive or negative, integral or fractional) of a matrix, and thence to arrive at the notion of a rational and integral function, or generally of any algebraical function, of a matrix. I obtain the remarkable theorem that any matrix whatever satisfies an algebraical equation of its own order, the coefficient of the highest power being unity, and those of the other powers functions of the terms of the matrix, the last coefficient being in fact the determinant; the rule for the formation of this equation may be stated in the following condensed form, which will be intelligible after a perusal of the memoir, viz. the determinant, formed out of the matrix diminished by the matrix considered as a single quantity involving the matrix unity, will be equal to zero. The theorem shows that every rational and integral function (or indeed every rational function) of a matrix may be considered as a rational and integral function, the degree of which is at most equal to that of the matrix, less unity; it even shows that in a sense, the same is true with respect to any algebraical function whatever of a matrix. One of the applications of the theorem is the finding of the general expression of the matrices which are convertible with a given matrix. The theory of rectangular matrices appears much less important than that of square matrices, and I have not entered into it further than by showing how some of the notions applicable to these may be extended to rectangular matrices.
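Cayley's "remarkable theorem" is what is now called the Cayley-Hamilton theorem: substituting a matrix into its own characteristic polynomial yields the zero matrix. A quick numerical check:

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [3.0, 4.0]])

# Characteristic polynomial coefficients, highest power first;
# for a 2x2 matrix: t^2 - trace(M)*t + det(M).
coeffs = np.poly(M)

# Substitute M for t: the result should be the zero matrix.
P = sum(c * np.linalg.matrix_power(M, len(coeffs) - 1 - k)
        for k, c in enumerate(coeffs))
```

For this M the polynomial is t² − 6t + 5, and M² − 6M + 5I is indeed zero, which is why every polynomial in M reduces to one of degree at most n − 1.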


Author(s):  
Beata Bylina ◽  
Jarosław Bylina

Influence of Preconditioning and Blocking on Accuracy in Solving Markovian Models

The article considers the effectiveness of various methods used to solve systems of linear equations (which emerge while modeling computer networks and systems with Markov chains) and the practical influence of the applied methods on accuracy. The paper considers some hybrids of both direct and iterative methods. Two varieties of Gauss elimination are considered as examples of direct methods: the LU factorization method and the WZ factorization method. The Gauss-Seidel iterative method is also discussed. The paper further examines preconditioning (with the use of incomplete Gauss elimination) and dividing the matrix into blocks, where the blocks are solved by direct methods. The motivation for such hybrids is the very high condition number (which is bad) of the coefficient matrices occurring in Markov chains and, thus, the slow convergence of traditional iterative methods. The blocking, the preconditioning and their combination are analysed. The paper presents the impact of the combined methods on both the time and the accuracy of finding the probability vector. The results of an experiment are given for two groups of matrices: those derived from some very abstract Markovian models, and those from a general 2D Markov chain.
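Of the methods compared above, Gauss-Seidel is the simplest to state: each sweep updates the components one at a time, reusing updated values immediately. A minimal sketch (the paper's hybrids add preconditioning and blocking on top of this basic iteration):

```python
import numpy as np

def gauss_seidel(A, b, sweeps=50):
    """One component at a time, reusing updated values immediately."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):
            s = A[i] @ x - A[i, i] * x[i]   # sum over j != i
            x[i] = (b[i] - s) / A[i, i]
    return x

# Diagonally dominant example, where Gauss-Seidel converges; solution (1, 2).
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 12.0])
x = gauss_seidel(A, b)
```

On the ill-conditioned singular-like matrices arising from Markov chains this plain iteration converges slowly, which is precisely the motivation the abstract gives for preconditioning and blocking.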


1859 ◽  
Vol 9 ◽  
pp. 100-101 ◽  

The term matrix might be used in a more general sense, but in the present memoir I consider only square and rectangular matrices, and the term matrix used without qualification is to be understood as meaning a square matrix; in this restricted sense, a set of quantities arranged in the form of a square, e. g.

( a,  b,  c  )
( a′, b′, c′ )
( a″, b″, c″ )

is said to be a matrix. The notion of such a matrix arises naturally from an abbreviated notation for a set of linear equations, viz. the equations

X = ax + by + cz,
Y = a′x + b′y + c′z,
Z = a″x + b″y + c″z,

may be more simply represented by

( X, Y, Z ) = ( a,  b,  c  )( x, y, z ),
              ( a′, b′, c′ )
              ( a″, b″, c″ )

and the consideration of such a system of equations leads to most of the fundamental notions in the theory of matrices. It will be seen that matrices (attending only to those of the same degree) comport themselves as single quantities; they may be added, multiplied, or compounded together, &c.: the law of the addition of matrices is precisely similar to that for the addition of ordinary algebraical quantities; as regards their multiplication (or composition), there is the peculiarity that matrices are not in general convertible; it is nevertheless possible to form the powers (positive or negative, integral or fractional) of a matrix, and thence to arrive at the notion of a rational and integral function, or generally of any algebraical function of a matrix. I obtain the remarkable theorem that any matrix whatever satisfies an algebraical equation of its own order, the coefficient of the highest power being unity, and those of the other powers functions of the terms of the matrix, the last coefficient being in fact the determinant. The rule for the formation of this equation may be stated in the following condensed form, which will be intelligible after a perusal of the memoir, viz. the determinant, formed out of the matrix diminished by the matrix considered as a single quantity involving the matrix unity, will be equal to zero. The theorem shows that every rational and integral function (or indeed every rational function) of a matrix may be considered as a rational and integral function, the degree of which is at most equal to that of the matrix, less unity; it even shows that in a sense, the same is true with respect to any algebraical function whatever of a matrix. One of the applications of the theorem is the finding of the general expression of the matrices which are convertible with a given matrix. The theory of rectangular matrices appears much less important than that of square matrices, and I have not entered into it further than by showing how some of the notions applicable to these may be extended to rectangular matrices.


Author(s):  
R. Chen ◽  
A.C. Ward

Abstract Interval arithmetic has been extensively applied to systems of linear equations by the interval matrix arithmetic community. This paper demonstrates through simple examples that some of this work can be viewed as particular instantiations of an abstract "design operation," the RANGE operation of the Labeled Interval Calculus formalism for inference about sets of possibilities in design. These particular operations promise to solve a variety of design problems that lay beyond the reach of the original Labeled Interval Calculus. However, the abstract view also leads to a new operation, apparently overlooked by the matrix mathematics community, that should also be useful in design; the paper provides an algorithm for computing it.
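As a point of reference, the basic operations the interval matrix community builds on: an interval [a, b] encloses every real between its endpoints, and arithmetic is defined so the result encloses every pointwise outcome. A minimal sketch (illustrative only; production interval libraries also handle outward rounding):

```python
def iadd(x, y):
    """[a,b] + [c,d] = [a+c, b+d]"""
    return (x[0] + y[0], x[1] + y[1])

def imul(x, y):
    """[a,b] * [c,d]: take min/max over all four endpoint products."""
    p = [x[0]*y[0], x[0]*y[1], x[1]*y[0], x[1]*y[1]]
    return (min(p), max(p))

# Enclose one row of an interval linear system evaluated at a point vector:
# every point choice of coefficients inside the intervals lies in the result.
row = [(1.9, 2.1), (0.9, 1.1)]      # interval coefficients of one equation
x = [(1.0, 1.0), (1.0, 1.0)]        # a candidate (point) solution
acc = (0.0, 0.0)
for a, xi in zip(row, x):
    acc = iadd(acc, imul(a, xi))
```

Operations like this, lifted to whole matrices, are what the RANGE operation of the Labeled Interval Calculus abstracts over sets of design possibilities.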


2010 ◽  
Vol 51 ◽  
Author(s):  
Stasys Rutkauskas ◽  
Igor Saburov

A system of ordinary second-order linear differential equations with a singular point is considered. The distinctive feature of the case studied is that the system of eigenvectors of the matrix coupling the equations is not complete. This raises the question of how to state a weighted boundary value problem for such a system. A well-posed boundary value problem is proposed in the article, and the existence and uniqueness of its solution are proved.


MATEMATIKA ◽  
2018 ◽  
Vol 34 (3) ◽  
pp. 25-32
Author(s):  
Siti Nor Asiah Isa ◽  
Nor’aini Aris ◽  
Shazirawati Mohd Puzi ◽  
Yeak Su Hoe

This paper revisits the comrade matrix approach to finding the greatest common divisor (GCD) of two orthogonal polynomials. The present work investigates the application of QR decomposition with iterative refinement (QRIR) to solve certain systems of linear equations generated from the comrade matrix. Besides iterative refinement, an alternative approach to improving the conditioning of the coefficient matrix, normalizing its columns, is also considered. As expected, the results reveal that QRIR improves the solutions given by QR decomposition, while normalizing the matrix entries does improve the conditioning of the coefficient matrix, leading to good approximate solutions of the GCD.
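The two ingredients named above can be sketched directly: solve with a QR factorization, refine by solving for the residual correction, and normalize columns to improve conditioning. A minimal sketch in Python (the function and variable names are illustrative, not the authors' code):

```python
import numpy as np

def qr_iterative_refinement(A, b, steps=3):
    """Solve Ax = b by QR, then refine: r = b - Ax, solve A d = r, x += d."""
    Q, R = np.linalg.qr(A)
    x = np.linalg.solve(R, Q.T @ b)
    for _ in range(steps):
        r = b - A @ x                       # residual of current solution
        d = np.linalg.solve(R, Q.T @ r)     # correction from the residual
        x = x + d
    return x

# Normalizing the columns improves the conditioning of the coefficient matrix.
A = np.array([[1.0, 1000.0], [2.0, 1.0]])
scale = np.linalg.norm(A, axis=0)
An = A / scale
b = np.array([1001.0, 3.0])                 # exact solution of Ax = b is (1, 1)
y = qr_iterative_refinement(An, b)
x = y / scale                               # undo the column scaling
```

Scaling by the column norms replaces A with a matrix whose columns have unit length, which is the normalization step the abstract credits with the improved conditioning behavior.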

