Solution of Linear Equations With Coefficients and Right-Hand Members in An Arithmetic Sequence

1977 ◽  
Vol 70 (2) ◽  
pp. 170-172
Author(s):  
Murli M. Gupta
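Only the title of this article is reproduced here, but the kind of system it names is easy to illustrate. A quick numerical check (a sketch of mine, not Gupta's derivation) shows that when each equation's coefficients together with its right-hand member form a single arithmetic progression, the vector (0, …, 0, −1, 2) solves the system exactly, since −(c + (n−2)d) + 2(c + (n−1)d) = c + nd for any first term c and common difference d.

```python
import numpy as np

# Each row's coefficients are c, c+d, ..., c+(n-1)d, and its right-hand
# member continues the progression as c+n*d (an assumed reading of the title).
rng = np.random.default_rng(0)
n = 5
starts = rng.normal(size=n)   # first term c of each row's progression
steps = rng.normal(size=n)    # common difference d of each row
A = starts[:, None] + steps[:, None] * np.arange(n)  # coefficient matrix
b = starts + steps * n                               # right-hand members

# Candidate solution x = (0, ..., 0, -1, 2): check it satisfies A x = b.
x = np.zeros(n)
x[-2:] = [-1.0, 2.0]
print(np.allclose(A @ x, b))  # → True
```

Note that for n ≥ 3 the rows span at most a two-dimensional space, so this is one solution among infinitely many.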

A general solution of a problem in linear algebra.

1966 ◽  
Vol 9 (05) ◽  
pp. 757-801 ◽  
Author(s):  
W. Kahan

The primordial problems of linear algebra are the solution of a system of linear equations and the solution of the eigenvalue problem: finding the eigenvalues λk and the corresponding eigenvectors of a given matrix A.
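The two primordial problems named above can be stated in a few lines of NumPy (my sketch; the 1966 article of course predates any such library).

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Problem 1: solve the linear system A x = b.
x = np.linalg.solve(A, b)

# Problem 2: the eigenvalue problem A v_k = lambda_k v_k.
eigvals, eigvecs = np.linalg.eig(A)

print(x)        # solution of A x = b
print(eigvals)  # eigenvalues lambda_k of A
```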


2016 ◽  
Vol 8 (2) ◽  
pp. 156
Author(s):  
Marta Graciela Caligaris ◽  
María Elena Schivo ◽  
María Rosa Romiti

In engineering programs, the study of Linear Algebra begins in the first course. Topics included in this subject are systems of linear equations and vector spaces. Linear Algebra is very useful, but it can be very abstract to teach and learn. In an attempt to reduce learning difficulties, different approaches to teaching activities supported by interactive tools were analyzed. This paper presents these tools, designed with GeoGebra for the Algebra and Analytic Geometry course at the Facultad Regional San Nicolás (FRSN), Universidad Tecnológica Nacional (UTN), Argentina.


Author(s):  
A. Myasishchev ◽  
S. Lienkov ◽  
V. Dzhulii ◽  
I. Muliar

Research goals and objectives: the purpose of the article is to study the feasibility of using graphics processors to solve systems of linear equations and to compute matrix products, as compared with conventional multi-core processors. The peculiarities of using the MAGMA and CUBLAS libraries on various graphics processors are considered. A performance comparison is made between the Tesla C2075 and GeForce GTX 480 GPUs and a six-core AMD processor.

Subject of research: software is developed on the basis of the MAGMA and CUBLAS libraries to study the performance of the NVIDIA Tesla C2075 and GeForce GTX 480 GPUs in solving systems of linear equations and computing matrix products.

Research methods used: libraries were used to parallelize the solution of linear algebra problems: for GPUs, MAGMA and CUBLAS; for multi-core processors, ScaLAPACK and ATLAS. To study operational speed, methods and algorithms for parallelizing computational procedures similar to those of these libraries are used. A software module has been developed for solving systems of linear equations and computing matrix products on parallel systems.

Results of the research: it has been determined that, for double-precision numbers, the performance of the GeForce GTX 480 and Tesla C2075 GPUs is approximately 3.5 and 6.3 times higher, respectively, than that of the AMD CPU, and that the GeForce GTX 480 is 1.3 times faster than the Tesla C2075 for single-precision numbers. To achieve maximum performance on an NVIDIA CUDA GPU, the MAGMA or CUBLAS libraries should be used; they accelerate the calculations by about 6.4 times compared with the traditional programming method. It has also been determined that, when solving systems of equations on a 6-core CPU with the ScaLAPACK and ATLAS libraries, a maximum acceleration of only 3.24 times over a single core is achievable, instead of the theoretical 6-fold speedup. Therefore, processors with a large number of cores cannot be used efficiently with the considered libraries. It is demonstrated that the advantage of the GPU over the CPU increases with the number of equations.
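The two kernels being benchmarked can be sketched on the CPU side with NumPy's LAPACK/BLAS bindings (my stand-in for the multi-core baseline; on the GPU the article uses MAGMA and CUBLAS, e.g. LAPACK-style dgesv for the solve and BLAS dgemm for the product).

```python
import time
import numpy as np

def benchmark(n, seed=1):
    """Time a dense linear solve (dgesv-like path) and a matrix
    multiplication (dgemm-like path) on n x n random matrices."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)   # linear system A x = b
    t1 = time.perf_counter()
    C = A @ B                   # matrix product
    t2 = time.perf_counter()

    residual = float(np.linalg.norm(A @ x - b))
    return residual, t1 - t0, t2 - t1

residual, t_solve, t_gemm = benchmark(500)
print(f"residual {residual:.2e}, solve {t_solve:.4f}s, gemm {t_gemm:.4f}s")
```

A GPU version would replace the two kernel calls with their MAGMA/CUBLAS counterparts and time the same sizes; as the abstract notes, the GPU's advantage grows with n.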


2006 ◽  
Vol 11 (2) ◽  
pp. 123-136 ◽  
Author(s):  
A. G. Akritas ◽  
G. I. Malaschonok ◽  
P. S. Vigklas

Given an m × n matrix A, with m ≥ n, the four subspaces associated with it are shown in Fig. 1 (see [1]). Fig. 1. The row spaces and the nullspaces of A and AT; a1 through an and h1 through hm are abbreviations of the alignerframe and hangerframe vectors, respectively (see [2]). The Fundamental Theorem of Linear Algebra tells us that N(A) is the orthogonal complement of R(AT). These four subspaces tell the whole story of the linear system Ax = y. So, for example, the triviality of N(AT) indicates that a solution always exists, whereas the triviality of N(A) indicates that this solution is unique. Given the importance of these subspaces, computing bases for them is the gist of Linear Algebra. In "classical" Linear Algebra, bases for these subspaces are computed using Gaussian elimination; they are orthonormalized with the help of the Gram-Schmidt method. Continuing our previous work [3] and following Uhl's excellent approach [2], we use SVD analysis to compute orthonormal bases for the four subspaces associated with A and give a 3D explanation. We then state and prove what we call the "SVD-Fundamental Theorem" of Linear Algebra, and apply it in solving systems of linear equations.
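The SVD-based construction of the four subspaces can be sketched as follows (my illustration, not the authors' code): with A = UΣVᵀ and numerical rank r, the first r columns of U span R(A), the remaining columns span N(Aᵀ), the first r columns of V span R(Aᵀ), and the remaining columns span N(A).

```python
import numpy as np

def four_subspaces(A, tol=1e-10):
    """Orthonormal bases for the four fundamental subspaces of A via the SVD."""
    U, s, Vt = np.linalg.svd(A)        # full SVD: U is m x m, Vt is n x n
    r = int(np.sum(s > tol))           # numerical rank of A
    return {
        "R(A)":   U[:, :r],            # column space
        "N(A^T)": U[:, r:],            # left nullspace
        "R(A^T)": Vt[:r].T,            # row space
        "N(A)":   Vt[r:].T,            # nullspace
    }

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])             # rank 1, so N(A) and N(A^T) are nontrivial
S = four_subspaces(A)
print({k: v.shape for k, v in S.items()})
```

Because the singular vectors are already orthonormal, no separate Gram-Schmidt pass is needed, which is precisely the advantage over the "classical" Gaussian-elimination route described above.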


2008 ◽  
Vol 49 (1-3) ◽  
pp. 147-160 ◽  
Author(s):  
Håvard Raddum ◽  
Igor Semaev