A package for processing systems of linear algebraic equations with sparse matrices

1978 ◽  
Vol 18 (6) ◽  
pp. 202-211
Author(s):  
A.Yu. Erëmin ◽  
N.Ya. Mar'yashkin
2020 ◽  
pp. 208-217
Author(s):  
O.M. Khimich ◽  
V.A. Sydoruk ◽  
A.N. Nesterenko ◽  
...  

Systems of nonlinear equations often arise when modeling processes of different nature. They can be independent problems describing physical processes, or problems arising at an intermediate stage of solving more complex mathematical problems. Usually these are high-order problems with a large number of unknowns, which better capture the local features of the process or object being modeled. In addition, more accurate discrete models allow more accurate solutions to be obtained. The matrices of such problems usually have a sparse structure, often one of the following: banded, profile, block-diagonal with bordering, etc. In many cases, the matrices of the discrete problems are symmetric and positive definite or semi-definite. Systems of nonlinear equations are solved mainly by iterative methods based on Newton's method, which has a high (quadratic) convergence rate near the solution, provided that the initial approximation lies in the basin of attraction of the solution. At each iteration the method requires computing the Jacobian matrix and then solving a system of linear algebraic equations, so a single iteration is computationally expensive. Using parallel computations at the stage of solving the systems of linear algebraic equations greatly accelerates the process of finding the solution of systems of nonlinear equations. In the paper, a new method for solving high-order systems of nonlinear equations with a block Jacobian matrix is proposed. The basis of the new method is to combine the classical Newton algorithm with an efficient small-tile algorithm for solving systems of linear equations with sparse matrices. The times of solving systems of nonlinear equations of different orders on the nodes of the SKIT supercomputer are given.
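
A minimal sketch of the overall Newton scheme described above, under stated assumptions: the residual F, the Jacobian J, and the tridiagonal test problem below are hypothetical, and SciPy's sparse direct solver stands in for the paper's parallel small-tile algorithm.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def newton_sparse(F, J, x0, tol=1e-10, max_iter=50):
    """Newton iteration with a sparse linear solve at every step."""
    x = x0.astype(float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        dx = spla.spsolve(J(x).tocsc(), -r)   # the linear solve dominates the cost
        x += dx
    return x

# Illustrative tridiagonal nonlinear system (hypothetical test problem):
#   x_i - 0.1 * x_{i-1} * x_{i+1} = 1,  with x_0 = x_{n+1} = 0.
n = 1000

def F(x):
    xm = np.concatenate(([0.0], x[:-1]))   # x_{i-1}
    xp = np.concatenate((x[1:], [0.0]))    # x_{i+1}
    return x - 0.1 * xm * xp - 1.0

def J(x):
    xm = np.concatenate(([0.0], x[:-1]))
    xp = np.concatenate((x[1:], [0.0]))
    lower = -0.1 * xp[1:]                  # dF_i/dx_{i-1}
    upper = -0.1 * xm[:-1]                 # dF_i/dx_{i+1}
    return sp.diags([lower, np.ones(n), upper], offsets=[-1, 0, 1], format='csr')

x = newton_sparse(F, J, np.ones(n))
print(np.linalg.norm(F(x)))                # residual norm after the iteration
```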


2021 ◽  
Vol 2099 (1) ◽  
pp. 012005
Author(s):  
V P Il’in ◽  
D I Kozlov ◽  
A V Petukhov

Abstract: The objective of this research is to develop and to study iterative methods in the Krylov subspaces for solving systems of linear algebraic equations (SLAEs) with non-symmetric sparse matrices of high orders arising in the approximation of multi-dimensional boundary value problems on unstructured grids. These methods are also relevant in many applications, including diffusion-convection equations. The considered algorithms are based on constructing AᵀA-orthogonal direction vectors calculated using short recursions and providing global minimization of the residual at each iteration. Methods based on the Lanczos orthogonalization, the Aᵀ-preconditioned conjugate residual algorithm, as well as the left Gauss transform for the original SLAEs are implemented. In addition, the efficiency of these iterative processes is investigated when solving algebraically preconditioned systems using an approximate factorization of the original matrix in the Eisenstat modification. The results of a set of computational experiments for various grids and values of convective coefficients are presented, which demonstrate a sufficiently high efficiency of the approaches under consideration.
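
As a rough illustration of the AᵀA-orthogonal direction idea (not the authors' exact algorithms), the CGLS-type sketch below builds its search directions by short recurrences and minimizes ||b − Ax|| at every iteration; the non-symmetric convection-diffusion-like test matrix is an assumption for demonstration only.

```python
import numpy as np
import scipy.sparse as sp

def cgls(A, b, tol=1e-10, max_iter=2000):
    """CGLS: conjugate gradients applied to the normal equations AᵀA x = Aᵀb.
    The search directions are AᵀA-orthogonal, built by short recurrences,
    and each step minimizes ||b - Ax|| over the growing Krylov subspace."""
    x = np.zeros(A.shape[1])
    r = b - A @ x                  # residual of the original system
    s = A.T @ r                    # residual of the normal equations
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p   # keeps the directions AᵀA-orthogonal
        gamma = gamma_new
    return x

# Assumed test case: a non-symmetric 1D convection-diffusion discretization.
n, c = 100, 5.0
A = sp.diags([(-1.0 - c / 2) * np.ones(n - 1),
              2.0 * np.ones(n),
              (-1.0 + c / 2) * np.ones(n - 1)],
             offsets=[-1, 0, 1], format='csr')
b = np.ones(n)
x = cgls(A, b)
print(np.linalg.norm(b - A @ x))   # residual norm of the computed solution
```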


Geophysics ◽  
1987 ◽  
Vol 52 (2) ◽  
pp. 179-185 ◽  
Author(s):  
John A. Scales

Tomographic inversion of seismic traveltime residuals is now an established and widely used technique for imaging the Earth’s interior. This inversion procedure results in large, but sparse, rectangular systems of linear algebraic equations; in practice there may be tens or even hundreds of thousands of simultaneous equations. This paper applies the classic conjugate gradient algorithm of Hestenes and Stiefel to the least‐squares solution of large, sparse systems of traveltime equations. The conjugate gradient method is fast, accurate, and easily adapted to take advantage of the sparsity of the matrix. The techniques necessary for manipulating sparse matrices are outlined in the Appendix. In addition, the results of the conjugate gradient algorithm are compared to results from two of the more widely used tomographic inversion algorithms.
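
A hedged sketch of the workflow: a large, sparse, rectangular system of traveltime equations stored in compressed sparse row format and solved in the least-squares sense. The ray geometry and noise level below are synthetic assumptions, and SciPy's LSQR, a conjugate-gradient-type least-squares solver, stands in for the Hestenes-Stiefel algorithm used in the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)

# Hypothetical tomography-style setup: each ray (row) crosses only a few of
# the slowness cells (columns), so the matrix is rectangular and very sparse.
n_rays, n_cells = 5000, 1000
rows, cols, vals = [], [], []
for i in range(n_rays):
    hit = rng.choice(n_cells, size=8, replace=False)     # cells crossed by ray i
    rows.extend([i] * 8)
    cols.extend(hit)
    vals.extend(rng.uniform(0.5, 1.5, size=8))           # path lengths in the cells
G = sp.csr_matrix((vals, (rows, cols)), shape=(n_rays, n_cells))

s_true = rng.normal(size=n_cells)                        # slowness perturbations
t = G @ s_true + 1e-3 * rng.normal(size=n_rays)          # traveltime residuals + noise

# Least-squares solution of the sparse rectangular system with LSQR.
s_est = spla.lsqr(G, t, atol=1e-8, btol=1e-8)[0]
print(np.linalg.norm(G @ s_est - t))                     # data misfit of the estimate
```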


1980 ◽  
Vol 9 (123) ◽  
Author(s):  
Ole Østerby ◽  
Zahari Zlatev

The mathematical models of many practical problems lead to systems of linear algebraic equations where the coefficient matrix is large and sparse. Typical examples are the solutions of partial differential equations by finite difference or finite element methods, but many other applications could be mentioned.

When there is a large proportion of zeros in the coefficient matrix, it is fairly obvious that we do not want to store all those zeros in the computer, but it might not be quite so obvious how to get around it. We first describe storage techniques which are convenient to use with direct solution methods, and we then show how a very efficient computational scheme can be based on Gaussian elimination and iterative refinement.

A serious problem in the storage and handling of sparse matrices is the appearance of fill-ins, i.e. new elements which are created in the process of generating zeros below the diagonal. Many of these new elements tend to be smaller than the original matrix elements, and if they are smaller than a quantity which we shall call the drop tolerance we simply ignore them. In this way we may preserve the sparsity quite well, but we probably introduce rather large errors in the LU decomposition, to the effect that the solution becomes unacceptable. In order to retrieve the accuracy we use iterative refinement, and we show theoretically and with practical experiments that it is ideal for the purpose.

Altogether, the combination of Gaussian elimination, a large drop tolerance, and iterative refinement gives a very efficient and competitive computational scheme for sparse problems. For dense matrices iterative refinement will always require more storage and computation time, and the extra accuracy it yields may not be enough to justify it. For sparse problems, however, iterative refinement combined with a large drop tolerance will in most cases give very accurate results and reliable error estimates with less storage and computation time.
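
A rough sketch of the scheme under stated assumptions: SciPy's incomplete LU factorization with a drop tolerance (spilu) stands in for Gaussian elimination that discards small fill-ins, and an iterative-refinement loop recovers the accuracy lost by dropping; the Poisson-type test matrix is purely illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_with_refinement(A, b, drop_tol=1e-3, max_refine=30, tol=1e-12):
    """Approximate LU with a drop tolerance, then iterative refinement."""
    ilu = spla.spilu(A.tocsc(), drop_tol=drop_tol)   # fill-ins below drop_tol are discarded
    x = ilu.solve(b)
    for _ in range(max_refine):
        r = b - A @ x                    # residual of the current approximation
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x += ilu.solve(r)                # correction from the approximate LU factors
    return x

# Example: 2D Poisson-type matrix (5-point stencil) on an m-by-m grid.
m = 40
I = sp.identity(m)
T = sp.diags([-1, 4, -1], offsets=[-1, 0, 1], shape=(m, m))
S = sp.diags([-1, -1], offsets=[-1, 1], shape=(m, m))
A = (sp.kron(I, T) + sp.kron(S, I)).tocsr()
b = np.ones(A.shape[0])

x = solve_with_refinement(A, b)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))   # relative residual
```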


2006 ◽  
Vol 6 (3) ◽  
pp. 264-268
Author(s):  
G. Berikelashvili ◽  
G. Karkarashvili

Abstract: A method of approximate solution of the linear one-dimensional Fredholm integral equation of the second kind is constructed. With the help of the Steklov averaging operator the integral equation is approximated by a system of linear algebraic equations. On the basis of the approximation used, a solution with an increased order of convergence has been obtained.
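
To show the structure of the resulting linear algebraic system, here is a hedged sketch that discretizes a Fredholm equation of the second kind with a simple Nyström (quadrature) rule rather than the Steklov averaging operator used in the paper; the kernel and right-hand side are chosen so that the exact solution is known.

```python
import numpy as np

# Fredholm equation of the second kind on [0, 1]:
#   u(x) - ∫_0^1 K(x, t) u(t) dt = f(x).
# With K(x, t) = x*t and f(x) = 2x/3 the exact solution is u(x) = x.
n = 64
x, w = np.polynomial.legendre.leggauss(n)      # Gauss nodes/weights on [-1, 1]
x = 0.5 * (x + 1.0)                            # map nodes to [0, 1]
w = 0.5 * w                                    # rescale weights accordingly

K = lambda s, t: s * t                         # assumed sample kernel
f = lambda s: 2.0 * s / 3.0                    # right-hand side matching u(x) = x

# Collocating at the quadrature nodes gives the linear system (I - K W) u = f.
A = np.eye(n) - K(x[:, None], x[None, :]) * w[None, :]
u = np.linalg.solve(A, f(x))
print(np.max(np.abs(u - x)))                   # error at the nodes (near machine precision)
```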


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrey A. Pil’nik ◽  
Andrey A. Chernov ◽  
Damir R. Islamov

Abstract: In this study, we developed a discrete theory of charge transport in thin dielectric films by trapped electrons or holes that is applicable both to a countable and to a large number of traps. It was shown that Shockley–Read–Hall-like transport equations, which describe 1D transport through dielectric layers, might incorrectly describe the charge flow through ultra-thin layers with a countable number of traps when the injection from and extraction to the electrodes (contacts) are taken into account. A comparison with other theoretical models shows good agreement. The developed model can be applied to one-, two- and three-dimensional systems. The model, formulated as a system of linear algebraic equations, can be implemented in computational code using different optimized libraries. We demonstrated that analytical solutions can be found for stationary cases with any trap distribution, and for the dynamics of the system evolution in special cases. These solutions can be used to test the code and to study the charge transport properties of thin dielectric films.
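
A purely illustrative sketch, not the paper's equations: a linearized 1D chain of traps with assumed nearest-neighbour hopping, injection, and extraction rates, whose stationary occupations follow from a sparse system of linear algebraic equations.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical parameters for a 1D chain of N traps between two contacts.
N = 200
w_f, w_b = 1.0, 0.4          # forward / backward hopping rates (assumed)
inj, ext = 0.05, 2.0         # injection and extraction rates (assumed)

# Linearized balance equations 0 = g + M n for the stationary occupations n:
# each trap gains carriers from its neighbours and loses them by hopping
# away; the first trap is fed by the left contact, the last one drains into
# the right contact.
main = -(w_f + w_b) * np.ones(N)       # losses by forward + backward hops
main[-1] = -(w_b + ext)                # last trap: backward hop + extraction
lower = w_f * np.ones(N - 1)           # gain of trap i+1 from trap i
upper = w_b * np.ones(N - 1)           # gain of trap i from trap i+1
M = sp.diags([lower, main, upper], offsets=[-1, 0, 1], format='csc')

g = np.zeros(N)
g[0] = inj                             # injection into the first trap

n_stat = spla.spsolve(M, -g)           # stationary trap occupations
print(n_stat[:5])
```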


2015 ◽  
Vol 4 (3) ◽  
pp. 420 ◽  
Author(s):  
Behrooz Basirat ◽  
Mohammad Amin Shahdadi

The aim of this article is to present an efficient numerical procedure for solving Lane-Emden type equations. We present two practical matrix methods for solving Lane-Emden type equations with mixed conditions by Bernstein polynomial operational matrices (BPOMs) on the interval [a, b]. These methods transform Lane-Emden type equations and the given conditions into a matrix equation which corresponds to a system of linear algebraic equations. We also give some numerical examples to demonstrate the efficiency and validity of the operational matrices for solving Lane-Emden type equations (LEEs).
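
A hedged sketch of the general approach: expand the solution in Bernstein polynomials and turn the equation plus conditions into a system of linear algebraic equations. The example below replaces the paper's operational-matrix formulation with direct collocation, and the degree and collocation points are assumptions; the test problem is the linear Lane-Emden-type equation y'' + (2/x)y' + y = 0, y(0) = 1, y'(0) = 0, whose exact solution is sin(x)/x.

```python
import numpy as np
from scipy.special import comb

# Bernstein basis on [0, 1] and its first two derivatives (standard recurrences).
def B(k, n, t):
    if k < 0 or k > n:
        return 0.0 * t
    return comb(n, k) * t**k * (1.0 - t)**(n - k)

def dB(k, n, t):
    return n * (B(k - 1, n - 1, t) - B(k, n - 1, t))

def d2B(k, n, t):
    return n * (n - 1) * (B(k - 2, n - 2, t) - 2 * B(k - 1, n - 2, t) + B(k, n - 2, t))

deg = 12                                     # assumed polynomial degree
xs = np.linspace(0.05, 1.0, deg - 1)         # interior points (avoid the x = 0 singularity)

A = np.zeros((deg + 1, deg + 1))
rhs = np.zeros(deg + 1)
for j, xj in enumerate(xs):                  # rows from the differential equation
    for k in range(deg + 1):
        A[j, k] = d2B(k, deg, xj) + (2.0 / xj) * dB(k, deg, xj) + B(k, deg, xj)
for k in range(deg + 1):                     # rows from the two conditions at x = 0
    A[deg - 1, k] = B(k, deg, 0.0)
    A[deg, k] = dB(k, deg, 0.0)
rhs[deg - 1] = 1.0                           # y(0) = 1;  y'(0) = 0 stays zero

c = np.linalg.solve(A, rhs)                  # coefficients of the Bernstein expansion

xt = 0.5
y = sum(c[k] * B(k, deg, xt) for k in range(deg + 1))
print(y, np.sin(xt) / xt)                    # the two values should agree closely
```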

