On the minimal residual methods for solving diffusion-convection SLAEs

2021, Vol. 2099 (1), pp. 012005
Author(s): V. P. Il’in, D. I. Kozlov, A. V. Petukhov

Abstract: The objective of this research is to develop and study iterative methods in Krylov subspaces for solving systems of linear algebraic equations (SLAEs) with non-symmetric sparse matrices of high order arising from the approximation of multi-dimensional boundary value problems on unstructured grids. Such methods are relevant in many applications, including diffusion-convection equations. The considered algorithms are based on constructing A^T A-orthogonal direction vectors computed with short recursions and providing global minimization of the residual at each iteration. Methods based on the Lanczos orthogonalization, the A^T-preconditioned conjugate residual algorithm, and the left Gauss transform of the original SLAE are implemented. In addition, the efficiency of these iterative processes is investigated when solving algebraically preconditioned systems obtained from an approximate factorization of the original matrix in the Eisenstat modification. The results of a set of computational experiments for various grids and values of the convective coefficients are presented, demonstrating a sufficiently high efficiency of the approaches under consideration.
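As a rough illustration of the idea of A^T A-orthogonal directions with global residual minimization, the sketch below implements a generic restarted generalized conjugate residual (GCR) iteration in Python. It is not the authors' algorithm; the function name, restart length, and tolerance are illustrative assumptions.

```python
import numpy as np

def gcr(A, b, x0=None, tol=1e-10, max_iter=200, restart=30):
    """Generalized conjugate residual sketch: the directions p_i are kept
    A^T A-orthogonal (equivalently, the vectors A p_i are mutually
    orthogonal), so each step minimizes ||b - A x|| over the current
    Krylov subspace."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    P, AP = [], []                        # stored directions and their images
    for _ in range(max_iter):
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        p, Ap = r.copy(), A @ r
        # A^T A-orthogonalize the new direction against the stored ones
        for pi, Api in zip(P, AP):
            beta = (Ap @ Api) / (Api @ Api)
            p -= beta * pi
            Ap -= beta * Api
        alpha = (r @ Ap) / (Ap @ Ap)      # residual-minimizing step length
        x += alpha * p
        r -= alpha * Ap
        P.append(p); AP.append(Ap)
        if len(P) == restart:             # restart to keep the recursions short
            P, AP = [], []
    return x
```

A truly short-recursion variant, as described in the abstract, keeps only a fixed small number of previous directions instead of restarting.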

Author(s): Vladimir N. Lutay

The solution of systems of linear algebraic equations whose matrices may be ill-conditioned or singular is considered. As a solution method, the original matrix is decomposed into triangular factors by the Gauss or Cholesky method with an additional operation that consists in increasing the small or zero diagonal entries of the triangular matrices during the decomposition process. In the first case, each scalar product calculated during the decomposition is split into two positive numbers such that the first is greater than the second and their sum equals the original value. In subsequent operations, the first number replaces the scalar product, so the value of the diagonal entry increases, while the second number is stored and used after the decomposition is completed to correct the result of the calculations. This operation increases the diagonal elements of the triangular matrices and prevents the appearance of very small numbers in the Gauss method and of a negative expression under the square root in the Cholesky method. If the matrix is singular, the computed diagonal element is zero, and an arbitrary positive number is added to it. This makes it possible to complete the decomposition and to compute the pseudo-inverse matrix by the Greville method. The results of computational experiments are presented.
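The diagonal-boosting idea can be sketched as follows. This hypothetical Python fragment raises small or negative pivots during a Cholesky-type factorization, records the boosts, and then recovers accuracy with a few iterative-refinement steps against the original matrix; the refinement step is a generic substitute for the authors' stored-correction scheme, and the threshold `delta` is an illustrative choice.

```python
import numpy as np

def boosted_cholesky(A, delta=1e-8):
    """Cholesky of A with small or negative pivots raised to `delta`.
    Returns the factor L of A + E (E diagonal) and the applied boosts."""
    n = A.shape[0]
    L = np.zeros_like(A, dtype=float)
    boost = np.zeros(n)
    for j in range(n):
        d = A[j, j] - L[j, :j] @ L[j, :j]
        if d < delta:                      # pivot too small: raise it and remember
            boost[j] = delta - d
            d = delta
        L[j, j] = np.sqrt(d)
        for i in range(j + 1, n):
            L[i, j] = (A[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L, boost

def solve_with_refinement(A, b, n_ref=3):
    """Solve with the boosted factor of A + E, then recover accuracy for the
    original A by a few steps of iterative refinement with the true residual."""
    L, _ = boosted_cholesky(A)
    solve = lambda rhs: np.linalg.solve(L.T, np.linalg.solve(L, rhs))
    x = solve(b)
    for _ in range(n_ref):
        x += solve(b - A @ x)              # correct using the residual of A, not A + E
    return x
```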


2020, Vol. 28 (2), pp. 149-159
Author(s): Jiří Kopal, Miroslav Rozložník, Miroslav Tůma

Abstract: The problem of solving large-scale systems of linear algebraic equations arises in a wide range of applications. In many cases a preconditioned iterative method is the method of choice. This paper deals with the approximate inverse preconditioning AINV/SAINV based on the incomplete generalized Gram–Schmidt process. This type of approximate inverse preconditioning has been repeatedly used for matrix diagonalization in the computation of electronic structures, but approximating inverses is also of interest in parallel computations in general. Our approach uses adaptive dropping of the matrix entries with control based on the computed intermediate quantities. This strategy was introduced as a way to solve difficult application problems and is motivated by recent theoretical results on the loss of orthogonality in the generalized Gram–Schmidt process. Nevertheless, there are more aspects of the approach that need to be better understood. The diagonal pivoting based on a rough estimation of the condition numbers of the leading principal submatrices can sometimes provide inefficient preconditioners. This short study proposes another type of pivoting, namely, pivoting that exploits incremental condition estimation based on monitoring both the direct and the inverse factors of the approximate factorization. Such pivoting remains rather cheap, and in many cases it can provide a more reliable preconditioner. Numerical examples from real-world problems, small enough to enable a full analysis, are used to illustrate the potential gains of the new approach.
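A minimal dense sketch of the underlying A-orthogonalization (SAINV-type) process with a fixed drop threshold is given below. The adaptive dropping and the pivoting strategies discussed in the paper are omitted; the function name and the threshold are illustrative assumptions, and a symmetric positive definite A is assumed.

```python
import numpy as np

def sainv(A, drop_tol=1e-2):
    """Stabilized approximate-inverse factorization A^{-1} ~= Z D^{-1} Z^T
    via a generalized (A-orthogonal) Gram-Schmidt process with entry dropping."""
    n = A.shape[0]
    Z = np.eye(n)
    d = np.zeros(n)
    for i in range(n):
        v = A @ Z[:, i]                    # pivot column in the A-inner product
        d[i] = v @ Z[:, i]
        for j in range(i + 1, n):
            alpha = (v @ Z[:, j]) / d[i]
            Z[:, j] -= alpha * Z[:, i]
            Z[np.abs(Z[:, j]) < drop_tol, j] = 0.0   # fixed-threshold dropping
    return Z, d

# Applying the preconditioner to a residual r:  M_inv(r) = Z @ ((Z.T @ r) / d)
```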


2020, pp. 208-217
Author(s): O.M. Khimich, V.A. Sydoruk, A.N. Nesterenko, ...

Systems of nonlinear equations often arise when modeling processes of different nature. These can be independent problems describing physical processes or problems arising at an intermediate stage of solving more complex mathematical problems. Usually, these are high-order problems with a large number of unknowns, which better take into account the local features of the process or object being modeled. In addition, more accurate discrete models allow for more accurate solutions. Usually, the matrices of such problems have a sparse structure. Often the structure of the sparse matrices is one of the following: band, profile, block-diagonal with bordering, etc. In many cases, the matrices of the discrete problems are symmetric and positive definite or semi-definite. Systems of nonlinear equations are solved mainly by iterative methods based on the Newton method, which has a high (quadratic) convergence rate near the solution, provided that the initial approximation lies in the basin of attraction of the solution. In this case, the method requires calculating the Jacobian matrix and then solving a system of linear algebraic equations at each iteration, so a single iteration is computationally expensive. Using parallel computations at the stage of solving the systems of linear algebraic equations greatly accelerates the process of finding the solution of the systems of nonlinear equations. In this paper, a new method for solving high-order systems of nonlinear equations with a block Jacobian matrix is proposed. The basis of the new method is the combination of the classical Newton algorithm with an efficient small-tile algorithm for solving systems of linear equations with sparse matrices. The times of solving systems of nonlinear equations of different orders on the nodes of the SKIT supercomputer are given.
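A generic Newton iteration with a sparse linear solve at each step might look like the sketch below. Here F and J are user-supplied callbacks (J returning a SciPy sparse matrix), and the authors' tile-based parallel solver is replaced by a plain sparse direct solve for illustration only.

```python
import numpy as np
from scipy.sparse.linalg import spsolve   # sparse direct solve per Newton step

def newton_sparse(F, J, x0, tol=1e-10, max_iter=50):
    """Newton iteration for F(x) = 0: each step solves the sparse linear
    system J(x) dx = -F(x); convergence is quadratic near the solution."""
    x = x0.copy()
    for _ in range(max_iter):
        f = F(x)
        if np.linalg.norm(f) < tol:
            break
        dx = spsolve(J(x), -f)             # the step that dominates the cost
        x += dx
    return x
```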


Author(s): I. A. Klimonov, V. D. Korneev, V. M. Sveshnikov

This paper is devoted to accelerating the parallel solution of three-dimensional boundary value problems by decomposition of the computational domain into subdomains conjugated without overlapping. The decomposition is performed by a uniform parallelepipedal macrogrid. In each subdomain and on the conjugation boundary (interface), structured subgrids are constructed. The union of these subgrids forms a quasi-structured grid on which the problem is solved. The parallelization is carried out using MPI technology. We propose and experimentally study an algorithm for accelerating the external iterative process over subdomains for solving the system of linear algebraic equations that approximates the Poincare-Steklov equation on the interface. A series of numerical experiments is carried out on various quasi-structured grids and with various parameters of the computational algorithms, showing the acceleration of the computations.
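The interface (Schur complement) iteration at the core of such non-overlapping decompositions can be sketched as follows. This serial Python fragment assumes a symmetric positive definite problem so that conjugate gradients can serve as the outer interface iteration; the block names and the structure of the inputs are illustrative assumptions, whereas the paper's implementation is parallel over MPI and works on quasi-structured grids.

```python
from scipy.sparse.linalg import LinearOperator, cg, splu

def schur_interface_solve(A_II_list, A_IG_list, A_GI_list, A_GG, b_I_list, b_G):
    """Non-overlapping decomposition sketch: eliminate subdomain interiors and
    iterate on the interface (Schur complement) system
        S u_G = b_G - sum_k A_GI_k A_II_k^{-1} b_I_k,
        S     = A_GG - sum_k A_GI_k A_II_k^{-1} A_IG_k.
    Each Schur matvec costs one interior solve per subdomain."""
    lu = [splu(A.tocsc()) for A in A_II_list]            # factor each interior block once
    g = b_G - sum(Agi @ lu_k.solve(b_i)
                  for Agi, lu_k, b_i in zip(A_GI_list, lu, b_I_list))

    def S_mv(v):
        return A_GG @ v - sum(Agi @ lu_k.solve(Aig @ v)
                              for Agi, Aig, lu_k in zip(A_GI_list, A_IG_list, lu))

    S = LinearOperator((len(b_G), len(b_G)), matvec=S_mv)
    u_G, _ = cg(S, g)                                    # outer interface iteration
    u_I = [lu_k.solve(b_i - Aig @ u_G)                   # back-substitute the interiors
           for lu_k, Aig, b_i in zip(lu, A_IG_list, b_I_list)]
    return u_I, u_G
```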


Geophysics, 1987, Vol. 52 (2), pp. 179-185
Author(s): John A. Scales

Tomographic inversion of seismic traveltime residuals is now an established and widely used technique for imaging the Earth’s interior. This inversion procedure results in large, but sparse, rectangular systems of linear algebraic equations; in practice there may be tens or even hundreds of thousands of simultaneous equations. This paper applies the classic conjugate gradient algorithm of Hestenes and Stiefel to the least‐squares solution of large, sparse systems of traveltime equations. The conjugate gradient method is fast, accurate, and easily adapted to take advantage of the sparsity of the matrix. The techniques necessary for manipulating sparse matrices are outlined in the Appendix. In addition, the results of the conjugate gradient algorithm are compared to results from two of the more widely used tomographic inversion algorithms.
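A compact CGLS (conjugate gradient least squares) iteration of the kind described might be sketched as follows; it uses only products with A and A^T, so the sparsity of the traveltime matrix can be exploited directly. The iteration count and tolerance are illustrative, and this is a generic sketch rather than the paper's code.

```python
import numpy as np

def cgls(A, b, n_iter=100, tol=1e-8):
    """Conjugate gradient least squares: minimizes ||b - A x||_2 for a large,
    sparse, rectangular A without forming the normal equations explicitly."""
    x = np.zeros(A.shape[1])
    r = b.copy()                 # residual b - A x
    s = A.T @ r                  # gradient of the least-squares functional
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

The same routine works unchanged when A is a SciPy sparse matrix, since only matrix-vector products are required.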


Author(s): Ya. L. Gurieva, V. P. Il’in

One of the main obstacles to the scalable parallelization of algebraic decomposition methods for solving very large sparse systems of linear algebraic equations (SLAEs) is the slowdown of the convergence rate of the additive iterative Schwarz algorithm in Krylov subspaces as the number of subdomains increases. The aim of this paper is a comparative experimental analysis of various ways to accelerate the iterations: a parameterized intersection (overlap) of subdomains, the use of special interface conditions on the boundaries of adjacent subdomains, and the application of a coarse-grid correction (aggregation, or reduction) of the original SLAE to build an additional preconditioner. The parallelization of the algorithms is performed on two levels by programming tools for distributed and shared memory. The benchmark SLAEs are formed using finite difference approximations of the Dirichlet problem for the diffusion-convection equation with various values of the convection coefficients on a sequence of successively refined grids.
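A serial sketch of a two-level additive Schwarz preconditioner with coarse-grid correction (aggregation) is shown below. It uses dense local solves and a dense matrix A for clarity, and the subdomain index sets and the coarse restriction R0 are assumed inputs; the paper's algorithms are distributed over MPI and applied inside Krylov iterations.

```python
import numpy as np

def two_level_additive_schwarz(A, subdomains, R0):
    """Additive Schwarz with coarse-grid correction (dense sketch):
    M^{-1} = R0^T (R0 A R0^T)^{-1} R0 + sum_k R_k^T A_k^{-1} R_k,
    where `subdomains` is a list of (possibly overlapping) index arrays and
    R0 is a coarse restriction (aggregation) matrix."""
    local_inv = [(idx, np.linalg.inv(A[np.ix_(idx, idx)])) for idx in subdomains]
    A0_inv = np.linalg.inv(R0 @ A @ R0.T)          # coarse (aggregated) operator

    def apply(r):
        z = R0.T @ (A0_inv @ (R0 @ r))             # coarse-grid correction
        for idx, Ak_inv in local_inv:
            z[idx] += Ak_inv @ r[idx]              # local subdomain solves
        return z
    return apply
```

The returned `apply` function would be passed to a Krylov solver as the preconditioner; the coarse term is what keeps the iteration count from growing with the number of subdomains.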


1980, Vol. 9 (123)
Author(s): Ole Østerby, Zahari Zlatev

<p>The mathematical models of many practical problems lead to systems of linear algebraic equations where the coefficient matrix is large and sparse. Typical examples are the solutions of partial differential equations by finite difference or finite element methods but many other applications could be mentioned.</p><p>When there is a large proportion of zeros in the coefficient matrix then it is fairly obvious that we do not want to store all those zeros in the computer, but it might not be quite so obvious how to get around it. We first describe storage techniques which are convenient to use with direct solution methods, and we then show how a very efficient computational scheme can be based on Gaussian elimination and iterative refinement.</p><p>A serious problem in the storage and handling of sparse matrices is the appearance of fill-ins, i.e. new elements which are created in the process of generating zeros below the diagonal. Many of these new elements tend to be smaller than the original matrix elements, and if they are smaller than a quantity which we shall call the drop tolerance we simply ignore them. In this way we may preserve the sparsity quite well but we probably introduce rather large errors in the LU decomposition to the effect that the solution becomes unacceptable. In order to retrieve the accuracy we use iterative refinement and we show theoreticaly and with practical experiments that it is ideal for the purpose.</p><p>Altogether, the combination of Gaussian elimination, a large drop tolerance, and iterative refinement gives a very efficient and competitive computational scheme for sparse problems. For dense matrices iterative refinement will always require more storage and computation time, and the extra accuracy it yields may not be enough to justify it. For sparse problems, however, iterative refinement combined with a large drop tolerance will in most cases give very accurate results and reliable error estimates with less storage and computation time.</p>


2013, Vol. 2013, pp. 1-13
Author(s): Xiangyu Wang, Song Cen, Chenfeng Li

An acceleration technique, termed generalized Neumann expansion (GNE), is presented for evaluating the responses of uncertain systems. The GNE method, which solves stochastic linear algebraic equations arising in stochastic finite element analysis, is easy to implement and highly efficient. The convergence condition of the new method is studied, and a rigorous error estimator is proposed to evaluate the upper bound of the relative error of a given GNE solution. It is found that the third-order GNE solution is sufficient to achieve good accuracy even when the variation of the source stochastic field is relatively high. The relationship between the GNE method, the perturbation method, and the standard Neumann expansion method is also discussed. Based on the links between these three methods, quantitative error estimates for the perturbation method and the standard Neumann method are obtained for the first time in a probabilistic context.
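The standard (non-generalized) Neumann expansion that the GNE method builds on can be sketched as follows; the single-perturbation form shown here is an illustration under the assumption that the spectral radius of A0^{-1} dA is below one, not the paper's generalized expansion with multiple random terms.

```python
import numpy as np

def neumann_expansion_solve(A0, dA, b, order=3):
    """Standard Neumann expansion for (A0 + dA) x = b:
        x ~= sum_{k=0..order} (-A0^{-1} dA)^k A0^{-1} b.
    The deterministic (mean) matrix A0 is solved once per term; dA is the
    perturbation. Converges when the spectral radius of A0^{-1} dA is < 1."""
    x0 = np.linalg.solve(A0, b)     # zeroth-order (deterministic) solution
    term = x0.copy()
    x = x0.copy()
    for _ in range(order):
        term = -np.linalg.solve(A0, dA @ term)   # next term of the series
        x += term
    return x
```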

