iteration matrix
Recently Published Documents

TOTAL DOCUMENTS: 20 (five years: 8)
H-INDEX: 5 (five years: 0)

Author(s): Dominik Sobania, Jonas Schmitt, Harald Köstler, Franz Rothlauf

Abstract: We introduce GPLS (Genetic Programming for Linear Systems), a GP system that finds mathematical expressions defining an iteration matrix. Stationary iterative methods use this iteration matrix to solve a system of linear equations numerically. GPLS aims at finding iteration matrices with a low spectral radius and a high sparsity, since these properties ensure a fast error reduction of the numerical solution method and enable the efficient implementation of the methods on parallel computer architectures. We study GPLS for various types of system matrices and find that it easily outperforms classical approaches like the Gauss–Seidel and Jacobi methods. GPLS not only finds iteration matrices with a much lower spectral radius for linear systems, but also iteration matrices for problems where classical approaches fail. Additionally, solutions found by GPLS for small problem instances also show good performance for larger instances of the same problem.
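As a rough illustration of the two objectives named in the abstract (not the authors' GP system), a candidate iteration matrix M for a stationary scheme x^(k+1) = M x^(k) + c can be scored by its spectral radius and its sparsity; the weights and the Jacobi baseline below are assumptions made for the sketch.

```python
import numpy as np

def score_iteration_matrix(M, radius_weight=1.0, sparsity_weight=0.1):
    """Illustrative fitness for a candidate iteration matrix M:
    x_{k+1} = M x_k + c converges iff rho(M) < 1, and a sparse M
    is cheaper to apply on parallel architectures. Lower is better."""
    rho = max(abs(np.linalg.eigvals(M)))            # spectral radius
    sparsity = 1.0 - np.count_nonzero(M) / M.size   # fraction of zero entries
    return radius_weight * rho - sparsity_weight * sparsity

# Baseline for comparison: the Jacobi iteration matrix M = I - D^{-1} A.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
M_jacobi = np.eye(3) - np.diag(1.0 / np.diag(A)) @ A
print(score_iteration_matrix(M_jacobi))
```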


2021, Vol 4 (1), pp. 53-61
Author(s): KJ Audu, YA Yahaya, KR Adeboye, UY Abubakar

Given any linear stationary iterative method of the form z^(i+1) = Jz^(i) + f, where J is the iteration matrix, improving the iteration matrix decreases the spectral radius and enhances the rate of convergence of the method when solving systems of linear equations of the form Az = b. This motivates us to refine the Extended Accelerated Over-Relaxation (EAOR) method into the Refinement of Extended Accelerated Over-Relaxation (REAOR) method, so as to accelerate the convergence rate. In this paper, a refinement of the Extended Accelerated Over-Relaxation method that reduces the spectral radius, when compared to the EAOR method, is proposed. The method is a three-parameter generalization of the refinement of Accelerated Over-Relaxation (RAOR), refinement of Successive Over-Relaxation (RSOR), refinement of Gauss-Seidel (RGS), and refinement of Jacobi (RJ) methods. We investigate the convergence of the method for weakly irreducible diagonally dominant matrices and related matrix classes, and present some numerical examples to assess the performance of the method. The results indicate the superiority of the method over some existing methods.
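The abstract names the refinement of Jacobi (RJ) as the simplest special case, so a minimal sketch of that base pattern may help, assuming the standard splitting A = D - L - U: one Jacobi update is followed by a Jacobi-type correction, which squares the iteration matrix and hence squares its spectral radius. The EAOR/REAOR parameterization itself is not reproduced here.

```python
import numpy as np

def refined_jacobi(A, b, x0, iters=50):
    """Refinement of Jacobi (RJ): a plain Jacobi update x_tilde,
    then the correction x = x_tilde + D^{-1}(b - A x_tilde).
    The composite iteration matrix is J^2 with J = I - D^{-1}A,
    so rho(J^2) = rho(J)^2 < rho(J) whenever Jacobi converges."""
    D_inv = np.diag(1.0 / np.diag(A))
    x = x0
    for _ in range(iters):
        x_tilde = x + D_inv @ (b - A @ x)         # Jacobi step
        x = x_tilde + D_inv @ (b - A @ x_tilde)   # refinement correction
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([1.0, 2.0])
print(refined_jacobi(A, b, np.zeros(2)))          # -> approx [1/6, 1/3]
```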


Author(s): S.I. Martynenko, A.Yu. Varaksin

Results of a theoretical analysis of the convergence of geometric multigrid algorithms are presented for solving linear boundary value problems on a two-block grid. In this case, the initial domain can be represented as a union of intersecting subdomains, in each of which a structured grid can be constructed, generating a hierarchy of coarse grids. The multigrid iteration matrix is obtained using a damped nonsymmetric iterative method as the smoother. The multigrid algorithm contains a new problem-dependent component: interpolation of the correction between grid blocks. The smoothing property of the damped nonsymmetric iterative method and the convergence of the robust multigrid technique are proved. An estimate of the multigrid iteration matrix norm is obtained (a sufficient convergence condition). It is shown that the number of multigrid iterations depends on neither the step size nor the number of grid blocks, provided the interpolation of the correction between grid blocks is sufficiently accurate. Results of computational experiments on solving the three-dimensional Dirichlet boundary value problem for the Poisson equation are presented, illustrating the theoretical analysis. The results obtained can be easily generalized to multiblock grids. The work is of interest to developers of highly efficient algorithms for solving (initial-)boundary value problems describing physical and chemical processes in complex geometry domains.
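A minimal numerical sketch of the quantity such an analysis bounds, assuming a 1-D Poisson model problem on a single block (not the paper's two-block setting) and a damped Jacobi smoother as a stand-in for the damped nonsymmetric smoother: the two-grid iteration matrix E = S (I - P Ac^{-1} R A) S is formed explicitly and its norm and spectral radius are reported.

```python
import numpy as np

n = 15                                       # fine-grid interior points (odd)
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2   # 1-D Poisson matrix

omega = 2.0 / 3.0                            # classical damping for 1-D Poisson
S = np.eye(n) - omega * np.diag(1.0 / np.diag(A)) @ A  # damped Jacobi smoother

nc = (n - 1) // 2                            # coarse-grid interior points
P = np.zeros((n, nc))                        # linear interpolation
for j in range(nc):
    i = 2 * j + 1                            # coarse node at fine index i
    P[i - 1, j], P[i, j], P[i + 1, j] = 0.5, 1.0, 0.5
R = 0.5 * P.T                                # full-weighting restriction
Ac = R @ A @ P                               # Galerkin coarse operator

CGC = np.eye(n) - P @ np.linalg.solve(Ac, R @ A)  # coarse-grid correction
E = S @ CGC @ S                              # two-grid iteration matrix (nu = 1)

print("||E||_2 =", np.linalg.norm(E, 2))     # norm estimate (sufficient condition)
print("rho(E)  =", max(abs(np.linalg.eigvals(E))))
```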


2021, Vol 2021 (1)
Author(s): Adisorn Kittisopaporn, Pattrawut Chansangiam, Wicharn Lewkeeratiyutkul

Abstract: We derive an iterative procedure for solving a generalized Sylvester matrix equation $AXB + CXD = E$, where $A, B, C, D, E$ are conforming rectangular matrices. Our algorithm is based on gradients and the hierarchical identification principle. We convert the matrix iteration process to a first-order linear difference vector equation with a matrix coefficient. The Banach contraction principle reveals that the sequence of approximated solutions converges to the exact solution for any initial matrix if and only if the convergence factor belongs to an open interval. The contraction principle also gives the convergence rate and the error analysis, governed by the spectral radius of the associated iteration matrix. We obtain the fastest convergence factor so that the spectral radius of the iteration matrix is minimized. In particular, we obtain iterative algorithms for the matrix equation $AXB = C$, the Sylvester equation, and the Kalman–Yakubovich equation. We give numerical experiments of the proposed algorithm to illustrate its applicability, effectiveness, and efficiency.
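As a hedged illustration of the gradient/hierarchical-identification idea described above (the step size mu and the sample data are assumptions for the sketch, not the paper's optimal choices), each subsystem contributes a gradient step on the shared residual and the two updates are averaged:

```python
import numpy as np

def gradient_solve(A, B, C, D, E, mu, iters=2000):
    """Gradient iteration for AXB + CXD = E via hierarchical identification:
    two subsystem updates on the common residual, then averaging."""
    X = np.zeros((A.shape[1], B.shape[0]))
    for _ in range(iters):
        R = E - A @ X @ B - C @ X @ D      # common residual
        X1 = X + mu * A.T @ R @ B.T        # subsystem 1 gradient step
        X2 = X + mu * C.T @ R @ D.T        # subsystem 2 gradient step
        X = 0.5 * (X1 + X2)
    return X

A = np.array([[2.0, 0.0], [1.0, 3.0]]); B = np.eye(2)
C = np.eye(2);                           D = np.array([[1.0, 0.0], [0.0, 2.0]])
E = np.array([[3.0, 2.0], [1.0, 7.0]])
mu = 0.05                                # small enough to converge here
X = gradient_solve(A, B, C, D, E, mu)
print(np.linalg.norm(A @ X @ B + C @ X @ D - E))  # residual norm, near zero
```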


Symmetry, 2020, Vol 12 (11), pp. 1831
Author(s): Nopparut Sasaki, Pattrawut Chansangiam

We propose a new iterative method for solving a generalized Sylvester matrix equation A1XA2 + A3XA4 = E with given square matrices A1, A2, A3, A4 and an unknown rectangular matrix X. The method aims to construct a sequence of approximated solutions converging to the exact solution, regardless of the initial value. We decompose each coefficient matrix into the sum of its diagonal part and the remaining part. The recursive formula for the iteration is derived from the gradients of quadratic norm-error functions, together with the hierarchical identification principle. We find equivalent conditions on a convergent factor, based on the eigenvalues of the associated iteration matrix, under which the method is applicable as desired. The convergence rate and error estimation of the method are governed by the spectral norm of the related iteration matrix. Furthermore, we illustrate numerical examples of the proposed method to show its capability and efficacy, compared to recent gradient-based iterative methods.
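A minimal sketch of the role the iteration matrix plays here, assuming the vectorized (Kronecker) form of the equation and a plain gradient step rather than the paper's diagonal-splitting scheme: with S = A2^T kron A1 + A4^T kron A3, the iteration matrix is T = I - mu S^T S, and convergence for any initial value holds exactly when rho(T) < 1, i.e. 0 < mu < 2 / lambda_max(S^T S).

```python
import numpy as np

# Hypothetical data for A1 X A2 + A3 X A4 = E.
A1 = np.array([[3.0, 1.0], [0.0, 2.0]]); A2 = np.eye(2)
A3 = np.eye(2);                           A4 = np.array([[2.0, 0.0], [1.0, 1.0]])

# Vectorized operator: (A2^T kron A1 + A4^T kron A3) vec(X) = vec(E).
S = np.kron(A2.T, A1) + np.kron(A4.T, A3)

# Gradient iteration matrix T = I - mu * S^T S.
lam = np.linalg.eigvalsh(S.T @ S)        # ascending eigenvalues, all > 0 here
mu = 1.0 / lam[-1]                       # one admissible choice
T = np.eye(S.shape[0]) - mu * S.T @ S
print("admissible range: 0 < mu <", 2.0 / lam[-1])
print("rho(T) =", max(abs(np.linalg.eigvals(T))))  # < 1, so convergent
```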


Symmetry, 2020, Vol 12 (10), pp. 1732
Author(s): Nunthakarn Boonruangkan, Pattrawut Chansangiam

We introduce a gradient iterative scheme with an optimal convergent factor for solving a generalized Sylvester matrix equation ∑i=1pAiXBi=F, where Ai,Bi and F are conformable rectangular matrices. The iterative scheme is derived from the gradients of the squared norm-errors of the associated subsystems for the equation. The convergence analysis reveals that the sequence of approximated solutions converge to the exact solution for any initial value if and only if the convergent factor is chosen properly in terms of the spectral radius of the associated iteration matrix. We also discuss the convergent rate and error estimations. Moreover, we determine the fastest convergent factor so that the associated iteration matrix has the smallest spectral radius. Furthermore, we provide numerical examples to illustrate the capability and efficiency of this method. Finally, we apply the proposed scheme to discretized equations for boundary value problems involving convection and diffusion.
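A minimal sketch of the optimal-factor computation, assuming the vectorized operator S = ∑_i B_i^T kron A_i and the iteration matrix T = I - mu S^T S: over the eigenvalue interval [lmin, lmax] of S^T S, the spectral radius max|1 - mu*lambda| is minimized at mu* = 2/(lmax + lmin), giving rho(T) = (lmax - lmin)/(lmax + lmin). The sample matrices are assumptions; the closed form assumes S has full column rank (lmin > 0).

```python
import numpy as np

def optimal_factor(As, Bs):
    """For sum_i Ai X Bi = F, form S = sum_i kron(Bi^T, Ai) and return the
    factor minimizing rho(I - mu * S^T S): mu* = 2/(lmax + lmin)."""
    S = sum(np.kron(B.T, A) for A, B in zip(As, Bs))
    lam = np.linalg.eigvalsh(S.T @ S)    # ascending eigenvalues, all >= 0
    lmin, lmax = lam[0], lam[-1]         # assumes lmin > 0 (full column rank)
    mu_star = 2.0 / (lmax + lmin)
    rho_star = (lmax - lmin) / (lmax + lmin)  # smallest achievable radius
    return mu_star, rho_star

As = [np.array([[2.0, 1.0], [0.0, 3.0]]), np.eye(2)]
Bs = [np.eye(2), np.array([[1.0, 0.0], [1.0, 2.0]])]
print(optimal_factor(As, Bs))
```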


2020, Vol 14 (10), pp. 1631-1639
Author(s): Lei Ye, Fusheng Jian, Yong Yang, Wei Zhang, Qiang Yang, ...

Author(s): R. Vigneswaran, S. Kajanthan

Various iteration schemes have been proposed by various authors to solve the nonlinear equations arising in the implementation of implicit Runge-Kutta methods. In this paper, a class of s-step nonlinear schemes based on the projection method is proposed to accelerate the convergence rate of those linear iteration schemes. In this scheme, the sequence of numerical solutions is updated after each sub-step is completed. For the 2-stage Gauss method, an upper bound for the spectral radius of its iteration matrix is obtained in the left half of the complex plane. This result is extended to the 3-stage and 4-stage Gauss methods by transforming the coefficient matrix and the iteration matrix to a block diagonal form. Finally, some numerical experiments are carried out to confirm the obtained theoretical results.
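A minimal sketch of the underlying object, assuming the scalar test problem y' = λy with z = hλ: a linear iteration scheme obtained by splitting the Runge-Kutta coefficient matrix A as A ≈ B has iteration matrix M(z) = z (I - zB)^{-1} (A - B), whose spectral radius can be sampled over the left half-plane. The 2-stage Gauss coefficients are standard; the choice of B and the grid of sample points are assumptions for the sketch, not the authors' projection scheme.

```python
import numpy as np

# 2-stage Gauss method coefficient matrix (standard values).
s3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - s3 / 6.0],
              [0.25 + s3 / 6.0, 0.25]])
B = np.tril(A)          # illustrative splitting: lower-triangular part of A

def rho_iter(z):
    """Spectral radius of M(z) = z (I - zB)^{-1} (A - B), the iteration
    matrix of the splitting scheme for y' = lam*y with z = h*lam."""
    M = z * np.linalg.solve(np.eye(2) - z * B, A - B)
    return max(abs(np.linalg.eigvals(M)))

# Sample the left half-plane; rho(conj(z)) = rho(z), so sample Im z >= 0 only.
xs = np.linspace(-50.0, 0.0, 101)
ys = np.linspace(0.0, 50.0, 101)
print(max(rho_iter(complex(x, y)) for x in xs for y in ys))  # well below 1
```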


2014, Vol 989-994, pp. 1790-1793
Author(s): Ting Zhou, Shi Guang Zhang

In this paper, some comparison results between the Jacobi and USSOR iterations for solving nonsingular linear systems are presented. It is shown that the spectral radius of the Jacobi iteration matrix B is less than that of the USSOR iteration matrix under certain conditions. A numerical example is also given to illustrate our results.
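A minimal numerical check of such a comparison, assuming the standard splitting A = D - L - U: the Jacobi iteration matrix is B = D^{-1}(L + U), and the USSOR iteration matrix composes a forward SOR sweep with parameter ω and a backward sweep with parameter ω̂. The sample matrix and parameter values are assumptions for the sketch.

```python
import numpy as np

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)                      # strictly lower part, A = D - L - U
U = -np.triu(A, 1)                       # strictly upper part

B = np.linalg.solve(D, L + U)            # Jacobi iteration matrix

w, w_hat = 1.1, 0.9                      # illustrative USSOR parameters
forward = np.linalg.solve(D - w * L, (1 - w) * D + w * U)        # forward sweep
backward = np.linalg.solve(D - w_hat * U, (1 - w_hat) * D + w_hat * L)
T_ussor = backward @ forward             # USSOR iteration matrix

print("rho(Jacobi) =", spectral_radius(B))
print("rho(USSOR)  =", spectral_radius(T_ussor))
```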


2014, Vol 2014, pp. 1-5
Author(s): Cui-Xia Li, Su-Hua Li

A class of iteration methods derived from the double splitting of the coefficient matrix for solving linear systems is further investigated. By constructing a new matrix, the iteration matrix of the corresponding double splitting iteration method is presented. On the basis of convergence and comparison theorems for single splittings, we present some new convergence and comparison theorems on the spectral radius for splittings of matrices.
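A minimal sketch of the kind of construction alluded to, assuming a double splitting A = P - R - S with iteration P x^(k+1) = R x^(k) + S x^(k-1) + b: stacking two consecutive iterates turns the three-term recurrence into a one-step scheme whose block companion matrix is the iteration matrix to analyze. The sample splitting below is an assumption.

```python
import numpy as np

# Hypothetical double splitting A = P - R - S of a sample matrix.
A = np.array([[5.0, -1.0], [-2.0, 5.0]])
P = np.diag(np.diag(A))                  # P = diag(A)
R = np.array([[0.0, 0.5], [1.0, 0.0]])
S = P - R - A                            # forces A = P - R - S

n = A.shape[0]
# Block companion matrix of the two-step recurrence:
#   [x_{k+1}]   [P^{-1}R  P^{-1}S] [x_k    ]
#   [x_k    ] = [  I        0    ] [x_{k-1}] + ...
W = np.block([[np.linalg.solve(P, R), np.linalg.solve(P, S)],
              [np.eye(n), np.zeros((n, n))]])
rho = max(abs(np.linalg.eigvals(W)))
print("rho(W) =", rho, "-> convergent" if rho < 1 else "-> divergent")
```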

