Genetic programming for iterative numerical methods

Author(s):  
Dominik Sobania ◽  
Jonas Schmitt ◽  
Harald Köstler ◽  
Franz Rothlauf

Abstract: We introduce GPLS (Genetic Programming for Linear Systems), a GP system that finds mathematical expressions defining an iteration matrix. Stationary iterative methods use this iteration matrix to solve a system of linear equations numerically. GPLS aims at finding iteration matrices with a low spectral radius and a high sparsity, since these properties ensure a fast error reduction of the numerical solution method and enable the efficient implementation of the methods on parallel computer architectures. We study GPLS for various types of system matrices and find that it easily outperforms classical approaches like the Gauss–Seidel and Jacobi methods. GPLS not only finds iteration matrices for linear systems with a much lower spectral radius, but also iteration matrices for problems where classical approaches fail. Additionally, solutions found by GPLS for small problem instances also show good performance for larger instances of the same problem.
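
The convergence claim above, that a stationary method converges exactly when its iteration matrix has spectral radius below 1 and converges faster the smaller that radius is, can be illustrated with a minimal sketch. This is not the GPLS system itself; the Jacobi splitting and the sample matrix are illustrative assumptions:

```python
import numpy as np

# A stationary iterative method x_{k+1} = M x_k + c converges iff the
# spectral radius of the iteration matrix M is below 1.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

# Jacobi splitting: M = I - D^{-1} A, c = D^{-1} b
D = np.diag(np.diag(A))
M = np.eye(2) - np.linalg.inv(D) @ A
c = np.linalg.inv(D) @ b

rho = max(abs(np.linalg.eigvals(M)))  # spectral radius of M
x = np.zeros(2)
for _ in range(50):
    x = M @ x + c                      # one stationary sweep

exact = np.linalg.solve(A, b)
print(rho < 1.0, np.allclose(x, exact))
```

Here the error contracts by roughly a factor rho per sweep, which is why GPLS searches for iteration matrices with a low spectral radius.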

2021 ◽  
Vol 4 (1) ◽  
pp. 53-61
Author(s):  
KJ Audu ◽  
YA Yahaya ◽  
KR Adeboye ◽  
UY Abubakar

Given any linear stationary iterative method of the form z^(i+1) = Jz^(i) + f, where J is the iteration matrix, a significant improvement of the iteration matrix will decrease the spectral radius and enhance the rate of convergence of the method when solving systems of linear equations of the form Az = b. This motivates us to refine the Extended Accelerated Over-Relaxation (EAOR) method into the Refinement of Extended Accelerated Over-Relaxation (REAOR) method so as to accelerate its convergence rate. In this paper, a refinement of the Extended Accelerated Over-Relaxation method that minimizes the spectral radius, compared to the EAOR method, is proposed. The method is a three-parameter generalization of the refinement of Accelerated Over-Relaxation (RAOR), refinement of Successive Over-Relaxation (RSOR), refinement of Gauss-Seidel (RGS) and refinement of Jacobi (RJ) methods. We investigated the convergence of the method for weak irreducible diagonally dominant matrices, among other matrix classes, and presented some numerical examples to check the performance of the method. The results indicate the superiority of the method over some existing methods.
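
The refinement criterion above, a smaller spectral radius of J implying faster convergence, can be sketched with classical splittings. Gauss-Seidel versus Jacobi stands in here for REAOR versus EAOR, whose formulas the abstract does not reproduce; the sample matrix is an assumption:

```python
import numpy as np

# The error of z^(i+1) = J z^(i) + f decays like rho(J)^i, so a
# refined method is better precisely when its iteration matrix has a
# smaller spectral radius.
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)   # strictly lower part of the splitting A = D - L - U
U = -np.triu(A, 1)    # strictly upper part

rho = lambda M: max(abs(np.linalg.eigvals(M)))
rho_jacobi = rho(np.linalg.inv(D) @ (L + U))   # Jacobi iteration matrix
rho_gs = rho(np.linalg.inv(D - L) @ U)         # Gauss-Seidel iteration matrix

# Gauss-Seidel refines Jacobi here: smaller spectral radius, hence
# faster asymptotic error reduction per sweep.
print(rho_gs < rho_jacobi < 1.0)
```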


2014 ◽  
Vol 989-994 ◽  
pp. 1790-1793
Author(s):  
Ting Zhou ◽  
Shi Guang Zhang

In this paper, some comparison results between the Jacobi and USSOR iterations for solving nonsingular linear systems are presented. It is shown that the spectral radius of the Jacobi iteration matrix B is less than that of the USSOR iteration matrix under some conditions. A numerical example is also given to illustrate our results.


1984 ◽  
Vol 7 (2) ◽  
pp. 361-370 ◽  
Author(s):  
N. M. Missirlis ◽  
D. J. Evans

This paper develops the theory of the Extrapolated Successive Overrelaxation (ESOR) method as introduced by Sisler in [1], [2], [3] for the numerical solution of large sparse linear systems of the form $Au=b$, when $A$ is a consistently ordered 2-cyclic matrix with non-vanishing diagonal elements and the Jacobi iteration matrix $B$ possesses only real eigenvalues. The region of convergence for the ESOR method is described and the optimum values of the involved parameters are also determined. It is shown that if the minimum $\underline{\mu}$ of the moduli of the eigenvalues of $B$ does not vanish, then ESOR attains a faster rate of convergence than SOR when $1-\underline{\mu}^{2}<(1-\bar{\mu}^{2})^{1/2}$, where $\bar{\mu}$ denotes the spectral radius of $B$.
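
The ESOR parameter formulas themselves are in the paper; the sketch below only verifies the classical SOR facts the analysis builds on, for an assumed consistently ordered 2-cyclic sample matrix: the optimal relaxation factor is $\omega^{*}=2/(1+\sqrt{1-\bar{\mu}^{2}})$ and the resulting spectral radius equals $\omega^{*}-1$.

```python
import numpy as np

# Classical SOR theory for a consistently ordered 2-cyclic matrix with
# real Jacobi eigenvalues: optimal factor w* = 2 / (1 + sqrt(1 - mu^2)),
# where mu is the spectral radius of the Jacobi matrix B, and then
# rho(SOR(w*)) = w* - 1.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

B = np.linalg.inv(D) @ (L + U)
mu = max(abs(np.linalg.eigvals(B)))          # spectral radius of B
w_opt = 2.0 / (1.0 + np.sqrt(1.0 - mu**2))   # optimal relaxation factor

# SOR iteration matrix for relaxation factor w
sor = lambda w: np.linalg.inv(D - w * L) @ ((1 - w) * D + w * U)
rho_opt = max(abs(np.linalg.eigvals(sor(w_opt))))

print(np.isclose(rho_opt, w_opt - 1.0))
```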


2021 ◽  
Vol 2021 (1) ◽  
Author(s):  
Adisorn Kittisopaporn ◽  
Pattrawut Chansangiam ◽  
Wicharn Lewkeeratiyutkul

Abstract: We derive an iterative procedure for solving a generalized Sylvester matrix equation $AXB+CXD = E$, where $A,B,C,D,E$ are conforming rectangular matrices. Our algorithm is based on gradients and the hierarchical identification principle. We convert the matrix iteration process to a first-order linear difference vector equation with matrix coefficient. The Banach contraction principle reveals that the sequence of approximated solutions converges to the exact solution for any initial matrix if and only if the convergence factor belongs to an open interval. The contraction principle also gives the convergence rate and the error analysis, governed by the spectral radius of the associated iteration matrix. We obtain the fastest convergence factor so that the spectral radius of the iteration matrix is minimized. In particular, we obtain iterative algorithms for the matrix equation $AXB=C$, the Sylvester equation, and the Kalman–Yakubovich equation. We give numerical experiments of the proposed algorithm to illustrate its applicability, effectiveness, and efficiency.
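
A minimal gradient-based sketch for the special case $AXB=C$ mentioned in the abstract (the paper's general $AXB+CXD=E$ algorithm follows the same pattern); the matrices and the convergence factor `mu` below are illustrative assumptions, with `mu` chosen small enough to lie in the admissible interval:

```python
import numpy as np

# Gradient iteration for A X B = C: each step moves X along the
# negative gradient of ||C - A X B||_F^2, i.e.
#   X_{k+1} = X_k + mu * A^T (C - A X_k B) B^T.
A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 2.0]])
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
C = A @ X_true @ B

mu = 0.05                     # convergence factor (assumed admissible)
X = np.zeros((2, 2))          # any initial matrix works in theory
for _ in range(2000):
    X += mu * A.T @ (C - A @ X @ B) @ B.T   # gradient step

print(np.allclose(X, X_true))
```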


2021 ◽  
pp. 107754632110128
Author(s):  
K Renji

Realistic joints in a spacecraft structure have clearances at the interfacing parts. Many such systems can be considered to have bilinear stiffness. A typical example is the propellant tank assembled with the structure of a spacecraft. However, the responses of such systems subjected to base excitation are rarely reported. In this work, mathematical expressions are derived for theoretically estimating the amplitude of the response, the frequency at which the response is maximum, and the maximum response when the system is subjected to base sine excitation. Several experiments are conducted on such a typical system, subjecting it to different levels of base sine excitation. The frequency at which the response is maximum reduces with the magnitude of excitation. The expressions derived in this work can be used to estimate the amplitudes of the responses and their characteristics reasonably well.
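
A minimal sketch of the bilinear restoring force such a jointed system exhibits: one stiffness within the clearance gap and a higher one once the interfaces make contact beyond it. The stiffness values `k1`, `k2` and the clearance `gap` are assumed for illustration, not taken from the paper:

```python
# Bilinear stiffness: stiffness k1 for |x| <= gap (within the clearance),
# stiffness k2 for the deflection beyond the gap (interfaces in contact).
def restoring_force(x, k1=1.0e5, k2=5.0e5, gap=1.0e-3):
    if abs(x) <= gap:
        return k1 * x
    sign = 1.0 if x > 0 else -1.0
    return sign * (k1 * gap + k2 * (abs(x) - gap))

# Within the gap the force is k1*x; beyond it the slope steepens to k2.
print(restoring_force(0.5e-3), restoring_force(2.0e-3))
```

The kink in this force law is what makes the resonance frequency depend on the excitation level, consistent with the experimental observation above.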


2017 ◽  
Vol 7 (1) ◽  
pp. 101-115 ◽  
Author(s):  
Rui-Ping Wen ◽  
Su-Dan Li ◽  
Guo-Yan Meng

Abstract: There has been a lot of study on SOR-like methods for solving the augmented system of linear equations since the outstanding work of Golub, Wu and Yuan (BIT 41 (2001) 71-85) was presented fifteen years ago. Based on the SOR-like methods, we establish a class of accelerated SOR-like methods for large sparse augmented linear systems by making use of optimization techniques, which find the optimal relaxation parameter ω via optimization models. We demonstrate the convergence theory of the new methods under suitable restrictions. The numerical examples show these methods are effective.
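
The idea of choosing the relaxation parameter ω by optimization can be sketched with a plain SOR splitting on an assumed small matrix; the paper works with proper optimization models on augmented systems, so the grid search below is a simplified stand-in:

```python
import numpy as np

# Pick w minimising the spectral radius of the SOR iteration matrix
#   M(w) = (D - wL)^{-1} ((1-w)D + wU),  A = D - L - U.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

def rho_sor(w):
    M = np.linalg.inv(D - w * L) @ ((1 - w) * D + w * U)
    return max(abs(np.linalg.eigvals(M)))

ws = np.linspace(0.05, 1.95, 381)       # candidate relaxation parameters
w_best = min(ws, key=rho_sor)           # crude stand-in for optimisation

print(rho_sor(w_best) <= rho_sor(1.0))  # at least as good as Gauss-Seidel
```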


2016 ◽  
Vol 47 (2) ◽  
pp. 179-192
Author(s):  
Tesfaye Kebede Enyew

In this paper, a second-degree generalized Jacobi iteration (SDGJ) method for solving systems of linear equations $Ax=b$ is presented, and the optimal values of $a_{1}$ and $b_{1}$, in terms of the spectral radius, for the convergence of the SDGJ iteration $x^{(n+1)}=b_{1}[D_{m}^{-1}(L_{m}+U_{m})x^{(n)}+k_{1m}]-a_{1}x^{(n-1)}$ are discussed. A few numerical examples are considered to show the effectiveness of the SDGJ method in comparison with FDJ, FDGJ, and SDJ.
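
A runnable sketch of the two-step iteration above, using a plain Jacobi splitting (so $D_{m}, L_{m}, U_{m}$ become $D, L, U$ with $A = D - L - U$ and $k_{1m} = D^{-1}b$). The parameter values are illustrative, not the paper's optimal ones; consistency of the fixed point requires $b_{1} = 1 + a_{1}$:

```python
import numpy as np

# Second-degree (two-step) Jacobi iteration:
#   x^{(n+1)} = b1 [ D^{-1}(L+U) x^{(n)} + D^{-1} b ] - a1 x^{(n-1)}
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 1.0, 1.0])

D_inv = np.diag(1.0 / np.diag(A))
B = D_inv @ (np.diag(np.diag(A)) - A)  # Jacobi matrix D^{-1}(L+U)
k = D_inv @ b

a1, b1 = 0.1, 1.1                      # illustrative; b1 = 1 + a1
x_prev = np.zeros(3)
x = np.zeros(3)
for _ in range(100):
    x, x_prev = b1 * (B @ x + k) - a1 * x_prev, x

print(np.allclose(x, np.linalg.solve(A, b)))
```

With $a_{1}=0$, $b_{1}=1$ this collapses to the first-degree Jacobi method, which is the baseline (FDJ) the abstract compares against.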


Author(s):  
G. K. Robinson

Abstract: The speed of convergence of stationary iterative techniques for solving simultaneous linear equations may be increased by using a method similar to conjugate gradients but which does not require the stationary iterative technique to be symmetrisable. The method of refinement is to find linear combinations of iterates from a stationary technique which minimise a quadratic form. This basic method may be used in several ways to construct refined versions of the simple technique. In particular, quadratic forms of much less than full rank may be used. It is suggested that the method is likely to be competitive with other techniques when the number of linear equations is very large and little is known about the properties of the system of equations. A refined version of the Gauss-Seidel technique was found to converge satisfactorily for two large systems of equations arising in the estimation of genetic merit of dairy cattle.
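
A hypothetical miniature of the refinement idea: take two iterates of a stationary method (Gauss-Seidel here) and pick the combination $(1-\alpha)x_{0}+\alpha x_{1}$ whose residual norm is minimal, a rank-one instance of minimising a quadratic form over combinations of iterates. The sample system is assumed:

```python
import numpy as np

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
b = np.array([1.0, 2.0])

def gauss_seidel_step(x):
    # One forward Gauss-Seidel sweep.
    x = x.copy()
    for i in range(len(b)):
        x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

x0 = np.zeros(2)
x1 = gauss_seidel_step(x0)
r0, r1 = b - A @ x0, b - A @ x1        # residuals of the two iterates

# Minimise ||r0 + alpha (r1 - r0)||^2 over alpha (a 1-D least squares).
d = r1 - r0
alpha = -(r0 @ d) / (d @ d)
x_ref = (1 - alpha) * x0 + alpha * x1

print(np.linalg.norm(b - A @ x_ref)
      <= min(np.linalg.norm(r0), np.linalg.norm(r1)))
```

By construction the refined residual is never worse than either iterate's, which is the sense in which the combination "refines" the stationary technique.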


2012 ◽  
Vol 2012 ◽  
pp. 1-15 ◽  
Author(s):  
Xi Chen ◽  
Kok Kwang Phoon

Two solution schemes are proposed and compared for large 3D soil consolidation problems with nonassociated plasticity. One solution scheme results in nonsymmetric linear equations due to the Newton iteration, while the other leads to symmetric linear systems due to the symmetrized stiffness strategies. To solve the resulting linear systems, the QMR and SQMR solvers are employed in conjunction with the nonsymmetric and symmetric MSSOR preconditioners, respectively. A simple footing example and a pile-group example are used to assess the performance of the two solution schemes. Numerical results disclose that, compared to the Newton iterative scheme, the symmetric stiffness schemes combined with an adequate acceleration strategy may lead to a significant reduction in total computer runtime as well as in memory requirements, indicating that the accelerated symmetric stiffness method has considerable potential to be exploited in solving very large problems.

