A global random walk on grid algorithm for second order elliptic equations

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Karl K. Sabelfeld ◽  
Dmitry Smirnov ◽  
Ivan Dimov ◽  
Venelin Todorov

Abstract In this paper we develop stochastic simulation methods for solving large systems of linear equations, focusing on two issues: (1) the construction of global random walk (GRW) algorithms, in particular for solving systems of elliptic equations on a grid, and (2) the development of local stochastic algorithms based on transforms to a balanced transition matrix. The GRW method calculates the solution in any desired family of prescribed points of the grid, in contrast to the classical Feynman–Kac formula based on stochastic differential equations. The use of balanced transition matrices in local random walk methods considerably decreases the variance of the random estimators and hence the computational cost in comparison with conventional random walk on grid algorithms.
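
To make the local random walk setting concrete, here is a minimal sketch of the classical Monte Carlo (collision) estimator for a system written in the fixed-point form x = Hx + f, assuming the Neumann series converges. The function name, the kill probability, and the simple |H|-proportional transition probabilities are illustrative choices; the balancing transforms studied in the paper are not reproduced here, but the sketch shows exactly where the choice of transition matrix enters the estimator and hence its variance.

```python
import numpy as np

def mc_solve_component(H, f, i0, n_chains=50_000, kill=0.15, seed=0):
    """Monte Carlo (collision) estimator for component i0 of the
    solution of x = H x + f, assuming the Neumann series converges.

    A chain in state i is killed with probability `kill`, otherwise it
    jumps to j with probability proportional to |H[i, j]|.  The choice
    of these transition probabilities controls the estimator variance;
    this is the quantity balanced transition matrices aim to improve."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    absH = np.abs(H)
    rows = absH.sum(axis=1)
    total = 0.0
    for _ in range(n_chains):
        i, w, score = i0, 1.0, 0.0
        while True:
            score += w * f[i]                 # collision score at state i
            if rows[i] == 0.0 or rng.random() < kill:
                break
            j = rng.choice(n, p=absH[i] / rows[i])
            # Importance weight: kernel entry over actual transition density.
            w *= H[i, j] / ((1.0 - kill) * (absH[i, j] / rows[i]))
            i = j
        total += score
    return total / n_chains

# Quick check on a small diagonally dominant system, rewritten as
# x = Hx + f via the Jacobi splitting so that rho(|H|) < 1.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
D = np.diag(np.diag(A))
H = np.eye(3) - np.linalg.inv(D) @ A
f = np.linalg.solve(D, b)
print(mc_solve_component(H, f, 1), np.linalg.solve(A, b)[1])
```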

2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Karl K. Sabelfeld ◽  
Dmitrii Smirnov

Abstract We suggest in this paper a global random walk on grid (GRWG) method for solving second order elliptic equations. The equation may have constant or variable coefficients. The GRWG method calculates the solution in any desired family of m prescribed points of the grid, in contrast to the classical Feynman–Kac formula based on stochastic differential equations and to the conventional random walk on spheres (RWS) algorithm. The method uses only N trajectories instead of the mN trajectories needed by the RWS algorithm and the Feynman–Kac formula. The idea is based on the symmetry property of the Green function and a double randomization approach.
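
For contrast, below is a minimal sketch of the conventional random walk on a grid for the 2D Laplace equation with Dirichlet data (names and parameters are illustrative). Evaluating the solution at m points this way requires m independent batches of N trajectories, which is precisely the mN cost the GRWG method avoids; the Green-function symmetry and double randomization themselves are not reproduced here.

```python
import numpy as np

def rwog_laplace(g, x0, y0, nx, ny, n_traj=20_000, seed=1):
    """Conventional random walk on grid for the 5-point Laplacian:
    u(x0, y0) = E[g(exit point)], walking until the boundary of the
    nx-by-ny grid is hit.  g(i, j) supplies the Dirichlet data."""
    rng = np.random.default_rng(seed)
    steps = np.array([(1, 0), (-1, 0), (0, 1), (0, -1)])
    acc = 0.0
    for _ in range(n_traj):
        i, j = x0, y0
        while 0 < i < nx - 1 and 0 < j < ny - 1:
            di, dj = steps[rng.integers(4)]
            i, j = i + di, j + dj
        acc += g(i, j)
    return acc / n_traj

# u(x, y) = x*y is discretely harmonic, so the estimator mean is exact.
g = lambda i, j: i * j
print(rwog_laplace(g, 8, 8, 17, 17))   # close to u(8, 8) = 64
```

Each additional evaluation point costs a full new batch of trajectories here, while the GRWG of the paper computes all m values from a single set of N trajectories.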


2013 ◽  
Vol 3 (2) ◽  
pp. 120-137 ◽  
Author(s):  
Jan Brandts ◽  
Ricardo R. da Silva

Abstract Given two n × n matrices A and A0 and a sequence of subspaces {0} = 𝒱0 ⊂ ⋯ ⊂ 𝒱n = ℂn with dim 𝒱k = k, the k-th subspace-projected approximated matrix Ak is defined as Ak = A + Πk(A0 − A)Πk, where Πk is the orthogonal projection onto the orthogonal complement of 𝒱k. Consequently, Akv = Av and v*Ak = v*A for all v ∈ 𝒱k. Thus (Ak)k=0,…,n is a sequence of matrices that gradually changes from A0 into An = A. In principle, the definition of 𝒱k+1 may depend on properties of Ak, which can be exploited to try to force Ak+1 to be closer to A in some specific sense. By choosing A0 as a simple approximation of A, this turns the subspace-approximated matrices into interesting preconditioners for linear algebra problems involving A. In the context of eigenvalue problems, they appeared in this role in Shepard et al. (2001), resulting in their Subspace Projected Approximate Matrix method. In this article, we investigate their use in solving linear systems of equations Ax = b. In particular, we seek conditions under which the solutions xk of the approximate systems Akxk = b are computable at low computational cost, so the efficiency of the corresponding method is competitive with existing methods such as the Conjugate Gradient and the Minimal Residual methods. We also consider how well the sequence (xk)k≥0 approximates x, by performing some illustrative numerical tests.
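
The defining identities are easy to check numerically. Below is a short sketch (assuming, as the stated consequences require, that Πk projects onto the orthogonal complement of 𝒱k) that builds Ak from a random A, a diagonal A0 and a random k-dimensional subspace, and verifies Akv = Av and v*Ak = v*A for v ∈ 𝒱k; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
A  = rng.standard_normal((n, n))
A0 = np.diag(np.diag(A))              # a simple approximation of A

# Orthonormal basis V of a k-dimensional subspace V_k.
V, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Orthogonal projection onto the complement of V_k.
Pi = np.eye(n) - V @ V.T

# Subspace-projected approximated matrix A_k = A + Pi (A0 - A) Pi.
Ak = A + Pi @ (A0 - A) @ Pi

v = V @ rng.standard_normal(k)        # arbitrary v in V_k, so Pi v = 0
print(np.allclose(Ak @ v, A @ v))     # True: A_k v = A v
print(np.allclose(v @ Ak, v @ A))     # True: v* A_k = v* A
```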


2017 ◽  
Vol 42 (598) ◽  
Author(s):  
Ole Østerby

When solving parabolic equations in two space dimensions, implicit methods are preferred to the explicit method because of their better stability properties. A straightforward implementation of implicit methods requires the time-consuming solution of large systems of linear equations, and ADI methods are preferred instead. We expect the ADI methods to inherit the stability properties of the implicit methods they are derived from, and we demonstrate that this is partly true. The Douglas–Rachford and Peaceman–Rachford methods are absolutely stable in the sense that their growth factors are ≤ 1 in absolute value. Near jump discontinuities, however, the ADI methods react differently: whether they produce oscillations, and how effectively they damp them. We demonstrate this behaviour on two simple examples.
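
As an illustration of the ADI idea, here is a minimal sketch of one Peaceman–Rachford step for u_t = u_xx + u_yy with homogeneous Dirichlet boundaries; the square grid, the time step, and the dense solves (used for brevity where tridiagonal solvers would normally be used) are illustrative simplifications. Each half step is implicit in one space direction only, which is what replaces the large 2D linear system by cheap 1D solves.

```python
import numpy as np

def peaceman_rachford_step(U, mu):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy with
    homogeneous Dirichlet boundaries.  U holds the interior grid values
    on a square grid (axis 0 = x, axis 1 = y) and mu = dt / (2 h^2)."""
    m = U.shape[0]
    T = -2.0 * np.eye(m) + np.eye(m, k=1) + np.eye(m, k=-1)
    A_im = np.eye(m) - mu * T            # implicit operator (I - mu T)
    A_ex = np.eye(m) + mu * T            # explicit operator (I + mu T)
    # Half step 1: implicit in x, explicit in y.
    U = np.linalg.solve(A_im, U @ A_ex)
    # Half step 2: implicit in y, explicit in x.
    U = np.linalg.solve(A_im, (A_ex @ U).T).T
    return U

# Smooth initial data decays without blow-up even for mu far beyond
# the explicit method's stability limit of 1/4.
m = 49
x = np.linspace(0, 1, m + 2)[1:-1]
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
for _ in range(10):
    U = peaceman_rachford_step(U, mu=2.0)
print(U.max())
```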


2012 ◽  
Vol 13 (01n02) ◽  
pp. 1250001 ◽  
Author(s):  
Mohammad H. Al-Towaiq ◽  
Khaled Day

Network-on-chip multicore architectures with a large number of processing elements are becoming a reality with recent developments in technology. In these modern systems the processing elements are interconnected with regular network-on-chip (NoC) topologies such as meshes and trees. In this paper we propose a parallel Gauss–Seidel (GS) iterative algorithm for solving large systems of linear equations on a torus NoC architecture. The proposed parallel algorithm has O(Nn²/k²) time complexity for solving a system whose matrix is of order n on a k × k torus NoC architecture with N iterations, assuming n and N are large compared to k (i.e. for large linear systems that require a large number of iterations). We show that under these conditions the proposed parallel GS algorithm achieves near-optimal speedup.
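
The k² factor is measured against the serial baseline, a minimal sketch of which follows (the torus mapping itself is the paper's contribution and is not reproduced here). One serial Gauss–Seidel sweep costs O(n²) operations, so N iterations cost O(Nn²), which the k × k torus divides across its k² processing elements.

```python
import numpy as np

def gauss_seidel(A, b, n_iter=50):
    """Serial Gauss-Seidel: each sweep costs O(n^2) operations, so N
    iterations cost O(N n^2) in total."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(n_iter):
        for i in range(n):
            # Use the already-updated components x[:i] (the GS ordering).
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
    return x

A = np.array([[4.0, 1.0, 1.0], [1.0, 5.0, 2.0], [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))
```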

