PARALLEL ITERATIVE REGULARIZATION ALGORITHMS FOR LARGE OVERDETERMINED LINEAR SYSTEMS

2010, Vol. 07 (04), pp. 525-537
Author(s): Pham Ky Anh, Vu Tien Dung

In this paper, we study the performance of some parallel iterative regularization methods for solving large overdetermined systems of linear equations.
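For orientation, below is a minimal sketch of one classical iterative regularization scheme of this kind: Landweber iteration with early stopping under the discrepancy principle. It is not the authors' specific parallel algorithm; the function name landweber and the parameters omega, tau, and noise_level are illustrative choices. The point relevant to parallelization is that the correction A^T(b - Ax) splits into a sum over row blocks of A, so each block term can be computed on a separate processor.

```python
# Sketch only: early-stopped Landweber iteration for an overdetermined system,
# NOT the paper's specific parallel regularization method.
import numpy as np

def landweber(A, b, omega=None, noise_level=0.0, tau=1.1, max_iter=500):
    """Early-stopped Landweber iteration for the overdetermined system Ax ~= b."""
    if omega is None:
        # step size must satisfy 0 < omega < 2 / ||A||_2^2 for convergence
        omega = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(max_iter):
        r = b - A @ x
        # discrepancy principle: stop once the residual reaches the noise level
        if noise_level > 0 and np.linalg.norm(r) <= tau * noise_level:
            break
        x = x + omega * (A.T @ r)   # the sum over row blocks of A parallelizes
    return x

# tiny usage example on a consistent 4x2 system
A = np.array([[1., 0.], [0., 1.], [1., 1.], [1., -1.]])
b = A @ np.array([2., -3.])
print(landweber(A, b))            # approaches [2, -3]
```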

2010, Vol. 104 (2), p. 160
Author(s): Sarah B. Bush

I often think back to a vivid memory from my student-teaching experience. At the time, I naively believed that the weeks spent with my first-year algebra class discussing and practicing the art of solving systems of linear equations by graphing, substitution, and elimination were a success. But just at that point the students started asking revealing questions such as "How do you know which method to pick so that you get the correct solution?" and "Which systems go with which methods?" I then realized that my instruction had failed to guide my students toward conceptualizing the big picture of linear systems and instead had left them with procedures they did not know how to apply. At that juncture I decided to try this discovery-oriented lesson.
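As a concrete illustration of the point the students were missing, any correct method yields the same unique solution of a given system; the small sketch below (the 2x2 system is invented for this example and is not from the article) checks substitution, elimination, and a matrix solve against one another.

```python
# Illustration only: the choice of method does not change the answer.
import numpy as np

# system:  x + y = 5
#          2x - y = 1

# substitution: y = 5 - x  ->  2x - (5 - x) = 1  ->  3x = 6  ->  x = 2, y = 3
x_sub = 6 / 3
y_sub = 5 - x_sub

# elimination: add the two equations  ->  3x = 6  ->  x = 2, then y = 5 - x = 3
x_elim = 6 / 3
y_elim = 5 - x_elim

# matrix form, solved numerically
x_mat, y_mat = np.linalg.solve(np.array([[1., 1.], [2., -1.]]), np.array([5., 1.]))

print((x_sub, y_sub), (x_elim, y_elim), (x_mat, y_mat))  # all give (2.0, 3.0)
```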


2017, Vol. 7 (1), pp. 143-155
Author(s): Jing Wang, Xue-Ping Guo, Hong-Xiu Zhong

Abstract: The preconditioned modified Hermitian and skew-Hermitian splitting (PMHSS) method is an unconditionally convergent iteration method for solving large sparse complex symmetric systems of linear equations, and it depends on a single parameter α. By adding another parameter β, the generalized PMHSS (GPMHSS) method becomes essentially a two-parameter iteration method. To accelerate the GPMHSS method, we propose, in an unexpected way, an accelerated GPMHSS (AGPMHSS) method for large complex symmetric linear systems. Numerical experiments illustrate the behavior of the new method.
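For context, the sketch below implements only the single-parameter MHSS iteration that underlies the PMHSS/GPMHSS/AGPMHSS family, for A = W + iT with W, T real symmetric and W positive definite; the α-β acceleration of GPMHSS and AGPMHSS follows the paper's own formulas and is not reproduced here. The function name mhss and the test matrices are illustrative.

```python
# Sketch only: the basic one-parameter MHSS iteration for A = W + iT,
# not the GPMHSS/AGPMHSS variants proposed in the paper.
import numpy as np

def mhss(W, T, b, alpha, x0=None, tol=1e-10, max_iter=200):
    """Solve (W + iT) x = b by the MHSS two-half-step iteration."""
    n = W.shape[0]
    I = np.eye(n)
    x = np.zeros(n, dtype=complex) if x0 is None else x0.astype(complex)
    A = W + 1j * T
    for _ in range(max_iter):
        # first half-step:  (alpha I + W) x_half = (alpha I - iT) x + b
        x_half = np.linalg.solve(alpha * I + W, (alpha * I - 1j * T) @ x + b)
        # second half-step: (alpha I + T) x_new  = (alpha I + iW) x_half - i b
        x = np.linalg.solve(alpha * I + T, (alpha * I + 1j * W) @ x_half - 1j * b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x

# tiny usage example with a known solution
W = np.array([[4., 1.], [1., 3.]])
T = np.array([[2., 0.], [0., 1.]])
b = (W + 1j * T) @ np.array([1. + 1j, -2. + 0.5j])
print(np.allclose(mhss(W, T, b, alpha=1.0), [1. + 1j, -2. + 0.5j]))
```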


2021, Vol. 24 (1)
Author(s): Ernesto Dufrechou

Many problems in diverse areas of science and engineering involve the solution of large-scale sparse systems of linear equations. In most of these scenarios they are also a computational bottleneck, and their efficient solution on parallel architectures has therefore motivated a tremendous volume of research. This dissertation targets the use of GPUs to enhance the performance of the solution of sparse linear systems using iterative methods complemented with state-of-the-art preconditioning techniques. In particular, we study ILUPACK, a package for the solution of sparse linear systems via Krylov subspace methods that relies on a modern inverse-based multilevel ILU (incomplete LU) preconditioning technique. We present new data-parallel versions of the preconditioner and of the most important solvers contained in the package that significantly improve its performance without affecting its accuracy. Additionally, we enhance existing task-parallel versions of ILUPACK for shared- and distributed-memory systems with the inclusion of GPU acceleration. The results obtained show a noticeable reduction in the runtime of the methods, as well as the possibility of addressing large-scale problems efficiently.
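As a rough CPU-only analogue of the pattern described above, a Krylov solver preconditioned with an incomplete LU factorization, the sketch below combines SciPy's single-level spilu with GMRES. It does not use ILUPACK's inverse-based multilevel ILU or any GPU kernels, and the test matrix (a finite-difference Laplacian) is made up for the demo; it only shows how the preconditioner plugs into the iterative solver.

```python
# Sketch only: ILU-preconditioned GMRES with SciPy, not ILUPACK and not GPU-accelerated.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# build a sparse test system (2D finite-difference Laplacian)
n = 50
lap1d = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.eye(n), lap1d) + sp.kron(lap1d, sp.eye(n))).tocsc()
b = np.ones(A.shape[0])

# incomplete LU factorization wrapped as a preconditioner operator
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator(A.shape, matvec=ilu.solve)

# preconditioned GMRES
x, info = spla.gmres(A, b, M=M, restart=30, maxiter=1000)
print("converged" if info == 0 else f"gmres info = {info}",
      "residual =", np.linalg.norm(b - A @ x))
```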

