Low-Rank Updates of Matrix Functions II: Rational Krylov Methods

2021 ◽  
Vol 59 (3) ◽  
pp. 1325-1347
Author(s):  
Bernhard Beckermann ◽  
Alice Cortinovis ◽  
Daniel Kressner ◽  
Marcel Schweitzer
2018 ◽  
Vol 25 (6) ◽  
pp. e2176 ◽  
Author(s):  
Elias Jarlebring ◽  
Giampaolo Mele ◽  
Davide Palitta ◽  
Emil Ringh
Keyword(s):  
Low Rank ◽  

Algorithms ◽  
2020 ◽  
Vol 13 (4) ◽  
pp. 100 ◽  
Author(s):  
Luca Bergamaschi

The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems A_k x_k = b_k, k = 1, 2, …, arising in many scientific applications, such as the discretization of transient partial differential equations (PDEs), the solution of eigenvalue problems, (inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. In this paper, we analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We also review some techniques to efficiently approximate the linearly independent vectors which constitute the low-rank corrections, whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be greatly enhanced by the use of low-rank updates.
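The low-rank update idea in this abstract can be illustrated with a minimal numpy sketch (not code from the survey; it assumes the simplest setting where the initial preconditioner P0 is the identity): a rank-k correction built from the k smallest eigenpairs maps those eigenvalues of the preconditioned matrix to 1, which is exactly the kind of spectral clustering that accelerates PCG.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a small SPD test matrix with a few very small eigenvalues
# (these are the ones that slow down CG convergence).
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = np.concatenate([[1e-4, 1e-3, 1e-2], np.linspace(1.0, 2.0, n - 3)])
A = Q @ np.diag(eigs) @ Q.T

# Take P0 = I as the initial preconditioner and update it by a rank-k
# term built from the k smallest eigenpairs of P0^{-1} A = A; the update
# maps those eigenvalues to 1 and leaves the rest of the spectrum alone.
k = 3
lam, W = np.linalg.eigh(A)
Wk, lamk = W[:, :k], lam[:k]                      # k smallest eigenpairs
M_inv = np.eye(n) + Wk @ np.diag(1.0 / lamk - 1.0) @ Wk.T

# Spectrum of the preconditioned matrix: the deflated eigenvalues are
# now 1, so the condition number relevant to PCG drops from ~2e4 to ~2.
prec_eigs = np.sort(np.linalg.eigvals(M_inv @ A).real)
print(prec_eigs[:k], prec_eigs[-1])
```

For an exact eigenvector w_i the update gives M^{-1} A w_i = (lam_i + (1 - lam_i)) w_i = w_i, which is the one-line calculation behind the deflation effect.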


2020 ◽  
Vol 41 (4) ◽  
pp. 1477-1504
Author(s):  
Silvia Gazzola ◽  
Chang Meng ◽  
James G. Nagy
Keyword(s):  
Low Rank ◽  

2018 ◽  
Vol 39 (1) ◽  
pp. 539-565 ◽  
Author(s):  
Bernhard Beckermann ◽  
Daniel Kressner ◽  
Marcel Schweitzer
Keyword(s):  
Low Rank ◽  

Author(s):  
Luca Bergamaschi

The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of linear systems Ax = b. Such sequences arise in many scientific applications, such as the discretization of transient PDEs, the solution of eigenvalue problems, (inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. General-purpose preconditioners such as the Incomplete Cholesky (IC) factorization or approximate inverses aim at clustering the eigenvalues of the preconditioned matrix around one. In this paper we analyze a number of techniques for updating a given IC preconditioner (denoted P0 in the sequel) by a low-rank matrix with the aim of further improving this clustering. The most popular low-rank strategies aim either at removing the smallest eigenvalues (deflation) or at shifting them towards the middle of the spectrum. The low-rank correction is based on a (small) number of linearly independent vectors whose choice is crucial for the effectiveness of the approach. In many cases these vectors are approximations of the eigenvectors corresponding to the smallest eigenvalues of the preconditioned matrix P0 A. We also review some techniques to efficiently approximate these vectors within the solution of a sequence of linear systems whose coefficient matrices are constant or change only slightly. Numerical results concerning sequences arising from the discretization of linear and nonlinear PDEs and from the iterative solution of eigenvalue problems show that the performance of a given iterative solver can be greatly enhanced by the use of low-rank updates.


Acta Numerica ◽  
2020 ◽  
Vol 29 ◽  
pp. 403-572
Author(s):  
Per-Gunnar Martinsson ◽  
Joel A. Tropp

This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems. It focuses on techniques that have a proven track record for real-world problems. The paper treats both the theoretical foundations of the subject and practical computational issues. Topics include norm estimation, matrix approximation by sampling, structured and unstructured random embeddings, linear regression problems, low-rank approximation, subspace iteration and Krylov methods, error estimation and adaptivity, interpolatory and CUR factorizations, Nyström approximation of positive semidefinite matrices, single-view (‘streaming’) algorithms, full rank-revealing factorizations, solvers for linear systems, and approximation of kernel matrices that arise in machine learning and in scientific computing.
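One of the building blocks listed above, low-rank approximation via a random embedding, can be sketched in a few lines of numpy (an illustrative example under an exact-rank assumption, not code from the survey): sketch A with a Gaussian test matrix, orthonormalize the sample, and project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Test matrix of exact rank r (a product of two thin Gaussian factors).
m, n, r = 200, 100, 10
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))

# Randomized range finder: with a little oversampling p, the columns of
# A @ Omega capture the dominant range of A with high probability.
p = 5                                    # oversampling parameter
Omega = rng.standard_normal((n, r + p))  # unstructured random embedding
Q, _ = np.linalg.qr(A @ Omega)           # orthonormal basis for the sample
B = Q.T @ A                              # small (r + p) x n factor

# A ~= Q B; since A has exact rank r here, the error is at roundoff level.
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
print(err)
```

A subsequent SVD of the small factor B yields a near-optimal rank-r factorization of A; the survey's subspace-iteration and single-view variants refine this same template.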


Author(s):  
Davide Palitta ◽  
Patrick Kürschner

Low-rank Krylov methods are one of the few options available in the literature for the numerical solution of large-scale general linear matrix equations. These routines amount to well-known Krylov schemes equipped with low-rank truncations that maintain a feasible storage demand throughout the solution procedure. However, such truncations may affect the convergence properties of the adopted Krylov method. In this paper we show how the truncation steps have to be performed in order to maintain the convergence of the Krylov routine. Several numerical experiments validate our theoretical findings.
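The truncation operation at the heart of such routines can be sketched as follows (an illustrative numpy example for the symmetric case where iterates are stored in factored form X ≈ Z Zᵀ; this is a standard QR-plus-SVD compression, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def truncate(Z, tol=1e-10):
    """Compress a low-rank factor so that Z_new @ Z_new.T ~= Z @ Z.T."""
    Q, R = np.linalg.qr(Z, mode='reduced')  # Z = Q R, R is small
    U, s, _ = np.linalg.svd(R)              # R R^T = U diag(s^2) U^T
    k = int(np.sum(s > tol * s[0]))         # numerical rank of Z
    return Q @ U[:, :k] * s[:k]             # keep the dominant directions

# Two factors whose sum mimics one Krylov update in factored form;
# Z2 is built inside range(Z1), so the stacked factor is rank-deficient
# and the truncation genuinely shrinks it.
n = 100
Z1 = rng.standard_normal((n, 5))
Z2 = Z1 @ rng.standard_normal((5, 5))
Zc = truncate(np.hstack([Z1, Z2]))

X_exact = Z1 @ Z1.T + Z2 @ Z2.T
print(Zc.shape, np.linalg.norm(Zc @ Zc.T - X_exact))
```

The paper's point is that where and how often this compression is applied inside the Krylov recurrence determines whether the truncated scheme retains the convergence of the exact one.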

