A Krylov subspace algorithm for multiquadric interpolation in many dimensions

2005 ◽  
Vol 25 (1) ◽  
pp. 1-24
Author(s):  
A. C. Faul

2011 ◽
Author(s):  
Isabelle G. Bajeux-Besnainou ◽  
Wachindra Bandara ◽  
Efstathia Bura

Author(s):  
Yuka Hashimoto ◽  
Takashi Nodera

Abstract: The Krylov subspace method has been investigated and refined for approximating the behavior of finite- or infinite-dimensional linear operators. It has been used for approximating eigenvalues, solutions of linear equations, and operator functions acting on vectors. Recently, in time-series data analysis, much attention has been paid to the Krylov subspace method as a viable means of estimating the multiplication of a vector by an unknown linear operator, referred to as a transfer operator. In this paper, we present a convergence analysis of Krylov subspace methods for estimating operator-vector multiplications.
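
For reference, the next block is a minimal NumPy/SciPy sketch of the generic technique this abstract builds on: the Arnoldi process generates an orthonormal Krylov basis from operator-vector products alone, and an operator function acting on a vector is then approximated as f(A)b ≈ ||b|| V f(H) e1. The function name arnoldi, the random test operator, and the choice f = exp are our illustrative assumptions; this is not the authors' estimator for transfer operators.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi(matvec, b, m):
    """m steps of the Arnoldi process: returns an orthonormal Krylov basis V
    (n x m) and the projected Hessenberg matrix H (m x m), using only
    operator-vector products matvec(v) ~ A @ v."""
    n = b.shape[0]
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = matvec(V[:, j])
        for i in range(j + 1):              # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: Krylov space is invariant
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / np.sqrt(200)   # stand-in for the operator
b = rng.standard_normal(200)

V, H = arnoldi(lambda v: A @ v, b, m=30)
# Krylov approximation of an operator function acting on a vector:
#   f(A) b  ~=  ||b|| * V f(H) e_1,  here with f = exp
approx = np.linalg.norm(b) * (V @ expm(H)[:, 0])
exact = expm(A) @ b
print(np.linalg.norm(approx - exact) / np.linalg.norm(exact))
```

The same projection V f(H) e1 underlies the eigenvalue, linear-system, and operator-function uses listed in the abstract; only the function of H that is evaluated changes.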


2020 ◽  
Vol 28 (1) ◽  
pp. 15-32
Author(s):  
Silvia Gazzola ◽  
Paolo Novati

Abstract: This paper introduces and analyzes an original class of Krylov subspace methods that provide an efficient alternative to many well-known conjugate-gradient-like (CG-like) Krylov solvers for square nonsymmetric linear systems arising from discretizations of inverse ill-posed problems. The main idea underlying the new methods is to consider rank-deficient approximations of the transpose of the system matrix, obtained by running the (transpose-free) Arnoldi algorithm, and then to apply Krylov solvers to a formally right-preconditioned system of equations. Theoretical insight is given, and many numerical tests show that the new solvers outperform classical Arnoldi-based or CG-like methods in a variety of situations.
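
As a rough illustration of the flavor of such methods (our own simplified reading, not the authors' algorithms), the sketch below assembles a rank-deficient surrogate for A^T from transpose-free Arnoldi quantities, here the hypothetical choice V H^T V^T, and uses it as a right preconditioner for GMRES on a discretized smoothing-kernel problem. The test kernel, the particular surrogate, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def arnoldi(A, b, m):
    """m steps of the (transpose-free) Arnoldi process on A, started from b:
    returns an orthonormal basis V (n x m) and Hessenberg matrix H (m x m)."""
    n = b.size
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                      # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12 * np.linalg.norm(b):  # near-breakdown guard
            return V[:, : j + 1], H[: j + 1, : j + 1]
        V[:, j + 1] = w / H[j + 1, j]
    return V[:, :m], H[:m, :m]

# Discretized Gaussian smoothing kernel: a typical ill-posed test operator.
n = 200
t = np.linspace(0, 1, n)
A = np.exp(-80.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = t * (1 - t)                                 # smooth "exact" solution
b = A @ x_true + 1e-6 * np.random.default_rng(3).standard_normal(n)

# Rank-m surrogate for A^T built only from Arnoldi quantities (A^T is never
# applied); the specific choice V H^T V^T is an illustrative assumption.
m = 15
V, H = arnoldi(A, b, m)
M = V @ H.T @ V.T

# Formally right-preconditioned system (A M) y = b, then x = M y.
AM = LinearOperator((n, n), matvec=lambda v: A @ (M @ v))
y, info = gmres(AM, b, maxiter=m)
x = M @ y
print("relative error vs x_true:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Because M has rank at most m, the iterates are confined to a low-dimensional subspace, which is what gives this family of methods its regularizing effect on ill-posed problems.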


Author(s):  
Shin-ichi Ito ◽  
Takeru Matsuda ◽  
Yuto Miyatake

Abstract: We consider a scalar function that depends on the numerical solution of an initial value problem, together with the second-derivative (Hessian) matrix of this function with respect to the initial value. The need to extract information from the Hessian, or to solve a linear system having the Hessian as its coefficient matrix, arises in many research fields such as optimization, Bayesian estimation, and uncertainty quantification. From the perspective of memory efficiency, these tasks often employ a Krylov subspace method that does not need to hold the Hessian matrix explicitly and only requires the multiplication of the Hessian with a given vector. One way to obtain an approximation of such a Hessian-vector multiplication is to integrate the so-called second-order adjoint system numerically. However, the error in the approximation can be significant even if the numerical integration of the second-order adjoint system is highly accurate. This paper presents a novel algorithm that computes the intended Hessian-vector multiplication exactly and efficiently. To this end, we give a new concise derivation of the second-order adjoint system and show that the intended multiplication can be computed exactly by applying a particular numerical method to the second-order adjoint system. In this discussion, symplectic partitioned Runge–Kutta methods play an essential role.
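
The matrix-free Krylov usage described here is straightforward to sketch. In the following self-contained example (our construction, not the paper's algorithm), SciPy's conjugate gradient solver receives the Hessian only through a LinearOperator whose matvec forms an approximate Hessian-vector product by a central difference of gradients; this is exactly the kind of approximation whose error the paper's exact second-order-adjoint computation is designed to avoid. The toy objective and all names are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(4)
n = 50
B = rng.standard_normal((n, n))
Q = B @ B.T / n + np.eye(n)        # SPD matrix, so the Hessian below is SPD too

# Toy scalar function of an "initial value" x, standing in for a functional of
# an ODE solution; its gradient is available in closed form here.
def grad_f(x):
    return Q @ x + x ** 3          # gradient of 0.5 x'Qx + 0.25 sum(x^4)

x0 = rng.standard_normal(n)

def hess_vec(v, eps=1e-6):
    """Approximate Hessian-vector product H(x0) v via a central difference of
    gradients. This is the kind of approximation the abstract warns about;
    the paper instead computes the product exactly via the second-order
    adjoint system and a symplectic partitioned Runge-Kutta method."""
    return (grad_f(x0 + eps * v) - grad_f(x0 - eps * v)) / (2.0 * eps)

# Matrix-free Krylov solve: CG only ever sees Hessian-vector products and
# never holds the Hessian matrix explicitly.
H = LinearOperator((n, n), matvec=hess_vec)
rhs = -grad_f(x0)                  # e.g. the linear system of a Newton step
p, info = cg(H, rhs)
print(info, np.linalg.norm(hess_vec(p) - rhs) / np.linalg.norm(rhs))
```

Swapping the finite-difference matvec for an exact Hessian-vector product, as the paper proposes, leaves the surrounding Krylov machinery unchanged; only the quality of each multiplication improves.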

