Error vector choice in direct inversion in the iterative subspace method

DOI: 10.1002/jcc.4
1996, Vol 17 (16), pp. 1836-1847
Author(s): Irina V. Ionova, Emily A. Carter

2012, Vol 8 (12), pp. 5175-5179
Author(s): Joseph W. May, Jeremy D. Lehner, Michael J. Frisch, Xiaosong Li

2006, Vol 418 (4-6), pp. 359-360
Author(s): Rustam Z. Khaliullin, Martin Head-Gordon, Alexis T. Bell

Author(s): Yuka Hashimoto, Takashi Nodera

Abstract: The Krylov subspace method has been investigated and refined for approximating the behavior of finite- or infinite-dimensional linear operators. It has been used to approximate eigenvalues, solutions of linear equations, and operator functions acting on vectors. Recently, in time-series data analysis, much attention has been paid to the Krylov subspace method as a viable means of estimating the multiplication of a vector by an unknown linear operator, referred to as a transfer operator. In this paper, we present a convergence analysis of Krylov subspace methods for estimating operator-vector multiplications.
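
As a minimal illustration of the underlying technique (not the authors' estimation scheme, which works on time-series data), the sketch below approximates an operator-function-vector product f(A)b ~ ||b|| Q_m f(H_m) e_1 from a Krylov subspace built by Arnoldi iteration, accessing A only through matrix-vector products. All names and parameter values are illustrative.

import numpy as np
from scipy.linalg import expm

def arnoldi(matvec, b, m):
    """Build an orthonormal Krylov basis Q_m and Hessenberg matrix H_m."""
    n = b.shape[0]
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    for j in range(m):
        w = matvec(Q[:, j])
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = Q[:, i] @ w
            w = w - H[i, j] * Q[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown: exact subspace
            return Q[:, :j + 1], H[:j + 1, :j + 1], beta
        Q[:, j + 1] = w / H[j + 1, j]
    return Q[:, :m], H[:m, :m], beta

def krylov_expv(matvec, b, m=30):
    """Approximate exp(A) @ b from an m-dimensional Krylov subspace."""
    Q, H, beta = arnoldi(matvec, b, m)
    e1 = np.zeros(H.shape[0])
    e1[0] = 1.0
    return beta * (Q @ (expm(H) @ e1))

# Illustrative check against a dense reference solution.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) / np.sqrt(200)
b = rng.standard_normal(200)
approx = krylov_expv(lambda v: A @ v, b)
print(np.linalg.norm(approx - expm(A) @ b))     # small approximation error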


Author(s): Shin-ichi Ito, Takeru Matsuda, Yuto Miyatake

Abstract: We consider a scalar function that depends on the numerical solution of an initial value problem, and its second-derivative (Hessian) matrix with respect to the initial value. The need to extract information about the Hessian, or to solve a linear system having the Hessian as the coefficient matrix, arises in many research fields such as optimization, Bayesian estimation, and uncertainty quantification. For memory efficiency, these tasks often employ a Krylov subspace method that does not need to hold the Hessian matrix explicitly and only requires the multiplication of the Hessian with a given vector. One way to obtain an approximation of such a Hessian-vector multiplication is to integrate the so-called second-order adjoint system numerically. However, the error in the approximation can be significant even if the numerical integration of the second-order adjoint system is sufficiently accurate. This paper presents a novel algorithm that computes the intended Hessian-vector multiplication exactly and efficiently. To this end, we give a new, concise derivation of the second-order adjoint system and show that the intended multiplication can be computed exactly by applying a particular numerical method to the second-order adjoint system. In the discussion, symplectic partitioned Runge–Kutta methods play an essential role.
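
As a hedged sketch of the matrix-free usage pattern described above (not the paper's exact adjoint-based algorithm), the snippet below feeds a Hessian-vector product into a Krylov solver via SciPy's LinearOperator. Here the product is approximated by central differences of the gradient of an illustrative quadratic test function; this is precisely the kind of approximation the paper proposes to replace with an exact evaluation.

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 50
M_test = np.diag(np.arange(1.0, n + 1))    # SPD Hessian of the test function

def grad(x):
    # Gradient of the illustrative quadratic f(x) = x^T M_test x / 2.
    return M_test @ x

def hess_vec(x, v, eps=1e-6):
    # Approximate Hessian-vector product by central differences of the
    # gradient; the paper replaces this kind of approximation with an
    # exact evaluation via the second-order adjoint system.
    return (grad(x + eps * v) - grad(x - eps * v)) / (2.0 * eps)

x0 = np.ones(n)
H = LinearOperator((n, n), matvec=lambda v: hess_vec(x0, v))
rhs = -grad(x0)                            # Newton step: solve H dx = -g
dx, info = cg(H, rhs)                      # Krylov solver, matrix-free
print(info, np.linalg.norm(M_test @ dx + grad(x0)))   # 0, small residual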


Author(s): Jonas Dünnebacke, Stefan Turek, Christoph Lohmann, Andriy Sokolov, Peter Zajac

We discuss how “parallel-in-space & simultaneous-in-time” Newton-multigrid approaches can be designed to improve the scaling behavior of the spatial parallelism by reducing latency costs. The idea is to solve many time steps at once, thereby solving fewer but larger systems. These large systems are reordered and interpreted as a space-only problem, leading to a multigrid algorithm with semi-coarsening in space and line smoothing in the time direction. The smoother is further improved by embedding it as a preconditioner in a Krylov subspace method. As a prototypical application, we concentrate on scalar partial differential equations (PDEs) with up to many thousands of time steps, discretized by finite differences in time and by finite elements in space. For linear PDEs, the resulting method is closely related to multigrid waveform relaxation and its theoretical framework. In our parabolic test problems, the numerical behavior of this multigrid approach is robust with respect to the spatial and temporal grid sizes and the number of simultaneously treated time steps. Moreover, we illustrate how corresponding time-simultaneous fixed-point and Newton-type solvers can be derived for nonlinear nonstationary problems, which require the described solution of linearized problems in each outer nonlinear step. As the main result, we are able to generate much larger problem sizes to be treated by a large number of cores, so that the combination of robustly scaling multigrid solvers with a higher degree of parallelism allows a faster solution procedure for nonstationary problems.
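
As a minimal sketch of the “solve many time steps at once” idea (not the paper's Newton-multigrid solver or its space/time smoothers), the snippet below assembles K implicit-Euler steps of a 1D heat equation into one block lower-bidiagonal system and solves it in a single GMRES call. A block preconditioner built from one LU factorization of the single-step matrix stands in for the embedded smoother; all parameter values are illustrative.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, factorized, gmres

n, K, dt = 64, 100, 1e-2                   # space points, time steps, step size
h = 1.0 / (n + 1)
L = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2  # 1D Laplacian
A_step = (sp.identity(n) - dt * L).tocsc() # one implicit Euler step
S = sp.diags([1.0], [-1], shape=(K, K))    # couples step k to step k-1

# All-at-once system: (I_K (x) A_step - S (x) I_n) U = e_1 (x) u0.
A_global = (sp.kron(sp.identity(K), A_step) - sp.kron(S, sp.identity(n))).tocsr()
u0 = np.sin(np.pi * np.linspace(h, 1.0 - h, n))
rhs = np.zeros(n * K)
rhs[:n] = u0

# Block preconditioner reusing one LU factorization of the single-step
# matrix, standing in for the smoother embedded in the Krylov method.
solve_step = factorized(A_step)
precond = LinearOperator((n * K, n * K),
                         matvec=lambda v: np.concatenate(
                             [solve_step(blk) for blk in v.reshape(K, n)]))

U, info = gmres(A_global, rhs, M=precond)  # all K time steps in one solve
print(info, U.reshape(K, n)[-1].max())     # solution at the final time step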

