On Single Precision Preconditioners for Krylov Subspace Iterative Methods

Author(s):  
Hiroto Tadano ◽  
Tetsuya Sakurai


2019 ◽
Vol 13 ◽  
pp. 174830261986173 ◽  
Author(s):  
Jae H Yun

In this paper, we consider the performance of relaxation iterative methods for four types of image deblurring problems with different regularization terms. We first study how to apply relaxation iterative methods efficiently to Tikhonov regularization problems, and then we propose techniques for finding good preconditioners and near-optimal relaxation parameters, which are essential for the fast convergence and computational efficiency of relaxation iterative methods. We next study efficient applications of relaxation iterative methods to the Split Bregman method and the fixed-point method for solving L1-norm or total variation regularization problems. Lastly, we provide numerical experiments for four types of image deblurring problems to evaluate the efficiency of relaxation iterative methods by comparing their performance with that of Krylov subspace iterative methods. Numerical experiments show that the proposed techniques for finding preconditioners and near-optimal relaxation parameters work well for image deblurring problems. For the L1-norm and total variation regularization problems, the Split Bregman and fixed-point methods using relaxation iterative methods perform quite well in terms of both peak signal-to-noise ratio and execution time compared with those using Krylov subspace methods.
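As a rough illustration of the relaxation iterations discussed in this abstract (not the authors' exact scheme), the sketch below applies an optimally damped Richardson relaxation to the Tikhonov-regularized normal equations; the blurring operator, noise level, and regularization parameter are all illustrative stand-ins.

```python
import numpy as np

# Richardson relaxation for the Tikhonov normal equations
#   (A^T A + lam*I) x = A^T b,
# with the relaxation parameter chosen from the extreme eigenvalues
# of the SPD system matrix. All data here are hypothetical stand-ins.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))        # stand-in for a blurring operator
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(40)

lam = 1e-2                                # assumed regularization parameter
M = A.T @ A + lam * np.eye(20)
rhs = A.T @ b

evals = np.linalg.eigvalsh(M)
omega = 2.0 / (evals[0] + evals[-1])      # optimal Richardson parameter

x = np.zeros(20)
for _ in range(2000):
    x = x + omega * (rhs - M @ x)         # relaxation sweep

residual = np.linalg.norm(rhs - M @ x)
```

The choice omega = 2/(lambda_min + lambda_max) minimizes the spectral radius of the iteration matrix I - omega*M, which is the kind of "near optimal relaxation parameter" the abstract refers to.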


2013 ◽  
Vol 2013 ◽  
pp. 1-6
Author(s):  
Wei-Hua Luo ◽  
Ting-Zhu Huang

By using the Sherman-Morrison-Woodbury formula, we introduce a preconditioner based on a parameterized splitting idea for generalized saddle point problems, which may be singular and nonsymmetric. By analyzing the eigenvalues of the preconditioned matrix, we find that when α is large enough, it has an eigenvalue at 1 with multiplicity at least n, and the remaining eigenvalues are all located in a unit circle centered at 1. In particular, when the preconditioner is used for general saddle point problems, it guarantees an eigenvalue at 1 with the same multiplicity, and the remaining eigenvalues tend to 1 as the parameter α → 0. Consequently, this can lead to good convergence when GMRES-type Krylov subspace methods are used. Numerical results for Stokes problems and Oseen problems are presented to illustrate the behavior of the preconditioner.
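The Sherman-Morrison-Woodbury identity underlying this preconditioner can be checked numerically; the sketch below verifies it on random matrices (stand-ins, not the paper's saddle point blocks), showing that only a small k-by-k inverse is needed to invert a low-rank update of A.

```python
import numpy as np

# Numerical check of the Sherman-Morrison-Woodbury identity:
#   (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}.
rng = np.random.default_rng(1)
n, k = 8, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned A
U = rng.standard_normal((n, k))
V = rng.standard_normal((n, k))

Ainv = np.linalg.inv(A)
small = np.linalg.inv(np.eye(k) + V.T @ Ainv @ U)  # only a k x k inverse
woodbury = Ainv - Ainv @ U @ small @ V.T @ Ainv

direct = np.linalg.inv(A + U @ V.T)
err = np.linalg.norm(woodbury - direct)
```

The practical point is that applying the preconditioner never requires forming the inverse of the full updated matrix, only solves with A and a small dense system.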


2018 ◽  
Vol 63 ◽  
pp. 1-43
Author(s):  
C. Vuik

In these lecture notes, an introduction to Krylov subspace solvers and preconditioners is presented. After discretization of partial differential equations, large, sparse systems of linear equations have to be solved. Fast solution of these systems is very important nowadays. The size of the problems can be 10^13 unknowns and 10^13 equations. Iterative solution methods are the methods of choice for these large linear systems. We start with a short introduction to Basic Iterative Methods. Thereafter, preconditioned Krylov subspace methods, which are the state of the art, are described. A distinction is made between various classes of matrices. At the end of the lecture notes, many references are given to state-of-the-art Scientific Computing methods. Here, we discuss a number of books which are useful for an overview of background material. First of all, the books of Golub and Van Loan [19] and Horn and Johnson [26] are classical works on all aspects of numerical linear algebra. These books also contain most of the material used for direct solvers. Varga [50] is a good starting point for studying the theory of basic iterative methods. Krylov subspace methods and multigrid are discussed in Saad [38] and Trottenberg, Oosterlee and Schüller [42]. Other books on Krylov subspace methods are [1, 6, 21, 34, 39].
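A minimal example of the preconditioned Krylov subspace methods these notes cover: conjugate gradients with a Jacobi (diagonal) preconditioner on a 1-D Poisson matrix. The problem data and preconditioner choice are illustrative, not taken from the notes.

```python
import numpy as np

# Preconditioned conjugate gradients (PCG) for an SPD system from a
# 1-D Poisson discretization; the Jacobi preconditioner M = diag(A)
# stands in for the more sophisticated preconditioners in the notes.
n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1-D Laplacian
b = np.ones(n)
Minv = 1.0 / np.diag(A)          # apply M^{-1} elementwise

x = np.zeros(n)
r = b - A @ x
z = Minv * r                     # preconditioned residual
p = z.copy()
for _ in range(1000):
    Ap = A @ p
    alpha = (r @ z) / (p @ Ap)
    x = x + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        r = r_new
        break
    z_new = Minv * r_new
    beta = (r_new @ z_new) / (r @ z)
    p = z_new + beta * p
    r, z = r_new, z_new

residual = np.linalg.norm(b - A @ x)
```

For the Laplacian the Jacobi preconditioner is only a mild improvement; the notes' point is that the same PCG skeleton accepts any SPD preconditioner through the single `Minv` application.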


Acta Numerica ◽  
1992 ◽  
Vol 1 ◽  
pp. 57-100 ◽  
Author(s):  
Roland W. Freund ◽  
Gene H. Golub ◽  
Noël M. Nachtigal

Recent advances in the field of iterative methods for solving large linear systems are reviewed. The main focus is on developments in the area of conjugate gradient-type algorithms and Krylov subspace methods for non-Hermitian matrices.


2013 ◽  
Vol 11 (8) ◽  
Author(s):  
Zahari Zlatev ◽  
Krassimir Georgiev

Many problems arising in different fields of science and engineering can be reduced, by applying an appropriate discretization, either to a system of linear algebraic equations or to a sequence of such systems. The solution of a system of linear algebraic equations is very often the most time-consuming part of the computational process during the treatment of the original problem, because these systems can be very large (containing up to many millions of equations). It is therefore important to select fast, robust and reliable methods for their solution, even when fast modern computers are available. Since the coefficient matrices of the systems are normally sparse (i.e. most of their elements are zeros), the first requirement is to exploit the sparsity efficiently. However, this is normally not sufficient when the systems are very large. The computation of preconditioners based on approximate LU-factorizations, and their use in efforts to further increase the efficiency of the calculations, will be discussed in this paper. Computational experiments based on comprehensive comparisons of many numerical results, obtained by using ten well-known methods for solving systems of linear algebraic equations (direct Gaussian elimination and nine iterative methods), will be reported. Most of the considered methods are preconditioned Krylov subspace algorithms.
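The approximate LU-factorizations mentioned here can be sketched as ILU(0): Gaussian elimination in which any fill-in outside the nonzero pattern of A is discarded. The dense toy implementation below (an illustration of the idea, not the paper's code) exploits the standard property that the resulting product LU matches A exactly on its nonzero pattern.

```python
import numpy as np

def ilu0(A):
    """Toy ILU(0): Gaussian elimination restricted to the pattern of A.

    Fill-in outside the original nonzero pattern is dropped, so M = L @ U
    is only an approximate factorization usable as a preconditioner.
    Dense O(n^3) illustration; no pivoting (assumes a stable diagonal).
    """
    n = A.shape[0]
    pattern = A != 0                      # fixed sparsity pattern of A
    LU = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            if pattern[i, k]:             # multiplier only where A had an entry
                LU[i, k] /= LU[k, k]
                for j in range(k + 1, n):
                    if pattern[i, j]:     # update kept; fill-in discarded
                        LU[i, j] -= LU[i, k] * LU[k, j]
    L = np.tril(LU, -1) + np.eye(n)
    U = np.triu(LU)
    return L, U

# Diagonally dominant banded test matrix (a stand-in, not from the paper).
n = 30
A = (4 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
     - np.eye(n, k=5) - np.eye(n, k=-5))
L, U = ilu0(A)

# ILU(0) reproduces A exactly on its own nonzero pattern;
# the factorization error lives only in the dropped fill positions.
err = np.linalg.norm((L @ U - A)[A != 0])
```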


2020 ◽  
Vol 2020 ◽  
pp. 1-8 ◽  
Author(s):  
K. Niazi Asil ◽  
M. Ghasemi Kamalvand

The indefinite inner product defined by J = diag(j1, …, jn), jk ∈ {−1, +1}, arises frequently in some applications, such as the theory of relativity and the study of polarized light. This indefinite scalar product is referred to as a hyperbolic inner product. In this paper, we introduce three indefinite iterative methods: the indefinite Arnoldi method, the indefinite Lanczos method (ILM), and the indefinite full orthogonalization method (IFOM). The indefinite Arnoldi method is introduced as a process that constructs a J-orthonormal basis for the nondegenerate Krylov subspace. The ILM method is introduced as a special case of the indefinite Arnoldi method for J-Hermitian matrices. IFOM is presented as a process for solving linear systems of equations with J-Hermitian coefficient matrices. Finally, by providing numerical examples, the FOM, IFOM, and ILM processes are compared with each other in terms of the time required for solving linear systems and the number of iterations.
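A sketch of the J-orthonormalization at the heart of such an indefinite Arnoldi process: modified Gram-Schmidt in the hyperbolic inner product <x, y>_J = y^T J x, producing a Krylov basis V with V^T J V = diag(±1). The matrices here are random stand-ins, and breakdown handling (a vector with <w, w>_J ≈ 0) is omitted.

```python
import numpy as np

def indefinite_arnoldi(A, J, b, m):
    """Build a J-orthonormal basis of the Krylov subspace K_m(A, b).

    Illustrative sketch only: no breakdown handling for near-degenerate
    vectors, and two Gram-Schmidt passes for numerical robustness.
    """
    n = len(b)
    V = np.zeros((n, m))
    signs = np.zeros(m)
    w = b.astype(float).copy()
    for k in range(m):
        for _ in range(2):                # re-orthogonalize: twice is enough
            for i in range(k):
                # J-projection coefficient: <w, v_i>_J / <v_i, v_i>_J
                w = w - signs[i] * (V[:, i] @ (J @ w)) * V[:, i]
        norm2 = w @ (J @ w)               # may be negative in a J-product
        signs[k] = np.sign(norm2)
        V[:, k] = w / np.sqrt(abs(norm2))
        w = A @ V[:, k]                   # next Krylov direction
    return V, signs

rng = np.random.default_rng(3)
n, m = 12, 5
J = np.diag(rng.choice([-1.0, 1.0], size=n))   # hyperbolic signature matrix
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

V, signs = indefinite_arnoldi(A, J, b, m)
G = V.T @ J @ V                    # Gram matrix in the J-inner product
err = np.linalg.norm(G - np.diag(signs))
```

Unlike ordinary Arnoldi, the "norm" <w, w>_J can be negative, so each basis vector carries a sign; J-orthonormality means the Gram matrix is diag(±1) rather than the identity.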


2016 ◽  
Vol 9 (2) ◽  
pp. 289-314 ◽  
Author(s):  
Wujian Peng ◽  
Qun Lin

Most currently prevalent iterative methods can be classified as so-called extended Krylov subspace methods; a class of iterative methods that do not fall into this category is also proposed in this paper. Compared with traditional Krylov subspace methods, which depend on matrix-vector multiplication with a fixed matrix, the newly introduced methods (the so-called (progressively) accumulated projection methods, or AP (PAP) for short) use a projection matrix which varies in every iteration to form a subspace from which an approximate solution is sought. More importantly, an accelerated approach (called APAP) is introduced to improve the convergence of the PAP method. Numerical experiments demonstrate some surprisingly improved convergence behavior. Comparisons with benchmark extended Krylov subspace methods (block Jacobi and GMRES) are made, and one can also see a remarkable advantage of APAP in some examples. APAP is also used to solve systems with an extremely ill-conditioned coefficient matrix (the Hilbert matrix), and numerical experiments show that it yields very satisfactory results even when the size of the system is up to a few thousand.

