inner iteration
Recently Published Documents

Total documents: 27 (five years: 11)
H-index: 6 (five years: 1)

2021 · Vol 2021 · pp. 1-10
Author(s): Jutao Zhao, Pengfei Guo

The Jacobi–Davidson iteration method is very efficient for solving Hermitian eigenvalue problems. If the correction equation involved in the Jacobi–Davidson iteration is solved accurately, the simplified Jacobi–Davidson iteration is equivalent to the Rayleigh quotient iteration, which achieves a locally cubic convergence rate. The two methods remain equivalent when the linear system involved is solved inexactly by an iterative method. In this paper, we present a convergence analysis of the simplified Jacobi–Davidson method and estimate the number of iterations required by the inner correction equation. Furthermore, the convergence factor shows how the accuracy of the inner iteration controls the outer iteration.
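The Rayleigh quotient iteration mentioned above can be sketched in a few lines; the matrix, starting vector, and tolerances below are illustrative choices, not taken from the paper:

```python
import numpy as np

def rayleigh_quotient_iteration(A, v, maxit=20, tol=1e-12):
    """Rayleigh quotient iteration for a Hermitian matrix A.

    Locally cubically convergent to an eigenpair (theta, v); the exact
    shifted solve plays the role of an exactly solved Jacobi-Davidson
    correction equation.
    """
    n = A.shape[0]
    v = v / np.linalg.norm(v)
    for _ in range(maxit):
        theta = v.conj() @ A @ v          # Rayleigh quotient
        r = A @ v - theta * v             # eigen-residual
        if np.linalg.norm(r) < tol:
            break
        # Near convergence this system is ill-conditioned, but the error
        # lies mostly in the wanted eigendirection, so normalizing works.
        w = np.linalg.solve(A - theta * np.eye(n), v)
        v = w / np.linalg.norm(w)
    return v.conj() @ A @ v, v

# Small random Hermitian (real symmetric) test matrix.
rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2
theta, v = rayleigh_quotient_iteration(A, rng.standard_normal(6))
```

The cubic convergence is visible in practice: the eigen-residual typically drops from 1e-2 to machine precision within three or four iterations.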


2021 · Vol 2021 · pp. 1-18
Author(s): Yu-Ye Feng, Qing-Biao Wu

The generalized successive overrelaxation (GSOR) method is an efficient iteration for large sparse linear systems with a 2 × 2 block structure. Building on GSOR, the PGSOR method applies a preconditioning matrix with an additional parameter to the coefficient matrix, which can enhance efficiency. To solve nonlinear systems whose Jacobian matrices are complex symmetric with this block two-by-two form, we use the PGSOR method as the inner iteration and the modified Newton method as the outer iteration. The resulting method is called the modified Newton-PGSOR (MN-PGSOR) method. Local convergence properties of MN-PGSOR are analyzed under the Hölder condition. Finally, numerical results compare the new method with some previous ones: MN-PGSOR is superior in both iteration count and computing time.
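The block two-by-two structure in question arises when a complex symmetric system (W + iT)z = b is rewritten in real form. A minimal GSOR-style relaxation for that form might look as follows; the toy matrices and the parameter alpha = 0.5 are assumptions for illustration, and the paper's PGSOR preconditioning is not reproduced here:

```python
import numpy as np

def gsor_like(W, T, p, q, alpha=0.5, maxit=200, tol=1e-10):
    """GSOR-flavored relaxation for the 2x2 block real form
    [[W, -T], [T, W]] [x; y] = [p; q] of the complex symmetric
    system (W + iT)(x + iy) = p + iq, with W symmetric positive
    definite and T symmetric."""
    n = W.shape[0]
    x = np.zeros(n)
    y = np.zeros(n)
    for _ in range(maxit):
        # Successive sweeps: the y-update already uses the new x.
        x = x + alpha * np.linalg.solve(W, p + T @ y - W @ x)
        y = y + alpha * np.linalg.solve(W, q - T @ x - W @ y)
        res = np.hypot(np.linalg.norm(W @ x - T @ y - p),
                       np.linalg.norm(T @ x + W @ y - q))
        if res < tol:
            break
    return x, y

# Toy data: SPD tridiagonal W, small symmetric T.
n = 4
W = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
T = 0.3 * np.eye(n)
p = np.ones(n)
q = np.linspace(0.0, 1.0, n)
x, y = gsor_like(W, T, p, q)
```

In the MN-PGSOR setting, an iteration of this kind is applied to each Newton correction equation rather than to a single fixed linear system.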


2021 · Vol 0 (0)
Author(s): Miloud Sadkane

Abstract An inexact variant of inverse subspace iteration is used to find a small invariant pair of a large quadratic matrix polynomial. It is shown that linear convergence is preserved provided the inner iteration is performed with increasing accuracy. A preconditioned block GMRES solver is employed as the inner iteration. The preconditioner uses the strategy of “tuning”, which prevents the number of inner iterations from growing and therefore results in a substantial saving in cost. The accuracy of the computed invariant pair can be improved by a post-processing step involving very few iterations of Newton’s method. The effectiveness of the proposed approach is demonstrated by numerical experiments.
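The Newton post-processing idea can be illustrated for a single eigenpair of a quadratic polynomial Q(lam) = lam^2 M + lam C + K. This is a hedged sketch: the paper refines a whole invariant pair, and the matrices and normalization vector below are illustrative:

```python
import numpy as np

def newton_refine_quadratic(M, C, K, v, lam, c=None, maxit=5):
    """Newton post-processing for an approximate eigenpair (lam, v) of
    Q(lam) = lam^2 M + lam C + K, solving the bordered system
    F(v, lam) = [Q(lam) v; c^T v - 1] = 0.  A few steps typically
    sharpen an eigenpair produced by an outer subspace iteration."""
    n = M.shape[0]
    if c is None:
        c = v.conj() / (v.conj() @ v)     # so that c @ v == 1 initially
    for _ in range(maxit):
        Q = lam**2 * M + lam * C + K
        dQv = (2 * lam * M + C) @ v       # derivative of Q(lam) v in lam
        J = np.block([[Q, dQv[:, None]],
                      [c[None, :], np.zeros((1, 1))]])
        F = np.concatenate([Q @ v, [c @ v - 1.0]])
        delta = np.linalg.solve(J, -F)
        v = v + delta[:n]
        lam = lam + delta[n]
    return lam, v

# Toy quadratic problem with M = I; eigenpairs via companion linearization.
n = 3
rng = np.random.default_rng(1)
M = np.eye(n)
C = 0.1 * rng.standard_normal((n, n))
K = rng.standard_normal((n, n))
L = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -C]])
w, V = np.linalg.eig(L)
lam0, v0 = w[0], V[:n, 0]
# Perturb the exact eigenpair, then refine it back.
lam, v = newton_refine_quadratic(M, C, K, v0 + 1e-3, lam0 + 1e-3)
```

Newton's quadratic convergence is what makes "very few iterations" enough: an O(1e-3) perturbation is reduced to roundoff level in three or four steps.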


2021 · pp. S345-S366
Author(s): Yi-Shu Du, Ken Hayami, Ning Zheng, Keiichi Morikuni, Jun-Feng Yin

2021 · Vol 40 (3)
Author(s): Lv Zhang, Qing-Biao Wu, Min-Hong Chen, Rong-Fei Lin

Abstract In this paper, we discuss iterative methods for solving nonlinear systems with complex symmetric Jacobian matrices. By applying an FPAE iteration (a fixed-point iteration adding asymptotical error) as the inner iteration of the Newton method and the modified Newton method, we obtain the so-called Newton-FPAE and modified Newton-FPAE methods. Their local and semi-local convergence properties are analyzed under a Lipschitz condition. Finally, some numerical examples demonstrate the feasibility and effectiveness of the two new methods by comparing them with some other iterative methods.
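The inner/outer structure described above can be sketched with the modified Newton method (two steps per Jacobian evaluation) and a simple Jacobi sweep standing in for the FPAE inner iteration, which is not reproduced here; the toy nonlinear system is an assumption for illustration:

```python
import numpy as np

def inner_solve(J, r, inner_steps=30):
    """Stand-in inner iteration: Jacobi sweeps for J d = r.
    (Illustrative placeholder for the paper's FPAE inner iteration.)"""
    D = np.diag(J)
    d = np.zeros_like(r)
    for _ in range(inner_steps):
        d = d + (r - J @ d) / D
    return d

def modified_newton(F, Jac, x, outer_steps=10, tol=1e-10):
    """Two-step modified Newton: the Jacobian is evaluated once per
    outer step and reused for both inner solves."""
    for _ in range(outer_steps):
        J = Jac(x)
        y = x - inner_solve(J, F(x))
        x = y - inner_solve(J, F(y))
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Toy nonlinear system F(x) = A x + sin(x) - b with diagonally
# dominant A, so the Jacobi inner sweeps converge.
A = np.array([[4.0, 1.0], [1.0, 5.0]])
b = np.array([1.0, 2.0])
F = lambda x: A @ x + np.sin(x) - b
Jac = lambda x: A + np.diag(np.cos(x))
x = modified_newton(F, Jac, np.zeros(2))
```

Reusing the Jacobian for two corrections is what distinguishes the modified Newton outer iteration from plain Newton: one Jacobian evaluation (and one inner-solver setup) serves two residual reductions.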


2020 · Vol 0 (0)
Author(s): Pallavi Mahale, Sharad Kumar Dixit

Abstract In 2012, Jin Qinian considered an inexact Newton–Landweber iterative method for solving nonlinear ill-posed operator equations in the Banach space setting by making use of duality mappings. The method consists of two steps: an inner iteration that computes increments by Landweber iteration, and an outer iteration that provides increments by Newton iteration. He proved a convergence result for the exact-data case and, for the perturbed-data case, a weak convergence result under a Morozov-type stopping rule; however, no error bound was given. In 2013, Kaltenbacher and Tomba considered a modified version of the Newton–Landweber iterations, combining the outer Newton loop with an iteratively regularized Landweber iteration, and obtained a convergence rate result under a Hölder-type source condition. In this paper, we study the modified version of the inexact Newton–Landweber iteration under an approximate source condition and obtain an order-optimal error estimate under a suitable choice of stopping rules for the inner and outer iterations. We also show that the results proved here are more general than those of Kaltenbacher and Tomba. Finally, we give a numerical example of a parameter identification problem to support our method.
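The inner Landweber loop with a Morozov-type stopping rule can be sketched in the linear, Hilbert space case, which is a simplification of the paper's nonlinear Banach space setting; the matrix, noise level, and the constant mu below are illustrative:

```python
import numpy as np

def landweber(A, y_delta, delta, mu=2.0, maxit=50000):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (y - A x_k),
    stopped by Morozov's discrepancy principle ||A x - y|| <= mu*delta.
    The step size must satisfy tau < 2 / ||A||^2 for convergence."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for k in range(maxit):
        r = y_delta - A @ x
        if np.linalg.norm(r) <= mu * delta:
            break                 # stop before fitting the noise
        x = x + tau * A.T @ r
    return x, k

# Ill-posed toy problem with controlled singular values.
rng = np.random.default_rng(3)
U, _ = np.linalg.qr(rng.standard_normal((6, 6)))
V, _ = np.linalg.qr(rng.standard_normal((6, 6)))
A = U @ np.diag([1.0, 0.5, 0.1, 0.05, 0.02, 0.01]) @ V.T
x_true = np.ones(6)
delta = 1e-2                      # known noise level
noise = rng.standard_normal(6)
y_delta = A @ x_true + delta * noise / np.linalg.norm(noise)
x, k = landweber(A, y_delta, delta)
```

The discrepancy principle acts as the regularization here: iterating past the stopping index would start amplifying the noise along the small singular directions.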


2020 · Vol 20 (2) · pp. 343-359
Author(s): Rayan Nasser, Miloud Sadkane

Abstract This paper focuses on the inner iteration that arises in inexact inverse subspace iteration for computing a small deflating subspace of a large matrix pencil. First, it is shown that the method achieves a linear rate of convergence if the inner iteration is performed with increasing accuracy. Then block GMRES is used as the inner iteration, with preconditioners generalizing the one by Robbé, Sadkane and Spence [Inexact inverse subspace iteration with preconditioning applied to non-Hermitian eigenvalue problems, SIAM J. Matrix Anal. Appl. 31 2009, 1, 92–113]. It is shown that the preconditioners help to keep the number of iterations needed by block GMRES approximately constant. The efficiency of the preconditioners is illustrated by numerical examples.
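A single-vector analogue of inexact inverse iteration, with a conjugate gradient inner solver whose tolerance is tightened each outer step, illustrates the "increasing accuracy" regime; this sketch is unpreconditioned and for a symmetric positive definite matrix, unlike the paper's block GMRES setting:

```python
import numpy as np

def cg(A, b, tol, maxit=500):
    """Plain conjugate gradient for SPD A (the inner iteration)."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    rs = r @ r
    bnorm = np.linalg.norm(b)
    for _ in range(maxit):
        if np.sqrt(rs) <= tol * bnorm:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_inverse_iteration(A, v, outer=15):
    """Inverse iteration for the smallest eigenvalue of SPD A.
    The inner tolerance shrinks geometrically, the regime in which
    the outer iteration keeps its linear rate of convergence."""
    v = v / np.linalg.norm(v)
    for k in range(outer):
        w = cg(A, v, tol=0.5 ** (k + 1))  # increasingly accurate solve
        v = w / np.linalg.norm(w)
    return v @ A @ v, v

# SPD test matrix with known spectrum 1, 2, ..., 8.
rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = Q @ np.diag(np.arange(1.0, 9.0)) @ Q.T
theta, v = inexact_inverse_iteration(A, rng.standard_normal(8))
```

Early outer steps tolerate crude inner solves; only as the eigenvector converges does the inner solver need to work harder, which is the cost-saving observation behind the inexact methods above.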


2020 · Vol 28 (1) · pp. 145-153
Author(s): Andreas Neubauer

Abstract In this paper, we prove order optimality of an inexact Newton regularization method in which the linearized equations are solved approximately by the conjugate gradient method. The outer and inner iterations are stopped via the discrepancy principle. We show that the conditions needed for convergence rates are satisfied for a certain parameter identification problem.
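The inner/outer structure with both loops stopped by residual criteria can be sketched as follows; the well-posed toy system, CGLS as the inner conjugate gradient solver, and the constants tau and mu are illustrative assumptions, not the paper's parameter identification problem:

```python
import numpy as np

def cgls(J, b, tol, maxit=200):
    """CG on the normal equations J^T J s = J^T b (inner iteration),
    stopped when the relative linear residual drops below tol."""
    x = np.zeros(J.shape[1])
    r = b.copy()
    s = J.T @ r
    p = s.copy()
    gamma = s @ s
    bnorm = np.linalg.norm(b)
    for _ in range(maxit):
        q = J @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        if np.linalg.norm(r) <= tol * bnorm:
            break
        s = J.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

def inexact_newton(F, Jac, x, y_delta, delta, tau=2.0, mu=0.7, maxout=50):
    """Outer loop stopped by the discrepancy principle
    ||F(x) - y_delta|| <= tau * delta; each linearized equation
    J s = y_delta - F(x) is solved loosely by CGLS (forcing term mu)."""
    for _ in range(maxout):
        r = y_delta - F(x)
        if np.linalg.norm(r) <= tau * delta:
            break
        x = x + cgls(Jac(x), r, mu)
    return x

# Toy nonlinear operator and noisy data.
F = lambda x: np.array([x[0]**2 + x[1], x[0] + x[1]**2])
Jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
y = F(np.array([1.0, 2.0]))
rng = np.random.default_rng(5)
noise = rng.standard_normal(2)
delta = 1e-3
y_delta = y + delta * noise / np.linalg.norm(noise)
x = inexact_newton(F, Jac, np.array([0.5, 1.5]), y_delta, delta)
```

A loose inner tolerance (mu well below 1 but far from machine precision) is enough for the outer Newton residual to contract, which is why the inexact variant is cheaper than exact Newton steps.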


2019 · Vol 29 (7) · pp. 2179-2205
Author(s): Chih-Hao Chen, Siva Nadarajah

Purpose: This paper presents a dynamically adjusted deflated restarting procedure for the generalized conjugate residual method with inner orthogonalization (GCRO).

Design/methodology/approach: The proposed method uses a GCR solver for the outer iteration and GMRES with deflated restarting for the inner iteration. Approximate eigenpairs are evaluated at the end of each inner GMRES restart cycle, and the number of vectors to be deflated from the spectrum is chosen from the number of negative Ritz values, k∗.

Findings: The authors show that the approach restores convergence in cases where restarted GMRES fails, and compare it against standard GMRES with restarts and deflated restarting. Efficiency is demonstrated for a 2D NACA 0012 airfoil and a 3D common research model wing. In addition, numerical experiments confirm the scalability of the solver.

Originality/value: This paper extends dynamic deflated restarting to the traditional GCRO method to improve convergence with a significant reduction in memory usage. The deflation strategy selects the number of deflated vectors per restart cycle from the number of negative harmonic Ritz eigenpairs, defaults to standard restarted GMRES in the inner loop when none are found, and restricts the deflated vectors to the smallest eigenvalues of the modified Hessenberg matrix.
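The inner building block, restarted GMRES, can be sketched as follows; deflation of Ritz vectors and the outer GCR loop are omitted, and the test matrix is an illustrative diagonally dominant one:

```python
import numpy as np

def gmres_restarted(A, b, m=20, max_restarts=50, tol=1e-8):
    """Restarted GMRES(m): Arnoldi builds an m-dimensional Krylov
    basis, a small least-squares problem gives the correction, and
    the cycle restarts from the updated iterate."""
    n = b.size
    x = np.zeros(n)
    bnorm = np.linalg.norm(b)
    for _ in range(max_restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta <= tol * bnorm:
            break
        Q = np.zeros((n, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):        # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w = w - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:       # happy breakdown
                break
            Q[:, j + 1] = w / H[j + 1, j]
        k = j + 1
        e1 = np.zeros(k + 1)
        e1[0] = beta
        # Minimize ||beta*e1 - H y|| over the Krylov subspace.
        y, *_ = np.linalg.lstsq(H[:k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y
    return x

# Diagonally dominant test matrix, for which GMRES(10) converges.
rng = np.random.default_rng(6)
A = 10.0 * np.eye(30) + 0.3 * rng.standard_normal((30, 30))
b = rng.standard_normal(30)
x = gmres_restarted(A, b, m=10)
```

Restarting discards the Krylov subspace every m steps, which is exactly what can stall convergence on harder spectra; the deflated variant described above carries selected approximate eigenvectors across restarts to avoid that loss.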

