Smoothed-Adaptive Perturbed Inverse Iteration for Elliptic Eigenvalue Problems

2021, Vol 0 (0)
Author(s): Stefano Giani, Luka Grubišić, Luca Heltai, Ornela Mulita

Abstract: We present a perturbed subspace iteration algorithm to approximate the lowermost eigenvalue cluster of an elliptic eigenvalue problem. As a prototype, we consider the Laplace eigenvalue problem posed in a polygonal domain. The algorithm is motivated by the analysis of inexact (perturbed) inverse iteration algorithms in numerical linear algebra. We couple the perturbed inverse iteration approach with a mesh refinement strategy based on residual estimators. We demonstrate our approach on model problems in two and three dimensions.
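The classical inverse iteration that the perturbed scheme builds on can be sketched in a few lines. This is a minimal NumPy illustration of the textbook algorithm, not the authors' adaptive method; the 1-D discrete Laplacian here is a stand-in for an assembled finite element operator:

```python
import numpy as np

def inverse_iteration(A, tol=1e-10, max_iter=200):
    """Classical inverse iteration: repeatedly solve A y = x and normalize.
    For symmetric A this converges to the smallest-magnitude eigenpair."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam = x @ A @ x
    for _ in range(max_iter):
        y = np.linalg.solve(A, x)   # the exact solve a perturbed scheme replaces by an inexact one
        y /= np.linalg.norm(y)
        lam = y @ A @ y             # Rayleigh quotient estimate
        x = y
        if np.linalg.norm(A @ x - lam * x) < tol:
            break
    return lam, x

# 1-D discrete Laplacian; its smallest eigenvalue is 4 sin^2(pi / (2(n+1)))
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, v = inverse_iteration(A)
```

In an inexact variant, the inner linear solve is carried out only approximately; the cited analysis quantifies how much perturbation the outer iteration tolerates.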

1993, Vol 115 (3), pp. 244-252
Author(s): Matthias G. Döring, Jens Chr. Kalkkuhl, Wolfram Schröder

2015, Vol 2015, pp. 1-14
Author(s): Christian Engström, Luka Grubišić

We present an algorithm for approximating an eigensubspace of a spectral component of an analytic Fredholm-valued function. Our approach is based on numerical contour integration and the analytic Fredholm theorem. The presented method can be seen as a variant of the FEAST algorithm for infinite-dimensional nonlinear eigenvalue problems. Numerical experiments illustrate the performance of the algorithm for polynomial and rational eigenvalue problems.
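For the linear, finite-dimensional case, the contour-integration idea behind FEAST-type methods fits in a short sketch: the spectral projector onto the eigenvalues enclosed by a circle is approximated by trapezoidal quadrature of the resolvent. This is a dense toy example, not the paper's infinite-dimensional nonlinear setting:

```python
import numpy as np

def contour_projector(A, center, radius, n_quad=32):
    """Spectral projector P = (1/(2*pi*i)) * integral of (zI - A)^{-1} dz over
    the circle |z - center| = radius, via trapezoidal quadrature (which is
    exponentially accurate for this periodic integrand)."""
    n = A.shape[0]
    P = np.zeros((n, n), dtype=complex)
    for k in range(n_quad):
        theta = 2 * np.pi * k / n_quad
        z = center + radius * np.exp(1j * theta)
        dz = 1j * radius * np.exp(1j * theta)   # dz/dtheta on the circle
        P += np.linalg.inv(z * np.eye(n) - A) * dz
    return P / (1j * n_quad)                    # (2*pi/n_quad) sum / (2*pi*i)

# eigenvalues 1 and 2 lie inside the circle, 10 and 12 outside
A = np.diag([1.0, 2.0, 10.0, 12.0])
P = contour_projector(A, center=1.5, radius=2.0)
```

A FEAST-style iteration then applies P to a block of random vectors and performs a Rayleigh-Ritz step on the resulting subspace.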


Acta Numerica, 2010, Vol 19, pp. 1-120
Author(s): Daniele Boffi

We discuss the finite element approximation of eigenvalue problems associated with compact operators. While the main emphasis is on symmetric problems, some comments are present for non-self-adjoint operators as well. The topics covered include standard Galerkin approximations, non-conforming approximations, and approximation of eigenvalue problems in mixed form. Some applications of the theory are presented and, in particular, the approximation of the Maxwell eigenvalue problem is discussed in detail. The final part tries to introduce the reader to the fascinating setting of differential forms and homological techniques with the description of the Hodge–Laplace eigenvalue problem and its mixed equivalent formulations. Several examples and numerical computations complete the paper, ranging from very basic exercises to more significant applications of the developed theory.


2018, Vol 18 (2), pp. 203-222
Author(s): Melina A. Freitag, Patrick Kürschner, Jennifer Pestana

Abstract: The convergence of GMRES for solving linear systems can be influenced heavily by the structure of the right-hand side. Within the solution of eigenvalue problems via inverse iteration or subspace iteration, the right-hand side is generally related to an approximate invariant subspace of the linear system. We give detailed and new bounds on (block) GMRES that take the special behavior of the right-hand side into account and explain the initial sharp decrease of the GMRES residual. The bounds motivate the use of specific preconditioners for these eigenvalue problems, e.g., tuned and polynomial preconditioners, as we describe. The numerical results show that the new (block) GMRES bounds are much sharper than conventional bounds and that preconditioned subspace iteration with either a tuned or polynomial preconditioner should be used in practice.
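The dependence of GMRES on the right-hand side is easy to see in a toy experiment: if b lies in the span of k eigenvectors of a symmetric matrix, GMRES converges exactly in k steps, which is the mechanism behind the sharp early residual drop for right-hand sides close to an invariant subspace. The sketch below computes GMRES residual norms directly by least squares over a Krylov basis; it illustrates the phenomenon, not the paper's bounds:

```python
import numpy as np

def gmres_residuals(A, b, m):
    """Residual norms of the first m GMRES iterates, computed directly by
    least squares over a growing orthonormal Krylov basis (small dense A only)."""
    basis = [b / np.linalg.norm(b)]
    res = []
    for j in range(m):
        K = np.column_stack(basis)
        y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)   # min ||b - A K y||
        res.append(np.linalg.norm(b - A @ K @ y))
        if j < m - 1:                                    # extend the Krylov basis
            v = A @ basis[-1]
            for u in basis:
                v -= (u @ v) * u
            basis.append(v / np.linalg.norm(v))
    return res

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((100, 100)))
A = Q @ np.diag(np.linspace(1.0, 100.0, 100)) @ Q.T   # SPD with eigenvectors Q[:, k]
b = Q[:, 0] + Q[:, 10] + Q[:, 50]                      # RHS spanned by 3 eigenvectors
res = gmres_residuals(A, b, 3)
```

The third residual is at machine-precision level: the three-dimensional Krylov space already contains the solution.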


Author(s): A. Y. T. Leung

The eigenvalue problem plays a central role in the dynamic and buckling analyses of engineering structures. In practice, one is interested in only a few dozen eigenmodes, within a particular eigenvalue range, of a system with thousands of degrees of freedom. For linear symmetric eigenproblems, [K]{x} = λ[M]{x}, the eigensolutions are well behaved. The recommended methods are subspace iteration or the Lanczos method working with [A] = [K − λ0M]−1, where λ0 is the middle of the eigenvalue range of interest. Subspace iteration yields both eigenvalues and eigenvectors. Lanczos gives approximate eigenvalues, which can easily be improved by inverse iteration, yielding the eigenvectors as by-products. For real nonsymmetric or complex symmetric linear eigenproblems and polynomial eigenproblems, the eigensolutions may be defective, and all classical methods, including subspace iteration, fail. We recommend using the Lanczos method to obtain the approximate eigenvalues of interest and improving them by a new variant of inverse iteration, one vector at a time, obtaining the independent generalised vectors as by-products. We develop a solution method for the special case in which the approximate eigenvalue is exact, rendering a set of singular linear equations that cannot be solved by existing algorithms.
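For the well-behaved symmetric pencil [K]{x} = λ[M]{x}, the recommended shift-and-invert strategy is exactly what `scipy.sparse.linalg.eigsh` provides: a Lanczos-type iteration on [K − λ0M]−1 that returns the eigenvalues closest to the shift. The toy 1-D finite element pencil below is illustrative and not taken from the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# 1-D bar, linear finite elements: K x = lambda M x, exact eigenvalues (k*pi)^2
n = 200
h = 1.0 / (n + 1)
K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc') / h
M = diags([1.0, 4.0, 1.0], [-1, 0, 1], shape=(n, n), format='csc') * (h / 6.0)

# shift-invert mode: Lanczos on [K - sigma*M]^{-1}, with sigma placed in the
# middle of the eigenvalue range of interest
sigma = 500.0
vals, vecs = eigsh(K, k=6, M=M, sigma=sigma, which='LM')
```

The six returned eigenvalues are those nearest sigma, each a close approximation of some (kπ)² of the continuous problem.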


Author(s): Jonathan Heinz, Miroslav Kolesik

A method is presented for transparent, energy-dependent boundary conditions for open, non-Hermitian systems, illustrated on the example of Stark resonances in a single-particle quantum system. The approach provides an alternative to external complex scaling and is applicable when asymptotic solutions can be characterized at large distances from the origin. Its main benefit is a drastic reduction in the dimensionality of the underlying eigenvalue problem. Besides applications in quantum mechanics, the method can be used in other contexts, such as systems involving unstable optical cavities and lossy waveguides.


1966, Vol 9 (05), pp. 757-801
Author(s): W. Kahan

The primordial problems of linear algebra are the solution of a system of linear equations and the solution of the eigenvalue problem for the eigenvalues λk and corresponding eigenvectors of a given matrix A.
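On a small symmetric matrix, these two problems read, in NumPy terms:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
b = np.array([3.0, 3.0])

x = np.linalg.solve(A, b)      # linear system: A x = b
lams, V = np.linalg.eigh(A)    # eigenvalues lambda_k and eigenvectors of symmetric A
```

Here x = (1, 1) solves the system, and the eigenpairs are (1, (1, −1)/√2) and (3, (1, 1)/√2).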


Author(s): Nikta Shayanfar, Heike Fassbender

The polynomial eigenvalue problem is to find an eigenpair $(\lambda,x) \in \mathbb{C}\cup \{\infty\} \times \mathbb{C}^n \backslash \{0\}$ that satisfies $P(\lambda)x=0$, where $P(\lambda)=\sum_{i=0}^s P_i \lambda ^i$ is an $n\times n$ so-called matrix polynomial of degree $s$, whose coefficients $P_i$, $i=0,\dots,s$, are $n\times n$ constant matrices with $P_s$ nonzero. Such eigenvalue problems arise in a variety of physical applications, including acoustic structural coupled systems, fluid mechanics, multiple-input multiple-output systems in control theory, signal processing, and constrained least squares problems. Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Such methods convert the eigenvalue problem into a well-studied linear eigenvalue problem while exploiting and preserving the structure and properties of the original problem. Linearizations have been extensively studied with respect to the basis in which the matrix polynomial is expressed. If the matrix polynomial is expressed in a special basis, it is desirable that its linearization be expressed in the same basis, because changing the given basis ought to be avoided \cite{H1}. The authors in \cite{ACL} have constructed linearizations for different bases, such as degree-graded ones (including the monomial, Newton, and Pochhammer bases) and the Bernstein and Lagrange bases. This contribution is concerned with polynomial eigenvalue problems in which the matrix polynomial is expressed in the Hermite basis. The Hermite basis is used to represent matrix polynomials designed to match a series of points and function derivatives at prescribed nodes. In the literature, linearizations of matrix polynomials of degree $s$ expressed in the Hermite basis consist of matrix pencils with $s+2$ blocks of size $n \times n$; in other words, additional eigenvalues at infinity had to be introduced, see e.g. \cite{CSAG}. In this research, we overcome this difficulty by reducing the size of the linearization. The reduction scheme presented gradually reduces the linearization to its minimal size, making use of ideas from \cite{VMM1}. More precisely, for $n \times n$ matrix polynomials of degree $s$, we present linearizations of smaller size, consisting of $s+1$ and $s$ blocks of $n \times n$ matrices. The structure of the eigenvectors is also discussed.
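The standard first companion linearization in the monomial basis, which Hermite-basis pencils generalize, can be sketched as follows. Block ordering follows the common convention, not the paper's Hermite construction:

```python
import numpy as np
from scipy.linalg import eig

def companion_linearize(coeffs):
    """First companion pencil (A, B) for P(lam) = sum_i coeffs[i] * lam^i,
    coeffs = [P_0, ..., P_s]: A v = lam B v iff det(P(lam)) = 0, with
    eigenvector v = [lam^{s-1} x; ...; lam x; x]."""
    s = len(coeffs) - 1
    n = coeffs[0].shape[0]
    A = np.zeros((s * n, s * n))
    B = np.eye(s * n)
    A[:n, :] = -np.hstack(coeffs[s - 1::-1])   # top block row: -P_{s-1} ... -P_0
    A[n:, :-n] = np.eye((s - 1) * n)           # subdiagonal identity blocks
    B[:n, :n] = coeffs[s]                      # leading coefficient P_s
    return A, B

# scalar sanity check: P(lam) = lam^2 - 3 lam + 2 has roots 1 and 2
coeffs = [np.array([[2.0]]), np.array([[-3.0]]), np.array([[1.0]])]
A, B = companion_linearize(coeffs)
lams = np.sort(eig(A, B)[0].real)
```

In the monomial basis this pencil already has the minimal $s$ blocks of size $n \times n$; the contribution above is achieving comparable sizes when the polynomial is expressed in the Hermite basis.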

