Trace minimization method via penalty for linear response eigenvalue problems

2021
Author(s): Yadan Chen, Yuan Shen, Shanshan Liu

In various applications, such as the computation of energy excitation states of electrons and molecules and the analysis of interstellar clouds, one frequently encounters the linear response eigenvalue problem, a special type of Hamiltonian eigenvalue problem. Traditional eigensolvers may not be applicable to this problem owing to its inherently large scale; in practice, one is usually interested in computing only a few of the smallest positive eigenvalues. To this end, an optimization model based on the trace-minimization principle with an orthogonality constraint has been proposed. On this basis, we propose an unconstrained surrogate model, called trace minimization via penalty, and we establish its equivalence with the original constrained model provided that the penalty parameter is larger than a certain threshold. By avoiding the orthogonality constraint, we can use a gradient-type method to solve this model; specifically, we use gradient descent with the Barzilai–Borwein step size. Moreover, we develop a restarting strategy for the proposed algorithm, whereby higher accuracy and faster convergence can be achieved. This is verified by preliminary experimental results.
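The penalty-plus-Barzilai–Borwein mechanics are easiest to see on the plain symmetric eigenproblem rather than the structured linear response problem the paper treats. Below is a minimal sketch, assuming a symmetric matrix `A` and a penalty parameter `mu` above a problem-dependent threshold; the function name, stopping rule, and the final Rayleigh–Ritz extraction are illustrative choices, not the paper's implementation.

```python
import numpy as np

def trace_min_penalty(A, k, mu=10.0, tol=1e-6, max_iter=1000, seed=0):
    """Minimize f(X) = tr(X'AX) + (mu/2)*||X'X - I||_F^2 by BB gradient descent.

    For symmetric A and sufficiently large mu, minimizers span the invariant
    subspace associated with the k smallest eigenvalues of A.
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    X = np.linalg.qr(rng.standard_normal((n, k)))[0]   # random orthonormal start
    grad = lambda X: 2.0 * A @ X + 2.0 * mu * X @ (X.T @ X - np.eye(k))
    G, tau = grad(X), 1e-3                             # small first step size
    for _ in range(max_iter):
        X_new = X - tau * G                            # unconstrained gradient step
        G_new = grad(X_new)
        S, Y = X_new - X, G_new - G
        tau = np.sum(S * S) / max(abs(np.sum(S * Y)), 1e-16)  # BB1 step size
        X, G = X_new, G_new
        if np.linalg.norm(G) < tol * max(1.0, np.linalg.norm(X)):
            break
    Q = np.linalg.qr(X)[0]                 # Rayleigh-Ritz on the computed span
    return np.linalg.eigvalsh(Q.T @ A @ Q)
```

For the linear response problem the objective and gradient change to reflect the block structure, but the penalty reformulation and the BB step-size recurrence play the same roles.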

1992, Vol. 70(2), pp. 296–300
Author(s): Susumu Narita, Tai-ichi Shibuya

A new method is proposed for obtaining a few eigenvalues and eigenvectors of a large-scale RPA-type equation. Numerical tests are carried out to study the convergence behavior of this method. The convergence is found to be fast and quite satisfactory, and to depend strongly on how the deviation vectors are estimated. Our proposed scheme gives a better estimate of the deviation vectors than Davidson's scheme, and it is applicable to eigenvalue problems with nondiagonally dominant matrices as well. Keywords: large-scale eigenvalue problem, RPA-type equation, fast convergence.
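For context, the baseline Davidson scheme expands a search subspace with a diagonally preconditioned residual; the estimate of that correction (the "deviation vector") is precisely the step this abstract's scheme improves. The following is a standard textbook sketch for a symmetric matrix, not the authors' method.

```python
import numpy as np

def davidson_smallest(A, tol=1e-8, max_dim=30):
    """Classic Davidson iteration for the smallest eigenpair of symmetric A."""
    n = A.shape[0]
    d = np.diag(A)
    V = np.zeros((n, 0))
    t = np.eye(n, 1).ravel()                 # initial guess: first basis vector
    theta, u = 0.0, t
    for _ in range(max_dim):
        t -= V @ (V.T @ t)                   # orthogonalize against current basis
        norm = np.linalg.norm(t)
        if norm < 1e-12:
            break
        V = np.column_stack([V, t / norm])
        H = V.T @ A @ V                      # Rayleigh-Ritz in span(V)
        vals, vecs = np.linalg.eigh(H)
        theta, s = vals[0], vecs[:, 0]
        u = V @ s
        r = A @ u - theta * u                # residual ("deviation") vector
        if np.linalg.norm(r) < tol:
            break
        # Davidson's estimate of the correction: diagonal preconditioning,
        # which works best for diagonally dominant matrices
        t = r / np.where(np.abs(d - theta) > 1e-12, d - theta, 1e-12)
    return theta, u
```

The diagonal preconditioner in the last step is where Davidson's scheme degrades for nondiagonally dominant matrices, which motivates the improved estimation proposed above.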


Author(s): Fei Xu, Liu Chen, Qiumei Huang

In this paper, we propose a local defect-correction method, based on multilevel discretization, for solving the Steklov eigenvalue problem arising from scalar second-order positive definite partial differential equations. The objective is to avoid solving large-scale equations, especially the large-scale Steklov eigenvalue problem, whose computational cost increases exponentially. The proposed algorithm transforms the Steklov eigenvalue problem into a series of linear boundary value problems defined on a multigrid space sequence, together with a series of small-scale Steklov eigenvalue problems in a coarse correction space. Furthermore, we use the local defect-correction technique to divide the large-scale boundary value problems into small-scale subproblems. In this way, the algorithm never solves a large-scale Steklov eigenvalue problem directly and thus achieves significantly improved solving efficiency. Numerical experiments and a rigorous theoretical analysis verify the effectiveness of the proposed approach.
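For concreteness, a commonly used model Steklov problem (a standard special case; the paper treats the general second-order positive definite setting) places the eigenvalue in the boundary condition rather than in the domain:

$$
-\Delta u + u = 0 \quad \text{in } \Omega, \qquad \frac{\partial u}{\partial n} = \lambda u \quad \text{on } \partial\Omega.
$$

Because the eigenvalue enters only through the boundary term, each correction step can replace the eigenproblem by a source-term boundary value problem of the same type, which is what makes the multilevel defect-correction strategy natural here.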


2013, Vol. 2013, pp. 1–8
Author(s): Xin-long Luo, Jia-ru Lin, Wei-ling Wu

This paper gives a new prediction-correction method, based on a dynamical system of differential-algebraic equations, for the smallest generalized eigenvalue problem. First, the smallest generalized eigenvalue problem is converted into an equivalent equality-constrained optimization problem. Second, from the Karush-Kuhn-Tucker conditions of this special equality-constrained problem, a continuous dynamical system of differential-algebraic equations is obtained. Third, based on the implicit Euler method and an analogous trust-region technique, a prediction-correction method is constructed to follow this system of differential-algebraic equations to its steady-state solution, which yields the smallest generalized eigenvalue of the original problem. The local superlinear convergence property of the new algorithm is also established. Finally, promising numerical experiments comparing the method with other approaches are presented.
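The flavor of the approach can be sketched for symmetric A and symmetric positive definite B: the flow x' = -(A - λ(x)B)x with the Rayleigh quotient λ(x) is stepped with a semi-implicit Euler solve (the prediction), and the constraint x'Bx = 1 is restored by renormalization (the correction). The paper's exact DAE formulation and trust-region safeguard are omitted; the step size h and the random start are assumptions, and without safeguards the iteration may settle on a nearby eigenpair rather than the smallest one.

```python
import numpy as np

def smallest_gen_eig(A, B, h=1.0, tol=1e-10, max_iter=200, seed=0):
    """Follow x' = -(A - lam(x) B) x, lam(x) = x'Ax / x'Bx, to steady state.

    Prediction: solve (I + h*(A - lam_k B)) x_{k+1} = x_k (semi-implicit Euler).
    Correction: renormalize so that x'Bx = 1.
    At steady state, (A - lam B) x = 0, i.e. an eigenpair of (A, B).
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    x = rng.standard_normal(n)
    x /= np.sqrt(x @ B @ x)
    lam = x @ A @ x
    for _ in range(max_iter):
        x_new = np.linalg.solve(np.eye(n) + h * (A - lam * B), x)  # prediction
        x_new /= np.sqrt(x_new @ B @ x_new)                        # correction
        lam_new = x_new @ A @ x_new
        if abs(lam_new - lam) < tol * max(1.0, abs(lam)):
            return lam_new, x_new
        x, lam = x_new, lam_new
    return lam, x
```

Larger h pushes each prediction toward a shifted inverse-iteration step, which is where the superlinear local convergence of such schemes comes from.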


Author(s): Andrew Jacobsen, Matthew Schlegel, Cameron Linke, Thomas Degris, Adam White, ...

This paper investigates different vector step-size adaptation approaches for non-stationary, online, continual prediction problems. Vanilla stochastic gradient descent can be considerably improved by scaling the update with a vector of appropriately chosen step-sizes. Many methods, including AdaGrad, RMSProp, and AMSGrad, keep statistics about the learning process to approximate a second-order update, that is, a vector approximation of the inverse Hessian. Another family of approaches uses meta-gradient descent to adapt the step-size parameters so as to minimize prediction error. These meta-descent strategies are promising for non-stationary problems but have not been explored as extensively as quasi-second-order methods. We first derive a general, incremental meta-descent algorithm, called AdaGain, designed to be applicable to a much broader range of algorithms, including those with semi-gradient updates or even accelerations, such as RMSProp. We then provide an empirical comparison of methods from both families. We conclude that methods from both families can perform well, but that in non-stationary prediction problems the meta-descent methods exhibit advantages. Our method is particularly robust across several prediction problems and is competitive with the state-of-the-art method on a large-scale time-series prediction problem on real data from a mobile robot.
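The meta-descent idea is easiest to see in its classic linear form, Sutton's IDBD, which adapts a per-weight log step-size by gradient descent on the squared prediction error. AdaGain generalizes this idea to a broader class of updates; its exact update is not reproduced here. A minimal IDBD sketch, with the meta step-size `theta` as an assumed tuning parameter:

```python
import numpy as np

def idbd_update(w, beta, h, x, y, theta=0.01):
    """One IDBD step for linear prediction y_hat = w.x (Sutton, 1992).

    beta holds per-weight log step-sizes; h traces the cumulative effect of
    past step-size changes on each weight.
    """
    delta = y - w @ x                      # prediction error
    beta += theta * delta * x * h          # meta-gradient step on log step-sizes
    alpha = np.exp(beta)                   # per-weight step-sizes
    w += alpha * delta * x                 # usual per-weight SGD update
    h = h * np.maximum(0.0, 1.0 - alpha * x * x) + alpha * delta * x
    return w, beta, h
```

Weights whose error gradients correlate over time accumulate larger step-sizes, while weights with oscillating gradients are damped; this self-tuning is what makes meta-descent attractive under non-stationarity.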


2019, Vol. 17(1), pp. 653–667
Author(s): Zhongming Teng, Hong-Xiu Zhong

Abstract. In the linear response eigenvalue problem arising from computational quantum chemistry and physics, one needs to compute a few of the smallest positive eigenvalues together with the corresponding eigenvectors. For such a task, most efficient algorithms are based on the notion of a pair of deflating subspaces. If a pair of deflating subspaces is at hand, the computed approximate eigenvalues are exact partial eigenvalues of the linear response eigenvalue problem. When the pair of deflating subspaces is not available and only an approximation of it is, Zhang, Xue, and Li [SIAM J. Matrix Anal. Appl., 35(2), pp. 765-782, 2014] obtained relationships between the accuracy of the eigenvalue approximations and the distances from the exact deflating subspaces to their approximations. In this paper, we establish majorization-type results for these relationships. From our majorization results, various bounds are readily available to estimate how accurate the approximate eigenvalues are, based on the accuracy of a pair of approximate deflating subspaces. These results provide theoretical foundations for assessing the relative performance of certain iterative methods for the linear response eigenvalue problem.
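For reference, the linear response eigenvalue problem referred to here and in the first abstract above is usually written in the block form (following, e.g., Bai and Li's formulation; the positivity assumptions stated are the standard ones):

$$
Hz = \lambda z, \qquad H = \begin{bmatrix} 0 & K \\ M & 0 \end{bmatrix},
$$

where $K$ and $M$ are $n \times n$ symmetric positive semidefinite matrices, at least one of them definite. The eigenvalues then occur in pairs $\pm\lambda$, and the computational task is to find the few smallest positive ones together with their eigenvectors.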


Author(s): Jonathan Heinz, Miroslav Kolesik

A method is presented for transparent, energy-dependent boundary conditions for open, non-Hermitian systems, and it is illustrated on the example of Stark resonances in a single-particle quantum system. The approach provides an alternative to exterior complex scaling and is applicable whenever the asymptotic solutions can be characterized at large distances from the origin. Its main benefit is a drastic reduction in the dimensionality of the underlying eigenvalue problem. Besides applications in quantum mechanics, the method can be used in other contexts, such as systems involving unstable optical cavities and lossy waveguides.
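To illustrate what an energy-dependent boundary condition means in the simplest setting (this is a generic outgoing-wave sketch, not the authors' construction), consider a 1D Hamiltonian truncated to [0, L] with the Siegert condition psi'(L) = i k psi(L), k = sqrt(2E). The boundary row of the discretized Hamiltonian then depends on the sought energy, so the eigenproblem is nonlinear in E and can be solved by fixed-point iteration. The model potential, grid, and field strength below are arbitrary assumptions.

```python
import numpy as np

def stark_resonance(E0, L=40.0, n=400, F=0.05, iters=25):
    """Fixed-point iteration for a resonance of H = -1/2 d2/dx2 + V(x) - F*x
    on [0, L] with the outgoing-wave (Siegert) condition psi'(L) = i*k*psi(L)."""
    h = L / n
    x = np.linspace(h, L, n)
    V = -np.exp(-(x - 5.0) ** 2) - F * x        # toy well tilted by a static field
    off = -1.0 / (2.0 * h * h)                  # -1/2 d2/dx2, central differences
    E = complex(E0)
    for _ in range(iters):
        k = np.sqrt(2.0 * E)                    # outgoing momentum at energy E
        H = np.diag(1.0 / h**2 + V).astype(complex)
        H += np.diag(off * np.ones(n - 1), 1) + np.diag(off * np.ones(n - 1), -1)
        # eliminate the ghost point via psi'(L) = i*k*psi(L):
        H[-1, -2] = 2.0 * off
        H[-1, -1] = 1.0 / h**2 + V[-1] + off * 2j * k * h
        w = np.linalg.eigvals(H)
        E = w[np.argmin(np.abs(w - E))]         # track the eigenvalue nearest E
    return E    # complex energy: Re = position, Im = -Gamma/2 (resonance width)
```

Only the last row of the matrix carries the energy dependence, which hints at why such boundary conditions can keep the eigenvalue problem small compared with enlarging the domain or adding absorbing layers.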


Author(s): Nikta Shayanfar, Heike Fassbender

The polynomial eigenvalue problem is to find eigenpairs $(\lambda,x) \in (\mathbb{C} \cup \{\infty\}) \times (\mathbb{C}^n \setminus \{0\})$ satisfying $P(\lambda)x=0$, where $P(\lambda)=\sum_{i=0}^s P_i \lambda^i$ is an $n\times n$ matrix polynomial of degree $s$: the coefficients $P_i$, $i=0,\dots,s$, are $n\times n$ constant matrices, and $P_s$ is assumed to be nonzero. Such eigenvalue problems arise in a variety of physical applications, including acoustic structural coupled systems, fluid mechanics, multiple-input multiple-output systems in control theory, signal processing, and constrained least squares problems.

Most numerical approaches to solving such eigenvalue problems proceed by linearizing the matrix polynomial into a matrix pencil of larger size. Such methods convert the problem into a well-studied linear eigenvalue problem while exploiting and preserving the structure and properties of the original eigenvalue problem. Linearizations have been extensively studied with respect to the basis in which the matrix polynomial is expressed. If the matrix polynomial is expressed in a special basis, it is desirable that its linearization be expressed in the same basis, because changing the given basis ought to be avoided \cite{H1}. The authors of \cite{ACL} have constructed linearizations for different bases, such as degree-graded bases (including the monomial, Newton, and Pochhammer bases) and the Bernstein and Lagrange bases.

This contribution is concerned with polynomial eigenvalue problems in which the matrix polynomial is expressed in the Hermite basis. The Hermite basis is used to represent matrix polynomials designed to match a series of points and function derivatives at prescribed nodes. In the literature, linearizations of matrix polynomials of degree $s$ expressed in the Hermite basis consist of matrix pencils with $s+2$ blocks of size $n \times n$; in other words, additional eigenvalues at infinity had to be introduced, see e.g. \cite{CSAG}. In this research, we overcome this difficulty by reducing the size of the linearization. The reduction scheme presented gradually reduces the linearization to its minimal size, making use of ideas from \cite{VMM1}. More precisely, for $n \times n$ matrix polynomials of degree $s$, we present linearizations of smaller size, consisting of $s+1$ and $s$ blocks of $n \times n$ matrices. The structure of the eigenvectors is also discussed.
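As a standard point of comparison (in the monomial basis, not the Hermite construction of this work), the first companion linearization of a degree-2 matrix polynomial $P(\lambda) = \lambda^2 P_2 + \lambda P_1 + P_0$ is the $2n \times 2n$ pencil

$$
L(\lambda) = \lambda \begin{bmatrix} P_2 & 0 \\ 0 & I_n \end{bmatrix} + \begin{bmatrix} P_1 & P_0 \\ -I_n & 0 \end{bmatrix},
\qquad
L(\lambda) \begin{bmatrix} \lambda x \\ x \end{bmatrix} = \begin{bmatrix} P(\lambda)x \\ 0 \end{bmatrix},
$$

so every eigenpair of $P$ is recovered from an eigenpair of the larger linear pencil. Reducing the number of $n \times n$ blocks in the pencil, as done above for the Hermite basis, shrinks exactly this enlarged problem and avoids the spurious infinite eigenvalues that oversized linearizations introduce.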

