A Low-Rank Approximation for Computing the Matrix Exponential Norm

2011 ◽  
Vol 32 (2) ◽  
pp. 349-363 ◽  
Author(s):  
Yuri M. Nechepurenko ◽  
Miloud Sadkane


Author(s):
Д.А. Желтков ◽  
Е.Е. Тыртышников

The matrix cross approximation method is a fast method for approximating a matrix by a matrix of low rank; its complexity is $O((m+n)r^2)$ arithmetic operations. Its key feature is that if the matrix is given not as an array stored in memory but as a function of two integer arguments, a low-rank approximation can be computed by evaluating only $O((m+n)r)$ values of this function. However, if the matrix is extremely large or the evaluation of its elements is very expensive, the approximation can still take substantial time. In such cases the method can be accelerated by means of parallel algorithms. In this paper we propose an efficient parallel algorithm for the case in which every matrix element has the same evaluation cost.
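To make the element-evaluation count concrete, here is a minimal Python sketch (not the parallel algorithm of the paper) of a cross/skeleton approximation with partial pivoting, assuming the matrix is available only through a hypothetical element function `f(i, j)`; each of the r steps evaluates one residual row and one residual column, i.e. O((m+n)r) elements in total.

```python
import numpy as np

def cross_approximation(f, m, n, r):
    """Greedy cross (skeleton) approximation A ~= C @ R built from roughly
    (m + n) * r evaluations of the element function f(i, j).
    Sequential, partial pivoting; the paper's parallel algorithm is not
    reproduced here."""
    C = np.zeros((m, r))   # residual columns
    R = np.zeros((r, n))   # scaled residual rows
    i = 0                  # current row pivot
    for k in range(r):
        # residual of row i: f(i, :) minus what the first k terms capture
        row = np.array([f(i, j) for j in range(n)]) - C[i, :k] @ R[:k, :]
        j = int(np.argmax(np.abs(row)))      # column pivot
        pivot = row[j]
        if abs(pivot) < 1e-14:               # numerical rank already reached
            return C[:, :k], R[:k, :]
        col = np.array([f(t, j) for t in range(m)]) - C[:, :k] @ R[:k, j]
        R[k, :] = row / pivot
        C[:, k] = col
        col_abs = np.abs(col)
        col_abs[i] = -1.0                    # do not reuse the same row
        i = int(np.argmax(col_abs))
    return C, R

# usage on a small synthetic rank-5 matrix exposed only through f(i, j)
m, n, r = 200, 150, 5
U = np.random.randn(m, r)
V = np.random.randn(r, n)
f = lambda i, j: float(U[i, :] @ V[:, j])
C, R = cross_approximation(f, m, n, r)
print(np.linalg.norm(C @ R - U @ V))         # close to machine precision
```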


Author(s):  
Gianluca Ceruti ◽  
Christian Lubich

We propose and analyse a numerical integrator that computes a low-rank approximation to large time-dependent matrices that are either given explicitly via their increments or are the unknown solution to a matrix differential equation. Furthermore, the integrator is extended to the approximation of time-dependent tensors by Tucker tensors of fixed multilinear rank. The proposed low-rank integrator is different from the known projector-splitting integrator for dynamical low-rank approximation, but it retains the important robustness to small singular values that has so far been known only for the projector-splitting integrator. The new integrator also offers some potential advantages over the projector-splitting integrator: It avoids the backward time integration substep of the projector-splitting integrator, which is a potentially unstable substep for dissipative problems. It offers more parallelism, and it preserves symmetry or anti-symmetry of the matrix or tensor when the differential equation does. Numerical experiments illustrate the behaviour of the proposed integrator.
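For orientation, the sketch below shows one way a fixed-rank step with this structure can be organised in Python: the K and L sub-problems are integrated forward (independently of each other, hence the extra parallelism), the bases are re-orthogonalised, and the small core is updated by a Galerkin step in the new bases. A plain explicit Euler update stands in for each sub-problem integration, so this is only a schematic illustration under those assumptions, not the integrator analysed in the paper; consult the paper for the actual method and its robustness analysis.

```python
import numpy as np

def fixed_rank_step(U0, S0, V0, F, t0, h):
    """One schematic rank-r step for Y' = F(t, Y), Y ~ U S V^T.
    No backward-in-time substep; explicit Euler keeps the sketch short."""
    # K-step: K = U S evolves with V0 frozen
    K0 = U0 @ S0
    K1 = K0 + h * F(t0, K0 @ V0.T) @ V0
    U1, _ = np.linalg.qr(K1)
    M = U1.T @ U0
    # L-step: L = V S^T evolves with U0 frozen (independent of the K-step)
    L0 = V0 @ S0.T
    L1 = L0 + h * F(t0, U0 @ L0.T).T @ U0
    V1, _ = np.linalg.qr(L1)
    N = V1.T @ V0
    # S-step: Galerkin update of the core in the new bases
    S_mid = M @ S0 @ N.T
    S1 = S_mid + h * U1.T @ F(t0, U1 @ S_mid @ V1.T) @ V1
    return U1, S1, V1

# usage: a dissipative Sylvester-type flow Y' = A Y + Y B, rank-5 approximation
n, r = 60, 5
A = -np.eye(n) + 0.01 * np.random.randn(n, n)
B = -np.eye(n) + 0.01 * np.random.randn(n, n)
F = lambda t, Y: A @ Y + Y @ B
U0, _ = np.linalg.qr(np.random.randn(n, r))
V0, _ = np.linalg.qr(np.random.randn(n, r))
S0 = np.diag(np.logspace(0, -4, r))
U1, S1, V1 = fixed_rank_step(U0, S0, V0, F, 0.0, 1e-2)
print(U1.shape, S1.shape, V1.shape)
```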


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yong Zeng ◽  
Yixin Li ◽  
Zhongyuan Jiang ◽  
Jianfeng Ma

It is crucial to be able to generate random graphs with specific structural properties from real graphs, whether to anonymize graphs or to produce targeted graph data sets. The state-of-the-art method, spectral graph forge (SGF), was proposed at INFOCOM 2018. It uses a low-rank approximation of the adjacency matrix obtained by discarding part of the spectrum, which provides privacy protection after the graphs are distributed while preserving data availability to a certain extent. As shown in the SGF work, at least 20% of the spectrum must be discarded to defend against deanonymization attacks, yet data availability decreases significantly as more of the spectrum is discarded. Is there, then, a way to generate a graph that preserves as much of the spectrum as possible while still guaranteeing anonymity? To solve this problem, this paper proposes graph nonlinear scaling (GNS). We prove that GNS preserves all eigenvectors while providing high anonymity for the forged graph. Specifically, GNS scales the eigenvalues of the original spectrum and constructs the forged graph from the scaled eigenvalues and the original eigenvectors. This maximizes the preservation of spectral information, guaranteeing data availability, while providing high robustness against deanonymization attacks. The experimental results show that when SGF discards only 10% of the spectrum, the forged graph has high data availability; however, if the distance-vector deanonymization algorithm is then used to attack the forged graph, almost 100% of the nodes can be identified, whereas at the same level of availability only about 20% of the nodes in the graph forged by GNS can be identified. Moreover, our method captures the real graph's structure better than SGF in terms of modularity, the number of partitions, and average clustering.
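As an illustration of the spectrum-scaling idea, here is a minimal Python sketch: keep all eigenvectors of a symmetric adjacency matrix, rescale the eigenvalues with a hypothetical nonlinear map, and binarize the result back to a 0/1 graph. The actual scaling function and graph-construction step used by GNS are not specified in the abstract and may differ.

```python
import numpy as np

def gns_like_forge(A, scale, threshold=0.5):
    """Forge a graph by keeping the eigenvectors of the symmetric adjacency
    matrix A and nonlinearly rescaling its eigenvalues, then thresholding
    back to a 0/1 adjacency matrix. `scale` and `threshold` are hypothetical
    choices for illustration only."""
    w, V = np.linalg.eigh(A.astype(float))       # all eigenpairs are kept
    A_scaled = V @ np.diag(scale(w)) @ V.T
    A_forged = (A_scaled > threshold).astype(int)
    A_forged = np.triu(A_forged, 1)              # enforce a simple undirected graph
    A_forged = A_forged + A_forged.T
    return A_forged

# usage on a random symmetric 0/1 adjacency matrix
rng = np.random.default_rng(0)
n = 50
A = (rng.random((n, n)) < 0.1).astype(int)
A = np.triu(A, 1)
A = A + A.T
forged = gns_like_forge(A, scale=lambda w: np.sign(w) * np.abs(w) ** 0.8)
print(forged.sum() // 2, "edges in the forged graph")
```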


Author(s):  
Artem Khoroshev ◽  

The possibility of practical application of BLR factorization (a low-rank approximation of the matrix of a system of linear algebraic equations, SLAE) to finite element modeling of the electromagnetic field topology of nonlinear magnetic systems is considered. A method for estimating the accuracy of the computed solution of the SLAE is presented, together with the way the prescribed tolerance of the low-rank approximation affects the upper bound of the relative forward error of the computed solution. Using a model problem as an example, the dependence of the accuracy of the computed integral characteristics of an electromechanical apparatus on the tolerance of the low-rank approximation is shown, as well as its effect on the convergence of the nonlinear solution process. A quantitative assessment of the reduction in the computational complexity and in the amount of computer memory required to solve the SLAE is carried out. The applicability of BLR factorization to finite element modeling of the electromagnetic field topology without resorting to Krylov subspace methods is assessed.
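To illustrate what a block low-rank representation with a prescribed tolerance buys in storage, here is a minimal NumPy sketch that compresses the off-diagonal blocks of a dense kernel-type matrix via truncated SVDs. It demonstrates the storage trade-off only; it is not the BLR factorization or the finite element pipeline used in the paper.

```python
import numpy as np

def blr_compress(A, block, tol):
    """Split A into square blocks; keep diagonal blocks dense and replace
    each off-diagonal block by a truncated SVD with relative tolerance tol.
    Returns the block representation and its storage in matrix entries."""
    n = A.shape[0]
    blocks, storage = {}, 0
    for i in range(0, n, block):
        for j in range(0, n, block):
            B = A[i:i + block, j:j + block]
            if i == j:
                blocks[(i, j)] = ("dense", B)
                storage += B.size
                continue
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            rank = int(np.sum(s > tol * s[0])) if s.size and s[0] > 0 else 0
            blocks[(i, j)] = ("lowrank", U[:, :rank] * s[:rank], Vt[:rank, :])
            storage += U[:, :rank].size + Vt[:rank, :].size
    return blocks, storage

# usage: a kernel-type matrix whose off-diagonal blocks have low numerical rank
n = 512
x = np.linspace(0.0, 1.0, n)
A = 1.0 / (1.0 + np.abs(x[:, None] - x[None, :]))
blocks, storage = blr_compress(A, block=64, tol=1e-6)
print(f"compressed storage: {storage} entries, dense: {n * n}")
```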


2020 ◽  
Vol 14 (12) ◽  
pp. 2791-2798
Author(s):  
Xiaoqun Qiu ◽  
Zhen Chen ◽  
Saifullah Adnan ◽  
Hongwei He

2020 ◽  
Vol 6 ◽  
pp. 922-933
Author(s):  
M. Amine Hadj-Youcef ◽  
Francois Orieux ◽  
Alain Abergel ◽  
Aurelia Fraysse

2021 ◽  
Vol 11 (10) ◽  
pp. 4582
Author(s):  
Kensuke Tanioka ◽  
Satoru Hiwa

In the domain of functional magnetic resonance imaging (fMRI) data analysis, given two correlation matrices between regions of interest (ROIs) for the same subject, it is important to reveal the relatively large differences between them to ensure accurate interpretation. However, clustering results based only on the differences tend to be unsatisfactory, and interpreting the features tends to be difficult, because the differences are likely contaminated by noise. Therefore, to overcome these problems, we propose a new approach for dimensional reduction clustering. Methods: Our proposed dimensional reduction clustering approach consists of a low-rank approximation and a clustering algorithm. The low-rank matrix, which reflects the difference, is estimated from the inner product of the difference matrix, not only from the difference itself. In addition, the low-rank matrix is calculated based on the majorize–minimization (MM) algorithm such that the difference is bounded within the range −1 to 1. For the clustering process, ordinary k-means is applied to the estimated low-rank matrix, which emphasizes the clustering structure. Results: Numerical simulations show that, compared with other approaches that are based only on the differences, the proposed method provides superior performance in recovering the true clustering structure. Moreover, as demonstrated through a real-data example of brain activity measured via fMRI during the performance of a working memory task, the proposed method can visually provide interpretable community structures consisting of well-known brain functional networks, which can be associated with the human working memory system. Conclusions: The proposed dimensional reduction clustering approach is a very useful tool for revealing and interpreting the differences between correlation matrices, even when the true differences tend to be relatively small.
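A simplified sketch of the overall pipeline is given below. The paper's bounded, MM-based low-rank estimation is replaced here by a plain truncated eigendecomposition of the difference matrix, and scikit-learn's k-means is used for the clustering step, so this is only a stand-in under those assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def diff_lowrank_clusters(R1, R2, rank, n_clusters, seed=0):
    """Cluster ROIs from the difference of two correlation matrices.
    Simplified stand-in: the paper's bounded, MM-based low-rank estimation
    is replaced by a truncated eigendecomposition of the difference."""
    D = R1 - R2                                  # symmetric, entries in [-2, 2]
    w, V = np.linalg.eigh(D)
    idx = np.argsort(np.abs(w))[::-1][:rank]     # largest-magnitude eigenpairs
    embedding = V[:, idx] * np.sqrt(np.abs(w[idx]))
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(embedding)
    return labels, embedding

# usage on synthetic correlation matrices with one block of inflated differences
rng = np.random.default_rng(1)
p = 90
base = np.corrcoef(rng.standard_normal((200, p)), rowvar=False)
shift = np.zeros((p, p))
shift[:30, :30] = 0.3
np.fill_diagonal(shift, 0.0)
labels, _ = diff_lowrank_clusters(base, np.clip(base + shift, -1.0, 1.0),
                                  rank=3, n_clusters=2)
print(labels)
```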

