A parallel implementation of the matrix cross approximation method

Author(s):  
Д.А. Желтков ◽  
Е.Е. Тыртышников

The matrix cross approximation method is a fast method for approximating matrices by low-rank matrices, with a complexity of $O((m+n)r^2)$ arithmetic operations. Its key feature is the following: if a matrix is given not as an array stored in memory but as a function of two integer arguments, then this method allows one to compute a low-rank approximation of the matrix by evaluating only $O((m+n)r)$ values of this function. However, if the matrix is extremely large or the evaluation of its elements is computationally expensive, such an approximation can still be time-consuming. For such cases, the performance of the method can be improved via parallelization. In this paper we propose an efficient parallel algorithm for the case in which every matrix element has the same evaluation cost.
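To make the access pattern concrete, the following minimal numpy sketch implements a greedy cross (ACA-style) loop that evaluates only one row and one column of the matrix per rank-1 step, i.e. $O((m+n)r)$ entries in total. The function name and the partial-pivoting heuristic are illustrative assumptions; the paper's parallel algorithm is more elaborate.

```python
import numpy as np

def cross_approximation(f, m, n, r):
    """Greedy cross (ACA-style) rank-r approximation of the matrix
    A[i, j] = f(i, j), evaluating only O((m + n) r) of its entries.
    Illustrative sketch only; the pivoting strategy is a common
    heuristic, not the paper's parallel algorithm."""
    U = np.zeros((m, r))
    V = np.zeros((r, n))
    i = 0                                    # start from an arbitrary row
    for k in range(r):
        # residual row i: one row of A minus the current approximation
        row = np.array([f(i, j) for j in range(n)]) - U[i, :k] @ V[:k, :]
        j = int(np.argmax(np.abs(row)))      # column pivot
        if abs(row[j]) < 1e-14:              # residual numerically zero: done
            break
        col = np.array([f(ii, j) for ii in range(m)]) - U[:, :k] @ V[:k, j]
        V[k, :] = row / row[j]               # rank-1 cross update
        U[:, k] = col
        col[i] = 0.0                         # avoid re-picking the same row
        i = int(np.argmax(np.abs(col)))      # next row pivot
    return U, V
```

For a matrix of exact rank $r$ with nonzero pivots, $r$ steps reproduce the matrix exactly; in general the loop stops once the pivot falls below a tolerance.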

2021 ◽  
Vol 11 (10) ◽  
pp. 4582
Author(s):  
Kensuke Tanioka ◽  
Satoru Hiwa

In the domain of functional magnetic resonance imaging (fMRI) data analysis, given two correlation matrices between regions of interest (ROIs) for the same subject, it is important to reveal relatively large differences to ensure accurate interpretation. However, clustering results based only on differences tend to be unsatisfactory and interpreting the features tends to be difficult because the differences likely suffer from noise. Therefore, to overcome these problems, we propose a new approach for dimensional reduction clustering. Methods: Our proposed dimensional reduction clustering approach consists of low-rank approximation and a clustering algorithm. The low-rank matrix, which reflects the difference, is estimated from the inner product of the difference matrix, not only from the difference. In addition, the low-rank matrix is calculated based on the majorize–minimization (MM) algorithm such that the difference is bounded within the range −1 to 1. For the clustering process, ordinal k-means is applied to the estimated low-rank matrix, which emphasizes the clustering structure. Results: Numerical simulations show that, compared with other approaches that are based only on differences, the proposed method provides superior performance in recovering the true clustering structure. Moreover, as demonstrated through a real-data example of brain activity measured via fMRI during the performance of a working memory task, the proposed method can visually provide interpretable community structures consisting of well-known brain functional networks, which can be associated with the human working memory system. Conclusions: The proposed dimensional reduction clustering approach is a very useful tool for revealing and interpreting the differences between correlation matrices, even when the true differences tend to be relatively small.
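As a rough, hypothetical sketch of the overall pipeline (not the authors' MM algorithm, which additionally keeps the fitted low-rank matrix within the range −1 to 1), one can embed the ROIs through a truncated eigendecomposition of the difference matrix and then cluster the embedding:

```python
import numpy as np

def diff_cluster(R1, R2, rank, k, n_iter=20):
    """Simplified sketch of dimensional reduction clustering: embed the
    ROIs via a rank-`rank` eigendecomposition of D = R1 - R2, then run
    a plain Lloyd k-means on the embedding. The paper's MM algorithm
    with the [-1, 1] bound is not reproduced; names are illustrative."""
    D = R1 - R2                                   # symmetric ROI difference
    w, V = np.linalg.eigh(D)
    top = np.argsort(np.abs(w))[::-1][:rank]      # largest-magnitude modes
    emb = V[:, top] * np.abs(w[top])              # spectral embedding of ROIs
    centers = emb[[0]]                            # farthest-point seeding
    while len(centers) < k:
        d = np.linalg.norm(emb[:, None] - centers[None], axis=2).min(axis=1)
        centers = np.vstack([centers, emb[d.argmax()]])
    for _ in range(n_iter):                       # Lloyd iterations
        d = np.linalg.norm(emb[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.vstack([emb[labels == c].mean(axis=0)
                             if np.any(labels == c) else centers[c]
                             for c in range(k)])
    return labels
```

On two correlation matrices that differ only inside one block of ROIs, the dominant eigenvector of the difference separates that block from the rest, which is exactly the structure the clustering step recovers.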


2021 ◽  
Author(s):  
Kensuke Tanioka ◽  
Satoru Hiwa

In the domain of functional magnetic resonance imaging (fMRI) data analysis, given two correlation matrices between regions of interest (ROIs) for the same subject, it is important to reveal relatively large differences to ensure accurate interpretations. However, clustering results based only on differences tend to be unsatisfactory, and interpreting the features is difficult because the differences likely suffer from noise. Therefore, to overcome these problems, we propose a new approach for dimensional reduction clustering. Methods: Our proposed dimensional reduction clustering approach consists of low-rank approximation and a clustering algorithm. The low-rank matrix, which reflects the difference, is estimated from the inner product of the difference matrix, not only from the difference. In addition, the low-rank matrix is calculated based on the majorize-minimization (MM) algorithm such that the difference is bounded within the range −1 to 1. For the clustering process, ordinal k-means is applied to the estimated low-rank matrix, which emphasizes the clustering structure. Results: Numerical simulations show that, compared with other approaches that are based only on differences, the proposed method provides superior performance in recovering the true clustering structure. Moreover, as demonstrated through a real-data example of brain activity measured via fMRI during the performance of a working memory task, the proposed method can visually provide interpretable community structures consisting of well-known brain functional networks, which can be associated with the human working memory system. Conclusions: The proposed dimensional reduction clustering approach is a very useful tool for revealing and interpreting the differences between correlation matrices, even when the true differences tend to be relatively small.


Author(s):  
Haruka Kawamura ◽  
Reiji Suda

Low-rank approximation by QR decomposition with pivoting (pivoted QR) is known to be less accurate than the singular value decomposition (SVD); however, its computational cost is lower than that of SVD. In this study, the least upper bound of the ratio of the truncation error, defined by $$\Vert A-BC\Vert _2$$, using pivoted QR to that using SVD is proved to be $$\sqrt{\frac{4^k-1}{3}(n-k)+1}$$ for $$A\in {\mathbb {R}}^{m\times n}$$ $$(m\ge n)$$ approximated as a product of $$B\in {\mathbb {R}}^{m\times k}$$ and $$C\in {\mathbb {R}}^{k\times n}$$.
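The bound can be checked numerically with a short script; `trunc_errors` is an illustrative helper (not from the paper), with the rank-k pivoted-QR factor product playing the role of $BC$:

```python
import numpy as np
from scipy.linalg import qr

def trunc_errors(A, k):
    """Spectral-norm truncation errors of rank-k approximations built
    from pivoted QR and from SVD, for comparison with the bound
    sqrt((4^k - 1)/3 * (n - k) + 1) on their ratio."""
    Q, R, piv = qr(A, mode="economic", pivoting=True)  # A[:, piv] = Q @ R
    BC = Q[:, :k] @ R[:k, :]               # rank-k product B @ C (columns permuted)
    A_qr = np.empty_like(BC)
    A_qr[:, piv] = BC                      # undo the column permutation
    err_qr = np.linalg.norm(A - A_qr, 2)
    err_svd = np.linalg.svd(A, compute_uv=False)[k]  # optimal error sigma_{k+1}
    return err_qr, err_svd

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 10))          # m >= n, as in the theorem
k, n = 3, A.shape[1]
err_qr, err_svd = trunc_errors(A, k)
bound = np.sqrt((4**k - 1) / 3 * (n - k) + 1)
```

For random instances `err_svd <= err_qr <= bound * err_svd` holds, with the SVD error optimal by the Eckart–Young theorem.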


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Yongli Hu ◽  
Wei Zhou ◽  
Zheng Wen ◽  
Yanfeng Sun ◽  
Baocai Yin

Fingerprint-based positioning in a wireless local area network (WLAN) environment has received much attention recently. One key issue for this positioning method is the construction of the radio map, which generally requires significant effort to collect enough measurements of received signal strength (RSS). Based on the observation that RSSs have high spatial correlation, we propose an efficient radio map construction method based on low-rank approximation. Unlike conventional interpolation methods, the proposed method represents the distribution of RSSs as a low-rank matrix and constructs the dense radio map from relatively sparse measurements by a revised low-rank matrix completion method. To evaluate the proposed method, both simulation tests and field experiments were conducted. The experimental results indicate that the proposed method can substantially reduce the number of RSS measurements required. Moreover, positioning accuracy is also improved when the constructed radio maps are used for positioning.
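The generic idea of filling a radio map from sparse RSS measurements by low-rank completion can be sketched as follows. This "hard impute" iteration (repeated rank-r SVD projection with the measured entries clamped) is a standard baseline, not the paper's revised completion method, and the function name is hypothetical:

```python
import numpy as np

def complete_radio_map(M, mask, rank, n_iter=500):
    """Illustrative low-rank completion of a radio map: RSS values are
    known only where `mask` is True; missing entries are filled by
    iterating a rank-`rank` SVD projection while keeping the measured
    entries fixed. A generic baseline, not the paper's method."""
    X = np.where(mask, M, 0.0)                  # initialize missing RSS with 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r projection
        X = np.where(mask, M, Xr)               # clamp measured entries
    return X
```

With enough measurements relative to the rank, the missing entries converge toward the values of the underlying low-rank map, which is the spatial-correlation effect the paper exploits.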


2021 ◽  
Vol 37 ◽  
pp. 544-548
Author(s):  
Pablo Soto-Quiros

In this paper, we present an error analysis of the generalized low-rank approximation, which is a generalization of the classical problem of approximating a matrix $A\in\mathbb{R}^{m\times n}$ by a matrix of rank at most $r$, where $r\leq\min\{m,n\}$.
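For reference, the classical (non-generalized) problem has a closed-form solution by the Eckart–Young theorem: the truncated SVD, whose spectral-norm error equals the $(r+1)$-st singular value. A small numpy illustration:

```python
import numpy as np

def best_rank_r(A, r):
    """Classical best rank-r approximation via truncated SVD
    (Eckart-Young), shown as the baseline that the generalized
    low-rank approximation extends."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

# The spectral-norm error of the optimal rank-r approximation
# equals sigma_{r+1}, the (r+1)-st singular value of A.
rng = np.random.default_rng(2)
A = rng.standard_normal((7, 5))
Ar = best_rank_r(A, 2)
```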


2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Yong-Hong Duan ◽  
Rui-Ping Wen ◽  
Yun Xiao

The singular value thresholding (SVT) algorithm plays an important role in the well-known matrix reconstruction problem, with many applications in computer vision and recommendation systems. In this paper, an SVT algorithm with diagonal update (D-SVT) is put forward; it uses only simple arithmetic operations and keeps the computational cost of each iteration low, while still reconstructing the low-rank matrix well. The convergence of the new algorithm is discussed in detail. Finally, numerical experiments show the effectiveness of the new algorithm for low-rank matrix completion.
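For context, the baseline SVT iteration (Cai–Candès–Shen) that D-SVT modifies can be sketched as below; the threshold and step-size choices are common heuristics, not the paper's settings, and the function name is hypothetical:

```python
import numpy as np

def svt_complete(M, mask, tau=None, n_iter=500):
    """Plain singular value thresholding (SVT) for matrix completion:
    alternate a soft-thresholded SVD with a dual ascent step on the
    observed entries. Baseline sketch only, not the paper's D-SVT."""
    m, n = M.shape
    if tau is None:
        tau = 5.0 * np.sqrt(m * n)          # common heuristic threshold
    delta = 1.2 / mask.mean()               # step size scaled by sampling rate
    Y = np.zeros_like(M, dtype=float)
    X = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)        # soft-threshold singular values
        X = (U * s) @ Vt                    # shrinkage operator D_tau(Y)
        Y += delta * np.where(mask, M - X, 0.0)  # ascent on observed entries
    return X
```

Each iteration is dominated by one SVD of `Y`; the diagonal-update idea in the paper aims to keep the per-iteration arithmetic cheaper than this baseline.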

