Low-Rank Approximation of Difference between Correlation Matrices Using Inner Product

2021 ◽  
Vol 11 (10) ◽  
pp. 4582
Author(s):  
Kensuke Tanioka ◽  
Satoru Hiwa

In the domain of functional magnetic resonance imaging (fMRI) data analysis, given two correlation matrices between regions of interest (ROIs) for the same subject, it is important to reveal relatively large differences to ensure accurate interpretation. However, clustering results based only on the differences tend to be unsatisfactory, and interpreting the features tends to be difficult, because the differences likely suffer from noise. Therefore, to overcome these problems, we propose a new approach for dimensional reduction clustering. Methods: Our proposed dimensional reduction clustering approach consists of a low-rank approximation and a clustering algorithm. The low-rank matrix, which reflects the difference, is estimated from the inner product of the difference matrix rather than from the difference alone. In addition, the low-rank matrix is calculated based on the majorize–minimization (MM) algorithm such that the difference is bounded within the range −1 to 1. For the clustering process, ordinary k-means is applied to the estimated low-rank matrix, which emphasizes the clustering structure. Results: Numerical simulations show that, compared with other approaches that are based only on differences, the proposed method provides superior performance in recovering the true clustering structure. Moreover, as demonstrated through a real-data example of brain activity measured via fMRI during the performance of a working memory task, the proposed method can visually provide interpretable community structures consisting of well-known brain functional networks, which can be associated with the human working memory system. Conclusions: The proposed dimensional reduction clustering approach is a very useful tool for revealing and interpreting the differences between correlation matrices, even when the true differences tend to be relatively small.
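
A minimal sketch of the pipeline this abstract describes, assuming scikit-learn is available: a plain truncated SVD stands in for the authors' MM-based inner-product estimator, and standard k-means is run on the low-rank ROI embedding. Function names, the toy data, and all parameters are illustrative, not the paper's implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def low_rank_difference_clustering(R1, R2, rank, n_clusters, seed=0):
    """Cluster ROIs using the low-rank structure of the difference R1 - R2."""
    D = R1 - R2                                   # difference of the two correlation matrices
    U, s, _ = np.linalg.svd(D, full_matrices=False)
    scores = U[:, :rank] * s[:rank]               # low-rank embedding of each ROI
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(scores)
    return labels, scores

# toy usage: correlation matrices from two random 50-ROI sessions
rng = np.random.default_rng(0)
R1 = np.corrcoef(rng.standard_normal((50, 200)))
R2 = np.corrcoef(rng.standard_normal((50, 200)))
labels, _ = low_rank_difference_clustering(R1, R2, rank=3, n_clusters=3)
```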


Author(s):  
D.A. Zheltkov ◽ 
E.E. Tyrtyshnikov

The matrix cross approximation method is a fast method for approximating matrices by low-rank matrices, with a complexity of $O((m+n)r^2)$ arithmetic operations. Its main feature is the following: if a matrix is given not as an array stored in memory but as a function of two integer arguments, then the method can compute a low-rank approximation of the matrix by evaluating only $O((m+n)r)$ values of this function. However, if the matrix is extremely large or the evaluation of its elements is computationally expensive, the approximation can still take substantial time. In such cases, the performance of the method can be improved via parallelization. In this paper, we propose an efficient parallel algorithm for the case where the evaluation of every matrix element has the same computational cost.
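
For readers unfamiliar with cross approximation, here is a minimal serial sketch (not the parallel algorithm proposed in the paper): an adaptive-cross-style loop that builds a rank-$r$ skeleton while evaluating only $O((m+n)r)$ entries of the element function. The pivoting heuristic and the tolerance are arbitrary illustrative choices.

```python
import numpy as np

def cross_approx(f, m, n, rank, tol=1e-14):
    """Return U (m x r) and V (r x n) such that f(i, j) ~ (U @ V)[i, j]."""
    U = np.zeros((m, rank))
    V = np.zeros((rank, n))
    i = 0                                            # initial row pivot
    for k in range(rank):
        # residual of row i: f(i, :) minus the part already captured
        row = np.array([f(i, j) for j in range(n)]) - U[i, :k] @ V[:k, :]
        j = int(np.argmax(np.abs(row)))              # column pivot in the residual row
        piv = row[j]
        if abs(piv) < tol:                           # residual numerically zero: done early
            return U[:, :k], V[:k, :]
        col = np.array([f(p, j) for p in range(m)]) - U[:, :k] @ V[:k, j]
        U[:, k] = col / piv                          # rank-one cross update
        V[k, :] = row
        col[i] = 0.0                                 # do not pick the same row twice
        i = int(np.argmax(np.abs(col)))              # next row pivot
    return U, V

# toy usage: a Hilbert-type matrix given only as a function of its indices
f = lambda i, j: 1.0 / (i + j + 1)
U, V = cross_approx(f, 200, 150, rank=8)
A = np.array([[f(i, j) for j in range(150)] for i in range(200)])
print(np.linalg.norm(A - U @ V) / np.linalg.norm(A))  # small relative error
```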


Author(s):  
Haruka Kawamura ◽  
Reiji Suda

Low-rank approximation by QR decomposition with pivoting (pivoted QR) is known to be less accurate than singular value decomposition (SVD); however, its computational cost is smaller than that of SVD. In this study, the least upper bound of the ratio of the truncation error, defined by $\Vert A-BC\Vert_2$, using pivoted QR to that using SVD is proved to be $\sqrt{\frac{4^k-1}{3}(n-k)+1}$ for $A\in\mathbb{R}^{m\times n}$ $(m\ge n)$ approximated as a product of $B\in\mathbb{R}^{m\times k}$ and $C\in\mathbb{R}^{k\times n}$.
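
A small numerical check of the two truncation errors in this statement, assuming scipy is available: it compares the rank-$k$ error of column-pivoted QR with the optimal SVD error $\sigma_{k+1}$ and prints the proved upper bound on their ratio. The decaying-spectrum test matrix is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(1)
m, n, k = 100, 60, 10
Q0, _ = np.linalg.qr(rng.standard_normal((m, n)))
V0, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q0 @ np.diag(0.7 ** np.arange(n)) @ V0.T        # singular values 0.7^0, 0.7^1, ...

Q, R, perm = qr(A, pivoting=True, mode='economic')  # A[:, perm] = Q @ R
Ak = np.empty_like(A)
Ak[:, perm] = Q[:, :k] @ R[:k, :]                   # rank-k pivoted-QR approximation of A
err_qr = np.linalg.norm(A - Ak, 2)

err_svd = np.linalg.svd(A, compute_uv=False)[k]     # optimal rank-k error: sigma_{k+1}
bound = np.sqrt((4.0 ** k - 1) / 3 * (n - k) + 1)
print(err_qr / err_svd, "<=", bound)
```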


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Yongli Hu ◽  
Wei Zhou ◽  
Zheng Wen ◽  
Yanfeng Sun ◽  
Baocai Yin

Fingerprint-based positioning in wireless local area network (WLAN) environments has received much attention recently. One key issue for this positioning method is radio map construction, which generally requires significant effort to collect enough measurements of received signal strength (RSS). Based on the observation that RSS values are highly spatially correlated, we propose an efficient radio map construction method based on low-rank approximation. Unlike conventional interpolation methods, the proposed method represents the distribution of RSS values as a low-rank matrix and constructs the dense radio map from relatively sparse measurements using a revised low-rank matrix completion method. To evaluate the proposed method, both simulation tests and field experiments were conducted. The experimental results indicate that the proposed method can substantially reduce the number of required RSS measurements. Moreover, when the constructed radio maps are used for positioning, positioning accuracy is also improved.
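
A minimal sketch of the completion step, with the radio map arranged as a (locations × access points) matrix observed on a sparse mask. Plain singular value thresholding (SVT) stands in for the paper's revised completion method; the threshold, step size, and toy data are illustrative.

```python
import numpy as np

def svt_complete(M, mask, tau=5.0, step=1.2, iters=500):
    """Fill the unobserved entries of M; mask is True where RSS was measured."""
    Y = np.zeros_like(M)
    X = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink the singular values
        Y += step * mask * (M - X)                       # correct only on observed entries
    return X

# toy usage: a rank-3 "radio map" surveyed at 30% of the grid points
rng = np.random.default_rng(2)
true_map = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 25))
mask = rng.random(true_map.shape) < 0.3
filled = svt_complete(true_map * mask, mask)
print(np.linalg.norm(filled - true_map) / np.linalg.norm(true_map))  # relative recovery error
```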


2021 ◽  
Vol 37 ◽  
pp. 544-548
Author(s):  
Pablo Soto-Quiros

In this paper, we propose an error analysis of the generalized low-rank approximation, which is a generalization of the classical approximation of a matrix $A\in\mathbb{R}^{m\times n}$ by a matrix of rank at most $r$, where $r\leq\min\{m,n\}$.
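
The classical approximation being generalized is worth stating concretely: by the Eckart–Young theorem, the best rank-$r$ approximation of $A$ is its truncated SVD, with 2-norm error equal to the $(r+1)$-th singular value. A short numerical check on an arbitrary matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 20))
r = 5

U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]    # best rank-r approximation of A

print(np.linalg.norm(A - A_r, 2), s[r])        # the two values coincide
```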


Author(s):  
Jun Zhou ◽  
Longfei Li ◽  
Ziqi Liu ◽  
Chaochao Chen

Recently, the Factorization Machine (FM) has become increasingly popular in recommendation systems because of its effectiveness in finding informative interactions between features. Usually, the weights for the interactions are learned as a low-rank weight matrix, formulated as the inner product of two low-rank matrices. This low-rank structure helps improve the generalization ability of the Factorization Machine. However, to choose the rank properly, the algorithm usually needs to be run many times with different ranks, which is clearly inefficient for large-scale datasets. To alleviate this issue, we propose an Adaptive Boosting framework for the Factorization Machine (AdaFM), which can adaptively search for the proper rank for different datasets without re-training. Instead of using a fixed rank, the proposed algorithm gradually increases its rank according to its performance until the performance stops improving. Extensive experiments on multiple large-scale datasets validate the proposed method. The experimental results demonstrate that the proposed method is more effective than state-of-the-art Factorization Machines.
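
The rank-growing idea can be sketched as follows, with the caveat that this toy loop retrains a small gradient-descent FM from scratch at each rank, whereas avoiding exactly that re-training is AdaFM's contribution. The model, optimizer, and rank schedule are all illustrative.

```python
import numpy as np

def fm_predict(X, w0, w, V):
    # pairwise interactions via the usual FM trick: 0.5 * ((XV)^2 - X^2 V^2)
    inter = 0.5 * ((X @ V) ** 2 - (X ** 2) @ (V ** 2)).sum(axis=1)
    return w0 + X @ w + inter

def fit_fm(X, y, rank, lr=0.05, epochs=500, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w0, w, V = 0.0, np.zeros(d), 0.01 * rng.standard_normal((d, rank))
    for _ in range(epochs):
        err = fm_predict(X, w0, w, V) - y                 # residuals on squared loss
        w0 -= lr * err.mean()
        w -= lr * (X.T @ err) / n
        V -= lr * (X.T @ (err[:, None] * (X @ V))
                   - ((X ** 2).T @ err)[:, None] * V) / n
    return w0, w, V

def adaptive_rank_fm(Xtr, ytr, Xva, yva, max_rank=16):
    best_mse, best_model, k = np.inf, None, 1
    while k <= max_rank:
        model = fit_fm(Xtr, ytr, rank=k)
        mse = np.mean((fm_predict(Xva, *model) - yva) ** 2)
        if mse >= best_mse:                 # stop once validation error stops improving
            break
        best_mse, best_model, k = mse, model, k * 2
    return best_model

# toy usage: one true pairwise interaction plus a linear term
rng = np.random.default_rng(5)
X = rng.standard_normal((400, 10))
y = X[:, 0] * X[:, 1] + X[:, 2] + 0.01 * rng.standard_normal(400)
model = adaptive_rank_fm(X[:300], y[:300], X[300:], y[300:])
```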


2014 ◽  
Vol 2014 ◽  
pp. 1-11
Author(s):  
Jinjiang Li ◽  
Mengjun Li ◽  
Hui Fan

Existing image inpainting algorithms based on low-rank matrix approximation are not suitable for complex, large-scale, damaged texture images. In this paper, we propose an inpainting algorithm based on low-rank approximation and texture direction. First, we decompose the image using a low-rank approximation method. Then the area to be repaired is interpolated by a level set algorithm, and a new image is reconstructed from the boundary values of the level set. To obtain a better restoration effect, we iterate between the low-rank decomposition and the level set interpolation. To account for the influence of texture direction, we segment the texture and perform the low-rank decomposition along the texture direction. Experimental results show that the new algorithm is suitable for texture recovery while maintaining the overall consistency of the structure, and that it can be used to repair large-scale damaged images.
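
A minimal sketch of iterating between low-rank decomposition and filling the damaged region, in the spirit of the algorithm described above: a simple hard-impute loop that truncates the SVD and overwrites the hole with the reconstruction. The level set interpolation and texture-direction segmentation of the paper are not reproduced; names and parameters are illustrative.

```python
import numpy as np

def inpaint_low_rank(img, damaged, rank=5, iters=50):
    """img: 2-D grayscale array; damaged: boolean mask of pixels to repair."""
    out = img.copy()
    out[damaged] = img[~damaged].mean()          # crude initial fill of the hole
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(out, full_matrices=False)
        recon = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
        out[damaged] = recon[damaged]            # keep known pixels, update the hole
    return out

# toy usage: a smooth separable "texture" with a square hole
rng = np.random.default_rng(6)
img = np.outer(np.sin(np.linspace(0, 6, 80)), np.cos(np.linspace(0, 6, 80)))
damaged = np.zeros_like(img, dtype=bool)
damaged[30:40, 30:40] = True
repaired = inpaint_low_rank(img, damaged, rank=2)
```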


Author(s):  
Yun Cai

This paper considers the recovery of matrices that are low rank or approximately low rank from linear measurements corrupted with additive noise. We study minimization of the difference of the nuclear and Frobenius norms (abbreviated as the $N$-$F$ norm) as a nonconvex and Lipschitz continuous metric for solving this noisy low-rank matrix recovery problem. We mainly study two types of bounded-noise low-rank matrix recovery problems: $\ell_2$-norm bounded noise and Dantzig Selector noise. Based on the matrix restricted isometry property (abbreviated as M-RIP), we prove that this $N$-$F$ norm-based minimization method can stably recover an (approximately) low-rank matrix under both types of bounded noise. In addition, we use the truncated difference of the nuclear and Frobenius norms (denoted as the truncated $N$-$F$ norm) to recover a low-rank matrix when the observation noise is Dantzig Selector noise, and we give a stable recovery result for this truncated $N$-$F$ norm minimization when the linear measurement map satisfies the M-RIP condition.
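
The $N$-$F$ metric itself is easy to compute and to sanity-check: $\|X\|_* - \|X\|_F \ge 0$ always, with equality exactly when $\mathrm{rank}(X) \le 1$, which is why minimizing it promotes low rank. A direct numpy evaluation on a rank-one and a full-rank matrix:

```python
import numpy as np

def nf_norm(X):
    s = np.linalg.svd(X, compute_uv=False)
    return s.sum() - np.sqrt((s ** 2).sum())     # nuclear norm minus Frobenius norm

rng = np.random.default_rng(4)
u, v = rng.standard_normal(8), rng.standard_normal(6)
print(nf_norm(np.outer(u, v)))                   # ~0 for a rank-one matrix
print(nf_norm(rng.standard_normal((8, 6))))      # strictly positive at full rank
```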


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
E. Zhu ◽  
M. Xu ◽  
D. Pi

In low-rank matrix recovery, noise often exhibits low rank or lacks sparsity, and the nuclear norm is not an accurate rank approximation for a low-rank matrix. To solve this problem, the present study proposes a novel nonconvex approximation function for the rank of a low-rank matrix. Based on this nonconvex rank approximation function, a novel robust principal component analysis model is proposed. The model is solved with the alternating direction method, and its convergence is verified theoretically. Background separation experiments were then performed on the Wallflower and SBMnet datasets, and the effectiveness of the novel model was further verified by numerical experiments.
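
A minimal sketch of the robust-PCA background separation setting, with video frames stacked as the columns of $M$. It uses the classical convex model (nuclear norm plus $\ell_1$, solved by an alternating-direction iteration) as a placeholder for the paper's nonconvex rank surrogate, so it illustrates the setting rather than the proposed model; parameters follow common defaults.

```python
import numpy as np

def shrink(X, tau):                              # soft threshold: prox of the l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def rpca_admm(M, lam=None, mu=None, iters=200):
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))
    if mu is None:
        mu = 0.25 * m * n / np.abs(M).sum()
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt    # low-rank part: the background
        S = shrink(M - L + Y / mu, lam / mu)         # sparse part: moving foreground
        Y += mu * (M - L - S)                        # dual variable update
    return L, S

# usage: L, S = rpca_admm(frames)   # frames: (pixels x num_frames) matrix
```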

