rank decomposition
Recently Published Documents


TOTAL DOCUMENTS: 169 (five years: 62)
H-INDEX: 16 (five years: 4)

2021 ◽  
Vol 7 (12) ◽  
pp. 279
Author(s):  
Jobin Francis ◽  
Baburaj Madathil ◽  
Sudhish N. George ◽  
Sony George

The massive generation of data, including images and videos, has made data management, analysis, and information extraction difficult in recent years. To gather relevant information, this large amount of data needs to be grouped. Real-life data may be noise-corrupted during collection or transmission, and most of it is unlabeled, calling for robust unsupervised clustering techniques. Traditional clustering techniques, which vectorize the images, are unable to preserve the geometrical structure of the images. Hence, a robust tensor-based submodule clustering method based on l12 regularization with improved clustering capability is formulated. The l12-induced tensor nuclear norm (TNN), integrated into the proposed method, offers better low-rankness while retaining the self-expressiveness property of submodules. Unlike existing methods, the proposed method employs a simultaneous noise removal technique by twisting the lateral image slices of the input data tensor into frontal slices and eliminates the noise content in each image using the principles of the sparse-plus-low-rank decomposition technique. Experiments are carried out over three datasets with varying amounts of sparse, Gaussian, and salt-and-pepper noise. The experimental results demonstrate the superior performance of the proposed method over the existing state-of-the-art methods.
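The sparse-plus-low-rank separation at the core of the noise-removal step can be illustrated with a minimal alternating-proximal sketch (a generic robust-PCA-style toy, not the authors' tensor formulation; the threshold parameters are illustrative assumptions):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: shrink singular values by tau.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    # Entrywise soft-thresholding (proximal operator of the l1 norm).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def lowrank_sparse(M, lam=None, n_iter=200):
    """Split M into low-rank L plus sparse S by simple alternating
    proximal steps (a toy stand-in for principal component pursuit)."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S, 1.0)   # low-rank update
        S = soft(M - L, lam)  # sparse update
    return L, S
```

By construction the final soft-threshold step leaves a residual `M - L - S` bounded entrywise by `lam`, so the split is near-exact while `L` is driven toward low rank and `S` toward sparsity.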


Journal of Computers (電腦學刊) ◽  
2021 ◽  
Vol 32 (6) ◽  
pp. 195-205
Author(s):  
Bin Chen ◽  
Jin-Ning Zhu ◽  
Yi-Zhou Dong


2021 ◽  
Vol 37 ◽  
pp. 598-612
Author(s):  
Irwin S. Pressman

This work studies the kernel of a linear operator associated with the generalized k-fold commutator. Given a set $\mathfrak{A}= \left\{ A_{1}, \ldots ,A_{k} \right\}$ of real $n \times n$ matrices, the commutator is denoted by $[A_{1}| \ldots |A_{k}]$. For a fixed set of matrices $\mathfrak{A}$, we introduce a skew-symmetric multilinear operator $T_{\mathfrak{A}}(X)=T(A_{1}, \ldots ,A_{k})[X]=[A_{1}| \ldots |A_{k}|X]$. For fixed $n$ and $k \ge 2n-1$, $T_{\mathfrak{A}} \equiv 0$ by the Amitsur--Levitski theorem [2], which motivated this work. The matrix representation $M$ of the linear transformation $T$ is called the k-commutator matrix. $M$ has interesting properties, e.g., it is a commutator; for $k$ odd, there is a permutation of the rows of $M$ that makes it skew-symmetric. For both $k$ and $n$ odd, a provocative matrix $\mathcal{S}$ appears in the kernel of $T$. By using the Moore--Penrose inverse and introducing a conjecture about the rank of $M$, the entries of $\mathcal{S}$ are shown to be quotients of polynomials in the entries of the matrices in $\mathfrak{A}$. One case of the conjecture has recently been proven by Brassil. The Moore--Penrose inverse provides a full rank decomposition of $M$.
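The generalized k-fold commutator is the standard polynomial $s_k(A_1,\ldots,A_k)=\sum_{\sigma \in S_k} \operatorname{sgn}(\sigma)\, A_{\sigma(1)} \cdots A_{\sigma(k)}$, and the Amitsur--Levitski vanishing mentioned above can be checked numerically. A small sketch (function names are my own, not from the paper):

```python
import numpy as np
from itertools import permutations

def perm_sign(p):
    # Parity of a permutation via counting inversions.
    sign = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                sign = -sign
    return sign

def generalized_commutator(mats):
    """Standard polynomial s_k: the alternating sum of all k! ordered
    products of the k matrices in `mats`."""
    k = len(mats)
    n = mats[0].shape[0]
    out = np.zeros((n, n))
    for p in permutations(range(k)):
        prod = np.eye(n)
        for idx in p:
            prod = prod @ mats[idx]
        out = out + perm_sign(p) * prod
    return out
```

For $n=2$ and $k=3=2n-1$, the operator $T_{\mathfrak{A}}(X)=[A_1|A_2|A_3|X]$ is the standard polynomial $s_4$ evaluated on $2 \times 2$ matrices, which vanishes identically by Amitsur--Levitski, while the ordinary commutator $s_2$ does not.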


Universe ◽  
2021 ◽  
Vol 7 (8) ◽  
pp. 302
Author(s):  
Dennis Obster ◽  
Naoki Sasakura

Tensor rank decomposition is a useful tool for geometric interpretation of the tensors in the canonical tensor model (CTM) of quantum gravity. In order to understand the stability of this interpretation, it is important to be able to estimate how many tensor rank decompositions can approximate a given tensor. More precisely, finding an approximate symmetric tensor rank decomposition of a symmetric tensor $Q$ with an error allowance $\Delta$ is to find vectors $\phi_i$ satisfying $\| Q - \sum_{i=1}^{R} \phi_i \otimes \phi_i \otimes \cdots \otimes \phi_i \|^2 \le \Delta$. The volume of all such possible $\phi_i$ is an interesting quantity which measures the amount of possible decompositions for a tensor $Q$ within an allowance. While it would be difficult to evaluate this quantity for each $Q$, we find an explicit formula for a similar quantity by integrating over all $Q$ of unit norm. The expression as a function of $\Delta$ is given by the product of a hypergeometric function and a power function. By combining new numerical analysis and previous results, we conjecture a formula for the critical rank, yielding an estimate for the spacetime degrees of freedom of the CTM. We also extend the formula to generic decompositions of non-symmetric tensors in order to make our results more broadly applicable. Interestingly, the derivation depends on the existence (convergence) of the partition function of a matrix model which previously appeared in the context of the CTM.
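The squared-error condition defining an approximate symmetric rank decomposition is straightforward to evaluate numerically. A minimal sketch for order-3 tensors (my own helper names, with the tensor order fixed at 3 for concreteness):

```python
import numpy as np

def sym_rank_one(phi):
    # Order-3 symmetric rank-one tensor phi ⊗ phi ⊗ phi.
    return np.einsum('i,j,k->ijk', phi, phi, phi)

def decomposition_error(Q, phis):
    """Squared Frobenius-type error || Q - sum_i phi_i^{⊗3} ||^2,
    the quantity the allowance Delta bounds in the text."""
    approx = sum(sym_rank_one(p) for p in phis)
    return np.sum((Q - approx) ** 2)
```

A set of vectors $\phi_i$ is an admissible decomposition of $Q$ within allowance $\Delta$ exactly when `decomposition_error(Q, phis) <= Delta`; the paper's question is the volume of this admissible set.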


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1189
Author(s):  
Xindi Ma ◽  
Jie Gao ◽  
Xiaoyu Liu ◽  
Taiping Zhang ◽  
Yuanyan Tang

Non-negative matrix factorization finds a basis matrix and a weight matrix that approximate a non-negative matrix. It has proven to be a powerful low-rank decomposition technique for non-negative multivariate data. However, its performance largely depends on the assumption of a fixed number of features. This work proposes a new probabilistic non-negative matrix factorization which factorizes a non-negative matrix into a low-rank factor matrix with constraints and a non-negative weight matrix. In order to automatically learn the latent binary features and the feature number, a deterministic Indian buffet process variational inference is introduced to obtain the binary factor matrix. Further, the weight matrix is given an exponential prior. To obtain the true posterior distribution of the two factor matrices, a variational Bayesian exponential Gaussian inference model is established. Comparative experiments on synthetic and real-world datasets show the efficacy of the proposed method.
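The fixed-rank baseline that this work generalizes can be sketched with the classic multiplicative-update NMF of Lee and Seung (a generic illustration, not the paper's probabilistic model; the rank r must be chosen in advance, which is exactly the limitation the paper addresses):

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9):
    """Factor a non-negative matrix V (m x n) as V ≈ W @ H with
    W (m x r) and H (r x n) non-negative, via multiplicative updates."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity and do not
        # increase the Frobenius reconstruction error.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Here `W` plays the role of the basis matrix and `H` of the weight matrix; the proposed method instead infers a binary factor matrix and its feature number automatically.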

