Robust Subspace Tracking Algorithms in Signal Processing: A Brief Survey

Author(s):  
Le Trung Thanh ◽  
Nguyen Viet Dung ◽  
Nguyen Linh Trung ◽  
Karim Abed-Meraim

Principal component analysis (PCA) and subspace estimation (SE) are popular data analysis tools used in a wide range of applications. The main interest in PCA/SE is for dimensionality reduction and low-rank approximation purposes. The emergence of big data streams has led to several essential issues for performing PCA/SE. Among them are (i) the size of such data streams increases over time, (ii) the underlying models may be time-dependent, and (iii) the need to deal with uncertainty and incompleteness in the data. A robust variant of PCA/SE for such data streams, namely robust online PCA or robust subspace tracking (RST), has been introduced as a good alternative. The main goal of this paper is to provide a brief survey of recent RST algorithms in signal processing. In particular, we begin this survey by introducing the basic ideas of the RST problem. Then, different aspects of RST are reviewed with respect to different kinds of non-Gaussian noise and sparsity constraints. Our own contributions on this topic are also highlighted.
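To make the streaming setting concrete, the following is a minimal sketch of subspace tracking on a data stream using the classical Oja subspace update; this is a generic illustration only, not any specific RST algorithm from the survey, and it omits the robustness mechanisms (outlier rejection, missing-data handling) that RST methods add.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth 2-D subspace inside a 10-D ambient space.
n, r = 10, 2
basis = np.linalg.qr(rng.standard_normal((n, r)))[0]

# Streaming estimate, updated one sample at a time (Oja's subspace rule).
W = np.linalg.qr(rng.standard_normal((n, r)))[0]
eta = 0.1
for t in range(3000):
    x = basis @ rng.standard_normal(r) + 0.01 * rng.standard_normal(n)  # noisy sample
    y = W.T @ x                         # project sample onto current estimate
    W += eta * np.outer(x - W @ y, y)   # push W toward the residual direction
    W = np.linalg.qr(W)[0]              # re-orthonormalize the basis

# Subspace error: distance between the two orthogonal projectors.
err = np.linalg.norm(basis @ basis.T - W @ W.T)
print(f"subspace error: {err:.3f}")
```

Because each update touches only one sample, memory and per-step cost stay constant as the stream grows, which is the property that makes tracking methods attractive for big data streams.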

2007 ◽  
Vol 87 (8) ◽  
pp. 1890-1903 ◽  
Author(s):  
Motoaki Kawanabe ◽  
Fabian J. Theis

2012 ◽  
Vol 21 (06) ◽  
pp. 1250033
Author(s):  
MANOLIS G. VOZALIS ◽  
ANGELOS I. MARKOS ◽  
KONSTANTINOS G. MARGARITIS

Collaborative Filtering (CF) is a popular technique employed by Recommender Systems, a term used to describe intelligent methods that generate personalized recommendations. Some of the most efficient approaches to CF are based on latent factor models and nearest neighbor methods, and have received considerable attention in recent literature. Latent factor models can tackle some fundamental challenges of CF, such as data sparsity and scalability. In this work, we present an optimal scaling framework to address these problems using Categorical Principal Component Analysis (CatPCA) for the low-rank approximation of the user-item ratings matrix, followed by a neighborhood formation step. CatPCA is a versatile technique that utilizes an optimal scaling process where original data are transformed so that their overall variance is maximized. We considered both smooth and non-smooth transformations for the observed variables (items), such as numeric, (spline) ordinal, (spline) nominal and multiple nominal. The method was extended to handle missing data and incorporate differential weighting for items. Experiments were conducted on three data sets of different sparsity and size (MovieLens 100k, MovieLens 1M, and Jester) to evaluate the aforementioned options in terms of accuracy. A combined approach with a multiple nominal transformation and a "passive" missing data strategy clearly outperformed the other tested options for all three data sets. The results are comparable with those reported for single methods in the CF literature.
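The two-stage pipeline described above (low-rank approximation of the ratings matrix, then neighborhood formation in the latent space) can be sketched as follows. This toy uses a plain truncated SVD in place of CatPCA, since CatPCA's optimal scaling step is beyond a few lines; the tiny ratings matrix and the zero-for-missing convention are illustrative assumptions only.

```python
import numpy as np

# Toy user-item ratings matrix (0 = missing), a stand-in for MovieLens-style data.
R = np.array([
    [5, 4, 0, 1, 0],
    [4, 5, 1, 0, 1],
    [1, 0, 5, 4, 4],
    [0, 1, 4, 5, 4],
], dtype=float)

# Stage 1: rank-2 low-rank approximation via truncated SVD (CatPCA would
# first optimally rescale the categorical ratings; plain SVD is used here).
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
users = U[:, :k] * s[:k]            # user coordinates in the latent space

# Stage 2: neighborhood formation via cosine similarity in the latent space.
unit = users / np.linalg.norm(users, axis=1, keepdims=True)
sim = unit @ unit.T
neighbors = np.argsort(-sim[0])[1:]  # users most similar to user 0
print("neighbors of user 0:", neighbors)
```

Forming neighborhoods in the k-dimensional latent space rather than over raw, sparse rating vectors is what lets the latent factor step mitigate the data sparsity problem mentioned in the abstract.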


2020 ◽  
Vol 12 (14) ◽  
pp. 2264
Author(s):  
Hongyi Liu ◽  
Hanyang Li ◽  
Zebin Wu ◽  
Zhihui Wei

Low-rank tensors have received increasing attention in hyperspectral image (HSI) recovery. Minimizing the tensor nuclear norm, as a low-rank approximation method, often leads to modeling bias. To achieve an unbiased approximation and improve robustness, this paper develops a non-convex relaxation approach for low-rank tensor approximation. First, a non-convex approximation of the tensor nuclear norm (NCTNN) is introduced for low-rank tensor completion. Second, a non-convex tensor robust principal component analysis (NCTRPCA) method is proposed, which aims at exactly recovering a low-rank tensor corrupted by mixed noise. Both proposed models are solved efficiently by the alternating direction method of multipliers (ADMM). Three HSI datasets are employed to demonstrate the superiority of the proposed models over low-rank penalization methods in terms of accuracy and robustness.
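The workhorse inside ADMM solvers for nuclear-norm models is singular value thresholding (SVT), the proximal operator of the nuclear norm; non-convex relaxations like NCTNN modify how the singular values are shrunk. The matrix-case sketch below shows the convex SVT building block on synthetic data; the tensor formulation and the paper's specific non-convex surrogate are not reproduced here.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the prox of tau * (nuclear norm).
    Shrinks every singular value by tau and discards those that hit zero."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(1)
L = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 30))  # rank-3 ground truth
noisy = L + 0.1 * rng.standard_normal((30, 30))                  # dense noise

denoised = svt(noisy, tau=2.0)
print("rank before:", np.linalg.matrix_rank(noisy))
print("rank after :", np.linalg.matrix_rank(denoised))
```

Note the bias the abstract refers to: SVT subtracts tau from the large, informative singular values as well as the noise ones, which is precisely what non-convex surrogates aim to avoid by shrinking large singular values less.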


Entropy ◽  
2021 ◽  
Vol 23 (8) ◽  
pp. 990
Author(s):  
Théo Galy-Fajou ◽  
Valerio Perrone ◽  
Manfred Opper

Variational inference is a powerful framework, used to approximate intractable posteriors through variational distributions. The de facto standard is to rely on Gaussian variational families, which come with numerous advantages: they are easy to sample from, simple to parametrize, and many expectations are known in closed-form or readily computed by quadrature. In this paper, we view the Gaussian variational approximation problem through the lens of gradient flows. We introduce a flexible and efficient algorithm based on a linear flow leading to a particle-based approximation. We prove that, with a sufficient number of particles, our algorithm converges linearly to the exact solution for Gaussian targets, and a low-rank approximation otherwise. In addition to the theoretical analysis, we show, on a set of synthetic and real-world high-dimensional problems, that our algorithm outperforms existing methods on Gaussian targets while performing on par with them on non-Gaussian targets.
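A toy version of this particle-flow idea can be written in a few lines: particles follow a deterministic flow driven by the score of the target minus the score of the Gaussian currently fitted to the particles, so the fitted Gaussian drifts toward the target. This is a loose sketch in the spirit of the gradient-flow view, not the paper's linear-flow algorithm; the target, step size, and particle count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Gaussian target N(mu, Sigma) in 2-D; exact recovery is possible here.
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5], [0.5, 1.0]])
Sigma_inv = np.linalg.inv(Sigma)

# Particles start from a standard normal.
X = rng.standard_normal((200, 2))

eta = 0.05
for _ in range(2000):
    m = X.mean(axis=0)
    S = np.cov(X, rowvar=False)
    S_inv = np.linalg.inv(S)
    # Flow: grad log p(x) - grad log q(x), with q the Gaussian fitted
    # to the current particles; the only fixed point is q = p.
    grad_p = -(X - mu) @ Sigma_inv
    grad_q = -(X - m) @ S_inv
    X += eta * (grad_p - grad_q)

print("particle mean:", X.mean(axis=0))
```

For this Gaussian target the particle mean and covariance converge to mu and Sigma, mirroring the exact-recovery regime the abstract describes; for non-Gaussian targets the fitted Gaussian can only be an approximation.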


2021 ◽  
Vol 13 (4) ◽  
pp. 819
Author(s):  
Ryota Yuzuriha ◽  
Ryuji Kurihara ◽  
Ryo Matsuoka ◽  
Masahiro Okuda

We introduce a novel regularization function for hyperspectral images (HSI), based on the nuclear norms of gradient images. Unlike conventional low-rank priors, we achieve a gradient-based low-rank approximation by minimizing the sum of nuclear norms associated with rotated planes in the gradient of an HSI. Our method explicitly and simultaneously exploits correlation in the spectral domain as well as the spatial domain, and it exploits the low-rankness of a global region to enhance the dimensionality reduction induced by the prior. Since our method considers low-rankness in the gradient domain, it detects anomalous variations more sensitively. It achieves high-fidelity image recovery using a single regularization function, without the explicit use of any sparsity-inducing priors such as the ℓ0, ℓ1, and total variation (TV) norms. We also apply this regularization to gradient-based robust principal component analysis and show its superiority in HSI decomposition. The proposed regularization is validated on a variety of HSI reconstruction/decomposition problems, showing superior performance in comparison with state-of-the-art methods.
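The core computation behind such a prior is straightforward to sketch: take finite-difference gradients of the HSI cube along the spatial axes, unfold each gradient field into a matrix, and sum the nuclear norms. The simplified version below ignores the paper's rotated-plane construction and uses a synthetic cube whose bands are scaled copies of one image (so its gradients are exactly low-rank); both are assumptions for illustration.

```python
import numpy as np

def grad_nuclear_norm(hsi):
    """Sum of nuclear norms of spatial-gradient unfoldings of an HSI cube
    (height x width x bands). A simplified stand-in for the paper's
    rotated-plane formulation."""
    total = 0.0
    for axis in (0, 1):  # finite differences along rows, then columns
        g = np.diff(hsi, axis=axis)
        mat = g.reshape(-1, g.shape[-1])          # (pixels x bands) unfolding
        total += np.linalg.norm(mat, ord="nuc")   # nuclear norm = sum of sing. values
    return total

rng = np.random.default_rng(0)
# Spectrally correlated cube: each band is a scaled copy of one image,
# so the gradient unfolding is rank 1.
base = rng.standard_normal((16, 16, 1))
smooth = base * np.linspace(1.0, 2.0, 8)          # 16 x 16 x 8 cube
noisy = smooth + 0.5 * rng.standard_normal(smooth.shape)

print("regularizer (clean):", grad_nuclear_norm(smooth))
print("regularizer (noisy):", grad_nuclear_norm(noisy))
```

The regularizer is small for the spectrally correlated cube and grows once noise breaks the cross-band correlation of the gradients, which is why minimizing it favors clean, spectrally coherent reconstructions.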


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Fahrettin Horasan ◽  
Hasan Erbay ◽  
Fatih Varçın ◽  
Emre Deniz

The latent semantic analysis (LSA) is a mathematical/statistical way of discovering hidden concepts between terms and documents or within a document collection (i.e., a large corpus of text). Each document and term of the corpus is expressed as a vector with elements corresponding to these concepts, forming a term-document matrix. The LSA then uses a low-rank approximation to the term-document matrix in order to remove irrelevant information, extract the more important relations, and reduce the computational time. The irrelevant information is called "noise" and does not have a noteworthy effect on the meaning of the document collection. This is an essential step in the LSA. The singular value decomposition (SVD) has been the main tool for obtaining the low-rank approximation in the LSA. Since the document collection is dynamic (i.e., the term-document matrix is subject to repeated updates), the approximation must be renewed, either by recomputing the SVD or by updating it. However, the computational cost of recomputing or updating the SVD of the term-document matrix is very high when new terms and/or documents are added to a preexisting document collection. This issue has opened the door to using other matrix decompositions for the LSA, such as ULV- and URV-based decompositions. This study shows that the truncated ULV decomposition (TULVD) is a good alternative to the SVD in the LSA modeling.
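The SVD-based pipeline that TULVD is offered as an alternative to can be sketched on a tiny corpus: build a term-document matrix, truncate the SVD to k dominant concepts, and compare documents in the resulting latent space. The five terms and four documents below are invented for illustration.

```python
import numpy as np

# Tiny term-document matrix: rows = terms, columns = documents.
# Docs 0-1 are about linear algebra, docs 2-3 about cooking.
A = np.array([
    [2, 1, 0, 0],   # "low"
    [2, 1, 0, 0],   # "rank"
    [1, 2, 0, 0],   # "svd"
    [0, 0, 2, 1],   # "recipe"
    [0, 0, 1, 2],   # "flour"
], dtype=float)

# LSA step: rank-k truncated SVD keeps the dominant concepts, drops "noise".
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
docs = (np.diag(s[:k]) @ Vt[:k]).T   # document coordinates in concept space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print("sim(doc0, doc1):", cos(docs[0], docs[1]))  # same topic
print("sim(doc0, doc2):", cos(docs[0], docs[2]))  # different topics
```

When new documents arrive, `A` gains columns and this SVD must be recomputed or updated, which is exactly the cost that motivates ULV/URV-based alternatives such as TULVD.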

