L1-norm principal-component analysis in L2-norm-reduced-rank data subspaces

Author(s):  
Panos P. Markopoulos  
Dimitris A. Pados  
George N. Karystinos  
Michael Langberg

Symmetry, 2020, Vol 12 (1), pp. 182

Author(s):  
Fengmin Yu  
Liming Liu  
Nanxiang Yu  
Lianghao Ji  
Dong Qiu

Recently, with the popularization of intelligent terminals, research on intelligent big data has attracted increasing attention. Among these data, a class with functional characteristics, called functional data, is of particular interest. Functional data principal component analysis (FPCA), an unsupervised machine learning method, plays a vital role in the analysis of functional data: it is the primary step in functional data exploration, and its reliability strongly affects all subsequent analysis. However, classical L2-norm functional data principal component analysis (L2-norm FPCA) is sensitive to outliers. Inspired by L1-norm principal component analysis methods for multivariate data, we propose an L1-norm functional data principal component analysis method (L1-norm FPCA). Because the proposed method uses the L1-norm, the L1-norm FPCs are less sensitive to outliers than the L2-norm FPCs, which are the eigenfunctions of the symmetric covariance operator. An algorithm for solving the L1-norm maximization model is extended to functional data, following the ideas of the multivariate L1-norm principal component analysis method. Numerical experiments show that the proposed L1-norm FPCA is more robust than L2-norm FPCA, while its ability to reconstruct the original uncontaminated functional data is as good as that of L2-norm principal component analysis.
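To make the L1-norm maximization model concrete, the following is a minimal sketch of the classical sign-flipping fixed-point iteration for one L1-norm principal component of multivariate data (the kind of multivariate L1-PCA method the abstract says is extended to the functional setting). All function and parameter names are illustrative, not taken from the paper; the iteration maximizes the L1 projection energy ||Xw||_1 over unit vectors w:

```python
import numpy as np

def l1_pca_component(X, max_iter=200, tol=1e-8, seed=0):
    """One L1-norm principal component via a sign-flipping fixed-point
    iteration: maximize ||X @ w||_1 subject to ||w|| = 1.
    X is (n_samples, n_features) and assumed already centered."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        s = np.sign(X @ w)        # per-sample signs of the projections
        s[s == 0] = 1.0           # break ties deterministically
        w_new = X.T @ s           # weighted sum of samples
        w_new /= np.linalg.norm(w_new)
        if np.linalg.norm(w_new - w) < tol:
            w = w_new
            break
        w = w_new
    return w
```

Each update can only increase the objective ||Xw||_1, so the iteration converges to a local maximizer; because samples enter through their signs rather than their squared magnitudes, a gross outlier cannot dominate the solution the way it does in L2-norm PCA.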


Author(s):  
Hossein Kamrani  
Alireza Zolghadreasli  
Panos Markopoulos  
Michael Langberg  
Dimitris Pados  
...

2020, Vol 39 (3), pp. 3183-3193

Author(s):  
Jieya Li  
Liming Yang

Classical principal component analysis (PCA) does not yield sparse loadings, and, because it is based on the L2-norm, it is also prone to be adversely affected by outliers and noise. To address these problems, a sparse robust PCA framework is proposed that combines minimization of a zero-norm regularizer with maximization of the Lp-norm (0 < p ≤ 2) PCA objective. Furthermore, we develop a continuous optimization method, a DC (difference of convex functions) programming algorithm (DCA), to solve the proposed problem. The resulting algorithm, called DC-LpZSPCA, converges linearly. In addition, by choosing different values of p, the model remains robust and is applicable to different data types. Numerical experiments are conducted on artificial data sets and the Yale face data set. The results show that the proposed method maintains good sparsity and resistance to outliers.
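The claim that L2-norm PCA is adversely affected by outliers can be seen in a short self-contained demonstration (not from the paper; the data and numbers are illustrative): a single gross outlier is enough to swing the leading L2 principal direction away from the dominant axis of the clean data, because it enters the scatter matrix with squared magnitude.

```python
import numpy as np

def l2_pc(X):
    """Leading L2-norm principal direction via the SVD of centered data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[0]

rng = np.random.default_rng(0)
# Clean 2-D data with dominant variance along the first axis (std 5 vs. 1).
X = rng.standard_normal((200, 2)) * np.array([5.0, 1.0])
# The same data plus one gross outlier far along the second axis.
X_out = np.vstack([X, [0.0, 200.0]])

v_clean = l2_pc(X)     # approximately the first coordinate axis
v_out = l2_pc(X_out)   # dragged toward the outlier's axis
cos_angle = abs(v_clean @ v_out)
```

The outlier contributes roughly 200² ≈ 40,000 to the scatter along the second axis, swamping the clean data's ≈ 5,000 along the first, so `cos_angle` is close to zero: the leading direction has essentially flipped.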


2018, Vol 2018, pp. 1-9

Author(s):  
Haijin Ji  
Song Huang

Kernel entropy component analysis (KECA) is a recently proposed dimensionality reduction (DR) method that has shown superiority in many pattern analysis problems previously solved by principal component analysis (PCA). The optimized KECA (OKECA) is a state-of-the-art variant of KECA that returns projections with more expressive power than KECA. However, OKECA is sensitive to outliers and suffers from high computational complexity owing to its inherent reliance on the L2-norm. To handle these two problems, we develop a new extension to KECA, namely KECA-L1, for DR and feature extraction. KECA-L1 aims to find a more robust kernel decomposition matrix such that the extracted features retain as much information potential as possible, measured by the L1-norm. Accordingly, we design a nongreedy iterative algorithm that converges much faster than OKECA's. Moreover, a general semisupervised classifier is developed for KECA-based methods and applied to data classification. Extensive experiments on data classification and software defect prediction demonstrate that the new method is superior to most existing KECA- and PCA-based approaches. The code has also been made publicly available.
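For readers unfamiliar with the base method, the following is a minimal sketch of standard KECA (the L2 method that KECA-L1 extends), under the usual formulation: eigendecompose an RBF kernel matrix and keep the axes with the largest Rényi-entropy contributions λᵢ(1ᵀeᵢ)², rather than simply the largest eigenvalues as in kernel PCA. Function names and parameters here are illustrative, not the paper's API:

```python
import numpy as np

def keca(X, sigma=1.0, n_components=2):
    """Minimal kernel entropy component analysis sketch.
    X: (n_samples, n_features). Returns (n_samples, n_components)
    projections onto the entropy-selected kernel axes."""
    # RBF (Gaussian) kernel matrix.
    sq = np.sum(X**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-D2 / (2.0 * sigma**2))
    # Symmetric eigendecomposition (eigenvalues ascending).
    lam, E = np.linalg.eigh(K)
    # Entropy contribution of each axis: lambda_i * (1^T e_i)^2.
    contrib = lam * (E.sum(axis=0) ** 2)
    idx = np.argsort(contrib)[::-1][:n_components]
    # Project: scale selected eigenvectors by sqrt of eigenvalues
    # (clipped at zero to guard against tiny negative round-off).
    return E[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))
```

The entropy criterion is what distinguishes KECA from kernel PCA; KECA-L1, per the abstract, further replaces the L2-norm information-potential measure with an L1-norm one to gain robustness.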

