A Novel Robust Low-rank Multi-view Diversity Optimization Model with Adaptive-Weighting Based Manifold Learning

2021, pp. 108298
Author(s): Junpeng Tan, Zhijing Yang, Jinchang Ren, Bing Wang, Yongqiang Cheng, ...

2019, Vol 11 (2), pp. 192
Author(s): Yixin Yang, Jianqi Zhang, Shangzhen Song, Delian Liu

Anomaly detection (AD), which aims to distinguish targets with significant spectral differences from the background, has become an important topic in hyperspectral imagery (HSI) processing. In this paper, a novel anomaly detection algorithm based on dictionary construction, low-rank representation (LRR), and adaptive weighting is proposed. The algorithm has three main advantages. First, owing to its consistency with the AD problem, LRR is employed to mine the lowest-rank representation of the hyperspectral data by imposing a low-rank constraint on the representation coefficients; the sparse component contains most of the anomaly information and can be used for anomaly detection. Second, to better separate the sparse anomalies from the background component, a background dictionary construction strategy based on the usage frequency of the dictionary atoms in HSI reconstruction is proposed. The constructed dictionary excludes possible anomalies and contains all background categories, thus spanning a more reasonable background space. Finally, to further enhance the response difference between background pixels and anomalies, the response output obtained by LRR is multiplied by an adaptive weighting matrix, so that anomalous pixels are more easily distinguished from the background. Experiments on synthetic and real-world hyperspectral datasets demonstrate the superiority of the proposed method over other AD detectors.
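
As a rough illustration of the detection stage described above, the sketch below assumes the LRR decomposition X ≈ DZ + E has already been solved and turns the sparse component E into a weighted anomaly map. The per-pixel response and the Mahalanobis-style adaptive weight used here are illustrative stand-ins, not the paper's exact weighting scheme.

```python
# Minimal sketch (assumption: the LRR decomposition X ~ D @ Z + E has already been
# solved elsewhere; E holds the sparse component, shape (n_bands, n_pixels)).
import numpy as np

def anomaly_scores(E, X):
    """Column-wise l2 norm of the sparse component, scaled by an adaptive weight."""
    response = np.linalg.norm(E, axis=0)          # raw LRR response, one value per pixel

    # Adaptive weight: squared Mahalanobis distance of each pixel from the global
    # background statistics (an illustrative stand-in for the adaptive weighting matrix).
    mu = X.mean(axis=1, keepdims=True)
    cov = np.cov(X) + 1e-6 * np.eye(X.shape[0])
    diff = X - mu
    weight = np.einsum('ij,ji->i', diff.T, np.linalg.solve(cov, diff))

    return response * weight

# Toy usage: 50 bands, 1000 pixels, five pixels flagged as anomalous by the decomposition.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 1000))
E = np.zeros_like(X)
E[:, :5] = rng.normal(scale=5.0, size=(50, 5))
scores = anomaly_scores(E, X)
print(scores[:5].mean() > scores[5:].mean())      # anomalous pixels score higher
```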


2016, Vol 2016, pp. 1-7
Author(s): Jun Zhu, Changwei Chen, Shoubao Su, Zinan Chang

In Wireless Body Area Networks (WBANs), energy consumption is dominated by sensing and communication. Recently, a simultaneous cosparsity and low-rank (SCLR) optimization model has shown state-of-the-art performance in compressive sensing (CS) recovery of multichannel EEG signals. How to solve the resulting regularization problem, which involves the l0 norm and the rank function and is known to be NP-hard, is critical to the recovery results. SCLR uses the l1 norm and the nuclear norm as convex surrogates for the l0 norm and the rank function. However, the l1 norm and the nuclear norm cannot approximate the l0 norm and the rank well, because there are irreparable gaps between them. In this paper, an optimization model with the lq norm and the Schatten-p norm is proposed to enforce cosparsity and the low-rank property in the reconstructed multichannel EEG signals. An efficient iterative scheme is used to solve the resulting nonconvex optimization problem. Experimental results demonstrate that the proposed algorithm significantly outperforms existing state-of-the-art CS methods for compressive sensing of multichannel EEG signals.
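
To make the nonconvex penalties concrete, the sketch below evaluates the lq cosparsity term and the Schatten-p quasi-norm, and performs one reweighted singular-value shrinkage step for the low-rank part. The exponents, the reweighting rule, and the shrinkage step are illustrative assumptions; the paper's actual iterative scheme may differ.

```python
# Minimal sketch of the two nonconvex penalties and a generalized singular-value
# shrinkage step for the Schatten-p term (the exponents q, p and the reweighted
# shrinkage rule are illustrative assumptions).
import numpy as np

def lq_penalty(Z, q=0.5):
    """Cosparsity term: sum_i |z_i|^q on the analysis coefficients Z = Psi @ X."""
    return np.sum(np.abs(Z) ** q)

def schatten_p(X, p=0.5):
    """Schatten-p quasi-norm: sum_i sigma_i(X)^p."""
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(s ** p)

def schatten_p_shrink(X, tau, p=0.5, eps=1e-8):
    """One reweighted singular-value shrinkage step toward a low Schatten-p norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Reweighted soft threshold: small singular values are penalized more heavily (p < 1).
    w = p * (s + eps) ** (p - 1)
    s_new = np.maximum(s - tau * w, 0.0)
    return U @ np.diag(s_new) @ Vt

# Toy usage on a low-rank 8-channel EEG-like matrix corrupted by noise.
rng = np.random.default_rng(1)
X = rng.normal(size=(8, 2)) @ rng.normal(size=(2, 256)) + 0.1 * rng.normal(size=(8, 256))
X_shrunk = schatten_p_shrink(X, tau=0.5)
print(schatten_p(X_shrunk) < schatten_p(X))   # shrinkage lowers the Schatten-p value
```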


2018, Vol 2018, pp. 1-8
Author(s): Mingxia Chen, Jing Wang, Xueqing Li, Xiaolong Sun

In recent years, manifold learning methods have been widely used in data classification to tackle the curse of dimensionality, since they can discover the potential intrinsic low-dimensional structure of high-dimensional data. Given partially labeled data, semi-supervised manifold learning algorithms have been proposed to predict the labels of the unlabeled points while taking label information into account. However, these semi-supervised manifold learning algorithms are not robust to noisy points, especially when the labeled data contain noise. In this paper, we propose a framework for robust semi-supervised manifold learning (RSSML) to address this problem. The noise levels of the labeled points are first estimated, and a regularization term is then constructed to reduce the impact of noisy labeled points. A new robust semi-supervised optimization model is obtained by adding this regularization term to the traditional semi-supervised optimization model. Numerical experiments are given to show the improvement and efficiency of RSSML on noisy data sets.
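
A minimal sketch of the kind of noise-aware label propagation the abstract describes is given below. The k-nearest-neighbour graph, the neighbour-disagreement confidence estimate, and the closed-form solve are illustrative assumptions standing in for RSSML's actual noise-level prediction and regularization term.

```python
# Minimal sketch: graph-based semi-supervised labeling where each labeled point's
# influence is down-weighted by an estimated noise level (illustrative assumptions).
import numpy as np
from sklearn.neighbors import kneighbors_graph

def robust_label_propagation(X, y, labeled_mask, k=10, mu=1.0):
    """X: (n, d) data; y: (n,) integer labels (values ignored where unlabeled);
    labeled_mask: (n,) boolean array marking labeled points."""
    n = X.shape[0]
    classes = np.unique(y[labeled_mask])
    Y = np.zeros((n, classes.size))
    Y[labeled_mask, np.searchsorted(classes, y[labeled_mask])] = 1.0

    # Symmetric k-NN adjacency and unnormalized graph Laplacian.
    W = kneighbors_graph(X, k, mode='connectivity', include_self=False).toarray()
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(axis=1)) - W

    # Noise-level proxy: fraction of labeled neighbours agreeing with a point's label;
    # clean points get confidence near 1, likely-noisy points near 0.
    conf = np.zeros(n)
    for i in np.where(labeled_mask)[0]:
        nbrs = np.where(W[i] > 0)[0]
        nbrs = nbrs[labeled_mask[nbrs]]
        conf[i] = 1.0 if nbrs.size == 0 else np.mean(y[nbrs] == y[i])

    # Closed-form solution of min_F tr(F^T L F) + mu * sum_i conf_i ||f_i - y_i||^2.
    Lam = mu * np.diag(conf)
    F = np.linalg.solve(L + Lam + 1e-6 * np.eye(n), Lam @ Y)
    return classes[F.argmax(axis=1)]
```

Unlabeled points receive zero confidence, so only the graph smoothness term constrains them, while labeled points whose neighbours disagree with them contribute little to the fitting term.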


1984
Author(s): M. A. Montazer, Colin G. Drury
