Robust sparse low-rank embedding for image dimension reduction

2021 ◽  
Vol 113 ◽  
pp. 107907
Author(s):  
Zhonghua Liu ◽  
Yue Lu ◽  
Zhihui Lai ◽  
Weihua Ou ◽  
Kaibing Zhang

2013 ◽  
Vol 42 (3) ◽  
pp. 320-325
Author(s):  
DU Bo ◽  
ZHANG Le-fei ◽  
ZHANG Liang-pei ◽  
HU Wen-bin

Author(s):  
Seyyed Ali Ahmadi ◽  
Nasser Mehrshad ◽  
Seyyed Mohammad Razavi

Containing hundreds of spectral bands (features), hyperspectral images (HSIs) offer strong discriminative power for land cover classification. Traditional HSI processing methods assign equal importance to all bands in the original feature space (OFS), even though different spectral bands play different roles in identifying samples of different classes. To capture the relative importance of each feature, we learn a weighting matrix and obtain the relative weighted feature space (RWFS) as an enriched feature space for HSI data analysis in this paper. To overcome the scarcity of labeled samples, a common difficulty in HSI data analysis, we extend our method to a semisupervised framework. To transfer the available knowledge to unlabeled samples, we employ graph-based clustering in which low-rank representation (LRR) defines the similarity function for the graph. Once the RWFS is constructed, any dimension reduction method and classification algorithm can be applied in it. Experimental results on two well-known HSI data sets show that several dimension reduction algorithms perform better in the new weighted feature space.
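As an illustrative sketch only: the paper learns its weighting matrix jointly (with LRR defining the graph similarity in the semisupervised case), whereas here a simple per-band Fisher score stands in for the learned weights, to show the reweight-then-reduce pipeline the abstract describes.

```python
import numpy as np

def fisher_band_weights(X, y):
    """Per-band Fisher score: between-class variance over within-class variance.
    A hypothetical stand-in for the paper's learned weighting matrix."""
    classes = np.unique(y)
    overall_mean = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        between += len(Xc) * (mu_c - overall_mean) ** 2
        within += ((Xc - mu_c) ** 2).sum(axis=0)
    return between / (within + 1e-12)

def weighted_pca(X, weights, n_components):
    """Scale each band by its weight (RWFS-style), then apply any DR method;
    here, projection onto the top principal axes."""
    Xw = (X - X.mean(axis=0)) * weights
    _, _, Vt = np.linalg.svd(Xw, full_matrices=False)
    return Xw @ Vt[:n_components].T

# Toy usage: 100 pixels, 50 spectral bands, 3 land-cover classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
y = rng.integers(0, 3, size=100)
Z = weighted_pca(X, fisher_band_weights(X, y), n_components=10)
print(Z.shape)  # (100, 10)
```

The point of the sketch is the separation the abstract emphasizes: once the features are reweighted, the downstream dimension reduction step is interchangeable.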


2018 ◽  
Vol 30 (2) ◽  
pp. 477-504 ◽  
Author(s):  
Hiroaki Sasaki ◽  
Voot Tangkaratt ◽  
Gang Niu ◽  
Masashi Sugiyama

Sufficient dimension reduction (SDR) aims to find a low-rank projection matrix in the input space such that information about the output data is maximally preserved. Among various approaches to SDR, a promising method is based on the eigendecomposition of the outer product of the gradient of the conditional density of output given input. In this letter, we propose a novel estimator of the gradient of the logarithmic conditional density that directly fits a linear-in-parameter model to the true gradient under the squared loss. Thanks to this simple least-squares formulation, its solution can be computed efficiently in closed form. We then develop a new SDR method based on the proposed gradient estimator. We theoretically prove that the proposed gradient estimator, as well as the SDR solution obtained from it, achieves the optimal parametric convergence rate. Finally, we experimentally demonstrate that our SDR method compares favorably with existing approaches in both accuracy and computational efficiency on a variety of artificial and benchmark data sets.
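A minimal sketch of the eigendecomposition stage described above, not the paper's estimator: the gradients of log p(y|x) are computed analytically from a known synthetic model (Gaussian noise around sin(b^T x)) rather than fitted by the proposed least-squares method, so this only illustrates how the averaged gradient outer product recovers the projection direction.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, sigma = 5, 2000, 0.1
b = np.array([1.0, 2.0, 0.0, 0.0, -1.0])
b /= np.linalg.norm(b)                     # true 1-d central subspace

X = rng.normal(size=(n, d))
y = np.sin(X @ b) + sigma * rng.normal(size=n)

# For y | x ~ N(sin(b^T x), sigma^2):
#   grad_x log p(y|x) = (y - sin(b^T x)) / sigma^2 * cos(b^T x) * b
resid = (y - np.sin(X @ b)) / sigma**2
grads = (resid * np.cos(X @ b))[:, None] * b[None, :]   # shape (n, d)

# Average outer product of the gradients, then take its top eigenvector.
M = grads.T @ grads / n
eigvals, eigvecs = np.linalg.eigh(M)       # eigenvalues in ascending order
b_hat = eigvecs[:, -1]                     # direction of largest eigenvalue

print(abs(b_hat @ b))                      # close to 1: subspace recovered
```

In the method itself, `grads` would come from the closed-form least-squares gradient estimator; everything after that line is the SDR step.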


2020 ◽  
Vol 191 ◽  
pp. 105172 ◽  
Author(s):  
Lin Feng ◽  
Xiangzhu Meng ◽  
Huibing Wang