Spatial-Spectral Multiple Manifold Discriminant Analysis for Dimensionality Reduction of Hyperspectral Imagery

2019, Vol 11 (20), pp. 2414
Author(s): Guangyao Shi, Hong Huang, Jiamin Liu, Zhengying Li, Lihua Wang

Hyperspectral images (HSIs) possess abundant spectral bands and rich spatial information, which can be utilized to discriminate different types of land cover. However, the high dimensionality of this spatial-spectral information commonly causes the Hughes phenomenon. Traditional feature learning methods can reduce the dimensionality of HSI data and preserve the useful intrinsic information, but they ignore the multi-manifold structure in hyperspectral images. In this paper, a novel dimensionality reduction (DR) method called spatial-spectral multiple manifold discriminant analysis (SSMMDA) is proposed for HSI classification. First, several subsets are obtained from the HSI data according to the prior label information. Then, a spectral-domain intramanifold graph is constructed for each submanifold to preserve the local neighborhood structure, while a spatial-domain intramanifold scatter matrix and a spatial-domain intermanifold scatter matrix are constructed for each submanifold to characterize the within-manifold compactness and the between-manifold separability, respectively. Finally, a spatial-spectral combined objective function is designed for each submanifold to obtain an optimal projection, and the discriminative features from the different submanifolds are fused to improve the classification performance on HSI data. SSMMDA can exploit spatial-spectral combined information and reveal the intrinsic multi-manifold structure in HSI. Experiments on three public HSI data sets demonstrate that the proposed SSMMDA method achieves better classification accuracy than many state-of-the-art methods.
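To make the per-submanifold projection step concrete, the sketch below builds a heat-kernel neighborhood graph and class-wise scatter matrices, solves a generalized eigenproblem per submanifold, and fuses the projected features by concatenation. It is a minimal NumPy illustration under simplifying assumptions (ordinary class scatter stands in for the spatial-domain scatter matrices, and k, sigma, and n_components are illustrative), not the authors' exact formulation.

```python
# Minimal sketch of a per-submanifold projection in the spirit of SSMMDA.
import numpy as np
from scipy.linalg import eigh

def rbf_adjacency(X, k=10, sigma=1.0):
    """k-NN graph with heat-kernel weights (spectral-domain intramanifold graph)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    for i in range(X.shape[0]):
        idx = np.argsort(d2[i])[1:k + 1]
        W[i, idx] = np.exp(-d2[i, idx] / (2 * sigma ** 2))
    return np.maximum(W, W.T)

def submanifold_projection(X_c, X_rest, n_components=10, k=10):
    """Projection for one labelled submanifold X_c against the remaining classes X_rest."""
    mu_c, mu_r = X_c.mean(0), X_rest.mean(0)
    # within-manifold (compactness) and between-manifold (separability) scatter
    Sw = (X_c - mu_c).T @ (X_c - mu_c)
    Sb = np.outer(mu_c - mu_r, mu_c - mu_r) * len(X_c)
    # neighborhood-preserving Laplacian term from the spectral-domain graph
    W = rbf_adjacency(X_c, k=k)
    L = np.diag(W.sum(1)) - W
    Sl = X_c.T @ L @ X_c
    # maximize separability while penalizing within-manifold spread
    eigvals, eigvecs = eigh(Sb, Sw + Sl + 1e-6 * np.eye(X_c.shape[1]))
    return eigvecs[:, ::-1][:, :n_components]

# toy usage: two "classes" of random spectra, features fused by concatenation
rng = np.random.default_rng(0)
X0, X1 = rng.normal(0, 1, (50, 30)), rng.normal(1, 1, (50, 30))
P0 = submanifold_projection(X0, X1)
P1 = submanifold_projection(X1, X0)
features = np.hstack([np.vstack([X0, X1]) @ P0, np.vstack([X0, X1]) @ P1])
print(features.shape)  # (100, 20)
```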

2021, Vol 13 (7), pp. 1363
Author(s): Guangyao Shi, Fulin Luo, Yiming Tang, Yuan Li

Graph learning is an effective dimensionality reduction (DR) approach for analyzing the intrinsic properties of high-dimensional data and has been widely used for DR of hyperspectral image (HSI) data, but most graph-based methods ignore the collaborative relationship between sample pairs. In this paper, a novel supervised spectral DR method called local constrained manifold structure collaborative preserving embedding (LMSCPE) is proposed for HSI classification. First, a novel local constrained collaborative representation (CR) model is designed based on CR theory, which can obtain more effective collaborative coefficients to characterize the relationship between sample pairs. Then, an intraclass collaborative graph and an interclass collaborative graph are constructed to enhance the intraclass compactness and the interclass separability, and a local neighborhood graph is constructed to preserve the local neighborhood structure of the HSI. Finally, an optimal objective function is designed to obtain a discriminant projection matrix, from which the discriminative features of the various land cover types can be obtained. LMSCPE can characterize the collaborative relationship between sample pairs and explore the intrinsic geometric structure in HSI. Experiments on three benchmark HSI data sets show that the proposed LMSCPE method is superior to state-of-the-art DR methods for HSI classification.
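As a rough illustration of how collaborative-representation coefficients can drive graph construction, the sketch below codes each sample over the remaining samples with an l2-regularized (ridge) solution, splits the resulting weights into intraclass and interclass graphs, and solves a generalized eigenproblem for a projection. The locality constraint and the paper's exact objective are simplified; lam and n_components are assumed values.

```python
# CR-graph construction sketch in the spirit of LMSCPE.
import numpy as np
from scipy.linalg import eigh

def cr_coefficients(X, lam=1e-2):
    """Collaborative coefficients: code each row of X over all other rows (ridge solution)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        D = np.delete(X, i, axis=0)                      # dictionary without sample i
        coef = np.linalg.solve(D @ D.T + lam * np.eye(n - 1), D @ X[i])
        W[i, np.arange(n) != i] = np.abs(coef)
    return (W + W.T) / 2

def lmscpe_like_projection(X, y, n_components=10, lam=1e-2):
    W = cr_coefficients(X, lam)
    same = (y[:, None] == y[None, :]).astype(float)
    W_intra, W_inter = W * same, W * (1 - same)          # intra-/inter-class CR graphs
    L_intra = np.diag(W_intra.sum(1)) - W_intra
    L_inter = np.diag(W_inter.sum(1)) - W_inter
    # keep intra-class neighbors compact, push inter-class neighbors apart
    A = X.T @ L_inter @ X
    B = X.T @ L_intra @ X + 1e-6 * np.eye(X.shape[1])
    eigvals, eigvecs = eigh(A, B)
    return eigvecs[:, ::-1][:, :n_components]

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))
y = np.repeat([0, 1, 2], 20)
P = lmscpe_like_projection(X, y)
print((X @ P).shape)  # (60, 10)
```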


2019, Vol 11 (2), pp. 109
Author(s): Xiaoyan Li, Lefei Zhang, Jane You

A hyperspectral image (HSI) contains a great number of spectral bands for each pixel, which limits the ability of conventional image classification methods to distinguish the land-cover type of each pixel. Dimensionality reduction is an effective way to improve classification performance. Linear discriminant analysis (LDA) is a popular dimensionality reduction method for HSI classification, which assumes that all samples obey the same distribution. However, different samples may contribute differently to the computation of the scatter matrices. To address the problem of feature redundancy, a new supervised HSI classification method based on locally weighted discriminant analysis (LWDA) is presented. The proposed LWDA method constructs a weighted discriminant scatter matrix model and an optimal projection matrix model for each training sample on the basis of discriminant information and spatial-spectral information. For each test sample, LWDA searches for its nearest training sample using spatial information and then uses the corresponding projection matrix to project the test sample and all training samples into a low-dimensional feature space. LWDA can effectively preserve the spatial-spectral local structures of the original HSI data and improve the discriminating power of the projected data for the final classification. Experimental results on two real-world HSI datasets show the effectiveness of the proposed LWDA method compared with some state-of-the-art algorithms. In particular, when the data partition factor is small, i.e., 0.05, the overall accuracy obtained by LWDA increases by about 20% for Indian Pines and 17% for Kennedy Space Center (KSC) compared with the results obtained when directly using the original high-dimensional data.
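A simplified sketch of the locally weighted idea: each anchor training sample induces weighted within-class and between-class scatter matrices (weights from a heat kernel on spectral distance) and its own LDA-style projection, and a test sample is projected with the matrix of its nearest training sample. Spatial coordinates and the paper's exact weighting scheme are omitted; sigma and n_components are illustrative.

```python
# Locally weighted discriminant projection sketch in the spirit of LWDA.
import numpy as np
from scipy.linalg import eigh

def local_projection(x0, X, y, sigma=1.0, n_components=5):
    """LDA-like projection with samples weighted by similarity to the anchor x0."""
    w = np.exp(-((X - x0) ** 2).sum(1) / (2 * sigma ** 2))
    mu = np.average(X, axis=0, weights=w)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc, wc = X[y == c], w[y == c]
        mc = np.average(Xc, axis=0, weights=wc)
        d = Xc - mc
        Sw += (d * wc[:, None]).T @ d                       # weighted within-class scatter
        Sb += wc.sum() * np.outer(mc - mu, mc - mu)         # weighted between-class scatter
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(X.shape[1]))
    return vecs[:, ::-1][:, :n_components]

def lwda_like_transform(X_test, X_train, y_train):
    Z = []
    for x in X_test:
        nearest = np.argmin(((X_train - x) ** 2).sum(1))    # nearest training sample
        P = local_projection(X_train[nearest], X_train, y_train)
        Z.append(x @ P)
    return np.array(Z)

rng = np.random.default_rng(1)
Xtr = np.vstack([rng.normal(0, 1, (30, 20)), rng.normal(2, 1, (30, 20))])
ytr = np.repeat([0, 1], 30)
Xte = rng.normal(1, 1, (5, 20))
print(lwda_like_transform(Xte, Xtr, ytr).shape)  # (5, 5)
```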


2019, Vol 11 (9), pp. 1114
Author(s): Sixiu Hu, Jiangtao Peng, Yingxiong Fu, Luoqing Li

By means of joint sparse representation (JSR) and kernel representation, kernel joint sparse representation (KJSR) models can effectively model the intrinsic nonlinear relations of hyperspectral data and better exploit spatial neighborhood structure to improve the classification performance of hyperspectral images. However, the performance of KJSR is greatly affected by noisy or inhomogeneous pixels around the central test pixel in the spatial domain. Motivated by the idea of self-paced learning (SPL), this paper proposes a self-paced KJSR (SPKJSR) model to adaptively learn weights and sparse coefficient vectors for different neighboring pixels in the kernel-based feature space. The SPL strategy learns a weight that indicates the difficulty of each pixel within a spatial neighborhood. By assigning small weights to unimportant or complex pixels, the negative effect of inhomogeneous or noisy neighboring pixels can be suppressed, making SPKJSR considerably more robust. Experimental results on the Indian Pines and Salinas hyperspectral data sets demonstrate that SPKJSR is much more effective than traditional JSR and KJSR models.
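The self-paced weighting rule can be illustrated in a simplified linear (non-kernel) joint-representation setting: neighboring pixels are jointly coded over a dictionary with ridge regression, and SPL repeatedly drops the neighbors whose reconstruction residual is currently too large, admitting harder pixels as iterations proceed. The kernel mapping and the paper's sparse solver are replaced by simpler stand-ins; keep_frac and growth are assumed parameters.

```python
# Self-paced neighbor weighting sketch in the spirit of SPKJSR.
import numpy as np

def joint_code(D, Y, weights, lam=1e-2):
    """Ridge coding of a weighted neighborhood Y (d x m) over dictionary D (d x n)."""
    Yw = Y * weights[None, :]
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ Yw)

def self_paced_residuals(D, Y, n_iter=5, keep_frac=0.6, growth=0.05):
    weights = np.ones(Y.shape[1])
    for _ in range(n_iter):
        A = joint_code(D, Y, weights)
        residuals = np.linalg.norm(Y - D @ A, axis=0) ** 2     # per-pixel loss
        thresh = np.quantile(residuals, min(keep_frac, 1.0))   # current SPL "age"
        weights = (residuals <= thresh).astype(float)          # hard SPL rule
        keep_frac += growth                                    # admit harder pixels later
    return weights, residuals

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 20))          # dictionary (atoms in columns)
Y = rng.normal(size=(50, 9))           # a 3x3 spatial neighborhood of pixels
Y[:, 0] += 5 * rng.normal(size=50)     # one noisy neighbor
w, r = self_paced_residuals(D, Y)
print(w)   # neighbors with large residuals (like the noisy one) typically keep weight 0
```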


Sensors, 2020, Vol 20 (5), pp. 1262
Author(s): Xiaoping Fang, Yaoming Cai, Zhihua Cai, Xinwei Jiang, Zhikun Chen

A hyperspectral image (HSI) consists of hundreds of narrow spectral bands with rich spectral and spatial information. The Extreme Learning Machine (ELM) has been widely used for HSI analysis. However, the classical ELM is difficult to use for sparse feature learning due to its randomly generated hidden layer. In this paper, we propose a novel unsupervised sparse feature learning approach, called Evolutionary Multiobjective-based ELM (EMO-ELM), and apply it to HSI feature extraction. Specifically, we formulate the construction of an ELM Autoencoder (ELM-AE) as a multiobjective optimization problem that takes the sparsity of the hidden-layer outputs and the reconstruction error as two conflicting objectives. We then adopt an Evolutionary Multiobjective Optimization (EMO) method to solve the two objectives simultaneously. To find the best solution in the Pareto set and construct the best trade-off feature extractor, a curvature-based method is proposed to focus on the knee area of the Pareto front. Benefiting from EMO, the proposed EMO-ELM is less prone to falling into a local minimum and has fewer trainable parameters than gradient-based autoencoders. Experiments on two real HSIs demonstrate that the features learned by EMO-ELM not only preserve better sparsity but also achieve superior separability compared with many existing feature learning methods.
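A toy sketch of the two conflicting objectives is given below: for randomly sampled ELM-AE hidden-layer weights, it evaluates reconstruction error and the sparsity of the hidden activations and keeps the non-dominated (Pareto) candidates. A real implementation would use a full EMO algorithm (e.g., NSGA-II) and the paper's curvature-based knee selection; the population size and the sigmoid/L1 choices here are illustrative.

```python
# Two-objective ELM-AE evaluation plus a simple Pareto filter (EMO-ELM sketch).
import numpy as np

def elm_ae_objectives(X, W, b):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))          # hidden activations (sigmoid)
    beta, *_ = np.linalg.lstsq(H, X, rcond=None)    # ELM output weights (least squares)
    recon_err = np.mean((X - H @ beta) ** 2)        # objective 1: reconstruction error
    sparsity = np.mean(np.abs(H))                   # objective 2: lower = sparser hidden output
    return recon_err, sparsity

def pareto_front(points):
    """Indices of non-dominated points (both objectives minimized)."""
    pts = np.asarray(points)
    keep = [not any((pts[j] <= p).all() and (pts[j] < p).any()
                    for j in range(len(pts)) if j != i)
            for i, p in enumerate(pts)]
    return np.where(keep)[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 30))
population = [(rng.normal(size=(30, 15)), rng.normal(size=15)) for _ in range(40)]
objs = [elm_ae_objectives(X, W, b) for W, b in population]
print("non-dominated candidates:", pareto_front(objs))
```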


2016, Vol 2016, pp. 1-10
Author(s): Zhicheng Lu, Zhizheng Liang

Linear discriminant analysis has been widely studied in data mining and pattern recognition. However, when performing the eigen-decomposition on the matrix pair (within-class scatter matrix and between-class scatter matrix), one can in some cases find degenerated eigenvalues, which makes the information in the eigen-subspace corresponding to such a degenerated eigenvalue indistinguishable. To address this problem, we revisit linear discriminant analysis in this paper and propose a stable and effective algorithm for linear discriminant analysis in terms of an optimization criterion. By examining the properties of this criterion, we find that the eigenvectors in an eigen-subspace may be indistinguishable when a degenerated eigenvalue occurs. Inspired by the maximum margin criterion (MMC), we embed MMC into the eigen-subspace corresponding to the degenerated eigenvalue to recover the discriminability of the eigenvectors in that subspace. Since the proposed algorithm can handle the degenerated case of eigenvalues, it not only handles the small-sample-size problem but also enables projection vectors to be selected from the null space of the between-class scatter matrix. Extensive experiments on several face image and microarray data sets are conducted to evaluate the classification performance of the proposed algorithm, and the experimental results show that our method has smaller standard deviations than other methods in most cases.
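The following compact sketch illustrates the tie-breaking idea: after the generalized eigen-decomposition of (Sb, Sw), eigenvectors sharing a (nearly) repeated eigenvalue span a subspace in which the LDA criterion cannot rank them, so the MMC matrix Sb - Sw restricted to that subspace is diagonalized to re-order them. The tolerance tol and the scatter-matrix construction are simplifications for illustration.

```python
# LDA with an MMC-based tie-break for degenerated eigenvalues (sketch).
import numpy as np
from scipy.linalg import eigh

def scatter_matrices(X, y):
    mu = X.mean(0)
    Sw = np.zeros((X.shape[1], X.shape[1]))
    Sb = np.zeros_like(Sw)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        Sw += (Xc - mc).T @ (Xc - mc)
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)
    return Sb, Sw

def lda_with_mmc_tiebreak(X, y, tol=1e-8):
    Sb, Sw = scatter_matrices(X, y)
    vals, vecs = eigh(Sb, Sw + tol * np.eye(X.shape[1]))
    vals, vecs = vals[::-1], vecs[:, ::-1]                  # descending eigenvalues
    i = 0
    while i < len(vals):
        j = i
        while j + 1 < len(vals) and abs(vals[j + 1] - vals[i]) < tol * max(1, abs(vals[i])):
            j += 1
        if j > i:                                           # degenerated eigenvalue block
            V = vecs[:, i:j + 1]
            M = V.T @ (Sb - Sw) @ V                         # MMC restricted to the block
            _, R = eigh(M)
            vecs[:, i:j + 1] = V @ R[:, ::-1]               # re-rank by margin
        i = j + 1
    return vals, vecs

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 12))
y = np.repeat([0, 1, 2], 30)
vals, vecs = lda_with_mmc_tiebreak(X, y)
print(vecs.shape)  # (12, 12)
```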


2020, Vol 12 (12), pp. 2033
Author(s): Xiaofei Yang, Xiaofeng Zhang, Yunming Ye, Raymond Y. K. Lau, Shijian Lu, ...

Accurate hyperspectral image classification has been an important yet challenging task for years. With the recent success of deep learning in various tasks, 2-dimensional (2D) and 3-dimensional (3D) convolutional neural networks (CNNs) have been exploited to capture spectral or spatial information in hyperspectral images. However, few approaches make use of both spectral and spatial information simultaneously, which is critical for accurate hyperspectral image classification. This paper presents a novel Synergistic Convolutional Neural Network (SyCNN) for accurate hyperspectral image classification. The SyCNN consists of a hybrid module that combines 2D and 3D CNNs for feature learning and a data interaction module that fuses spectral and spatial hyperspectral information. In addition, it introduces a 3D attention mechanism before the fully-connected layer, which helps to filter out interfering features and information effectively. Extensive experiments on three public benchmark datasets show that the proposed SyCNN clearly outperforms state-of-the-art techniques that use 2D/3D CNNs.
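A rough PyTorch sketch of the hybrid structure described above: a 3D-convolutional branch over the spectral cube, a 2D-convolutional branch over the band dimension, fusion by concatenation as a stand-in for the data interaction module, and a sigmoid-gated 3D attention before the classifier. All layer sizes, the fusion rule, and the attention form are assumptions for illustration, not the authors' exact architecture.

```python
# Hybrid 2D/3D CNN sketch in the spirit of SyCNN.
import torch
import torch.nn as nn

class SyCNNSketch(nn.Module):
    def __init__(self, n_bands=30, patch=9, n_classes=16):
        super().__init__()
        self.branch3d = nn.Sequential(                     # spectral-spatial 3D branch
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)), nn.ReLU())
        self.branch2d = nn.Sequential(                     # spatial 2D branch over bands
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU())
        self.attn = nn.Conv3d(16, 16, kernel_size=1)       # 3D attention gate
        self.fc = nn.Linear(16 * n_bands * patch * patch + 16 * patch * patch, n_classes)

    def forward(self, cube):                               # cube: (B, 1, bands, H, W)
        f3d = self.branch3d(cube)                          # (B, 16, bands, H, W)
        f2d = self.branch2d(cube.squeeze(1))               # (B, 16, H, W)
        f3d = f3d * torch.sigmoid(self.attn(f3d))          # filter interfering features
        fused = torch.cat([f3d.flatten(1), f2d.flatten(1)], dim=1)   # simple fusion
        return self.fc(fused)

x = torch.randn(2, 1, 30, 9, 9)                            # two 9x9 patches, 30 bands
print(SyCNNSketch()(x).shape)                              # torch.Size([2, 16])
```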

