Hyperspectral Image Classification via Multi-Feature-Based Correlation Adaptive Representation

2021, Vol 13 (7), pp. 1253
Author(s): Guichi Liu, Lei Gao, Lin Qi

In recent years, representation-based methods have attracted increasing attention in hyperspectral image (HSI) classification. Among them, the sparse representation-based classifier (SRC) and the collaborative representation-based classifier (CRC) are the two representative methods. However, SRC focuses only on sparsity and ignores data correlation information, while CRC encourages grouping correlated variables together but lacks the ability to select variables. As a result, neither SRC nor CRC produces satisfactory performance. To address these issues, this work proposes a correlation adaptive representation (CAR), enabling a CAR-based classifier (CARC). Specifically, the proposed CARC explores sparsity and data correlation information jointly, generating a novel representation model that is adaptive to the structure of the dictionary. To further exploit the correlation between the test samples and the training samples, a distance-weighted Tikhonov regularization is integrated into the proposed CARC. Furthermore, to handle the small-training-sample problem in HSI classification, a multi-feature correlation adaptive representation-based classifier (MFCARC) and MFCARC with Tikhonov regularization (MFCART) are presented, improving classification performance by exploiting the complementary information across multiple features. The experimental results show the superiority of the proposed methods over state-of-the-art algorithms.
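As background for the representation-based classifiers discussed above, the sketch below shows a plain collaborative representation classifier with a distance-weighted Tikhonov regularizer, in the spirit of the regularization the paper integrates into CARC but not the authors' exact CAR model. The dictionary X, label array, test pixel y, and regularization weight lam are illustrative assumptions.

```python
import numpy as np

def crc_tikhonov(X, labels, y, lam=1e-2):
    """Collaborative representation with a distance-weighted Tikhonov term (sketch).

    X      : (n_bands, n_train) dictionary of training pixels (columns)
    labels : (n_train,) class label of each training pixel
    y      : (n_bands,) test pixel
    lam    : regularization weight (illustrative value)
    """
    # Distance-weighted Tikhonov matrix: atoms far from y are penalized more.
    gamma = np.diag(np.linalg.norm(X - y[:, None], axis=0))
    # Closed-form collaborative representation coefficients.
    alpha = np.linalg.solve(X.T @ X + lam * gamma.T @ gamma, X.T @ y)
    # Assign the class with the smallest class-wise reconstruction residual.
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - X[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

The closed-form solve is what makes collaborative representation cheap compared with sparse coding; the distance weighting shrinks the coefficients of training pixels that are spectrally far from the test pixel.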

Author(s): P. Zhong, Z. Q. Gong, C. Schönlieb

In recent years, research in remote sensing has demonstrated that deep architectures with multiple layers can potentially extract abstract and invariant features for better hyperspectral image classification. Since typical real-world hyperspectral image classification tasks cannot provide enough training samples for a supervised deep model, such as convolutional neural networks (CNNs), this work investigates deep belief networks (DBNs), which allow unsupervised training. A DBN trained on limited training samples usually has many "dead" (never responding) or "potentially over-tolerant" (always responding) latent factors (neurons), which decrease the DBN's description ability and thus ultimately decrease hyperspectral image classification performance. This work proposes a new diversified DBN by introducing a diversity-promoting prior over the latent factors during the DBN pre-training and fine-tuning procedures. The diversity-promoting prior encourages the latent factors to be uncorrelated, so that each latent factor focuses on modelling unique information and, taken together, the factors capture a large proportion of the information, thereby increasing the description ability and classification performance of the diversified DBNs. The proposed method was evaluated on a well-known real-world hyperspectral image dataset. The experiments demonstrate that the diversified DBNs obtain much better results than the original DBNs and comparable or even better performance than other recent hyperspectral image classification methods.
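The abstract does not give the exact form of the diversity-promoting prior, but its intended effect, discouraging correlated latent factors, can be illustrated with a simple decorrelation penalty on hidden activations. The sketch below, with an assumed activation matrix H, is one such penalty, not the paper's prior.

```python
import numpy as np

def diversity_penalty(H, eps=1e-8):
    """Decorrelation penalty over latent factors (illustrative, not the paper's prior).

    H : (n_samples, n_hidden) hidden-unit activations of a DBN layer.
    Returns the sum of squared off-diagonal correlations between hidden units;
    adding this term to the training objective pushes latent factors to be
    uncorrelated, so fewer units stay "dead" or "over-tolerant".
    """
    Hc = H - H.mean(axis=0)                       # center each unit
    std = Hc.std(axis=0) + eps
    C = (Hc / std).T @ (Hc / std) / H.shape[0]    # correlation matrix of hidden units
    off_diag = C - np.diag(np.diag(C))
    return np.sum(off_diag ** 2)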


Author(s): Haoliang Yuan

Sparse representation classification (SRC) has been successfully applied to hyperspectral image (HSI) classification. A test sample (pixel) can be linearly represented by a few training samples of the training set, and the class label of the test sample is then decided by the reconstruction residuals. To incorporate spatial information and improve classification performance, a patch matrix, which includes a spatial neighborhood set, is used to replace the original pixel. Generally, the objective function for the reconstruction residuals is expressed with the Frobenius norm, which treats all elements of the reconstruction residuals in the same way. However, when a patch is located at an image edge, the samples in the patch may belong to different classes, and the Frobenius norm is not suitable for computing the reconstruction residuals. In this paper, we propose a robust patch-based sparse representation classification (RPSRC) based on the [Formula: see text]-norm. An iterative algorithm is given to compute RPSRC efficiently. Extensive experimental results on two real-life HSI datasets demonstrate the effectiveness of RPSRC.
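For context, the basic SRC decision rule mentioned above, sparse coding of a test pixel over the training dictionary followed by class-wise reconstruction residuals, can be sketched as below. The orthogonal matching pursuit solver and the sparsity level are illustrative choices, not the paper's robust patch-based formulation.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_predict(D, labels, y, n_nonzero=10):
    """Basic pixel-wise SRC (not the robust patch-based RPSRC of the paper).

    D         : (n_bands, n_train) dictionary of training pixels (columns)
    labels    : (n_train,) class label of each training pixel
    y         : (n_bands,) test pixel
    n_nonzero : sparsity level (illustrative)
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero, fit_intercept=False)
    omp.fit(D, y)                        # sparse code of y over the dictionary
    alpha = omp.coef_
    classes = np.unique(labels)
    residuals = [np.linalg.norm(y - D[:, labels == c] @ alpha[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]
```

The patch-based variant in the paper replaces the pixel y with a patch matrix and measures the residual with a norm that down-weights outlier pixels within the patch.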


2013, Vol 347-350, pp. 2241-2245
Author(s): Xiao Yuan Jing, Xiang Long Ge, Yong Fang Yao, Feng Nan Yu

When the number of labeled training samples is very small, little sample information is available and the recognition rates of traditional image recognition methods are unsatisfactory. However, other databases often contain related information that is helpful for feature extraction, so it is natural to take full advantage of the data in those databases through transfer learning. In this paper, we employ the idea of transferring samples and propose a feature extraction approach based on sample set reconstruction. We realize the approach by reconstructing the training sample set using the difference information among the samples of other databases. Experimental results on three widely used face databases (AR, FERET, and CAS-PEAL) demonstrate the efficacy of the proposed approach in terms of classification performance.
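The abstract does not spell out the reconstruction rule, but one common way to use difference information from an auxiliary database is to add its intra-class difference vectors to the few labeled target samples. The sketch below illustrates that idea under assumed array shapes; it is not necessarily the authors' exact scheme.

```python
import numpy as np

def augment_with_differences(X_target, y_target, X_aux, y_aux, seed=None):
    """Enlarge a small labeled set with intra-class differences from another database.

    X_target, y_target : few labeled target samples, (n_t, d) and (n_t,)
    X_aux, y_aux       : auxiliary database samples, (n_a, d) and (n_a,)
    Returns an enlarged training set built by transferring auxiliary intra-class
    variation onto target samples (illustrative scheme).
    """
    rng = np.random.default_rng(seed)
    new_samples, new_labels = [], []
    for c in np.unique(y_aux):
        Xc = X_aux[y_aux == c]
        if len(Xc) < 2:
            continue
        i, j = rng.choice(len(Xc), size=2, replace=False)
        diff = Xc[i] - Xc[j]                    # intra-class variation of the auxiliary data
        k = rng.integers(len(X_target))
        new_samples.append(X_target[k] + diff)  # transfer that variation to a target sample
        new_labels.append(y_target[k])
    if not new_samples:
        return X_target, y_target
    X_new = np.vstack([X_target, np.stack(new_samples)])
    y_new = np.concatenate([y_target, np.array(new_labels)])
    return X_new, y_new
```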


2020, Vol 12 (14), pp. 2327
Author(s): Ming-Der Yang, Kai-Hsiang Huang, Hui-Ping Tsai

The critical issue facing hyperspectral image (HSI) classification is the imbalance between dimensionality and the number of available training samples. This study attempted to solve the issue by proposing a method that integrates minimum noise fraction (MNF) and Hilbert–Huang transform (HHT) transformations into artificial neural networks (ANNs) for HSI classification tasks. MNF and HHT function as a feature extractor and an image decomposer, respectively, to minimize the influence of noise and dimensionality and to maximize training-sample efficiency. Experimental results using two benchmark datasets, the Indian Pines (IP) and Pavia University (PaviaU) hyperspectral images, are presented. To optimize the number of essential neurons and training samples in the ANN, 1 to 1000 neurons and four training-sample proportions were tested, and the associated classification accuracies were evaluated. For the IP dataset, the results showed a remarkable classification accuracy of 99.81% with a 30% training sample from the MNF1–14+HHT-transformed image set using 500 neurons. Additionally, a high accuracy of 97.62% using only a 5% training sample was achieved for the MNF1–14+HHT-transformed images. For the PaviaU dataset, the highest classification accuracy was 98.70% with a 30% training sample from the MNF1–14+HHT-transformed image using 800 neurons. In general, accuracy increased with the number of neurons and with the proportion of training samples. However, the accuracy improvement curve became relatively flat when more than 200 neurons were used, which shows that using more discriminative information from transformed images can reduce both the number of neurons needed to adequately describe the data and the complexity of the ANN model. Overall, the proposed method opens new avenues for the use of MNF and HHT transformations in HSI classification, with outstanding accuracy achieved by an ANN.
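To make the pipeline concrete, the sketch below shows a minimal MNF-style transform, with the noise covariance estimated from horizontal pixel differences and a generalized eigendecomposition, followed by a small sklearn neural network. The HHT decomposition step is omitted, and the cube shape, number of components, and network size are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neural_network import MLPClassifier

def mnf_transform(cube, n_components=14):
    """Minimal MNF-style transform of an HSI cube with shape (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    # Crude noise estimate: differences between horizontally adjacent pixels.
    noise = (cube[:, 1:, :] - cube[:, :-1, :]).reshape(-1, bands)
    cov_signal = np.cov(X, rowvar=False)
    cov_noise = np.cov(noise, rowvar=False)
    # Generalized eigenproblem: directions that maximize signal-to-noise variance.
    vals, vecs = eigh(cov_signal, cov_noise)
    order = np.argsort(vals)[::-1][:n_components]
    return (X - X.mean(axis=0)) @ vecs[:, order]      # (rows*cols, n_components)

# Illustrative usage with an assumed cube and label map (-1 marking unlabeled pixels):
# features = mnf_transform(cube, n_components=14)
# mask = label_map.ravel() >= 0
# clf = MLPClassifier(hidden_layer_sizes=(500,), max_iter=500)
# clf.fit(features[mask], label_map.ravel()[mask])
```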


2021, Vol 13 (21), pp. 4262
Author(s): Hanjie Wu, Dan Li, Yujian Wang, Xiaojun Li, Fanqiang Kong, ...

Although most deep-learning-based hyperspectral image (HSI) classification methods achieve great performance, it remains challenging to markedly enhance classification accuracy with small training samples. To tackle this challenge, a novel two-branch spectral–spatial-feature attention network (TSSFAN) for HSI classification is proposed in this paper. Firstly, two inputs with different spectral dimensions and spatial sizes are constructed, which not only reduces the redundancy of the original dataset but also allows the spectral and spatial features to be explored accurately. Then, we design two parallel 3DCNN branches with attention modules: one focuses on extracting spectral features and adaptively learning the more discriminative spectral channels, and the other focuses on exploring spatial features and adaptively learning the more discriminative spatial structures. Next, a feature attention module is constructed to automatically adjust the weights of different features based on their contributions to classification, which remarkably improves the classification performance. Finally, we design a hybrid 3D–2DCNN architecture to obtain the final classification result, which significantly reduces the complexity of the network. Experimental results on three HSI datasets indicate that our TSSFAN method outperforms several state-of-the-art classification methods.
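The abstract does not detail the attention modules, so the sketch below shows a generic squeeze-and-excitation-style channel attention block in PyTorch, illustrating how a branch can adaptively reweight (spectral) channels. It is not the exact TSSFAN design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style channel attention (illustrative, not TSSFAN itself)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial average
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                            # x: (batch, channels, H, W)
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # emphasize discriminative channels
```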


2019, Vol 11 (11), pp. 1325
Author(s): Chen Chen, Yi Ma, Guangbo Ren

Deep learning models, especially convolutional neural networks (CNNs), are widely used in hyperspectral remote sensing image classification. To better apply the CNN model to hyperspectral classification, we propose a CNN model based on the Fletcher–Reeves algorithm (F–R CNN), which uses the Fletcher–Reeves (F–R) algorithm for gradient updating to optimize the convergence performance of the model in classification. Given that few training samples are typically available in practical applications, we further propose a method of increasing the number of samples by adding a certain proportion of perturbed samples, which can also test the anti-interference ability of classification methods. Furthermore, we analyze the anti-interference and convergence performance of the proposed model in terms of different training sample sets, different numbers of batch training samples, and iteration time. In this paper, we describe the experimental process in detail and comprehensively evaluate the proposed model on the classification of CHRIS hyperspectral imagery covering coastal wetlands, and further evaluate it on a commonly used hyperspectral image benchmark dataset. The experimental results show that increasing the training samples and adjusting the number of batch training samples improves the accuracy of both models. When the number of batch training samples is increased to 350, the classification accuracy of the proposed method remains above 80.7%, which is 2.9% higher than that of the traditional CNN, and its time consumption is lower than that of the traditional CNN while maintaining classification accuracy. It can be concluded that the proposed method has anti-interference ability and outperforms the traditional CNN in terms of batch computing adaptability and convergence speed.
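For reference, the core Fletcher–Reeves update that distinguishes the F–R algorithm from plain gradient descent is the conjugate direction d_k = -g_k + beta_k d_{k-1} with beta_k = ||g_k||^2 / ||g_{k-1}||^2. The sketch below applies it to a generic differentiable objective with a fixed step size instead of a line search, which is a simplification of how it would drive CNN weight updates.

```python
import numpy as np

def fletcher_reeves(grad, x0, step=1e-2, iters=100, eps=1e-12):
    """Fletcher–Reeves conjugate gradient with a fixed step size (illustrative).

    grad : function returning the gradient at a point
    x0   : initial parameter vector
    """
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g                                       # first direction: steepest descent
    for _ in range(iters):
        x = x + step * d                         # fixed step stands in for a line search
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g + eps)   # Fletcher–Reeves coefficient
        d = -g_new + beta * d                    # conjugate direction update
        g = g_new
    return x

# Example: minimize f(x) = x^T A x / 2 - b^T x, whose gradient is A x - b.
# A = np.array([[3.0, 1.0], [1.0, 2.0]]); b = np.array([1.0, 1.0])
# x_opt = fletcher_reeves(lambda x: A @ x - b, np.zeros(2))
```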

