VIRTUAL TRAINING SAMPLE GENERATION BY GENERATIVE ADVERSARIAL NETWORKS FOR HYPERSPECTRAL IMAGES CLASSIFICATION

Author(s):  
T. Alipourfard ◽  
H. Arefi

Abstract. Convolutional Neural Networks (CNNs), a well-known deep learning technique, have shown remarkable performance in visual recognition applications. However, using such networks for hyperspectral image classification is challenging and time-consuming due to the high dimensionality of the data and the insufficient number of training samples. Generative Adversarial Networks (GANs) have recently attracted considerable attention as a means of generating virtual training samples. In this paper, we present a new classification framework that integrates multi-channel CNNs with a new generator and discriminator architecture for GANs to overcome the Small Sample Size (SSS) problem in hyperspectral image classification. Further, to reduce the computational cost, subspace dimensionality reduction methods are proposed to obtain the dominant features around each training sample and to generate meaningful virtual samples from the original ones. The proposed framework overcomes the SSS and overfitting problems in classifying hyperspectral images. Experimental results on real, well-known hyperspectral benchmark images show that our strategy improves performance compared to standard CNNs and conventional data augmentation strategies. The overall classification accuracy on the Pavia University and Indian Pines datasets was 99.8% and 94.9%, respectively.
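As a rough illustration of the virtual-sample idea, the sketch below trains a minimal GAN whose generator synthesizes per-pixel feature vectors in a reduced subspace. This is a hedged sketch in PyTorch: the dimensions (D_FEAT, D_NOISE), layer widths, and training loop are illustrative assumptions, not the authors' multi-channel architecture.

```python
# Minimal GAN sketch for virtual training-sample generation (assumed
# setting: pixels reduced to D_FEAT-dimensional vectors, e.g., by PCA).
import torch
import torch.nn as nn

D_FEAT = 30   # assumed reduced dimensionality (illustrative)
D_NOISE = 16  # latent noise dimension (illustrative)

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_NOISE, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, D_FEAT), nn.Tanh(),  # features scaled to [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(D_FEAT, 64), nn.LeakyReLU(0.2),
            nn.Linear(64, 1),  # real/fake logit
        )

    def forward(self, x):
        return self.net(x)

def train_step(G, D, real, opt_g, opt_d):
    """One adversarial update on a batch of real feature vectors."""
    bce = nn.BCEWithLogitsLoss()
    b = real.size(0)
    fake = G(torch.randn(b, D_NOISE))
    # Discriminator: push real toward 1, fake toward 0.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(b, 1)) + bce(D(fake.detach()), torch.zeros(b, 1))
    loss_d.backward()
    opt_d.step()
    # Generator: try to make the discriminator output 1 on fakes.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(b, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

After training one such GAN per class, sampling the generator yields labeled virtual vectors that can be appended to the real training set before CNN training.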

2020 ◽  
Vol 12 (4) ◽  
pp. 664 ◽  
Author(s):  
Binge Cui ◽  
Jiandi Cui ◽  
Yan Lu ◽  
Nannan Guo ◽  
Maoguo Gong

Hyperspectral image classification methods may not achieve good performance when only a limited number of training samples are provided. However, labeling sufficient hyperspectral samples for adequate training is expensive and difficult. In this paper, we propose a novel sample pseudo-labeling method based on sparse representation (SRSPL) for hyperspectral image classification, in which sparse representation is used to select the purest samples to extend the training set. The proposed method consists of three steps. First, intrinsic image decomposition is used to obtain the reflectance components of hyperspectral images. Second, hyperspectral pixels are sparsely represented using an overcomplete dictionary composed of all training samples. Finally, information entropy is defined for the vectorized sparse representation, and the pixels with low information entropy are selected as pseudo-labeled samples to augment the training set. The quality of the generated pseudo-labeled samples is evaluated based on classification accuracy, i.e., overall accuracy, average accuracy, and the Kappa coefficient. Experimental results on four real hyperspectral data sets demonstrate excellent classification performance using the newly added pseudo-labeled samples, which indicates that the generated samples are of high confidence.
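My reading of the selection step, as a minimal sketch (not the authors' code): code each unlabeled pixel over the dictionary of training spectra, compute the entropy of the normalized absolute coefficients, and keep the lowest-entropy (purest) pixels. It uses scikit-learn's orthogonal matching pursuit as a stand-in sparse solver; the function name and the k and keep_ratio parameters are assumptions.

```python
# Sketch of sparse-representation pseudo-labeling (SRSPL-style selection).
import numpy as np
from sklearn.linear_model import orthogonal_mp

def srspl_select(D, atom_labels, X, k=10, keep_ratio=0.1):
    """D: (bands, n_train) dictionary, one training spectrum per column.
    atom_labels: (n_train,) class of each dictionary atom.
    X: (bands, n_pix) unlabeled pixels.
    Returns indices of the selected pixels and their pseudo-labels."""
    A = orthogonal_mp(D, X, n_nonzero_coefs=k)            # (n_train, n_pix)
    P = np.abs(A) / (np.abs(A).sum(axis=0, keepdims=True) + 1e-12)
    H = -(P * np.log(P + 1e-12)).sum(axis=0)              # entropy per pixel
    idx = np.argsort(H)[: max(1, int(keep_ratio * X.shape[1]))]
    # Pseudo-label by the class whose atoms carry the most coefficient energy.
    classes = np.unique(atom_labels)
    energy = np.stack([(A[atom_labels == c][:, idx] ** 2).sum(axis=0)
                       for c in classes])                 # (n_classes, n_sel)
    return idx, classes[energy.argmax(axis=0)]
```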


2018 ◽  
Vol 10 (8) ◽  
pp. 1271 ◽  
Author(s):  
Feng Gao ◽  
Qun Wang ◽  
Junyu Dong ◽  
Qizhi Xu

Hyperspectral image classification has been acknowledged as a fundamental and challenging task in hyperspectral data processing. The abundance of spectral and spatial information provides great opportunities to effectively characterize and identify ground materials. In this paper, we propose a spectral and spatial classification framework for hyperspectral images based on Random Multi-Graphs (RMGs). The RMG is a graph-based ensemble learning method that has rarely been considered in hyperspectral image classification. It has been empirically verified that the semi-supervised RMG deals well with small-sample-size problems, which are very common in hyperspectral image applications. In the proposed method, spatial features are extracted based on linear prediction error analysis and local binary patterns; spatial and spectral features are then stacked into high-dimensional vectors, which are fed into the RMG for classification. By randomly selecting a subset of features to create each graph, the proposed method achieves excellent classification performance. Experiments on three real hyperspectral datasets demonstrate that the proposed method outperforms several closely related methods.
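A schematic stand-in for the RMG ensemble (not the authors' implementation): each base learner builds a graph over a random feature subset and propagates the few available labels, and the per-graph predictions are fused by majority vote. Using scikit-learn's LabelPropagation as the per-graph learner is an assumption made purely for illustration.

```python
# RMG-flavored ensemble sketch: random feature subsets -> graphs -> vote.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

def rmg_like_predict(X, y, n_graphs=20, subset=0.5, seed=0):
    """X: (n_pix, n_feat) stacked spectral+spatial features.
    y: (n_pix,) integer labels in 0..C-1, with -1 marking unlabeled pixels."""
    rng = np.random.default_rng(seed)
    k = max(1, int(subset * X.shape[1]))
    votes = []
    for _ in range(n_graphs):
        cols = rng.choice(X.shape[1], size=k, replace=False)
        model = LabelPropagation(kernel="rbf", gamma=0.5)
        model.fit(X[:, cols], y)           # graph built from this subset
        votes.append(model.transduction_)  # labels for every pixel
    votes = np.stack(votes)                # (n_graphs, n_pix)
    # Majority vote across the graphs.
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```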


2021 ◽  
Vol 13 (18) ◽  
pp. 3590 ◽
Author(s):  
Tianyu Zhang ◽  
Cuiping Shi ◽  
Diling Liao ◽  
Liguo Wang

Convolutional neural networks (CNNs) have exhibited excellent performance in hyperspectral image classification. However, due to the lack of labeled hyperspectral data, it is difficult to achieve high classification accuracy with few training samples. In addition, although some deep learning techniques have been applied to hyperspectral image classification, the abundant information in hyperspectral images means that insufficient spatial–spectral feature extraction remains a problem. To address these issues, a spectral–spatial attention fusion with deformable convolution residual network (SSAF-DCR) is proposed for hyperspectral image classification. The proposed network is composed of three sequentially connected parts, each of which extracts features. In the first part, a dense spectral block is utilized to reuse spectral features as much as possible, followed by a spectral attention block that refines and optimizes the spectral features. In the second part, spatial features are extracted and selected by a dense spatial block and an attention block, respectively. The results of the first two parts are then fused and sent to the third part, where deep spatial features are extracted by the DCR block. These three parts realize the effective extraction of spectral–spatial features, and experimental results on four commonly used hyperspectral datasets demonstrate that the proposed SSAF-DCR method is superior to several state-of-the-art methods with very few training samples.
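To make the DCR idea concrete, here is a minimal sketch of one deformable-convolution residual block, assuming torchvision's DeformConv2d; the channel count and exact wiring are illustrative, not the paper's configuration.

```python
# Sketch of a deformable-convolution residual (DCR) block in PyTorch.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DCRBlock(nn.Module):
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # The offset branch predicts an (x, y) offset for every kernel tap.
        self.offset = nn.Conv2d(channels, 2 * kernel_size * kernel_size,
                                kernel_size, padding=pad)
        self.deform = DeformConv2d(channels, channels, kernel_size, padding=pad)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.deform(x, self.offset(x))  # sample at learned offsets
        return self.act(x + self.bn(out))     # residual connection

# Smoke test on a fake spatial feature map (batch, channels, height, width).
feat = torch.randn(2, 64, 11, 11)
print(DCRBlock(64)(feat).shape)  # torch.Size([2, 64, 11, 11])
```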

