A Spectral Spatial Attention Fusion with Deformable Convolutional Residual Network for Hyperspectral Image Classification

2021 ◽  
Vol 13 (18) ◽  
pp. 3590
Author(s):  
Tianyu Zhang ◽  
Cuiping Shi ◽  
Diling Liao ◽  
Liguo Wang

Convolutional neural networks (CNNs) have exhibited excellent performance in hyperspectral image classification. However, due to the lack of labeled hyperspectral data, it is difficult to achieve high classification accuracy with few training samples. In addition, although some deep learning techniques have been used in hyperspectral image classification, the problem of insufficient spectral–spatial feature extraction still exists because of the abundant information in hyperspectral images. To address these issues, a spectral–spatial attention fusion with a deformable convolution residual network (SSAF-DCR) is proposed for hyperspectral image classification. The proposed network is composed of three sequentially connected parts. In the first part, a dense spectral block is utilized to reuse spectral features as much as possible, followed by a spectral attention block that refines and optimizes the spectral features. In the second part, spatial features are extracted and selected by a dense spatial block and an attention block, respectively. The results of the first two parts are then fused and sent to the third part, where deep spatial features are extracted by the DCR block. These three parts realize the effective extraction of spectral–spatial features, and experimental results on four commonly used hyperspectral datasets demonstrate that the proposed SSAF-DCR method is superior to several state-of-the-art methods with very few training samples.
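The spectral attention idea described above, reweighting bands so that informative ones dominate, can be illustrated with a minimal sketch. This is a toy simplification, not the paper's actual block: it uses a plain softmax over a single pixel's band responses as the attention weights.

```python
import math

def spectral_attention(spectrum):
    """Toy spectral attention: a softmax over the band responses of one
    pixel yields per-band weights that rescale the input spectrum.
    (Hypothetical simplification of the paper's spectral attention block.)"""
    m = max(spectrum)
    exps = [math.exp(v - m) for v in spectrum]  # shift for stability
    total = sum(exps)
    weights = [e / total for e in exps]
    return [w * v for w, v in zip(weights, spectrum)]

refined = spectral_attention([0.2, 0.9, 0.4])  # strongest band is amplified most
```

In the real network the weights are learned from feature maps rather than computed directly from one spectrum, but the reweighting step has this shape.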

2020 ◽  
Vol 12 (4) ◽  
pp. 664 ◽  
Author(s):  
Binge Cui ◽  
Jiandi Cui ◽  
Yan Lu ◽  
Nannan Guo ◽  
Maoguo Gong

Hyperspectral image classification methods may not achieve good performance when a limited number of training samples are provided. However, labeling sufficient samples of hyperspectral images to achieve adequate training is quite expensive and difficult. In this paper, we propose a novel sample pseudo-labeling method based on sparse representation (SRSPL) for hyperspectral image classification, in which sparse representation is used to select the purest samples to extend the training set. The proposed method consists of the following three steps. First, intrinsic image decomposition is used to obtain the reflectance components of hyperspectral images. Second, hyperspectral pixels are sparsely represented using an overcomplete dictionary composed of all training samples. Finally, information entropy is defined for the vectorized sparse representation, and the pixels with low information entropy are selected as pseudo-labeled samples to augment the training set. The quality of the generated pseudo-labeled samples is evaluated based on classification accuracy, i.e., overall accuracy, average accuracy, and Kappa coefficient. Experimental results on four real hyperspectral datasets demonstrate excellent classification performance using the newly added pseudo-labeled samples, which indicates that the generated samples are of high confidence.
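The entropy-based selection step can be sketched directly: a pixel whose sparse code concentrates on a few dictionary atoms has low entropy and is a good pseudo-label candidate. The threshold and the example coefficient vectors below are illustrative, not values from the paper.

```python
import math

def coeff_entropy(coeffs, eps=1e-12):
    """Shannon entropy of the normalised absolute sparse coefficients.
    A concentrated (low-entropy) code suggests the pixel is well explained
    by one class's dictionary atoms."""
    mags = [abs(c) for c in coeffs]
    total = sum(mags) + eps
    probs = [m / total for m in mags]
    return -sum(p * math.log(p + eps) for p in probs)

def select_pseudo_labels(codes, threshold):
    """Keep indices of pixels whose coefficient entropy is below the
    threshold (an illustrative stand-in for the paper's selection rule)."""
    return [i for i, c in enumerate(codes) if coeff_entropy(c) < threshold]

pure = [0.95, 0.02, 0.03]   # code concentrated on one atom -> low entropy
mixed = [0.4, 0.3, 0.3]     # code spread across atoms -> high entropy
picked = select_pseudo_labels([pure, mixed], threshold=0.5)  # -> [0]
```

Only the concentrated pixel passes, matching the intuition that "pure" pixels make the most trustworthy pseudo-labels.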


2020 ◽  
Vol 12 (1) ◽  
pp. 125 ◽  
Author(s):  
Mu ◽  
Guo ◽  
Liu

Extracting spatial and spectral features through deep neural networks has become an effective means of classifying hyperspectral images. However, most networks rarely consider the extraction of multi-scale spatial features and cannot fully integrate spatial and spectral features. To solve these problems, this paper proposes a multi-scale and multi-level spectral-spatial feature fusion network (MSSN) for hyperspectral image classification. The network uses the original 3D cube as input data and does not require feature engineering. In the MSSN, neighborhood blocks of different scales are used as network input, so spectral-spatial features of different scales can be effectively extracted. The proposed 3D–2D alternating residual block combines the spectral features extracted by a three-dimensional convolutional neural network (3D-CNN) with the spatial features extracted by a two-dimensional convolutional neural network (2D-CNN). It achieves not only the fusion of spectral and spatial features but also the fusion of high-level and low-level features. Experimental results on four hyperspectral datasets show that this method is superior to several state-of-the-art classification methods for hyperspectral images.
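The multi-scale input step, cutting neighborhood blocks of several sizes around each labeled pixel, can be sketched as follows. The cube here is a nested-list stand-in for an H x W x B array, and the scale set (3, 5, 7) is a hypothetical choice, not necessarily the paper's.

```python
def multiscale_patches(cube, row, col, scales=(3, 5, 7)):
    """Extract square neighbourhoods at several scales around one pixel of
    an H x W x B cube stored as nested lists; a toy version of MSSN's
    multi-scale input branches (border handling is omitted)."""
    patches = []
    for s in scales:
        r = s // 2
        patch = [[cube[i][j]
                  for j in range(col - r, col + r + 1)]
                 for i in range(row - r, row + r + 1)]
        patches.append(patch)
    return patches

# 9x9 toy cube with a 2-band "spectrum" [i, j] at each pixel
cube = [[[i, j] for j in range(9)] for i in range(9)]
p3, p5, p7 = multiscale_patches(cube, 4, 4)
```

Each patch would feed a separate branch of the network; the center pixel's label supervises all branches jointly.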


2018 ◽  
Vol 10 (8) ◽  
pp. 1271 ◽  
Author(s):  
Feng Gao ◽  
Qun Wang ◽  
Junyu Dong ◽  
Qizhi Xu

Hyperspectral image classification has been acknowledged as a fundamental and challenging task in hyperspectral data processing. The abundance of spectral and spatial information provides great opportunities to effectively characterize and identify ground materials. In this paper, we propose a spectral and spatial classification framework for hyperspectral images based on Random Multi-Graphs (RMGs). The RMG is a graph-based ensemble learning method that has rarely been considered in hyperspectral image classification. It is empirically verified that the semi-supervised RMG deals well with small-sample problems, which are very common in hyperspectral image applications. In the proposed method, spatial features are extracted based on linear prediction error analysis and local binary patterns; spatial and spectral features are then stacked into high-dimensional vectors. The high-dimensional vectors are fed into the RMG for classification. By randomly selecting a subset of features to create each graph, the proposed method achieves excellent classification performance. Experiments on three real hyperspectral datasets demonstrate that the proposed method outperforms several closely related methods.
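The feature-stacking and random-subset steps are easy to make concrete. The sketch below only shows how each graph of the ensemble would receive its own random slice of the stacked spectral-plus-spatial vector; the graph construction and semi-supervised inference themselves are beyond a few lines. All sizes and values are made up for illustration.

```python
import random

def random_feature_subsets(dim, n_graphs, subset_size, seed=0):
    """Draw the random feature subsets that would drive each graph in the
    ensemble (the counts and the fixed seed are illustrative choices)."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(dim), subset_size))
            for _ in range(n_graphs)]

spectral = [0.1, 0.5, 0.3]     # toy spectral features of one pixel
spatial = [1.0, 0.0]           # toy LBP / prediction-error features
stacked = spectral + spatial   # stacked high-dimensional vector
subsets = random_feature_subsets(len(stacked), n_graphs=4, subset_size=3)
```

Each subset indexes a different view of `stacked`; the per-graph predictions are then combined, which is what makes the method an ensemble.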


2021 ◽  
Vol 13 (18) ◽  
pp. 3561
Author(s):  
Ning Lv ◽  
Zhen Han ◽  
Chen Chen ◽  
Yijia Feng ◽  
Tao Su ◽  
...  

Hyperspectral image classification is essential for the satellite Internet of Things (IoT) to build a large-scale land-cover surveillance system. After acquiring real-time land-cover information, the network edge transmits all hyperspectral images to the cloud computing center with low latency and high efficiency over the satellite links provided by the satellite IoT. The gigantic amount of remote sensing data brings challenges to the storage and processing capacity of traditional satellite systems. When hyperspectral images are used for land-cover annotation, reducing the data dimension for classifier efficiency often decreases classifier accuracy, especially when the region to be annotated contains both natural landforms and artificial structures. This paper proposes an encoding of spectral-spatial features for hyperspectral image classification in the satellite IoT system, namely the attribute profile stacked autoencoder (AP-SAE), to extract features effectively. Firstly, extended morphological attribute profiles (EMAPs) are used to obtain spatial features at different attribute scales. Secondly, the AP-SAE is used to extract spectral features with similar spatial attributes. In this stage the network learns feature mappings under which pixels from the same land-cover class are mapped as closely as possible and pixels from different land-cover categories are separated by a large margin. Finally, an effective classifier is trained on the AP-SAE network. Experimental results on three widely used hyperspectral image (HSI) datasets and comprehensive comparisons with existing methods demonstrate that the proposed method can be used effectively for hyperspectral image classification.
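The core computation of one stacked-autoencoder layer, encoding an EMAP-augmented pixel vector into a lower-dimensional feature, can be sketched in a few lines. The weights, bias, and the 5-dimensional toy pixel below are made-up numbers purely for illustration, not learned parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def encode(x, weights, bias):
    """One encoder layer of a stacked autoencoder: h = sigma(W x + b).
    In AP-SAE several such layers would be stacked and trained greedily."""
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, bias)]

# toy EMAP-augmented pixel: 3 spectral bands + 2 attribute-profile responses
pixel = [0.2, 0.7, 0.1, 1.0, 0.0]
W = [[0.1] * 5, [-0.2] * 5]   # hypothetical 2x5 weight matrix
b = [0.0, 0.5]
h = encode(pixel, W, b)       # 2-dimensional hidden feature
```

A decoder layer of the same shape would reconstruct `pixel` from `h` during pre-training; the trained encoder then feeds the final classifier.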


2021 ◽  
Vol 13 (4) ◽  
pp. 820
Author(s):  
Yaokang Zhang ◽  
Yunjie Chen

This paper presents a composite kernel method based on multiscale weighted adjacent superpixels (MWASCK) to classify hyperspectral images (HSIs). The MWASCK adequately exploits the spatial-spectral features of weighted adjacent superpixels to guarantee that more accurate spectral features can be extracted. Firstly, we use a superpixel segmentation algorithm to divide the HSI into multiple superpixels. Secondly, the similarities between each target superpixel and its adjacent superpixels (ASs) are calculated to construct the spatial features. Finally, a weighted AS-based composite kernel (WASCK) method for HSI classification is proposed. To avoid searching for the optimal superpixel scale and to fuse the multiscale spatial features, the MWASCK method uses multiscale weighted superpixel neighbor information. Experiments on two real HSIs indicate the superior performance of the WASCK and MWASCK methods compared with some popular classification methods.
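The composite-kernel idea, blending a spectral kernel with a spatial kernel through a mixing weight, can be sketched minimally. The RBF form, the weight `mu`, and the toy feature vectors are illustrative assumptions; the paper's kernels are built from superpixel statistics.

```python
import math

def rbf(u, v, gamma=1.0):
    """Gaussian RBF kernel between two feature vectors."""
    d2 = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * d2)

def composite_kernel(x_spec, y_spec, x_spat, y_spat, mu=0.6):
    """Weighted composite kernel: mu blends the spectral and spatial RBF
    kernels (mu is a hypothetical mixing weight, tuned in practice)."""
    return mu * rbf(x_spec, y_spec) + (1 - mu) * rbf(x_spat, y_spat)

# identical spectra, slightly different spatial features
k = composite_kernel([0.1, 0.2], [0.1, 0.2], [1.0], [0.5])
```

Because each summand is a valid kernel and the weights are non-negative, the composite is itself a valid kernel and can be plugged into an SVM directly; the multiscale variant would sum one such term per superpixel scale.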


Author(s):  
T. Alipourfard ◽  
H. Arefi

Abstract. Convolutional Neural Networks (CNNs), a well-known deep learning technique, have shown remarkable performance in visual recognition applications. However, using such networks for hyperspectral image classification is a challenging and time-consuming process due to the high dimensionality and the insufficient training samples. In addition, Generative Adversarial Networks (GANs) have attracted much attention as a means of generating virtual training samples. In this paper, we present a new classification framework based on the integration of multi-channel CNNs and a new generator and discriminator architecture for GANs to overcome the Small Sample Size (SSS) problem in hyperspectral image classification. Further, to reduce the computational cost, subspace dimension reduction methods are proposed to obtain the dominant features around each training sample and generate meaningful training samples from the original ones. The proposed framework overcomes the SSS and overfitting problems in classifying hyperspectral images. Experimental results on real and well-known hyperspectral benchmark images show that the proposed strategy improves performance compared to standard CNNs and a conventional data augmentation strategy. The overall classification accuracy on the Pavia University and Indian Pines datasets was 99.8% and 94.9%, respectively.
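The idea of growing a small training set with virtual samples can be illustrated without a full GAN. The sketch below uses simple Gaussian jitter around an original spectrum as a stand-in generator; the paper trains an actual generator/discriminator pair, so this is only a conceptual placeholder, with all parameters invented.

```python
import random

def generate_virtual_samples(sample, n, sigma=0.05, seed=1):
    """Toy virtual-sample generator: jitters one original spectrum with
    small Gaussian noise. This merely illustrates augmenting a scarce
    training set; the paper's method uses a GAN, not noise injection."""
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, sigma) for v in sample] for _ in range(n)]

virtual = generate_virtual_samples([0.3, 0.6, 0.2], n=5)
```

A GAN generator plays the same role as this function, producing new samples near the training distribution, but it learns that distribution adversarially instead of assuming isotropic noise.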


2021 ◽  
Vol 42 (15) ◽  
pp. 5604-5625
Author(s):  
Hongmin Gao ◽  
Mingxia Wang ◽  
Yao Yang ◽  
Xueying Cao ◽  
Chenming Li
