Double-Branch Network with Pyramidal Convolution and Iterative Attention for Hyperspectral Image Classification

2021 ◽  
Vol 13 (7) ◽  
pp. 1403
Author(s):  
Hao Shi ◽  
Guo Cao ◽  
Zixian Ge ◽  
Youqiang Zhang ◽  
Peng Fu

Deep-learning methods, especially convolutional neural networks (CNNs), have become the first choice for hyperspectral image (HSI) classification to date. A common procedure is to crop small cubes from hyperspectral images and feed them into CNNs. However, standard CNNs find it difficult to extract discriminative spectral–spatial features. How to obtain finer spectral–spatial features to improve classification performance is now a hot research topic. In this regard, the attention mechanism, which has achieved excellent performance in other computer vision tasks, holds exciting prospects. In this paper, we propose a double-branch network consisting of a novel convolution named pyramidal convolution (PyConv) and an iterative attention mechanism. Each branch concentrates on exploiting spectral or spatial features with different PyConvs, supplemented by an attention module that refines the feature map. Experimental results demonstrate that our model yields competitive performance compared to other state-of-the-art models.
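The abstract gives no implementation details, so the following is only a minimal sketch, assuming PyTorch, of what a pyramidal convolution layer could look like: parallel 2D convolutions with increasing kernel sizes whose outputs are concatenated. The class name `PyConv2d`, the kernel sizes, and the equal channel split are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class PyConv2d(nn.Module):
    """Illustrative pyramidal convolution: parallel convs with growing kernel sizes."""
    def __init__(self, in_channels, out_channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        assert out_channels % len(kernel_sizes) == 0
        branch_out = out_channels // len(kernel_sizes)
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, branch_out, kernel_size=k, padding=k // 2)
            for k in kernel_sizes
        ])

    def forward(self, x):
        # Each branch sees the same input but with a different receptive field.
        return torch.cat([branch(x) for branch in self.branches], dim=1)

# Example: a 9x9 spatial patch with 30 spectral bands treated as channels (sizes assumed).
patch = torch.randn(2, 30, 9, 9)
features = PyConv2d(30, 96)(patch)
print(features.shape)  # torch.Size([2, 96, 9, 9])
```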

2021 ◽  
Vol 13 (21) ◽  
pp. 4472
Author(s):  
Tianyu Zhang ◽  
Cuiping Shi ◽  
Diling Liao ◽  
Liguo Wang

Convolutional neural networks (CNNs) have been widely used in hyperspectral image classification in recent years. The training of CNNs relies on a large amount of labeled sample data. However, the number of labeled samples of hyperspectral data is relatively small. Moreover, for hyperspectral images, fully extracting spectral and spatial feature information is the key to achieving high classification performance. To address these issues, a deep spectral-spatial inverted residuals network (DSSIRNet) is proposed. In this network, a data block random erasing strategy is introduced to alleviate the problem of limited labeled samples through data augmentation of small spatial blocks. In addition, a deep inverted residuals (DIR) module for spectral-spatial feature extraction is proposed, which locks in the effective features of each layer while avoiding network degradation. Furthermore, a global 3D attention module is proposed, which realizes fine extraction of spectral and spatial global context information while keeping the number of input and output feature maps the same. Experiments are carried out on four commonly used hyperspectral datasets. Extensive experimental results show that, compared with some state-of-the-art classification methods, the proposed method provides higher classification accuracy for hyperspectral images.
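As a hedged illustration of the data block random erasing idea (erasing a random small spatial region of the input cube as augmentation), here is a minimal NumPy sketch; the function name, erase-size range, and fill value are assumptions rather than the paper's settings.

```python
import numpy as np

def random_erase_block(cube, max_size=3, fill=0.0, rng=None):
    """Zero out a random small spatial block of an HSI patch shaped (bands, height, width)."""
    rng = rng or np.random.default_rng()
    bands, h, w = cube.shape
    size = rng.integers(1, max_size + 1)          # assumed block-size range
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    erased = cube.copy()
    erased[:, top:top + size, left:left + size] = fill  # erase across all bands
    return erased

patch = np.random.rand(200, 11, 11).astype(np.float32)  # illustrative patch size
augmented = random_erase_block(patch)
```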


Sensors ◽  
2020 ◽  
Vol 20 (6) ◽  
pp. 1734 ◽  
Author(s):  
Tien-Heng Hsieh ◽  
Jean-Fu Kiang

Several versions of convolutional neural network (CNN) were developed to classify hyperspectral images (HSIs) of agricultural lands, including a 1D-CNN with pixelwise spectral data, a 1D-CNN with selected bands, a 1D-CNN with spectral-spatial features, and a 2D-CNN with principal components. HSI data of crop agriculture in Salinas Valley and mixed-vegetation agriculture in Indian Pines were used to compare the performance of these CNN algorithms. The highest overall accuracies in these two cases are 99.8% and 98.1%, respectively, achieved by applying the 1D-CNN with augmented input vectors, which contain both the spectral and spatial features embedded in the HSI data.
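To make the "augmented input vector" idea concrete, the following is a minimal sketch, assuming PyTorch, of a 1D-CNN that consumes a per-pixel vector formed by concatenating the spectral signature with a few appended spatial features. The layer sizes, band count, and the way spatial features are appended are illustrative assumptions, not the configuration used in the paper.

```python
import torch
import torch.nn as nn

class Spectral1DCNN(nn.Module):
    """1D-CNN over an augmented per-pixel vector (spectral bands + spatial features)."""
    def __init__(self, vector_length, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):          # x: (batch, vector_length)
        x = x.unsqueeze(1)         # treat the augmented vector as a 1-channel sequence
        x = self.features(x).squeeze(-1)
        return self.classifier(x)

spectral = torch.randn(4, 204)     # illustrative band count
spatial = torch.randn(4, 16)       # illustrative neighborhood statistics
logits = Spectral1DCNN(204 + 16, 16)(torch.cat([spectral, spatial], dim=1))
```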


2020 ◽  
Vol 12 (3) ◽  
pp. 582 ◽  
Author(s):  
Rui Li ◽  
Shunyi Zheng ◽  
Chenxi Duan ◽  
Yang Yang ◽  
Xiqi Wang

In recent years, researchers have paid increasing attention to hyperspectral image (HSI) classification using deep learning methods. To improve the accuracy and reduce the number of training samples required, we propose a double-branch dual-attention mechanism network (DBDA) for HSI classification in this paper. Two branches are designed in DBDA to capture the abundant spectral and spatial features contained in HSIs. Furthermore, a channel attention block and a spatial attention block are applied to these two branches respectively, which enables DBDA to refine and optimize the extracted feature maps. A series of experiments on four hyperspectral datasets shows that the proposed framework has superior performance to state-of-the-art algorithms, especially when training samples are severely limited.
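As a rough sketch of the kind of channel and spatial attention blocks the two branches might apply (an illustration in PyTorch, not the authors' exact DBDA modules; reduction ratio and kernel size are assumptions):

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        weights = self.mlp(x.mean(dim=(2, 3)))  # global average pool per channel
        return x * weights[:, :, None, None]

class SpatialAttention(nn.Module):
    """Spatial attention from pooled channel statistics (illustrative)."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        stats = torch.cat([x.mean(dim=1, keepdim=True),
                           x.max(dim=1, keepdim=True).values], dim=1)
        return x * torch.sigmoid(self.conv(stats))

x = torch.randn(2, 64, 9, 9)
refined = SpatialAttention()(ChannelAttention(64)(x))
```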


Micromachines ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 1271
Author(s):  
Hongmin Gao ◽  
Yiyan Zhang ◽  
Yunfei Zhang ◽  
Zhonghao Chen ◽  
Chenming Li ◽  
...  

In recent years, hyperspectral image (HSI) classification has attracted considerable attention. Various methods based on convolutional neural networks have achieved outstanding classification results. However, most of them suffer from underutilization of spectral-spatial features, redundant information, and convergence difficulty. To address these problems, a novel 3D-2D multibranch feature fusion and dense attention network is proposed for HSI classification. Specifically, the 3D multibranch feature fusion module integrates multiple receptive fields in the spatial and spectral dimensions to obtain shallow features. Then, a 2D densely connected attention module, consisting of densely connected layers and a spatial-channel attention block, is applied. The former alleviates gradient vanishing and enhances feature reuse during training; the latter emphasizes meaningful features and suppresses interfering information along the two principal dimensions, the channel and spatial axes. Experimental results on four benchmark hyperspectral image datasets demonstrate that the model can effectively improve classification performance with great robustness.
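To make the 3D multibranch idea concrete, here is a minimal sketch, assuming PyTorch, of parallel 3D convolutions with different spectral/spatial receptive fields whose outputs are concatenated; the specific kernel shapes and channel counts are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiBranch3D(nn.Module):
    """Parallel 3D convs with different spectral/spatial kernels, fused by concatenation."""
    def __init__(self, out_per_branch=8):
        super().__init__()
        kernels = [(7, 1, 1),   # spectral-oriented receptive field
                   (1, 3, 3),   # spatial-oriented receptive field
                   (3, 3, 3)]   # joint spectral-spatial receptive field
        self.branches = nn.ModuleList([
            nn.Conv3d(1, out_per_branch, kernel_size=k,
                      padding=tuple(s // 2 for s in k))
            for k in kernels
        ])

    def forward(self, x):                 # x: (B, 1, bands, H, W)
        return torch.cat([b(x) for b in self.branches], dim=1)

cube = torch.randn(2, 1, 30, 9, 9)        # illustrative reduced-band patch
shallow = MultiBranch3D()(cube)           # (2, 24, 30, 9, 9)
```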


2020 ◽  
Vol 12 (9) ◽  
pp. 1395
Author(s):  
Linlin Chen ◽  
Zhihui Wei ◽  
Yang Xu

Hyperspectral image (HSI) classification accuracy has been greatly improved by employing deep learning. Current research mainly focuses on how to build a deep network to improve accuracy. However, these networks tend to be more complex and have more parameters, which makes them difficult to train and easy to overfit. Therefore, we present a lightweight deep convolutional neural network (CNN) model called S2FEF-CNN. In this model, three S2FEF blocks are used for joint spectral–spatial feature extraction. Each S2FEF block uses a 1D spectral convolution to extract spectral features and a 2D spatial convolution to extract spatial features, and then fuses the spectral and spatial features by multiplication. Instead of using a fully connected layer, two pooling layers follow the three blocks for dimension reduction, which further reduces the number of training parameters. We compared our method with some state-of-the-art deep-network-based HSI classification methods on three commonly used hyperspectral datasets. The results show that our network achieves comparable classification accuracy with significantly fewer parameters than the above deep networks, which reflects its potential advantages in HSI classification.
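Since the block structure is described explicitly (a spectral convolution and a spatial convolution fused by multiplication), a minimal sketch under assumed kernel sizes, written in PyTorch, could look like the following; it is an interpretation of the description, not the released S2FEF-CNN code, and the "1D"/"2D" convolutions are realized here as factorized 3D convolutions.

```python
import torch
import torch.nn as nn

class S2FEFBlock(nn.Module):
    """Spectral conv and spatial conv on the same cube, fused by elementwise multiplication."""
    def __init__(self, in_ch, out_ch, spectral_k=7, spatial_k=3):
        super().__init__()
        # Spectral convolution: kernel acts along the band axis only.
        self.spectral = nn.Conv3d(in_ch, out_ch, kernel_size=(spectral_k, 1, 1),
                                  padding=(spectral_k // 2, 0, 0))
        # Spatial convolution: kernel acts along the spatial axes only.
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, spatial_k, spatial_k),
                                 padding=(0, spatial_k // 2, spatial_k // 2))
        self.act = nn.ReLU()

    def forward(self, x):                     # x: (B, C, bands, H, W)
        return self.act(self.spectral(x)) * self.act(self.spatial(x))

cube = torch.randn(2, 1, 100, 9, 9)           # illustrative input cube
fused = S2FEFBlock(1, 8)(cube)                # (2, 8, 100, 9, 9)
```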


2020 ◽  
Vol 12 (1) ◽  
pp. 125 ◽  
Author(s):  
Mu ◽  
Guo ◽  
Liu

Extracting spatial and spectral features through deep neural networks has become an effective means of classifying hyperspectral images. However, most networks rarely consider the extraction of multi-scale spatial features and cannot fully integrate spatial and spectral features. To solve these problems, this paper proposes a multi-scale and multi-level spectral-spatial feature fusion network (MSSN) for hyperspectral image classification. The network uses the original 3D cube as input data and does not require feature engineering. In the MSSN, neighborhood blocks of different scales are used as network inputs so that spectral-spatial features of different scales can be effectively extracted. The proposed 3D–2D alternating residual block combines the spectral features extracted by a three-dimensional convolutional neural network (3D-CNN) with the spatial features extracted by a two-dimensional convolutional neural network (2D-CNN). It achieves not only the fusion of spectral and spatial features but also the fusion of high-level and low-level features. Experimental results on four hyperspectral datasets show that this method is superior to several state-of-the-art classification methods for hyperspectral images.
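As a hedged sketch of how a 3D-2D alternating residual block might chain a 3D spectral convolution with a 2D spatial convolution and a residual connection (an interpretation in PyTorch; the reshaping scheme, kernel sizes, and projection are all assumptions, not the MSSN design):

```python
import torch
import torch.nn as nn

class Alt3D2DResBlock(nn.Module):
    """3D conv over the spectral axis, then a 2D conv over space, with a residual add."""
    def __init__(self, bands, channels):
        super().__init__()
        self.conv3d = nn.Conv3d(1, channels, kernel_size=(7, 3, 3), padding=(3, 1, 1))
        # Collapse (channels x bands) into 2D feature maps for the spatial convolution.
        self.conv2d = nn.Conv2d(channels * bands, channels * bands,
                                kernel_size=3, padding=1, groups=channels)
        self.proj = nn.Conv2d(bands, channels * bands, kernel_size=1)  # residual projection

    def forward(self, x):                       # x: (B, bands, H, W)
        b, bands, h, w = x.shape
        spec = self.conv3d(x.unsqueeze(1))      # (B, channels, bands, H, W)
        spec = spec.reshape(b, -1, h, w)        # (B, channels*bands, H, W)
        spat = self.conv2d(spec)
        return spat + self.proj(x)              # fuse low- and high-level features

patch = torch.randn(2, 30, 9, 9)
out = Alt3D2DResBlock(bands=30, channels=4)(patch)   # (2, 120, 9, 9)
```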


2018 ◽  
Vol 10 (8) ◽  
pp. 1271 ◽  
Author(s):  
Feng Gao ◽  
Qun Wang ◽  
Junyu Dong ◽  
Qizhi Xu

Hyperspectral image classification has been acknowledged as a fundamental and challenging task of hyperspectral data processing. The abundance of spectral and spatial information provides great opportunities to effectively characterize and identify ground materials. In this paper, we propose a spectral and spatial classification framework for hyperspectral images based on Random Multi-Graphs (RMGs). The RMG is a graph-based ensemble learning method that is rarely considered in hyperspectral image classification. It is empirically verified that the semi-supervised RMG deals well with small-sample problems, which are very common in hyperspectral image applications. In the proposed method, spatial features are extracted based on linear prediction error analysis and local binary patterns; the spatial features and spectral features are then stacked into high-dimensional vectors. The high-dimensional vectors are fed into the RMG for classification. By randomly selecting a subset of features to create a graph, the proposed method achieves excellent classification performance. Experiments on three real hyperspectral datasets demonstrate that the proposed method exhibits better performance than several closely related methods.
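The general pipeline (local-binary-pattern spatial features stacked with spectral features, then classified with random feature subsets) could be sketched roughly as below. This uses scikit-image's LBP and a plain random-subspace voting ensemble as a stand-in for the actual Random Multi-Graphs classifier; the linear prediction error features are omitted, and all names and sizes are assumptions.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neighbors import KNeighborsClassifier

def stack_features(hsi):
    """hsi: (H, W, bands). Return per-pixel spectral + LBP spatial features."""
    h, w, bands = hsi.shape
    gray = hsi.mean(axis=2)                                  # crude spatial summary image
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    return np.concatenate([hsi.reshape(-1, bands),
                           lbp.reshape(-1, 1)], axis=1)

def random_subspace_predict(X_train, y_train, X_test, n_models=10, rng=None):
    """Majority vote of classifiers trained on random feature subsets (RMG stand-in).

    Assumes integer class labels in y_train.
    """
    rng = rng or np.random.default_rng(0)
    votes = []
    for _ in range(n_models):
        idx = rng.choice(X_train.shape[1], size=max(1, X_train.shape[1] // 2),
                         replace=False)
        clf = KNeighborsClassifier(n_neighbors=3).fit(X_train[:, idx], y_train)
        votes.append(clf.predict(X_test[:, idx]))
    votes = np.stack(votes)
    return np.array([np.bincount(col).argmax() for col in votes.T])
```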


2021 ◽  
Vol 13 (18) ◽  
pp. 3561
Author(s):  
Ning Lv ◽  
Zhen Han ◽  
Chen Chen ◽  
Yijia Feng ◽  
Tao Su ◽  
...  

Hyperspectral image classification is essential for the satellite Internet of Things (IoT) to build a large-scale land-cover surveillance system. After acquiring real-time land-cover information, the network edge transmits all the hyperspectral images via satellite to the cloud computing center with the low latency and high efficiency provided by the satellite IoT. The gigantic amount of remote sensing data brings challenges to the storage and processing capacity of traditional satellite systems. When hyperspectral images are used for land-cover annotation, reducing the data dimension for classifier efficiency often decreases classifier accuracy, especially when the region to be annotated contains both natural landforms and artificial structures. This paper proposes an attribute profile stacked autoencoder (AP-SAE) that encodes spectral-spatial features for hyperspectral image classification in the satellite IoT system to extract features effectively. Firstly, extended morphological attribute profiles (EMAPs) are used to obtain spatial features at different attribute scales. Secondly, the AP-SAE is used to extract spectral features with similar spatial attributes; in this stage it learns feature mappings under which pixels from the same land-cover class are mapped as closely as possible, while pixels from different land-cover categories are separated by a large margin. Finally, an effective classifier is trained using the AP-SAE network. Experimental results on three widely used hyperspectral image (HSI) datasets and comprehensive comparisons with existing methods demonstrate that our proposed method can be used effectively in hyperspectral image classification.
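As a hedged sketch of the kind of stacked autoencoder AP-SAE builds on (written in PyTorch; the layer widths and input dimension are assumptions, and the EMAP feature extraction and discriminative margin terms are omitted):

```python
import torch
import torch.nn as nn

class StackedAutoencoder(nn.Module):
    """Simple stacked autoencoder; the encoder output serves as the learned feature mapping."""
    def __init__(self, in_dim, hidden_dims=(128, 64, 32)):
        super().__init__()
        enc, dec = [], []
        dims = (in_dim,) + hidden_dims
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            enc += [nn.Linear(d_in, d_out), nn.ReLU()]
        for d_in, d_out in zip(dims[::-1][:-1], dims[::-1][1:]):
            dec += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.encoder = nn.Sequential(*enc)
        self.decoder = nn.Sequential(*dec[:-1])   # no activation on the reconstruction

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Reconstruction pre-training on stacked spectral + attribute-profile vectors (illustrative).
x = torch.randn(16, 220)
model = StackedAutoencoder(220)
recon, code = model(x)
loss = nn.functional.mse_loss(recon, x)
```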


2021 ◽  
Vol 13 (18) ◽  
pp. 3590
Author(s):  
Tianyu Zhang ◽  
Cuiping Shi ◽  
Diling Liao ◽  
Liguo Wang

Convolutional neural networks (CNNs) have exhibited excellent performance in hyperspectral image classification. However, due to the lack of labeled hyperspectral data, it is difficult to achieve high classification accuracy of hyperspectral images with few training samples. In addition, although some deep learning techniques have been used in hyperspectral image classification, because of the abundant information in hyperspectral images, the problem of insufficient spatial-spectral feature extraction still exists. To address the aforementioned issues, a spectral–spatial attention fusion with deformable convolution residual network (SSAF-DCR) is proposed for hyperspectral image classification. The proposed network is composed of three parts, connected sequentially to extract features. In the first part, a dense spectral block is utilized to reuse spectral features as much as possible, followed by a spectral attention block that refines and optimizes the spectral features. In the second part, spatial features are extracted and selected by a dense spatial block and an attention block, respectively. Then, the results of the first two parts are fused and sent to the third part, where deep spatial features are extracted by the DCR block. These three parts realize the effective extraction of spectral–spatial features, and the experimental results for four commonly used hyperspectral datasets demonstrate that the proposed SSAF-DCR method is superior to some state-of-the-art methods with very few training samples.
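A minimal sketch, assuming PyTorch, of a densely connected spectral block of the kind described in the first part (each layer receives the concatenation of all earlier feature maps along the channel axis); the kernel size, growth rate, and depth are assumptions, and the attention, dense spatial, and DCR parts are not shown.

```python
import torch
import torch.nn as nn

class DenseSpectralBlock(nn.Module):
    """Densely connected 3D convs with spectral-only kernels (illustrative)."""
    def __init__(self, in_ch=1, growth=12, layers=3, k=7):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, kernel_size=(k, 1, 1), padding=(k // 2, 0, 0)),
                nn.BatchNorm3d(growth), nn.ReLU(),
            ))
            ch += growth   # dense connectivity: input channels accumulate

    def forward(self, x):                    # x: (B, 1, bands, H, W)
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

out = DenseSpectralBlock()(torch.randn(2, 1, 100, 7, 7))  # (2, 37, 100, 7, 7)
```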


2021 ◽  
Vol 13 (16) ◽  
pp. 3131
Author(s):  
Zhongwei Li ◽  
Xue Zhu ◽  
Ziqi Xin ◽  
Fangming Guo ◽  
Xingshuai Cui ◽  
...  

Variational Autoencoders (VAEs) and Generative Adversarial Networks (GANs) have been widely used in hyperspectral image classification (HSIC) tasks. However, the virtual HSI samples generated by VAEs are often ambiguous, and GANs are prone to mode collapse, which ultimately leads to poor generalization ability. Moreover, most of these models only consider the extraction of spectral or spatial features; they fail to combine the two branches interactively and ignore the correlation between them. Consequently, a variational generative adversarial network with crossed spatial and spectral interactions (CSSVGAN) is proposed in this paper, which includes a dual-branch variational Encoder that maps spectral and spatial information to different latent spaces, a crossed interactive Generator that improves the quality of the generated virtual samples, and a Discriminator coupled with a classifier to enhance classification performance. Combining these three subnetworks, the proposed CSSVGAN achieves excellent classification by ensuring sample diversity and interacting spectral and spatial features in a crossed manner. Superior experimental results on three datasets verify the effectiveness of this method.
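To illustrate one piece of this design, here is a minimal sketch, assuming PyTorch, of a dual-branch variational encoder that maps spectral and spatial inputs to separate latent spaces via the reparameterization trick; the crossed generator, discriminator, and classifier are omitted, and all names and dimensions are assumptions rather than the CSSVGAN implementation.

```python
import torch
import torch.nn as nn

class DualBranchVEncoder(nn.Module):
    """Two encoders producing separate latent codes for spectral and spatial inputs."""
    def __init__(self, spectral_dim, spatial_dim, latent_dim=16):
        super().__init__()
        self.spec = nn.Sequential(nn.Linear(spectral_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 2 * latent_dim))
        self.spat = nn.Sequential(nn.Linear(spatial_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 2 * latent_dim))

    @staticmethod
    def reparameterize(stats):
        # Split the encoder output into (mu, logvar) and sample with the usual trick.
        mu, logvar = stats.chunk(2, dim=-1)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar), mu, logvar

    def forward(self, spectral, spatial):
        z_spec, mu_s, lv_s = self.reparameterize(self.spec(spectral))
        z_spat, mu_p, lv_p = self.reparameterize(self.spat(spatial))
        return z_spec, z_spat  # separate latent codes for a downstream generator

enc = DualBranchVEncoder(spectral_dim=200, spatial_dim=81)   # illustrative sizes
z1, z2 = enc(torch.randn(4, 200), torch.randn(4, 81))
```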

