Multiscale Weighted Adjacent Superpixel-Based Composite Kernel for Hyperspectral Image Classification

2021 ◽  
Vol 13 (4) ◽  
pp. 820
Author(s):  
Yaokang Zhang ◽  
Yunjie Chen

This paper presents a composite kernel method based on multiscale weighted adjacent superpixels (MWASCK) to classify hyperspectral images (HSIs). MWASCK fully exploits the spatial-spectral features of weighted adjacent superpixels (ASs) so that more accurate spectral features can be extracted. First, a superpixel segmentation algorithm divides the HSI into multiple superpixels. Second, the similarity between each target superpixel and each of its ASs is calculated to construct the spatial features. Finally, a weighted AS-based composite kernel (WASCK) method for HSI classification is proposed. To avoid searching for an optimal superpixel scale and to fuse multiscale spatial features, the MWASCK method uses multiscale weighted superpixel neighborhood information. Experiments on two real HSIs demonstrate the superior performance of the WASCK and MWASCK methods compared with several popular classification methods.
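The similarity weighting over adjacent superpixels and the composite kernel combination described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the Gaussian similarity weighting, the RBF base kernels, and the mixing weight `mu` are assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel between the row vectors of X and Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def weighted_adjacent_feature(means, adjacency, sigma=1.0):
    """Spatial feature for each superpixel: a similarity-weighted mean of
    the mean spectra of its adjacent superpixels (hypothetical weighting)."""
    n = means.shape[0]
    spat = np.zeros_like(means)
    for i in range(n):
        nbrs = np.flatnonzero(adjacency[i])
        d2 = ((means[nbrs] - means[i]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma ** 2))   # closer spectra get larger weights
        w /= w.sum()
        spat[i] = w @ means[nbrs]
    return spat

def composite_kernel(spec, spat, mu=0.6, gamma=1.0):
    # Weighted sum of a spectral kernel and a spatial kernel
    return mu * rbf_kernel(spec, spec, gamma) + (1 - mu) * rbf_kernel(spat, spat, gamma)
```

The resulting kernel matrix can be passed to any kernel classifier (e.g. an SVM with a precomputed kernel); the multiscale variant would average such kernels over several superpixel scales.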

2020 ◽  
Vol 12 (9) ◽  
pp. 1395
Author(s):  
Linlin Chen ◽  
Zhihui Wei ◽  
Yang Xu

Hyperspectral image (HSI) classification accuracy has been greatly improved by deep learning. Current research focuses mainly on building deeper networks to improve accuracy, but such networks tend to be complex and heavily parameterized, which makes them difficult to train and prone to overfitting. We therefore present a lightweight deep convolutional neural network (CNN) model called S2FEF-CNN. In this model, three S2FEF blocks perform joint spectral–spatial feature extraction. Each S2FEF block uses a 1D spectral convolution to extract spectral features and a 2D spatial convolution to extract spatial features, and then fuses the two by multiplication. Instead of a fully connected layer, two pooling layers follow the three blocks for dimension reduction, which further reduces the number of trainable parameters. We compared our method with several state-of-the-art deep-network-based HSI classification methods on three commonly used hyperspectral datasets. The results show that our network achieves comparable classification accuracy with significantly fewer parameters, reflecting its potential advantages for HSI classification.
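The core idea of an S2FEF block, a 1D convolution along the band axis and a 2D convolution in the spatial plane, fused by elementwise multiplication, can be sketched in plain numpy. This is a simplified single-channel sketch with fixed kernels; the real block would learn its filters and stack multiple channels.

```python
import numpy as np

def spectral_conv1d(cube, kernel):
    """1D convolution along the band axis at every pixel ('same' padding)."""
    H, W, B = cube.shape
    k = len(kernel)
    pad = k // 2
    padded = np.pad(cube, ((0, 0), (0, 0), (pad, pad)))
    out = np.zeros_like(cube)
    for i in range(k):
        out += kernel[i] * padded[:, :, i:i + B]
    return out

def spatial_conv2d(cube, kernel):
    """2D convolution within each band ('same' padding)."""
    H, W, B = cube.shape
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(cube, ((ph, ph), (pw, pw), (0, 0)))
    out = np.zeros_like(cube)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + H, j:j + W, :]
    return out

def s2fef_block(cube, spec_k, spat_k):
    # Fuse spectral and spatial responses by elementwise multiplication
    return spectral_conv1d(cube, spec_k) * spatial_conv2d(cube, spat_k)
```

Because the fusion is a plain elementwise product, the block adds no parameters beyond the two small convolution kernels, which is consistent with the lightweight design described above.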


2021 ◽  
Vol 13 (18) ◽  
pp. 3590
Author(s):  
Tianyu Zhang ◽  
Cuiping Shi ◽  
Diling Liao ◽  
Liguo Wang

Convolutional neural networks (CNNs) have exhibited excellent performance in hyperspectral image classification. However, because labeled hyperspectral data are scarce, it is difficult to achieve high classification accuracy with few training samples. In addition, although deep learning techniques have been applied to hyperspectral image classification, the abundant information in hyperspectral images means that spatial–spectral feature extraction often remains insufficient. To address these issues, a spectral–spatial attention fusion with deformable convolution residual network (SSAF-DCR) is proposed for hyperspectral image classification. The network is composed of three sequentially connected parts. In the first part, a dense spectral block reuses spectral features as much as possible, followed by a spectral attention block that refines and optimizes them. In the second part, spatial features are extracted by a dense spatial block and selected by an attention block. The results of the first two parts are then fused and passed to the third part, where deep spatial features are extracted by the DCR block. Together, these three parts realize effective spectral–spatial feature extraction, and experimental results on four commonly used hyperspectral datasets demonstrate that the proposed SSAF-DCR method is superior to several state-of-the-art methods with very few training samples.
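A spectral attention block of the kind described, one that refines spectral features by reweighting bands, can be sketched as a squeeze-and-excitation-style gate. This is an assumed formulation for illustration (the bottleneck weights `W1`/`W2` and the sigmoid gating are not taken from the paper):

```python
import numpy as np

def spectral_attention(cube, W1, W2):
    """SE-style band attention: global-average-pool each band, pass the
    band descriptor through a two-layer bottleneck, then gate the bands."""
    # cube: (H, W, B)
    z = cube.mean(axis=(0, 1))              # squeeze: one scalar per band, (B,)
    h = np.maximum(0.0, W1 @ z)             # excitation bottleneck, ReLU
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))     # sigmoid gates in (0, 1), (B,)
    return cube * s                         # reweight spectral bands
```

Bands judged informative receive gates near 1 and pass through; less useful bands are suppressed, which is one plausible way the attention block "refines and optimizes" spectral features.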


2019 ◽  
Vol 11 (16) ◽  
pp. 1954
Author(s):  
Yangjie Sun ◽  
Zhongliang Fu ◽  
Liang Fan

Today, more and more deep learning frameworks are being applied to hyperspectral image classification and have achieved impressive results. However, such approaches are still hampered by long training times. Traditional spectral–spatial hyperspectral image classification uses only pixel-level spectral features, without considering the correlation between local spectral signatures. This article tests a novel hyperspectral image classification approach using random-patches convolution and local covariance (RPCC). RPCC is an effective two-branch method that, on the one hand, obtains a specified number of convolution kernels from the image space through a random strategy and, on the other hand, constructs a covariance matrix between different spectral bands by clustering local neighboring pixels. In our method, the spatial features come from multi-scale, multi-level convolutional layers, while the spectral features represent the correlations between different bands. A support vector machine with fused spectral and spatial matrices produces the classification results. In experiments, RPCC is compared with five strong methods on three public datasets. Quantitative and qualitative evaluations indicate that the accuracy of our RPCC method matches or exceeds current state-of-the-art methods.
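The covariance branch described above, a band-by-band covariance matrix computed from a pixel's local neighborhood, can be sketched as follows. This is a minimal sketch assuming a square window; the clustering step mentioned in the abstract is omitted.

```python
import numpy as np

def local_band_covariance(cube, i, j, win=3):
    """Covariance matrix between spectral bands, estimated from the pixels
    in a win x win window centred on pixel (i, j)."""
    H, W, B = cube.shape
    r = win // 2
    # clip the window at the image border, then flatten to (n_pixels, B)
    patch = cube[max(i - r, 0):i + r + 1, max(j - r, 0):j + r + 1, :].reshape(-1, B)
    return np.cov(patch, rowvar=False)      # (B, B), symmetric
```

Each pixel is thereby represented by a B x B symmetric matrix encoding how its neighborhood's bands co-vary, which captures exactly the local spectral correlations the abstract says pixel-level features ignore.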

