Squeeze-and-Excitation Laplacian Pyramid Network with Dual-Polarization Feature Fusion for Ship Classification in SAR Images

Author(s): Tianwen Zhang, Xiaoling Zhang
Sensors, 2021, Vol 21 (2), pp. 519

Author(s): Gaoyu Zhou, Gong Zhang, Biao Xue

High-resolution synthetic aperture radar (SAR) images dominate the current field of ship classification, but in practical applications, moderate-resolution SAR images, which offer a wider swath, are more suitable for maritime surveillance. Ship targets in moderate-resolution SAR images occupy only a few pixels, and some appear merely as bright spots, which makes ship classification difficult. To fully explore the deep-level feature representations of moderate-resolution SAR images while avoiding the "curse of dimensionality", we propose a feature fusion framework based on the classification ability of individual features and the efficiency of overall information representation, called maximum-information-minimum-redundancy (MIMR). First, we apply a filter method and Kernel Principal Component Analysis (KPCA) to form two feature subsets representing, respectively, the best classification ability and the highest information-representation efficiency in linear and nonlinear space. Second, the MIMR feature fusion method assigns different weights to feature vectors with different physical properties and discriminability. Comprehensive experiments on the open dataset OpenSARShip show that, compared with traditional and emerging deep learning methods, the proposed method effectively fuses non-redundant, complementary feature subsets to improve ship classification performance in moderate-resolution SAR images.
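The two-subset construction described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the filter criterion (ANOVA F-score via `f_classif`), the feature counts, and the fusion weights are assumptions standing in for the paper's MIMR weighting scheme.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))          # 100 ship chips, 20 handcrafted features
y = rng.integers(0, 3, size=100)        # 3 ship classes (illustrative)

# Subset 1 (linear space): a filter method keeps the k features with the
# best class separability, scored here by the ANOVA F-statistic.
filt = SelectKBest(f_classif, k=8).fit(X, y)
X_filter = filt.transform(X)

# Subset 2 (nonlinear space): KPCA compresses the features in an RBF
# kernel space for efficient information representation.
kpca = KernelPCA(n_components=8, kernel="rbf").fit(X)
X_kpca = kpca.transform(X)

# MIMR-style fusion: weight each subset by its assumed discriminability,
# then concatenate. The weights below are placeholders, not paper values.
w_filter, w_kpca = 0.6, 0.4
X_fused = np.hstack([w_filter * X_filter, w_kpca * X_kpca])
print(X_fused.shape)  # (100, 16)
```

The fused matrix would then feed a conventional classifier; the point of the weighting is that the two subsets carry complementary, non-redundant information.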


2021, Vol 13 (2), pp. 328
Author(s): Wenkai Liang, Yan Wu, Ming Li, Yice Cao, Xin Hu

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, intricate spatial structural patterns and complex statistics make SAR image classification challenging, especially when labeled SAR data are limited. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and a covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers multi-feature information fusion of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN), a deep convolutional neural network (CNN), to capture spatial patterns and extract discriminative features from SAR images. To make full use of the large amount of unlabeled data, the weights of each MFFN layer are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in MFFN effectively exploits the complementary information between different levels and different scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images over the fused feature maps; the resulting covariance descriptor is more discriminative across various land covers. Experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with related algorithms.
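Covariance pooling of the fused feature maps, i.e. the global second-order statistics the abstract refers to, can be sketched as below. The channel count and spatial size are illustrative, and the manifold-network stages that follow the descriptor in MFFN-CPMN are omitted.

```python
import numpy as np

def covariance_pooling(feature_maps):
    """Global second-order pooling: covariance of C channels over H*W positions."""
    C, H, W = feature_maps.shape
    X = feature_maps.reshape(C, H * W)        # each row: one channel's responses
    X = X - X.mean(axis=1, keepdims=True)     # center each channel
    return (X @ X.T) / (H * W - 1)            # C x C covariance descriptor

rng = np.random.default_rng(0)
fmap = rng.normal(size=(64, 16, 16))          # e.g. 64 fused feature channels
desc = covariance_pooling(fmap)
print(desc.shape)  # (64, 64)
```

Unlike first-order (average or max) pooling, this descriptor captures pairwise channel correlations, which is what makes it more discriminative across land-cover classes with similar mean responses.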


Sensors, 2018, Vol 18 (9), pp. 2929
Author(s): Yuanyuan Wang, Chao Wang, Hong Zhang

With the capability to automatically learn discriminative features, deep learning has achieved great success on natural images but has rarely been explored for ship classification in high-resolution SAR images, owing to the training bottleneck caused by small datasets. In this paper, convolutional neural networks (CNNs) are applied to ship classification using small SAR image datasets. First, ship chips are constructed from high-resolution SAR images and split into training and validation sets. Second, a ship classification model is built on very deep convolutional networks (VGG). The VGG network is pretrained on ImageNet and then fine-tuned to train our model. Six scenes of COSMO-SkyMed images are used to evaluate the classification accuracy of the proposed model. The experimental results reveal that (1) our ship classification model trained by fine-tuning achieves more than 95% average classification accuracy, even under 5-fold cross-validation; and (2) compared with other models, the VGG16-based ship classification model achieves at least 2% higher classification accuracy. These results confirm the effectiveness of the proposed method.


2020, Vol 102 (sp1)
Author(s): Haifei Yu, Changying Wang, Yi Sui, Jinhua Li, Jialan Chu

2019, Vol 37 (1), pp. 125-135
Author(s): Sizhe Huang, Huosheng Xu, Xuezhi Xia, Fan Yang, Fuhao Zou
