A Fast and Lightweight Detection Network for Multi-Scale SAR Ship Detection under Complex Backgrounds

2021 ◽  
Vol 14 (1) ◽  
pp. 31
Author(s):  
Jimin Yu ◽  
Guangyu Zhou ◽  
Shangbo Zhou ◽  
Maowei Qin

It is very difficult to detect multi-scale synthetic aperture radar (SAR) ships, especially under complex backgrounds. Traditional constant false alarm rate methods require cumbersome manual design and transfer poorly to new scenes. Deep-learning-based methods have achieved good detection results, but most of them have huge network structures with many parameters, which greatly restricts their application and deployment. In this paper, a fast and lightweight detection network, namely FASC-Net, is proposed for multi-scale SAR ship detection under complex backgrounds. FASC-Net is mainly composed of ASIR-Block, Focus-Block, SPP-Block, and CAPE-Block. Specifically, Focus-Block is placed at the front of FASC-Net to perform the first down-sampling of the input SAR images without losing information. Then, ASIR-Block continues to down-sample the feature maps, using a small number of parameters for feature extraction. After that, SPP-Block enlarges the receptive field of the feature maps, and CAPE-Block performs feature fusion and predicts targets of different scales on different feature maps. On this basis, a novel loss function is designed to train FASC-Net. The detection performance and generalization ability of FASC-Net are demonstrated by a series of comparative experiments on the SSDD dataset, SAR-Ship-Dataset, and HRSID dataset, which show that FASC-Net delivers outstanding detection performance on all three datasets and is superior to existing ship detection methods.
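The abstract does not detail the Focus-Block, but "down-sampling without losing information" matches the space-to-depth "Focus" slicing popularized by YOLOv5; a minimal sketch, assuming that interpretation:

```python
import numpy as np

def focus_downsample(x):
    """Space-to-depth 'Focus' slicing: halve H and W by interleaved
    sampling and stack the four shifted copies along the channel axis,
    so no pixel values are discarded."""
    # x: (C, H, W) with even H and W
    return np.concatenate([
        x[:, 0::2, 0::2],  # top-left pixels of each 2x2 block
        x[:, 1::2, 0::2],  # bottom-left
        x[:, 0::2, 1::2],  # top-right
        x[:, 1::2, 1::2],  # bottom-right
    ], axis=0)  # -> (4C, H/2, W/2)

img = np.arange(1 * 4 * 4, dtype=np.float32).reshape(1, 4, 4)
out = focus_downsample(img)
print(out.shape)  # (4, 2, 2)
```

Each output position now carries its four neighbouring input pixels as extra channels, so spatial resolution is traded for channel depth rather than discarded.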

Author(s):  
Haomiao Liu ◽  
Haizhou Xu ◽  
Lei Zhang ◽  
Weigang Lu ◽  
Fei Yang ◽  
...  

Maritime ship monitoring plays an important role in maritime transportation, and fast, accurate ship detection is its key component. The main sources of marine ship images are optical images and synthetic aperture radar (SAR) images. Unlike natural images, SAR images are independent of daylight and weather conditions. Traditional ship detection methods for SAR images mainly depend on the statistical distribution of sea clutter, which leads to poor robustness. As a deep learning detector, RetinaNet can overcome this obstacle, and the imbalance at the feature and objective levels can be further addressed by combining it with the Libra R-CNN algorithm. In this paper, we modify the feature fusion part of Libra RetinaNet by adding a bottom-up path augmentation structure to better preserve low-level feature information, and we expand the dataset through style transfer. We evaluate our method on a publicly available SAR ship detection dataset with complex backgrounds. The experimental results show that the improved Libra RetinaNet can effectively detect multi-scale ships with the expanded dataset, reaching an average accuracy of 97.38%.
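A minimal sketch of what bottom-up path augmentation on top of FPN outputs can look like (PANet-style; the max-pool down-sampling and elementwise addition are illustrative assumptions, as the abstract does not give the exact fusion operations):

```python
import numpy as np

def downsample2x(f):
    """2x2 max-pool with stride 2 (a stand-in for a stride-2 conv)."""
    c, h, w = f.shape
    return f.reshape(c, h // 2, 2, w // 2, 2).max(axis=(2, 4))

def bottom_up_augmentation(pyramid):
    """pyramid: FPN outputs [P2, P3, ...] ordered high to low resolution.
    Returns [N2, N3, ...] where each coarser level re-injects the level
    below it, shortening the path from low-level detail to deep maps."""
    out = [pyramid[0]]  # N2 = P2
    for p in pyramid[1:]:
        out.append(downsample2x(out[-1]) + p)
    return out

p2 = np.ones((8, 16, 16)); p3 = np.ones((8, 8, 8)); p4 = np.ones((8, 4, 4))
n2, n3, n4 = bottom_up_augmentation([p2, p3, p4])
print(n4.shape)  # (8, 4, 4)
```

The point of the extra path is that fine low-level information reaches the coarse levels in a couple of hops instead of traversing the whole backbone.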


2021 ◽  
Vol 13 (2) ◽  
pp. 328
Author(s):  
Wenkai Liang ◽  
Yan Wu ◽  
Ming Li ◽  
Yice Cao ◽  
Xin Hu

The classification of high-resolution (HR) synthetic aperture radar (SAR) images is of great importance for SAR scene interpretation and application. However, the presence of intricate spatial structural patterns and the complex statistical nature of SAR data make classification a challenging task, especially when labeled SAR data are limited. This paper proposes a novel HR SAR image classification method using a multi-scale deep feature fusion network and a covariance pooling manifold network (MFFN-CPMN). MFFN-CPMN combines the advantages of local spatial features and global statistical properties and considers multi-feature information fusion of SAR images in representation learning. First, we propose a Gabor-filtering-based multi-scale feature fusion network (MFFN), a deep convolutional neural network (CNN), to capture spatial patterns and obtain discriminative features of SAR images. To make full use of a large amount of unlabeled data, the weights of each layer of the MFFN are optimized by an unsupervised denoising dual-sparse encoder. Moreover, the feature fusion strategy in the MFFN can effectively exploit the complementary information between different levels and different scales. Second, we utilize a covariance pooling manifold network to further extract the global second-order statistics of SAR images over the fused feature maps; the resulting covariance descriptor is more discriminative across various land covers. Experimental results on four HR SAR images demonstrate the effectiveness of the proposed method, which achieves promising results compared with related algorithms.
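Covariance pooling itself is a standard operation; a sketch of computing the global second-order descriptor from a fused feature map (the manifold-network stages that follow it in MFFN-CPMN are not reproduced here):

```python
import numpy as np

def covariance_pooling(features):
    """Global second-order pooling: treat each spatial position as a
    C-dimensional sample and return the C x C covariance descriptor."""
    c, h, w = features.shape
    x = features.reshape(c, h * w)            # C channels x N positions
    x = x - x.mean(axis=1, keepdims=True)     # center each channel
    cov = (x @ x.T) / (h * w - 1)             # C x C sample covariance
    # a small ridge keeps the matrix positive definite, which later
    # manifold / log-Euclidean operations require
    return cov + 1e-5 * np.eye(c)

feat = np.random.default_rng(0).normal(size=(16, 8, 8))
desc = covariance_pooling(feat)
print(desc.shape)  # (16, 16)
```

Unlike first-order (average/max) pooling, the descriptor captures how channels co-vary across the scene, which is what "global statistical properties" refers to above.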


2021 ◽  
Vol 13 (18) ◽  
pp. 3650
Author(s):  
Ru Luo ◽  
Jin Xing ◽  
Lifu Chen ◽  
Zhouhao Pan ◽  
Xingmin Cai ◽  
...  

Although deep learning has achieved great success in aircraft detection from SAR imagery, its blackbox behavior has been criticized for low comprehensibility and interpretability. These challenges have impeded the trustworthiness and wide application of deep learning techniques in SAR image analytics. In this paper, we propose an eXplainable Artificial Intelligence (XAI) framework to glassbox deep neural networks (DNNs), using aircraft detection as a case study. The framework is composed of three parts: hybrid global attribution mapping (HGAM) for backbone network selection, a path aggregation network (PANet), and class-specific confidence score mapping (CCSM) for visualization of the detector. HGAM integrates local and global XAI techniques to evaluate the effectiveness of DNN feature extraction; PANet provides advanced feature fusion to generate multi-scale prediction feature maps; and CCSM relies on visualization methods to examine the detection performance of a given DNN on input SAR images. The framework can select the optimal backbone DNN for aircraft detection and map the detection performance for a better understanding of the DNN. We verify its effectiveness with experiments on Gaofen-3 imagery. Our XAI framework offers an explainable approach to designing, developing, and deploying DNNs for SAR image analytics.


2019 ◽  
Vol 11 (5) ◽  
pp. 526 ◽  
Author(s):  
Nengyuan Liu ◽  
Zongjie Cao ◽  
Zongyong Cui ◽  
Yiming Pi ◽  
Sihang Dang

Classic ship detection methods for synthetic aperture radar (SAR) images suffer from extreme variance in ship scale. Generating a set of ship proposals before the detection operation can effectively alleviate this multi-scale problem. To construct a scale-independent proposal generator for SAR images, we identify four characteristics of ships in SAR images and four corresponding procedures, and on this basis we put forward a framework to extract multi-scale ship proposals. The framework contains two stages: hierarchical grouping and proposal scoring. First, we extract edges, superpixels, and strong scattering components from SAR images. Ship proposals are obtained at the hierarchical grouping stage by combining the strong scattering components with superpixel grouping. Then, considering edge-density differences and the completeness and tightness of contours, we compute scores that measure the confidence that a proposal contains a ship, yielding a ranked list of proposals. Extensive experiments demonstrate the effectiveness of the four procedures. Our method achieves an average best overlap (ABO) score of 0.70, an area under the curve (AUC) score of 0.59, and a best recall of 0.85 on a challenging dataset. In addition, the recall of our method on all three scale subsets is above 0.80. Experimental results demonstrate that our algorithm outperforms approaches previously used for SAR images.
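The ABO metric quoted above has a standard definition; a small sketch with illustrative boxes (the ground truths and proposals here are made up for the example):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def average_best_overlap(gt_boxes, proposals):
    """ABO: for every ground-truth ship take its best-overlapping
    proposal, then average those best IoUs over all ground truths."""
    return float(np.mean([max(iou(g, p) for p in proposals)
                          for g in gt_boxes]))

gt = [(0, 0, 10, 10), (20, 20, 30, 30)]
props = [(0, 0, 10, 10), (22, 22, 32, 32)]
print(round(average_best_overlap(gt, props), 3))  # one perfect match, one partial
```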


2021 ◽  
Vol 13 (21) ◽  
pp. 4384
Author(s):  
Danpei Zhao ◽  
Chunbo Zhu ◽  
Jing Qi ◽  
Xinhu Qi ◽  
Zhenhua Su ◽  
...  

This paper takes account of the fact that existing instance segmentation methods, designed for optical images, do not consider the imaging mechanism and target characteristics of synthetic aperture radar (SAR) images. Thus, we propose a method for SAR ship instance segmentation based on a synergistic attention mechanism, which not only improves the performance of ship detection with multi-task branches but also provides pixel-level contours for subsequent applications such as orientation or category determination. The proposed method, SA R-CNN, applies a synergistic attention strategy at the image, semantic, and target levels, with each module corresponding to a different stage of the instance segmentation framework. The global attention module (GAM), semantic attention module (SAM), and anchor attention module (AAM) were constructed for feature extraction, feature fusion, and target location, respectively, for multi-scale ship targets under complex background conditions. Compared with several state-of-the-art methods, our method reaches 68.7 AP in detection and 56.5 AP in segmentation on the HRSID dataset, and 91.5 AP in detection on the SSDD dataset.


2021 ◽  
Vol 2021 ◽  
pp. 1-19
Author(s):  
Yao Chen ◽  
Tao Duan ◽  
Changyuan Wang ◽  
Yuanyuan Zhang ◽  
Mo Huang

Ship detection in synthetic aperture radar (SAR) imagery has many valuable applications in both civil and military fields and has received extraordinary attention in recent years. Traditional detection methods are insensitive to multiscale ships and usually time-consuming, resulting in low detection accuracy and limited suitability for real-time processing. To balance accuracy and speed, an end-to-end ship detection method for complex inshore and offshore scenes, based on deep convolutional neural networks (CNNs), is proposed in this paper. First, the SAR images are divided into grids, and anchor boxes are predefined for the responsible grids for dense ship prediction. Then, Darknet-53 with residual units is adopted as the backbone for feature extraction, and a top-down pyramid structure is added for multiscale feature fusion with concatenation. By this means, abundant hierarchical features containing both spatial and semantic information are extracted. Meanwhile, strategies such as soft non-maximum suppression (Soft-NMS), mix-up and mosaic data augmentation, multiscale training, and hybrid optimization are used to enhance performance. In addition, the model is trained from scratch to avoid the objective bias of pretraining. The proposed one-stage method performs end-to-end inference with a single network, so the detection speed is guaranteed by its concise paradigm. Extensive experiments on the public SAR ship detection dataset (SSDD) show that the method detects both inshore and offshore ships with higher accuracy than other mainstream methods, yielding an average accuracy of 95.52% at a fast detection speed of about 72 frames per second (FPS). Actual Sentinel-1 and Gaofen-3 data are used for verification, and the detection results further show the effectiveness and robustness of the method.
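Soft-NMS is one of the named strategies; a sketch of the Gaussian variant (`sigma` and the score threshold are typical defaults, not values from the paper):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Gaussian Soft-NMS: rather than deleting every box that overlaps a
    higher-scoring one, decay its score by exp(-IoU^2 / sigma), so
    densely packed ships are not suppressed outright."""
    scores = np.asarray(scores, dtype=float).copy()
    keep, idx = [], list(range(len(scores)))
    while idx:
        best = max(idx, key=lambda i: scores[i])
        keep.append(best)
        idx.remove(best)
        for i in idx:
            scores[i] *= np.exp(-iou(boxes[best], boxes[i]) ** 2 / sigma)
        idx = [i for i in idx if scores[i] > score_thresh]
    return keep, scores

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
keep, new_scores = soft_nms(boxes, scores)
print(keep)  # [0, 2, 1] -- the overlapping box survives with a lowered score
```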


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3293
Author(s):  
Yu-Huan Zhao ◽  
Peng Liu

In this paper, we present an adaptive ship detection method for single-look complex synthetic aperture radar (SAR) images. First, noncircularity is analyzed and adopted for the ship detection task; in addition, similarity variance weighted information entropy (SVWIE) is proposed for clutter reduction and target enhancement. Based on the analysis of the scattering behavior of SVWIE and noncircularity, SVWIE-noncircularity (SN) decomposition is developed, from which two components are obtained: the high-noncircularity SVWIE amplitude (h) and the low-noncircularity SVWIE amplitude (l). We demonstrate that ships and clutter in SAR images differ in the h component, so an h-based detector can be effectively used for ship detection. Finally, to extract ships from the background, the generalized Gamma distribution (GΓD) is used to fit the h statistics of clutter, and the constant false alarm rate (CFAR) criterion is used to choose an adaptive threshold. The performance of the proposed method is demonstrated on HH-polarization ALOS-2 images. Experimental results show that the proposed method can accurately detect ships in complex backgrounds, e.g., ships close to small islands or affected by strong noise.
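The CFAR thresholding step can be sketched as follows; the paper fits a generalized Gamma distribution (GΓD) to the clutter, while this stand-in uses the empirical clutter quantile, and the Gamma-distributed data are simulated purely for illustration:

```python
import numpy as np

def cfar_threshold(clutter, pfa=1e-3):
    """Pick the detection threshold so the fraction of clutter samples
    above it equals the desired probability of false alarm (PFA).
    A parametric fit (e.g. GGamma) would replace this empirical
    quantile in a full implementation."""
    return np.quantile(clutter, 1.0 - pfa)

rng = np.random.default_rng(0)
clutter = rng.gamma(shape=2.0, scale=1.0, size=100_000)   # sea clutter
targets = rng.gamma(shape=2.0, scale=1.0, size=100) + 15  # bright ships
t = cfar_threshold(clutter, pfa=1e-3)
print((clutter > t).mean())   # ~0.001 false alarms by construction
print((targets > t).mean())   # detection rate on the simulated ships
```

A parametric GΓD fit generalizes better than the raw quantile when clutter samples are few, which is why the paper models the distribution explicitly.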


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5693
Author(s):  
Yuhang Jiang ◽  
Wanwu Li ◽  
Lin Liu

In recent years, the rapid development of Deep Learning (DL) has provided a new approach to ship detection in Synthetic Aperture Radar (SAR) images. However, four challenges remain in this task. (1) Ship targets in SAR images are very sparse. Traditional anchor-based detection models may generate a large number of unnecessary anchor boxes on the feature map, which greatly increases the amount of computation and makes real-time detection difficult. (2) Ship targets in SAR images are relatively small, and most detection methods perform poorly on small ships in large scenes. (3) The terrestrial background in SAR images is very complicated. Ship targets are susceptible to interference from complex backgrounds, leading to serious false and missed detections. (4) Ship targets in SAR images are characterized by large aspect ratios, arbitrary orientations, and dense arrangements. Traditional horizontal-box detection lets non-target areas interfere with the extraction of ship features, and it is difficult to accurately express the length, width, and axial information of ship targets. To solve these problems, we propose an effective lightweight anchor-free detector called R-Centernet+. Its features are as follows: the Convolutional Block Attention Module (CBAM) is introduced into the backbone network to improve the focus on small ships; the Foreground Enhance Module (FEM) introduces foreground information to reduce interference from complex backgrounds; and a detection head that outputs a ship angle map is designed to realize rotation detection of ship targets. To verify the validity of the proposed model, experiments are performed on two public SAR image datasets, i.e., the SAR Ship Detection Dataset (SSDD) and AIR-SARShip.
The results show that the proposed R-Centernet+ detector can detect both inshore and offshore ships with higher accuracy than traditional models, achieving an average precision of 95.11% on SSDD and 84.89% on AIR-SARShip at a fast detection speed of 33 frames per second.
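A sketch of how an anchor-free, CenterNet-style head with an added angle map might be decoded (the map layouts and the 3x3 peak-picking are assumptions based on CenterNet, not details given in the abstract):

```python
import numpy as np

def decode_rotated(heatmap, wh_map, angle_map, thresh=0.5):
    """Decode rotated boxes: local maxima of the center heatmap give
    ship centers; width/height and angle maps are read out there."""
    h, w = heatmap.shape
    # 3x3 max-pool (via 9 shifted views) keeps only local peaks
    padded = np.pad(heatmap, 1, constant_values=-np.inf)
    pooled = np.max([padded[dy:dy + h, dx:dx + w]
                     for dy in range(3) for dx in range(3)], axis=0)
    ys, xs = np.where((heatmap == pooled) & (heatmap > thresh))
    return [(x, y, *wh_map[:, y, x], angle_map[y, x], heatmap[y, x])
            for y, x in zip(ys, xs)]  # (cx, cy, w, h, angle, score)

hm = np.zeros((8, 8)); hm[3, 4] = 0.9             # one peak at (x=4, y=3)
wh = np.zeros((2, 8, 8)); wh[:, 3, 4] = (20.0, 6.0)  # elongated ship
ang = np.zeros((8, 8)); ang[3, 4] = 0.3           # rotation in radians
print(decode_rotated(hm, wh, ang))
```

Because boxes come from per-pixel peaks instead of anchor enumeration, no anchor boxes are generated over the empty sea, which is the sparsity argument made in challenge (1).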


2019 ◽  
Vol 11 (5) ◽  
pp. 594 ◽  
Author(s):  
Shuo Zhuang ◽  
Ping Wang ◽  
Boran Jiang ◽  
Gang Wang ◽  
Cong Wang

With rapid advances in remote-sensing technologies and the growing number of satellite images, fast and effective object detection plays an important role in understanding and analyzing image information, with applications in civilian and military fields. Recently, object detection methods based on region-based convolutional neural networks have shown excellent performance. However, these two-stage methods contain region proposal generation and object detection procedures, resulting in low computation speed. Because of expensive manual annotation costs, well-annotated aerial images are scarce, which also limits the progress of geospatial object detection in remote sensing. In this paper, on the one hand, we construct and release a large-scale remote-sensing dataset for geospatial object detection (RSD-GOD) that consists of 5 different categories with 18,187 annotated images and 40,990 instances. On the other hand, we design a single-shot detection framework with multi-scale feature fusion. The feature maps from different layers are fused through up-sampling and concatenation blocks to predict the detection results. High-level features with semantic information and low-level features with fine details are thus fully exploited for detection, especially for small objects. Meanwhile, a soft non-maximum suppression strategy is used to select the final detection results. Extensive experiments have been conducted on two datasets to evaluate the designed network. Results show that the proposed approach achieves good detection performance, obtaining a mean average precision of 89.0% on the newly constructed RSD-GOD dataset and 83.8% on the Northwestern Polytechnical University very high spatial resolution-10 (NWPU VHR-10) dataset at 18 frames per second (FPS) on an NVIDIA GTX-1080Ti GPU.
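The up-sampling-and-concatenation fusion described above can be sketched as follows (nearest-neighbour up-sampling stands in for whatever up-sampling block the paper uses; channel counts are illustrative):

```python
import numpy as np

def upsample2x(f):
    """Nearest-neighbour 2x up-sampling of a (C, H, W) feature map."""
    return f.repeat(2, axis=1).repeat(2, axis=2)

def fuse_top_down(high, low):
    """Up-sample the coarse, semantically strong map and concatenate it
    with the finer, detail-rich map along the channel axis."""
    return np.concatenate([upsample2x(high), low], axis=0)

high = np.ones((256, 8, 8))   # deep layer: rich semantics, low resolution
low = np.ones((128, 16, 16))  # shallow layer: fine spatial detail
fused = fuse_top_down(high, low)
print(fused.shape)  # (384, 16, 16)
```

Concatenation keeps both feature sets intact and lets the subsequent prediction convolutions learn how to weight semantics against detail, which particularly benefits small objects.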


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8146
Author(s):  
Haozhen Zhu ◽  
Yao Xie ◽  
Huihui Huang ◽  
Chen Jing ◽  
Yingjiao Rong ◽  
...  

With the wide application of convolutional neural networks (CNNs), a variety of CNN-based ship detection methods for synthetic aperture radar (SAR) images have been proposed, but two main challenges remain: (1) ship detection requires high real-time performance, so a certain detection speed must be ensured while improving accuracy; (2) the diversity of ships in SAR images requires more powerful multi-scale detectors. To address these issues, a SAR ship detector called Duplicate Bilateral YOLO (DB-YOLO) is proposed in this paper, composed of a Feature Extraction Network (FEN), a Duplicate Bilateral Feature Pyramid Network (DB-FPN), and a Detection Network (DN). First, a single-stage network is used to meet the need for real-time detection, and the cross stage partial (CSP) block is used to reduce redundant parameters. Second, DB-FPN is designed to enhance the fusion of semantic and spatial information. Since ships in SAR images are mainly small-scale targets, the distribution of parameters and computation between the FEN and DB-FPN across feature layers is rebalanced to improve multi-scale detection. Finally, bounding boxes and confidence scores are produced by the YOLO detection head. To evaluate the effectiveness and robustness of DB-YOLO, comparative experiments with six state-of-the-art methods (Faster R-CNN, Cascade R-CNN, Libra R-CNN, FCOS, CenterNet, and YOLOv5s) are performed on two SAR ship datasets, i.e., SSDD and HRSID. The experimental results show that the AP50 of DB-YOLO reaches 97.8% on SSDD and 94.4% on HRSID. DB-YOLO meets the requirement of real-time detection (48.1 FPS) and outperforms the other methods in the experiments.
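AP50 is the headline metric here; a compact sketch of computing average precision from confidence-ranked detections already matched to ground truth at IoU >= 0.5 (the IoU matching step itself is assumed done, and the flags below are made-up inputs):

```python
import numpy as np

def average_precision(tp_flags, num_gt):
    """AP from detections sorted by descending confidence.
    tp_flags[i] is 1 if detection i matches a previously unmatched
    ground truth at IoU >= 0.5 (the '50' in AP50), else 0."""
    tp = np.cumsum(tp_flags)
    fp = np.cumsum(1 - np.asarray(tp_flags))
    recall = tp / num_gt
    precision = tp / (tp + fp)
    # precision envelope: make precision monotonically non-increasing
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    # sum precision * recall increments (all-point interpolation)
    prev_r = np.concatenate(([0.0], recall[:-1]))
    return float(np.sum((recall - prev_r) * precision))

# 3 of 4 ground-truth ships found; one false positive ranked third
print(average_precision([1, 1, 0, 1], num_gt=4))  # 0.6875
```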

