Multi-Scale and Dense Ship Detection in SAR Images Based on Key-Point Estimation and Attention Mechanism

Author(s): Xiaorui Ma, Shilong Hou, Yangyang Wang, Jie Wang, Hongyu Wang
2015, Vol 7 (6), pp. 7695-7711
Author(s): Xiaojing Huang, Wen Yang, Haijian Zhang, Gui-Song Xia

Author(s): Haomiao Liu, Haizhou Xu, Lei Zhang, Weigang Lu, Fei Yang, ...

Maritime ship monitoring plays an important role in maritime transportation, and fast, accurate ship detection is its key component. The main sources of marine ship images are optical images and synthetic aperture radar (SAR) images. Unlike natural images, SAR images are independent of daylight and weather conditions. Traditional ship detection methods for SAR images mainly depend on the statistical distribution of sea clutter, which leads to poor robustness. As a deep learning detector, RetinaNet can overcome this obstacle, and the imbalance at the feature and objective levels can be further reduced by combining it with the Libra R-CNN algorithm. In this paper, we modify the feature fusion part of Libra RetinaNet by adding a bottom-up path augmentation structure to better preserve low-level feature information, and we expand the dataset through style transfer. We evaluate our method on a publicly available SAR ship detection dataset with complex backgrounds. The experimental results show that the improved Libra RetinaNet, together with the expanded dataset, can effectively detect multi-scale ships, with an average accuracy of 97.38%.
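The bottom-up path augmentation mentioned above can be sketched in a few lines. This is a minimal numpy illustration of the PANet-style idea (downsample the finer fused level and add it to the next pyramid output so low-level detail propagates upward), assuming a simple max-pool downsampler; it is not the authors' implementation, and all names are illustrative.

```python
import numpy as np

def downsample2x(x):
    """Stride-2, 2x2 max pooling over an (H, W, C) feature map."""
    h, w, c = x.shape
    x = x[: h - h % 2, : w - w % 2]
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def bottom_up_path(pyramid):
    """Bottom-up path augmentation over FPN outputs listed from the
    finest level (largest map) to the coarsest: N_{i+1} = down(N_i) + P_{i+1}."""
    n = [pyramid[0]]                      # N2 = P2, finest level unchanged
    for p in pyramid[1:]:
        n.append(downsample2x(n[-1]) + p)
    return n

# Toy pyramid: three levels with halving spatial size, 8 channels each.
p = [np.random.rand(2 ** k, 2 ** k, 8) for k in (5, 4, 3)]
n = bottom_up_path(p)
```

Element-wise addition is used here for fusion; a real detector would typically insert a convolution after each fusion step.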


2019, Vol 11 (5), pp. 526
Author(s): Nengyuan Liu, Zongjie Cao, Zongyong Cui, Yiming Pi, Sihang Dang

Classic ship detection methods for synthetic aperture radar (SAR) images suffer from the extreme variance of ship scale. Generating a set of ship proposals before the detection step can effectively alleviate this multi-scale problem. To construct a scale-independent proposal generator for SAR images, this paper identifies four characteristics of ships in SAR images and four corresponding procedures. Based on these, we put forward a framework for extracting multi-scale ship proposals. The framework contains two stages: hierarchical grouping and proposal scoring. First, we extract edges, superpixels, and strong scattering components from SAR images. At the hierarchical grouping stage, ship proposals are obtained by combining the strong scattering components with superpixel grouping. Then, considering edge-density differences and the completeness and tightness of contours, we score each proposal to measure the confidence that it contains a ship. Finally, the proposals are ranked by score. Extensive experiments demonstrate the effectiveness of the four procedures. On a challenging dataset, our method achieves an average best overlap (ABO) score of 0.70, an area-under-the-curve (AUC) score of 0.59, and a best recall of 0.85; its recall on all three scale subsets is above 0.80. Experimental results demonstrate that our algorithm outperforms approaches previously used for SAR images.
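The average best overlap (ABO) score reported above is a standard proposal-quality metric: for each ground-truth box, take the best IoU achieved by any proposal, then average over the ground truths. A self-contained sketch with axis-aligned boxes (all names illustrative):

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_best_overlap(gt_boxes, proposals):
    """ABO: mean over ground-truth boxes of the best IoU with any proposal."""
    return sum(max(iou(g, p) for p in proposals) for g in gt_boxes) / len(gt_boxes)

# A proposal covering half of the single ground-truth box gives IoU 0.5.
abo = average_best_overlap([(0, 0, 10, 10)], [(0, 0, 5, 10), (40, 40, 50, 50)])
```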


2021, Vol 13 (21), pp. 4384
Author(s): Danpei Zhao, Chunbo Zhu, Jing Qi, Xinhu Qi, Zhenhua Su, ...

Existing instance segmentation methods are designed for optical images and give little consideration to the imaging mechanism and target characteristics of synthetic aperture radar (SAR) images. We therefore propose a SAR ship instance segmentation method based on a synergistic attention mechanism, which not only improves ship detection performance through multi-task branches but also provides pixel-level contours for subsequent applications such as orientation or category determination. The proposed method, SA R-CNN, applies a synergistic attention strategy at the image, semantic, and target levels, with a module corresponding to each stage of the instance segmentation framework. A global attention module (GAM), a semantic attention module (SAM), and an anchor attention module (AAM) are constructed for feature extraction, feature fusion, and target location, respectively, targeting multi-scale ships under complex background conditions. Compared with several state-of-the-art methods, our method reaches 68.7 AP in detection and 56.5 AP in segmentation on the HRSID dataset, and 91.5 AP in detection on the SSDD dataset.
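The abstract does not spell out the internals of GAM, SAM, or AAM, so as a generic illustration of the channel-wise attention such modules typically build on, here is a squeeze-and-excitation-style gate in plain numpy (weights and shapes are assumptions, not the authors' design):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """SE-style channel attention on an (H, W, C) map: global average pool,
    a two-layer bottleneck, then a sigmoid gate per channel."""
    squeeze = feat.mean(axis=(0, 1))                      # (C,) global context
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))    # (C,) gates in (0, 1)
    return feat * gate                                    # broadcast over H, W

rng = np.random.default_rng(0)
feat = np.ones((4, 4, 8))
w1 = rng.standard_normal((2, 8))   # squeeze 8 channels to 2 hidden units
w2 = rng.standard_normal((8, 2))   # restore to 8 per-channel gates
out = channel_attention(feat, w1, w2)
```

The gate depends only on global channel statistics, so the same per-channel scaling is applied at every spatial position.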


2021, Vol 14 (1), pp. 31
Author(s): Jimin Yu, Guangyu Zhou, Shangbo Zhou, Maowei Qin

Detecting multi-scale synthetic aperture radar (SAR) ships is very difficult, especially under complex backgrounds. Traditional constant false alarm rate methods require cumbersome manual design and generalize poorly. Deep learning methods achieve better detection results, but most of them have huge network structures and many parameters, which greatly restricts their application. In this paper, a fast and lightweight detection network, FASC-Net, is proposed for multi-scale SAR ship detection under complex backgrounds. FASC-Net is mainly composed of ASIR-Block, Focus-Block, SPP-Block, and CAPE-Block. Specifically, Focus-Block is placed at the front of FASC-Net to perform the first down-sampling of the input SAR image without losing information. ASIR-Block then continues to down-sample the feature maps, extracting features with a small number of parameters. After that, SPP-Block enlarges the receptive field of the feature maps, and CAPE-Block performs feature fusion and predicts targets of different scales on different feature maps. A novel loss function is also designed to train FASC-Net. The detection performance and generalization ability of FASC-Net are demonstrated by a series of comparative experiments on the SSDD dataset, SAR-Ship-Dataset, and HRSID dataset, which show that FASC-Net achieves outstanding detection performance on all three datasets and is superior to existing ship detection methods.
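The "down-sampling without losing information" that a Focus-style block performs is a space-to-depth rearrangement, as in YOLOv5's Focus layer: every 2x2 spatial neighborhood is split across four channel groups, halving resolution while keeping every pixel. A minimal numpy sketch of that slicing (assuming this is what FASC-Net's Focus-Block does; the abstract does not give its exact definition):

```python
import numpy as np

def focus_slice(x):
    """Space-to-depth slicing, (H, W, C) -> (H/2, W/2, 4C): the four
    stride-2 sub-grids of the image are stacked along the channel axis,
    so the 2x down-sampling is lossless."""
    return np.concatenate(
        [x[::2, ::2], x[1::2, ::2], x[::2, 1::2], x[1::2, 1::2]], axis=-1
    )

img = np.arange(4 * 4 * 3).reshape(4, 4, 3).astype(float)
out = focus_slice(img)
```

A convolution over the 4C-channel result can then extract features at half resolution without having discarded any input values.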


Sensors, 2021, Vol 21 (23), pp. 8146
Author(s): Haozhen Zhu, Yao Xie, Huihui Huang, Chen Jing, Yingjiao Rong, ...

With the wide application of convolutional neural networks (CNNs), a variety of CNN-based ship detection methods for synthetic aperture radar (SAR) images have been proposed, but two main challenges remain: (1) ship detection requires high real-time performance, so a certain detection speed must be maintained while improving accuracy; (2) the diversity of ships in SAR images calls for more powerful multi-scale detectors. To address these issues, this paper proposes a SAR ship detector called Duplicate Bilateral YOLO (DB-YOLO), which is composed of a Feature Extraction Network (FEN), a Duplicate Bilateral Feature Pyramid Network (DB-FPN), and a Detection Network (DN). First, a single-stage network is used to meet the need for real-time detection, with cross stage partial (CSP) blocks reducing redundant parameters. Second, DB-FPN is designed to enhance the fusion of semantic and spatial information. Since ships in SAR images are mostly small-scale targets, the parameters and computation are redistributed between FEN and DB-FPN across feature layers to improve multi-scale detection. Finally, bounding boxes and confidence scores are produced by the YOLO detection head. To evaluate the effectiveness and robustness of DB-YOLO, comparative experiments with six state-of-the-art methods (Faster R-CNN, Cascade R-CNN, Libra R-CNN, FCOS, CenterNet, and YOLOv5s) were performed on two SAR ship datasets, SSDD and HRSID. The results show that the AP50 of DB-YOLO reaches 97.8% on SSDD and 94.4% on HRSID. DB-YOLO meets the requirement of real-time detection (48.1 FPS) and outperforms the other methods in the experiments.
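The AP50 figures quoted above come from the usual single-class evaluation recipe: sort detections by confidence, greedily match each to an unclaimed ground-truth box at IoU >= 0.5, and integrate the resulting precision-recall curve. A simplified sketch (real COCO/VOC evaluators match to the best-IoU ground truth and interpolate precision; names here are illustrative):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def ap50(detections, gt_boxes):
    """AP at IoU 0.5 for one class: detections are (score, box) pairs."""
    dets = sorted(detections, key=lambda d: -d[0])
    matched = [False] * len(gt_boxes)
    tp, fp, points = 0, 0, []
    for _, box in dets:
        hit = next((i for i, g in enumerate(gt_boxes)
                    if not matched[i] and iou(box, g) >= 0.5), -1)
        if hit >= 0:
            matched[hit] = True
            tp += 1
        else:
            fp += 1
        points.append((tp / (tp + fp), tp / len(gt_boxes)))
    # Step-wise integration of the precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for p, r in points:
        ap += p * (r - prev_r)
        prev_r = r
    return ap
```

With perfect detections AP50 is 1.0; a confident false positive that precedes a miss pulls it down, which is why both accuracy and localization quality matter for the scores reported here.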


2019, Vol 57 (11), pp. 8983-8997
Author(s): Zongyong Cui, Qi Li, Zongjie Cao, Nengyuan Liu
