Impact Analysis of Radio Frequency Interference on SAR Image Ship Detection Based on Deep Learning

Author(s): Puyang Shao, Xiaoqi Lu, Pingping Huang, Wei Xu, Yifan Dong

2021, Vol 13 (10), pp. 1909
Author(s): Jiahuan Jiang, Xiongjun Fu, Rui Qin, Xiaoyan Wang, Zhifeng Ma

Synthetic Aperture Radar (SAR) has become one of the most important technical means of marine monitoring in remote sensing owing to its all-day, all-weather imaging capability. Monitoring ships in national territorial waters supports maritime law enforcement, maritime traffic control, and national maritime security, so ship detection has long been a hot spot and focus of research. As the field has moved from traditional detection methods to deep-learning-based approaches, most research has relied on ever-growing Graphics Processing Unit (GPU) computing power to propose increasingly complex and computationally intensive strategies, while the transplantation of optical-image detectors has ignored the low signal-to-noise ratio, low resolution, single-channel data, and other characteristics that follow from the SAR imaging principle. Detection accuracy has been pursued at the expense of detection speed and practical deployment: almost all algorithms depend on powerful clustered desktop GPUs and cannot be deployed on the front line of marine monitoring to cope with changing conditions at sea. To address these issues, this paper proposes a multi-channel fusion SAR image processing method that makes full use of the image information and of the network's ability to extract features; it builds on the latest You Only Look Once version 4 (YOLO-V4) deep learning framework for the model architecture and training. The YOLO-V4-light network is tailored for real-time use and deployment, significantly reducing model size, detection time, number of computational parameters, and memory consumption, and the network is refined for three-channel images to compensate for the accuracy lost through light-weighting. The test experiments were completed entirely on a portable computer and achieved an Average Precision (AP) of 90.37% on the SAR Ship Detection Dataset (SSDD), simplifying the model while maintaining a lead over most existing methods. The YOLO-V4-light ship detection algorithm proposed in this paper therefore has strong practical value for maritime safety monitoring and emergency rescue.
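As a hedged illustration of the multi-channel fusion idea in the abstract above (the paper's exact fusion scheme is not given here), one common way to feed single-channel SAR amplitude into a three-channel YOLO-style detector is to stack complementary transforms of the image into a pseudo-RGB array. The function name and the choice of transforms below are illustrative assumptions, not the published method.

```python
import numpy as np

def sar_to_three_channel(amplitude: np.ndarray) -> np.ndarray:
    """Hypothetical multi-channel fusion: stack complementary views of a
    single-channel SAR amplitude image into a 3-channel pseudo-RGB array.
    The specific transforms are illustrative assumptions."""
    amp = amplitude.astype(np.float32)
    # Channel 1: amplitude linearly scaled to [0, 1].
    c1 = (amp - amp.min()) / (amp.max() - amp.min() + 1e-8)
    # Channel 2: log-intensity, compressing the heavy-tailed speckle
    # distribution typical of SAR imagery.
    log_amp = np.log1p(amp)
    c2 = (log_amp - log_amp.min()) / (log_amp.max() - log_amp.min() + 1e-8)
    # Channel 3: local contrast against a 3x3 box mean, emphasizing bright
    # ship targets over sea clutter.
    pad = np.pad(c1, 1, mode="edge")
    local_mean = sum(
        pad[i:i + c1.shape[0], j:j + c1.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    c3 = np.clip(c1 - local_mean + 0.5, 0.0, 1.0)
    # H x W x 3, ready for a detector pretrained on three-channel input.
    return np.stack([c1, c2, c3], axis=-1)
```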


2020, Vol 499 (1), pp. 379-390
Author(s): Alireza Vafaei Sadr, Bruce A Bassett, Nadeem Oozeer, Yabebal Fantaye, Chris Finlay

ABSTRACT Flagging of Radio Frequency Interference (RFI) in time–frequency visibility data is an increasingly important challenge in radio astronomy. We present R-Net, a deep convolutional ResNet architecture that significantly outperforms existing algorithms, including the default MeerKAT RFI flagger and deep U-Net architectures, across all metrics including AUC, F1-score, and MCC. We demonstrate the robustness of this improvement on both single-dish and interferometric simulations and, using transfer learning, on real data. Our R-Net model's precision is approximately 90 per cent better than that of the current MeerKAT flagger at 80 per cent recall, and it has a 35 per cent higher F1-score with no additional performance cost. We further highlight the effectiveness of transfer learning from a model initially trained on simulated MeerKAT data and fine-tuned on real, human-flagged KAT-7 data. Despite the wide differences in the nature of the two telescope arrays, the model achieves an AUC of 0.91, while the best model without transfer learning reaches only an AUC of 0.67. We consider the use of phase information in our models but find that, without calibration, phase adds almost no information beyond the amplitude data alone. Our results strongly suggest that deep learning on simulations, boosted by transfer learning on real data, will likely play a key role in the future of RFI flagging of radio astronomy data.
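A minimal sketch of the kind of residual convolutional flagger described above, assuming PyTorch, per-pixel binary classification of time-frequency amplitude data, and arbitrary small width/depth settings; R-Net's actual architecture and hyperparameters are not reproduced here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """3x3 conv -> BN -> ReLU -> 3x3 conv -> BN, with an identity skip."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)

class ToyRFIFlagger(nn.Module):
    """Illustrative ResNet-style flagger: maps a (batch, 1, time, freq)
    amplitude spectrogram to per-pixel RFI logits of the same shape."""
    def __init__(self, width: int = 32, depth: int = 4):
        super().__init__()
        self.stem = nn.Conv2d(1, width, 3, padding=1)
        self.body = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.head = nn.Conv2d(width, 1, 1)  # 1x1 conv -> one RFI logit per pixel

    def forward(self, x):
        return self.head(self.body(self.stem(x)))

# Training against human or simulator flags would minimize a pixel-wise
# binary cross-entropy, e.g. nn.BCEWithLogitsLoss().
```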


2020, Vol 497 (2), pp. 1661-1674
Author(s): Devansh Agarwal, Kshitij Aggarwal, Sarah Burke-Spolaor, Duncan R Lorimer, Nathaniel Garver-Daniels

ABSTRACT With the upcoming commensal surveys for Fast Radio Bursts (FRBs) and their high candidate rates, machine learning algorithms for candidate classification are a necessity. Such algorithms will also play a pivotal role in sending real-time triggers for prompt follow-up with other instruments. In this paper, we use transfer learning to train state-of-the-art deep neural networks to classify FRB and Radio Frequency Interference (RFI) candidates. These are convolutional neural networks that take radio frequency-time and dispersion measure-time images as inputs. We trained these networks using simulated FRBs and real RFI candidates from telescopes at the Green Bank Observatory. We present 11 deep learning models, each with accuracy and recall above 99.5 per cent on our test data set comprising real RFI and pulsar candidates. As we demonstrate, these algorithms are telescope and frequency agnostic and are able to detect all FRBs with signal-to-noise ratios above 10 in ASKAP and Parkes data. We also provide an open-source Python package, fetch (Fast Extragalactic Transient Candidate Hunter), for classification of candidates using our models. With fetch, these models can be deployed alongside any commensal search pipeline for real-time candidate classification.
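A hedged sketch of the transfer-learning pattern this abstract describes, assuming PyTorch/torchvision and an ImageNet-pretrained backbone; this is not fetch's actual training code, and the backbone choice, freezing policy, and classification head are illustrative assumptions.

```python
import torch.nn as nn
from torchvision import models

def build_candidate_classifier() -> nn.Module:
    """Illustrative transfer-learning classifier for FRB-vs-RFI candidates:
    an ImageNet-pretrained DenseNet backbone whose final layer is replaced
    by a single FRB logit. Backbone and freezing policy are assumptions,
    not fetch's published configuration."""
    backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
    for param in backbone.parameters():
        param.requires_grad = False  # freeze the pretrained features
    # Replace the 1000-class ImageNet head with a binary FRB/RFI head;
    # only this layer is trained on the candidate images.
    backbone.classifier = nn.Linear(backbone.classifier.in_features, 1)
    return backbone

# Training would render each candidate's dispersion measure-time (or
# frequency-time) plane as an image tensor and minimize
# nn.BCEWithLogitsLoss() on simulated-FRB vs real-RFI labels, as the
# abstract describes.
```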

