An Efficient Radio Frequency Interference (RFI) Recognition and Characterization Using End-to-End Transfer Learning

2020 ◽  
Vol 10 (19) ◽  
pp. 6885
Author(s):  
Sahar Ujan ◽  
Neda Navidi ◽  
Rene Jr Landry

Radio Frequency Interference (RFI) detection and characterization play a critical role in ensuring the security of all wireless communication networks. Advances in Machine Learning (ML) have led to the deployment of many robust techniques dealing with various types of RFI. To sidestep the complicated feature-extraction step that ML otherwise requires, this paper proposes an efficient end-to-end method that uses the latest advances in deep learning to extract the appropriate features of the RFI signal. Moreover, this study exploits transfer learning to determine both the type of the received RFI signals and their modulation types. To this end, the scalogram of the received signals is used as the input to a pre-trained convolutional neural network (CNN), followed by a fully-connected classifier. This study considers a digital video stream as the signal of interest (SoI), transmitted in real-time satellite-to-ground communication using the DVB-S2 standard. To create the RFI dataset, the SoI is combined with three well-known jammers, namely continuous-wave interference (CWI), multi-continuous-wave interference (MCWI), and chirp interference (CI). This study investigates four well-known pre-trained CNN architectures, namely AlexNet, VGG-16, GoogleNet, and ResNet-18, for feature extraction to recognize the visual RFI patterns directly from pixel images with minimal preprocessing. Moreover, the robustness of the proposed classifiers is evaluated on data generated at different signal-to-noise ratios (SNRs).
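The scalogram input described above is the magnitude of a continuous wavelet transform of the received signal. A minimal numpy sketch is given below; it is illustrative only, not the authors' code, and the Morlet wavelet, its center frequency of 5 rad/sample, and the scale range are all assumptions:

```python
import numpy as np

def scalogram(signal, scales, fs=1.0):
    """Magnitude of a Morlet continuous wavelet transform: one row per scale."""
    n = len(signal)
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        # Sample a complex Morlet wavelet dilated by scale s
        t = np.arange(-4 * s, 4 * s + 1) / fs
        wavelet = np.exp(1j * 5.0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)  # energy normalization across scales
        out[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return out

# Example: a chirp interferer (one of the jammer types in the study)
# traces a sweeping ridge across scales in the scalogram image.
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
chirp = np.cos(2 * np.pi * (50 + 100 * t) * t)
S = scalogram(chirp, scales=np.arange(1, 33), fs=fs)
print(S.shape)  # (32, 1000): 32 scales by 1000 time samples
```

In the pipeline the abstract describes, such a 2-D array would be rendered as an image and fed to the pre-trained CNN feature extractor.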



2020 ◽  
Vol 499 (1) ◽  
pp. 379-390
Author(s):  
Alireza Vafaei Sadr ◽  
Bruce A Bassett ◽  
Nadeem Oozeer ◽  
Yabebal Fantaye ◽  
Chris Finlay

Flagging of Radio Frequency Interference (RFI) in time–frequency visibility data is an increasingly important challenge in radio astronomy. We present R-Net, a deep convolutional ResNet architecture that significantly outperforms existing algorithms – including the default MeerKAT RFI flagger, and deep U-Net architectures – across all metrics including AUC, F1-score, and MCC. We demonstrate the robustness of this improvement on both single dish and interferometric simulations and, using transfer learning, on real data. Our R-Net model’s precision is approximately 90 per cent better than the current MeerKAT flagger at 80 per cent recall and has a 35 per cent higher F1-score with no additional performance cost. We further highlight the effectiveness of transfer learning from a model initially trained on simulated MeerKAT data and fine-tuned on real, human-flagged, KAT-7 data. Despite the wide differences in the nature of the two telescope arrays, the model achieves an AUC of 0.91, while the best model without transfer learning only reaches an AUC of 0.67. We consider the use of phase information in our models but find that without calibration the phase adds almost no extra information relative to amplitude data only. Our results strongly suggest that deep learning on simulations, boosted by transfer learning on real data, will likely play a key role in the future of RFI flagging of radio astronomy data.
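RFI flagging is a per-pixel binary classification over the time–frequency plane, which is why the abstract reports F1-score and MCC alongside AUC. A small numpy sketch of those two metrics on toy flag masks follows; the masks are made-up examples, not data from the paper:

```python
import numpy as np

def confusion(pred, true):
    """Per-pixel confusion counts for boolean RFI masks (True = flagged)."""
    tp = int(np.sum(pred & true))
    fp = int(np.sum(pred & ~true))
    fn = int(np.sum(~pred & true))
    tn = int(np.sum(~pred & ~true))
    return tp, fp, fn, tn

def f1_score(pred, true):
    tp, fp, fn, _ = confusion(pred, true)
    return 2 * tp / (2 * tp + fp + fn)

def mcc(pred, true):
    # Matthews correlation coefficient: robust when RFI pixels are rare
    tp, fp, fn, tn = confusion(pred, true)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy ground-truth and predicted flag masks over 8 time-frequency pixels
true = np.array([1, 1, 0, 0, 1, 0, 0, 1], dtype=bool)
pred = np.array([1, 0, 0, 0, 1, 1, 0, 1], dtype=bool)
print(f1_score(pred, true), mcc(pred, true))  # 0.75 0.5
```

MCC is often preferred over accuracy here because flagged pixels are typically a small fraction of the visibility data, so a trivial "flag nothing" classifier would still score high accuracy.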

