Multi-Task Learning for Generalized Automatic Modulation Classification Under Non-Gaussian Noise with Varying SNR Conditions

2020 ◽  
Author(s):  
Yu Wang ◽  
Guan Gui ◽  
Tomoaki Ohtsuki ◽  
Fumiyuki Adachi

Automatic modulation classification (AMC) is a critical step for identifying signal modulation types and thus enabling more accurate demodulation in non-cooperative scenarios. Convolutional neural network (CNN)-based AMC is regarded as one of the most promising approaches owing to its high classification accuracy. However, conventional CNN-based methods lack generalization capability under time-varying signal-to-noise ratio (SNR) conditions, because they are trained on specific datasets and work only under the corresponding conditions. In this paper, a novel CNN-based generalized AMC method is proposed, and a more realistic scenario is considered, including white non-Gaussian noise and synchronization error. Its generalization capability stems from mixed datasets spanning varying noise scenarios, from which the CNN can extract common features. Simulation results show that the proposed architecture achieves higher robustness and better generalization than conventional ones.
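As a rough illustration of the idea described above, the sketch below trains a single small CNN on a mixture of datasets drawn from different SNR conditions, so that the network is forced to learn features common to all of them. This is a minimal sketch, not the authors' implementation: the SNR grid, frame length, class count, network shape, and the synthetic data generator are all assumptions for illustration.

```python
# Minimal sketch (assumed setup): one CNN trained on a mixture of datasets
# generated under different SNR conditions, so it learns shared features.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, ConcatDataset, DataLoader

NUM_CLASSES = 4   # e.g. BPSK / QPSK / 8PSK / 16QAM (assumed)
FRAME_LEN = 128   # I/Q samples per frame (assumed)

def synthetic_iq_dataset(snr_db, num_frames=1000):
    """Placeholder for loading one dataset recorded at a given SNR."""
    x = np.random.randn(num_frames, 2, FRAME_LEN).astype(np.float32)  # I/Q channels
    y = np.random.randint(0, NUM_CLASSES, size=num_frames)
    return TensorDataset(torch.from_numpy(x), torch.from_numpy(y))

# Mix datasets from several SNR conditions into one training set.
mixed_train = ConcatDataset([synthetic_iq_dataset(snr) for snr in (-5, 0, 5, 10)])
loader = DataLoader(mixed_train, batch_size=64, shuffle=True)

# Small 1-D CNN over the (2 x FRAME_LEN) I/Q frame.
model = nn.Sequential(
    nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, NUM_CLASSES),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(3):  # a few epochs, just to show the training loop
    for frames, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
```

The only structural difference from a conventional per-SNR pipeline is the `ConcatDataset` step: instead of training one model per condition, all conditions are pooled before training.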


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8252
Author(s):  
Zhan Ge ◽  
Hongyu Jiang ◽  
Youwei Guo ◽  
Jie Zhou

Feature-based automatic modulation classification (FB-AMC) algorithms have been widely investigated because of their good performance and low complexity. In this study, a deep learning model was designed to analyze the classification performance of FB-AMC across the most commonly used features, including higher-order cumulants (HOC), features based on fuzzy c-means clustering (FCM), the grid-like constellation diagram (GCD), the cumulative distribution function (CDF), and raw IQ data. A novel end-to-end modulation classifier based on deep learning, named the CCT classifier, which can automatically identify unknown modulation schemes from extracted features using a general architecture, was proposed. Features other than GCD are first converted into two-dimensional representations. Then, each feature is fed into the CCT classifier for modulation classification. In addition, a Gaussian channel, phase offset, frequency offset, a non-Gaussian channel, and a flat-fading channel are introduced to compare the performance of the different features, and transfer learning is used to reduce training time. Experimental results showed that HOC, raw IQ data, and GCD achieved better classification performance than CDF and FCM under the Gaussian channel, while CDF and FCM were less sensitive to the given phase offset and frequency offset. Moreover, CDF was an effective feature for AMC under non-Gaussian and flat-fading channels, and raw IQ data could be applied under different channel conditions. Finally, compared with existing CNN and K-S classifiers, the proposed CCT classifier significantly improved classification performance for MQAM at N = 512, by about 3.2% and 2.1%, respectively, under the Gaussian channel.
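To make the feature-extraction stage concrete, the sketch below computes the standard higher-order cumulants (C20, C21, C40, C41, C42) mentioned above from a complex baseband frame. It is a minimal sketch of HOC features in general, not the paper's exact pipeline: the unit-power normalization, the selected cumulant set, and the QPSK test frame are assumptions for illustration.

```python
# Minimal sketch (assumed details): higher-order cumulant (HOC) features
# commonly used in feature-based AMC.
import numpy as np

def moment(x, p, q):
    """Mixed moment M_pq = E[ x^(p-q) * conj(x)^q ] of complex samples x."""
    return np.mean(x ** (p - q) * np.conj(x) ** q)

def hoc_features(x):
    """Return |C20|, C21, |C40|, |C41|, |C42| of a unit-power complex frame."""
    x = x / np.sqrt(np.mean(np.abs(x) ** 2))    # unit-power normalization
    m20, m21 = moment(x, 2, 0), moment(x, 2, 1)
    m40, m41, m42 = moment(x, 4, 0), moment(x, 4, 1), moment(x, 4, 2)
    c20 = m20
    c21 = m21                                    # equals 1 after normalization
    c40 = m40 - 3 * m20 ** 2
    c41 = m41 - 3 * m20 * m21
    c42 = m42 - np.abs(m20) ** 2 - 2 * m21 ** 2
    return np.array([abs(c20), c21.real, abs(c40), abs(c41), abs(c42)])

# Example: cumulants of a noisy QPSK frame (frame length chosen arbitrarily).
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.random.randint(0, 4, 512)))
noisy = symbols + 0.1 * (np.random.randn(512) + 1j * np.random.randn(512))
print(hoc_features(noisy))
```

In an FB-AMC pipeline of the kind described in the abstract, such feature vectors (or their two-dimensional arrangements) would then be fed to the downstream classifier in place of the raw IQ samples.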

