Robust TDOA Estimation Based on Time-Frequency Masking and Deep Neural Networks

Author(s):  
Zhong-Qiu Wang ◽  
Xueliang Zhang ◽  
DeLiang Wang
2020 ◽  
Vol 10 (11) ◽  
pp. 2764-2767
Author(s):  
Chuanbin Ge ◽  
Di Liu ◽  
Juan Liu ◽  
Bingshuai Liu ◽  
Yi Xin

Arrhythmia is a group of conditions in which the heartbeat is irregular; there are many types, and some can be life-threatening. The electrocardiogram (ECG) is an effective clinical tool used to diagnose arrhythmia, and automatic recognition of different arrhythmia types in ECG signals has become an important and challenging problem. In this article, we propose an algorithm to detect arrhythmia in 12-lead ECG signals and classify the signals into 9 categories. Two 19-layer deep neural networks combining a convolutional neural network and gated recurrent units were designed to realize this work. The first was trained directly on the raw 12-lead ECG data, while the other was trained on 18-"lead" ECG data, in which the six extra leads, containing morphology information in the fractional time–frequency domain, were generated using the fractional Fourier transform (FRFT). Overall detection results were obtained by fusing the outputs of the two networks; on the testing dataset, the proposed algorithm obtained an F1 score of 0.855. Furthermore, with the proposed algorithm, a better F1 score of 0.81 was attained on the training dataset provided by the China Physiological Signal Challenge held in 2018.
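The networks above combine convolutional layers with gated recurrent units (GRUs). As a hedged illustration of the recurrent component only (not the authors' 19-layer architecture; all weights, dimensions, and the toy usage below are assumptions), a single forward step of a GRU cell can be written in NumPy:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h_prev, params):
    """One forward step of a gated recurrent unit (Cho et al. formulation).

    x: input vector (n_in,); h_prev: previous hidden state (n_hid,).
    params: weight matrices W_* (n_hid, n_in), U_* (n_hid, n_hid) and
    biases b_* (n_hid,) for the update (z), reset (r) and candidate (h) gates.
    """
    z = sigmoid(params["W_z"] @ x + params["U_z"] @ h_prev + params["b_z"])  # update gate
    r = sigmoid(params["W_r"] @ x + params["U_r"] @ h_prev + params["b_r"])  # reset gate
    h_tilde = np.tanh(params["W_h"] @ x + params["U_h"] @ (r * h_prev) + params["b_h"])
    return (1.0 - z) * h_prev + z * h_tilde  # interpolate old state and candidate

# toy usage: 12 input channels (one per ECG lead), 8 hidden units
rng = np.random.default_rng(0)
n_in, n_hid = 12, 8
params = {f"{m}_{g}": rng.standard_normal((n_hid, n_in if m == "W" else n_hid)) * 0.1
          for m in ("W", "U") for g in ("z", "r", "h")}
params.update({f"b_{g}": np.zeros(n_hid) for g in ("z", "r", "h")})
h = np.zeros(n_hid)
for t in range(5):  # run five time steps on random inputs
    h = gru_cell(rng.standard_normal(n_in), h, params)
print(h.shape)  # → (8,)
```

In a full model as described, such a recurrent layer would sit on top of convolutional feature extractors rather than consume raw samples directly.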


Passive acoustic target classification is an exceptionally challenging problem due to the complex phenomena associated with the channel and the relatively low Signal to Noise Ratio (SNR) caused by the pervasive ambient noise field. Inspired by the overwhelming success of Deep Neural Networks (DNNs) on many such hard problems, a network carefully crafted for the target recognition application has been employed in this work. Although deep neural networks can learn characteristic features or representations directly from the raw observations, domain-specific intermediate representations can mitigate the computational requirements as well as the sample complexity required to achieve an acceptable prediction error rate. As the sonar target records are essentially time series, spectro-temporal representations can make the intricate relationship between time and spectral components more explicit. In a passive sonar target recognition scenario, since most of the defining spectral components reside in the lower part of the spectrum, a nonlinear, dilated spectral scale with emphasis on low frequencies is highly desirable. This can be achieved using a filterbank-based time-frequency decomposition, which allows more filters to be positioned at the frequency ranges of interest. In this work, a rigorous analysis of the performance of time-frequency representations initialized at various frequency scales is conducted, independently as well as in combination. A convolutional neural network-based spectro-temporal feature learner has been utilized as the initial layers, while a deep stack of Long Short-Term Memory (LSTM) networks with residual connections has been used for learning the intricate temporal relationships hidden in the intermediate representations.
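The low-frequency emphasis described above comes from spacing filter center frequencies uniformly on an auditory scale rather than linearly in Hz. A minimal sketch, assuming the Glasberg–Moore ERB-rate scale commonly used for gammatone filterbanks (function names and parameter values are illustrative, not from the paper):

```python
import numpy as np

def erb_rate(f_hz):
    """Glasberg-Moore ERB-rate scale: maps frequency in Hz to ERB number."""
    return 21.4 * np.log10(1.0 + 0.00437 * f_hz)

def inverse_erb_rate(erb):
    """Inverse mapping: ERB number back to frequency in Hz."""
    return (10.0 ** (erb / 21.4) - 1.0) / 0.00437

def gammatone_center_freqs(f_lo, f_hi, n_filters):
    """Center frequencies equally spaced on the ERB-rate scale."""
    erbs = np.linspace(erb_rate(f_lo), erb_rate(f_hi), n_filters)
    return inverse_erb_rate(erbs)

cf = gammatone_center_freqs(50.0, 8000.0, 64)
# spacing in Hz grows with frequency, so the low end gets many more
# filters than a linear scale would place there
low_band = np.sum(cf < 1000.0)
high_band = np.sum(cf >= 1000.0)
print(low_band, high_band)
```

On a linear scale only about 12% of 64 filters would fall below 1 kHz for this range; the ERB spacing places several times that many there, which is the dilation the abstract refers to.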
From the experimental results, it can be observed that in the single-feature configuration a linear-scale spectrogram achieves accuracies of 92.4% and 90.2% on the validation and test sets, respectively, whereas the gammatone spectrogram attains accuracies of 96.7% and 96.1% on the same sets. In the multi-feature setup, however, the accuracy reaches 97.3% and 96.6%, respectively, which reveals that a combination of properly initialized intermediate representations can improve classification performance significantly.


2021 ◽  
Vol 40 (1) ◽  
pp. 849-864
Author(s):  
Nasir Saleem ◽  
Muhammad Irfan Khattak ◽  
Mu’ath Al-Hasan ◽  
Atif Jan

Speech enhancement is a very important problem in various speech processing applications. Recently, supervised speech enhancement approaches that use deep learning to estimate a time-frequency mask have shown remarkable performance gains. In this paper, we propose a time-frequency masking-based supervised speech enhancement method for improving the intelligibility and quality of noisy speech. We believe a large performance gain can be achieved if deep neural networks (DNNs) are pre-trained layer-wise by stacking Gaussian-Bernoulli Restricted Boltzmann Machines (GB-RBMs). The proposed DNN is called a Gaussian-Bernoulli Deep Belief Network (GB-DBN) and is optimized by minimizing the error between the estimated and pre-defined masks, using a non-linear Mel-scale weighted mean square error (LMW-MSE) loss function as the training criterion. We examine the performance of the proposed pre-training scheme using DNNs built on three time-frequency masks: the ideal amplitude mask (IAM), the ideal ratio mask (IRM), and the phase-sensitive mask (PSM). The results in different noisy conditions demonstrate that DNNs pre-trained with the proposed scheme provide a consistent performance gain in terms of perceived speech intelligibility and quality. The proposed pre-training scheme is also effective and robust to noisy training data.
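The three training targets named above have standard closed forms: with clean-speech STFT S, noise STFT N, and mixture Y = S + N, IAM = |S|/|Y|, IRM = sqrt(|S|²/(|S|² + |N|²)), and PSM = (|S|/|Y|)·cos(θ_S − θ_Y). A minimal sketch of computing them (variable names and the toy data are my own, not the paper's):

```python
import numpy as np

def tf_masks(S, N):
    """Compute three mask targets from complex STFTs of clean speech S
    and noise N (same shape); Y = S + N is the noisy mixture."""
    Y = S + N
    eps = 1e-8                                      # avoid division by zero
    iam = np.abs(S) / (np.abs(Y) + eps)             # ideal amplitude mask
    irm = np.sqrt(np.abs(S) ** 2 /
                  (np.abs(S) ** 2 + np.abs(N) ** 2 + eps))  # ideal ratio mask
    psm = iam * np.cos(np.angle(S) - np.angle(Y))   # phase-sensitive mask
    return iam, irm, psm

# toy usage on random complex "spectrograms" (freq bins x frames)
rng = np.random.default_rng(1)
shape = (129, 100)
S = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
N = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
iam, irm, psm = tf_masks(S, N)
```

Note the asymmetry the paper exploits: the IRM is bounded in [0, 1], while the IAM is unbounded and the PSM can be negative, since the cosine term accounts for the phase mismatch between clean and noisy spectra.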

