Forward-Backward Convolutional Recurrent Neural Networks and Tag-Conditioned Convolutional Neural Networks for Weakly Labeled Semi-supervised Sound Event Detection

2021 ◽  
Author(s):  
Janek Ebbers ◽  
Reinhold Haeb-Umbach

In this paper we present our system for the detection and classification of acoustic scenes and events (DCASE) 2020 Challenge Task 4: Sound event detection and separation in domestic environments. We introduce two new models: the forward-backward convolutional recurrent neural network (FBCRNN) and the tag-conditioned convolutional neural network (CNN). The FBCRNN employs two recurrent neural network (RNN) classifiers sharing the same CNN for preprocessing. With one RNN processing a recording in forward direction and the other in backward direction, the two networks are trained to jointly predict audio tags, i.e., weak labels, at each time step within a recording, given that at each time step they have jointly processed the whole recording. The proposed training encourages the classifiers to tag events as soon as possible. Therefore, after training, the networks can be applied to shorter audio segments of, e.g., 200 ms, allowing sound event detection (SED). Further, we propose a tag-conditioned CNN to complement SED. It is trained to predict strong labels while using (predicted) tags, i.e., weak labels, as additional input. For training, pseudo strong labels from an FBCRNN ensemble are used. The presented system scored the fourth and third place in the systems and teams rankings, respectively. Subsequent improvements allow our system to even outperform the challenge baseline and winner systems on average by 18.0% and 2.2% event-based F1-score, respectively, on the validation set. Source code is publicly available at https://github.com/fgnt/pb_sed
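A minimal PyTorch sketch of the forward-backward idea described in the abstract: a shared CNN front-end feeds two GRU classifiers that read the spectrogram in opposite directions and emit tag predictions at every frame. Layer sizes, names, and the GRU choice are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn


class FBCRNNSketch(nn.Module):
    def __init__(self, n_mels=64, n_classes=10, hidden=128):
        super().__init__()
        # shared CNN preprocessing over (batch, 1, mels, frames)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),   # pool frequency only, keep time resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        feat = 64 * (n_mels // 4)
        self.fwd_rnn = nn.GRU(feat, hidden, batch_first=True)
        self.bwd_rnn = nn.GRU(feat, hidden, batch_first=True)
        self.fwd_out = nn.Linear(hidden, n_classes)
        self.bwd_out = nn.Linear(hidden, n_classes)

    def forward(self, spec):          # spec: (batch, 1, mels, frames)
        h = self.cnn(spec)            # (batch, ch, mels', frames)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)
        fwd, _ = self.fwd_rnn(h)                              # forward pass over the recording
        bwd, _ = self.bwd_rnn(torch.flip(h, dims=[1]))        # backward pass
        bwd = torch.flip(bwd, dims=[1])
        # per-frame tag logits from both directions; at any frame the pair has
        # jointly processed the whole recording, which the joint tag loss exploits
        return self.fwd_out(fwd), self.bwd_out(bwd)


if __name__ == "__main__":
    model = FBCRNNSketch()
    logits_fwd, logits_bwd = model(torch.randn(2, 1, 64, 500))
    print(logits_fwd.shape, logits_bwd.shape)  # (2, 500, 10) each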

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 147337-147348
Author(s):  
Keming Zhang ◽  
Yuanwen Cai ◽  
Yuan Ren ◽  
Ruida Ye ◽  
Liang He

2020 ◽  
Vol 10 (14) ◽  
pp. 4911
Author(s):  
Jin-Yeol Kwak ◽  
Yong-Joo Chung

We propose using derivative features for sound event detection based on deep neural networks. As input to the networks, we used the log-mel-filterbank and its first and second derivative features for each frame of the audio signal. Two deep neural networks were used to evaluate the effectiveness of these derivative features. Specifically, a convolutional recurrent neural network (CRNN) was constructed by combining a convolutional neural network and a recurrent neural network (RNN), followed by a feed-forward neural network (FNN) acting as a classification layer. In addition, a mean-teacher model based on an attention CRNN was used. Both models had an average pooling layer at the output so that weakly labeled and unlabeled audio data could be used during model training. Under the various training conditions, depending on the neural network architecture and training set, the derivative features yielded a consistent performance improvement. Experiments on audio data from the Detection and Classification of Acoustic Scenes and Events 2018 and 2019 challenges showed a maximum relative improvement of 16.9% in terms of the F-score.
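As a rough illustration of the front-end described in the abstract, the sketch below computes a log-mel-filterbank and its first and second derivatives (delta and delta-delta) and stacks them as input channels for a CNN-based model. The file name and the parameter values (sample rate, FFT size, number of mel bands) are assumptions for illustration, not the authors' settings.

import numpy as np
import librosa

# hypothetical input recording; sample rate chosen for illustration
y, sr = librosa.load("example.wav", sr=16000)

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=1024,
                                     hop_length=512, n_mels=64)
log_mel = librosa.power_to_db(mel)                 # (n_mels, frames)
delta1 = librosa.feature.delta(log_mel, order=1)   # first derivative
delta2 = librosa.feature.delta(log_mel, order=2)   # second derivative

# stack as three channels, e.g. for a CNN expecting (channels, mels, frames)
features = np.stack([log_mel, delta1, delta2], axis=0)
print(features.shape)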


Author(s):  
Uday Singh ◽  
Dibya Debayan Dash ◽  
Manas Sharma ◽  
Sarthak Mishra ◽  
S. Malarvizhi ◽  
...  

2021 ◽  
Vol 1769 (1) ◽  
pp. 012008
Author(s):  
Keming Zhang ◽  
Yuanwen Cai ◽  
Yuan Ren ◽  
Ruida Ye ◽  
Xianwei Zhang ◽  
...  
