Weakly-supervised audio event detection using event-specific Gaussian filters and fully convolutional networks

Author(s): Ting-Wei Su, Jen-Yu Liu, Yi-Hsuan Yang

Author(s): Shao-Yen Tseng, Juncheng Li, Yun Wang, Florian Metze, Joseph Szurley, ...

Author(s): Nicholus Mboga, Stefanos Georganos, Tais Grippa, Moritz Lennert, Sabine Vanhuysse, ...

2019, Vol 9 (11), pp. 2302
Author(s): Inkyu Choi, Soo Hyun Bae, Nam Soo Kim

Audio event detection (AED) is the task of recognizing the types of audio events in an audio stream and estimating their temporal positions. AED has typically been based on fully supervised approaches, which require strong labels specifying both the presence and the temporal position of each audio event. However, fully supervised datasets are not easily available due to the heavy cost of human annotation. Recently, weakly supervised approaches to AED have been proposed that utilize large-scale datasets with weak labels, which indicate only the occurrence of events in each recording. In this work, we introduce a deep convolutional neural network (CNN) model called DSNet, based on densely connected convolutional networks (DenseNets) and squeeze-and-excitation networks (SENets), for weakly supervised training of AED. DSNet alleviates the vanishing-gradient problem, strengthens feature propagation, and models interdependencies between channels. We also propose a structured prediction method for weakly supervised AED: a recurrent neural network (RNN) based framework with a prediction-smoothness cost function that incorporates long-term contextual information while reducing error propagation. In post-processing, conditional random fields (CRFs) are applied to capture the dependencies between segments and to delineate the boundaries of audio events precisely. We evaluated the proposed models on the DCASE 2017 Task 4 dataset and obtained state-of-the-art results on both the audio tagging and the event detection tasks.
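
The abstract gives no implementation, but the two ingredients DSNet combines are well documented. Below is a minimal sketch, assuming PyTorch, of a densely connected convolution layer followed by squeeze-and-excitation (SE) channel recalibration; all names (SEBlock, DenseSELayer, growth_rate) and hyperparameters are illustrative, not taken from the paper.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: pool each channel to a single descriptor,
    then rescale channels with learned sigmoid gates, modeling the
    interdependencies between channels."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        hidden = max(channels // reduction, 4)
        self.fc = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, time, freq) feature maps
        s = x.mean(dim=(2, 3))            # squeeze
        w = self.fc(s)[:, :, None, None]  # excitation gates
        return x * w                      # channel recalibration

class DenseSELayer(nn.Module):
    """One dense layer: new feature maps are concatenated onto the input
    (strengthening feature propagation and easing gradient flow), and the
    combined maps are then recalibrated by an SE block."""
    def __init__(self, in_channels, growth_rate=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1),
        )
        self.se = SEBlock(in_channels + growth_rate)

    def forward(self, x):
        out = torch.cat([x, self.conv(x)], dim=1)  # dense connectivity
        return self.se(out)

# Toy usage on a batch of log-mel spectrogram features.
x = torch.randn(4, 16, 240, 64)  # (batch, channels, time, mel bins)
stack = nn.Sequential(DenseSELayer(16), DenseSELayer(48))
print(stack(x).shape)  # torch.Size([4, 80, 240, 64])

The dense concatenation is what lets gradients reach early layers directly, while the SE gates supply the channel-interdependency modeling the abstract refers to.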


2019, Vol 3 (2), pp. 31
Author(s): Quanchun Jiang, Olamide Timothy Tawose, Songwen Pei, Xiaodong Chen, Linhua Jiang, ...

In this paper, we propose a semantic segmentation method based on superpixel region merging and a convolutional neural network (CNN), referred to as the regional merging neural network (RMNN). Image annotation has always played an important role in weakly supervised semantic segmentation, and most methods rely on manual labeling. In our method, after superpixel segmentation, superpixels with similar features are merged, using the relationships between neighboring pixels, into larger superpixel blocks. Rough predictions generated by a fully convolutional network (FCN) label some of these blocks, and further positive regions are then discovered iteratively starting from the labeled ones. Because features are extracted per superpixel rather than per pixel, the feature vectors are shorter and the dimensionality of the data is reduced. The algorithm thus not only uses superpixel merging to narrow down the target's extent but also compensates for the lack of pixel-level supervision in weakly supervised semantic segmentation. During network training, region merging is used to improve the accuracy of contour recognition. Extensive experiments on the PASCAL VOC 2012 dataset demonstrate the effectiveness of the proposed method; in particular, the mean intersection over union (mIoU) score reaches as high as 44.6%. Because dilated (atrous) convolution replaces part of the pooling-based downsampling, the network's receptive field is not degraded, which preserves the accuracy of semantic segmentation. These findings open the door to leveraging dilated convolution to improve the recognition accuracy of small objects.
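
The merging procedure itself is not spelled out in the abstract; as a rough illustration of the superpixel-merging step, the sketch below over-segments an image with SLIC and then merges adjacent superpixels with similar mean color into larger blocks. It assumes scikit-image (in versions before 0.19 the graph module lives under skimage.future.graph), and the threshold value is arbitrary.

import numpy as np
from skimage import data, segmentation, graph

def merge_superpixels(image, n_segments=400, thresh=30.0):
    """Over-segment into SLIC superpixels, then merge adjacent
    superpixels whose mean colors differ by less than `thresh`,
    yielding the larger superpixel blocks that get labeled."""
    labels = segmentation.slic(image, n_segments=n_segments,
                               compactness=10, start_label=1)
    rag = graph.rag_mean_color(image, labels)          # region adjacency graph
    blocks = graph.cut_threshold(labels, rag, thresh)  # merge similar neighbors
    return labels, blocks

img = data.astronaut()
labels, blocks = merge_superpixels(img)
print(f"{labels.max()} superpixels -> {len(np.unique(blocks))} merged blocks")

On the receptive-field point: a k x k convolution with dilation d covers an effective window of k + (k-1)(d-1) input positions, so stacked dilated convolutions enlarge the receptive field without the resolution loss that pooled downsampling incurs, which is the property the paragraph above relies on.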


2021, Vol 421, pp. 51-65
Author(s): Yingbin Wang, Guanghui Zhao, Kai Xiong, Guangming Shi, Yumeng Zhang

2018
Author(s): Pankaj Joshi, Digvijaysingh Gautam, Ganesh Ramakrishnan, Preethi Jyothi
