Device Invariant Deep Neural Networks for Pulmonary Audio Event Detection Across Mobile and Wearable Devices

Author(s): Mohsin Y Ahmed, Li Zhu, Md Mahbubur Rahman, Tousif Ahmed, Jilong Kuang, ...
Technologies, 2021, Vol. 9 (3), pp. 64
Author(s): Rodrigo dos Santos, Ashwitha Kassetty, Shirin Nilizadeh

Audio event detection (AED) systems leverage specialized algorithms to detect the presence of a specific sound of interest within audio captured from the environment. Recent approaches rely on deep learning algorithms such as convolutional neural networks (CNNs) and convolutional recurrent neural networks (CRNNs). Given this reliance, it is important to assess how vulnerable these systems are to attacks. We therefore develop AED-suited CNNs and CRNNs and attack them with white-noise disturbances designed to be simple to implement and deploy, even by non-technical attackers. We evaluate this work in a safety-oriented scenario (AED systems for safety-related sounds, such as gunshots) and show that an attacker can use such disturbances to evade detection with up to 100 percent success. Prior work has shown that attackers can mislead image classification tasks; this work instead focuses on attacks against AED systems that tamper with their audio components. This work raises awareness among the designers and manufacturers of AED systems, since these solutions are vulnerable yet may be trusted by individuals and families.
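The abstract does not specify how the white-noise disturbance is generated or evaluated; the sketch below is only an illustration of the general idea, mixing zero-mean Gaussian noise into an audio clip at a chosen signal-to-noise ratio and probing a detector at several SNRs. The detector callable, clip, and SNR values are assumptions, not the authors' actual setup.

```python
# Minimal sketch of a white-noise disturbance applied to an audio clip.
# All names (detector, clip, SNR grid) are illustrative assumptions.
import numpy as np

def add_white_noise(audio: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix zero-mean Gaussian white noise into `audio` at a target SNR (dB)."""
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=audio.shape)
    return np.clip(audio + noise, -1.0, 1.0)

def evasion_curve(detector, clip: np.ndarray, snrs_db=(20, 10, 5, 0, -5)):
    """Probe a trained detector at decreasing SNRs and record its predictions,
    e.g. to see at which noise level a gunshot is no longer detected."""
    results = {}
    for snr in snrs_db:
        noisy = add_white_noise(clip, snr)
        results[snr] = detector(noisy)  # assumed to return a class label
    return results
```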


Author(s): Anurag Kumar, Ankit Shah, Alexander Hauptmann, Bhiksha Raj

In the last couple of years, weakly labeled learning has emerged as an exciting approach to audio event detection. In this work, we introduce webly labeled learning for sound events, which aims to remove human supervision from the learning process altogether. We first develop a method for obtaining labeled (albeit noisy) audio data from the web without any manual labeling. We then describe methods for learning efficiently from these webly labeled audio recordings. In our proposed system, WeblyNet, two deep neural networks co-teach each other to learn robustly from webly labeled data, yielding around 17% relative improvement over the baseline method. The method also uses transfer learning to obtain efficient representations.
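The abstract does not describe WeblyNet's internals, so the following is only a generic co-teaching sketch under stated assumptions: two networks each flag the small-loss (presumably clean) samples in a batch and the peer is updated on those samples. The PyTorch architectures, keep ratio, and optimizers are hypothetical and are not WeblyNet's actual components.

```python
# Generic co-teaching step for learning with noisy (webly) labels.
# net_a / net_b are any classifiers; keep_ratio is an illustrative assumption.
import torch
import torch.nn.functional as F

def co_teaching_step(net_a, net_b, opt_a, opt_b, x, y, keep_ratio=0.7):
    """One update: each network selects its small-loss samples for the peer."""
    loss_a = F.cross_entropy(net_a(x), y, reduction="none")
    loss_b = F.cross_entropy(net_b(x), y, reduction="none")

    k = max(1, int(keep_ratio * len(y)))
    idx_a = torch.argsort(loss_a)[:k]  # samples net A considers "clean"
    idx_b = torch.argsort(loss_b)[:k]  # samples net B considers "clean"

    # Each network is updated only on the samples its peer selected.
    opt_a.zero_grad()
    F.cross_entropy(net_a(x[idx_b]), y[idx_b]).backward()
    opt_a.step()

    opt_b.zero_grad()
    F.cross_entropy(net_b(x[idx_a]), y[idx_a]).backward()
    opt_b.step()
```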


2017, Vol. 18 (1), pp. 183-190
Author(s): Minkyu Lim, Donghyun Lee, Hosung Park, Ji-Hwan Kim

2015, Vol. 7 (4), pp. 27-33
Author(s): Minkyu Lim, Donghyun Lee, Kwang-Ho Kim, Ji-Hwan Kim

PLoS ONE, 2019, Vol. 14 (1), e0211466
Author(s): Łukasz Kidziński, Scott Delp, Michael Schwartz
