Convolutional Neural Network Based Indoor Microphone Array Sound Source Localization

2020 ◽  
Vol 57 (8) ◽  
pp. 081021
Author(s):  
焦琛 Jiao Chen ◽  
张涛 Zhang Tao ◽  
孙建红 Sun Jianhong

Author(s):  
Alif Bin Abdul Qayyum ◽  
K. M. Naimul Hassan ◽  
Adrita Anika ◽  
Md. Farhan Shadiq ◽  
Md Mushfiqur Rahman ◽  
...  

Abstract Drone-embedded sound source localization (SSL) has promising applications in challenging search-and-rescue scenarios where poor lighting conditions or occlusions defeat vision-based methods. However, the problem is complicated by severe drone ego-noise, which can drive the signal-to-noise ratio of the recorded microphone signals below 0 dB. In this paper, we present our work on drone-embedded SSL using recordings from an 8-channel cube-shaped microphone array mounted on an unmanned aerial vehicle (UAV). As baselines we use state-of-the-art angular-spectrum TDOA (time difference of arrival) estimation methods, namely generalized cross-correlation with phase transform (GCC-PHAT) and minimum variance distortionless response (MVDR). Although we improve these baselines by reducing ego-noise with the speed-correlated harmonics cancellation (SCHC) technique, our main focus is on deep learning. We propose an end-to-end deep learning model, DOANet, for SSL: a one-dimensional dilated convolutional neural network that computes the azimuth and elevation angles of the target sound source directly from the raw audio signal. The advantage of DOANet is that it requires neither hand-crafted audio features nor ego-noise reduction for DOA estimation. Evaluating the proposed and baseline methods, we find that DOANet shows promising results compared to the angular-spectrum methods both with and without SCHC. To compare the methods, we also introduce the area under the curve (AUC) of cumulative histograms of angular deviation as a performance indicator, which, to our knowledge, has not previously been used for this kind of problem.
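The GCC-PHAT baseline mentioned above estimates, for each microphone pair, the time delay that best aligns the two signals by whitening the cross-spectrum so that only phase information remains. A minimal NumPy sketch for a single pair is given below; the function name, the regularization constant, and the `max_tau` clamp are our own illustrative choices, not details from the paper.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay (in seconds) of `sig` relative to `ref`
    using the generalized cross-correlation phase transform."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15            # PHAT weighting: discard magnitude, keep phase
    cc = np.fft.irfft(R, n=n)         # generalized cross-correlation
    max_shift = n // 2
    if max_tau is not None:           # optionally restrict to physically possible lags
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs
```

In an array, such pairwise delay estimates (or the angular spectrum built from them) are combined across all microphone pairs to infer the source direction.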


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8031
Author(s):  
Tan-Hsu Tan ◽  
Yu-Tang Lin ◽  
Yang-Lang Chang ◽  
Mohammad Alkhaleefah

In this research, a novel sound source localization model is introduced that integrates a convolutional neural network with a regression model (CNN-R) to estimate the sound source angle and distance from the acoustic characteristics of the interaural phase difference (IPD). The IPD features of the sound signal are first extracted from the time-frequency domain via the short-time Fourier transform (STFT); the resulting IPD feature map is then fed to the CNN-R model as an image for sound source localization. The Pyroomacoustics platform and the multichannel impulse response database (MIRD) are used to generate the simulated and real room impulse response (RIR) datasets, respectively. The experimental results show that the proposed CNN-R achieves average accuracies of 98.96% and 98.31% for angle and distance estimation, respectively, in the simulated scenario at SNR = 30 dB and RT60 = 0.16 s. In the real environment, the average accuracies of angle and distance estimation are 99.85% and 99.38% at SNR = 30 dB and RT60 = 0.16 s, respectively. The performance obtained in both scenarios is superior to that of existing models, indicating the potential of the proposed CNN-R model for real-life applications.
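The IPD feature map described above is the per-bin phase difference between two channels across STFT frames. A minimal NumPy sketch of this extraction step follows; the function name, window choice, and framing parameters are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def ipd_features(left, right, n_fft=512, hop=256):
    """Interaural phase difference map from a two-channel recording.
    Rows are frequency bins, columns are STFT frames."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(left) - n_fft) // hop
    ipd = np.empty((n_fft // 2 + 1, n_frames))
    for t in range(n_frames):
        seg = slice(t * hop, t * hop + n_fft)
        L = np.fft.rfft(win * left[seg])
        R = np.fft.rfft(win * right[seg])
        # phase of the cross-spectrum = phase(L) - phase(R), wrapped to (-pi, pi]
        ipd[:, t] = np.angle(L * np.conj(R))
    return ipd
```

The resulting 2-D array can be treated as a single-channel image and passed to a CNN, which is the role it plays in the CNN-R pipeline.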


2019 ◽  
Vol 60 (2) ◽  
pp. 545-557 ◽  
Author(s):  
Lin Zhou ◽  
Kangyu Ma ◽  
Lijie Wang ◽  
Ying Chen ◽  
Yibin Tang

2013 ◽  
Vol E96.D (10) ◽  
pp. 2257-2265 ◽  
Author(s):  
Hirofumi Tsuzuki ◽ 
Mauricio Kugler ◽ 
Susumu Kuroyanagi ◽ 
Akira Iwata
