Deep learning-based method for multiple sound source localization with high resolution and accuracy

2021, Vol. 161, pp. 107959
Author(s): Soo Young Lee, Jiho Chang, Seungchul Lee

2021, Vol. 263 (4), pp. 2279-2283
Author(s): Soo Young Lee, Jiho Chang, Seungchul Lee

In this contribution, we present a high-resolution and accurate sound source localization method based on a deep learning framework. While spherical microphone arrays can be used to produce omnidirectional beams, it is widely known that conventional spherical harmonics beamforming (SHB) is limited in spatial resolution. To accomplish sound source localization with high resolution and precision, we propose a convolutional neural network (CNN)-based source localization model as a data-driven approach. We first present a novel way to define a source distribution map that spatially represents the position and strength of a single point source. Using a paired dataset of spherical harmonics beamforming maps and our proposed high-resolution maps, we develop a fully convolutional neural network with an encoder-decoder structure to establish an image-to-image transformation model. Both quantitative and qualitative results are presented to demonstrate the effectiveness of the proposed data-driven source localization model.
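The abstract does not give the network dimensions, so the following is only a minimal sketch of the kind of encoder-decoder fully convolutional network it describes, assuming a PyTorch implementation: a single-channel SHB map is downsampled by the encoder and upsampled back to a same-sized source distribution map. The class name `BeamformingToSourceMap`, the channel counts, and the 64x64 map resolution are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an encoder-decoder FCN for beamforming-map-to-source-map
# translation, assuming a PyTorch implementation. Channel counts, kernel sizes,
# and the 64x64 map resolution are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

class BeamformingToSourceMap(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the SHB map while increasing feature channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(inplace=True),
        )
        # Decoder: upsample back to the original map resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.ReLU(inplace=True),  # source strengths are non-negative
        )

    def forward(self, shb_map):
        # shb_map: (batch, 1, H, W) spherical harmonics beamforming map
        return self.decoder(self.encoder(shb_map))

# Usage sketch: train with a pixel-wise loss against the paired
# high-resolution source distribution maps described in the abstract.
model = BeamformingToSourceMap()
shb = torch.rand(8, 1, 64, 64)      # batch of SHB maps (placeholder data)
target = torch.rand(8, 1, 64, 64)   # paired high-resolution source maps (placeholder data)
loss = nn.functional.mse_loss(model(shb), target)
loss.backward()
```

Transposed convolutions are used here for upsampling; interpolation followed by convolution would be an equally valid design choice.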


Author(s): Alif Bin Abdul Qayyum, K. M. Naimul Hassan, Adrita Anika, Md. Farhan Shadiq, Md Mushfiqur Rahman, ...

Drone-embedded sound source localization (SSL) has interesting application prospects in search and rescue scenarios that are challenging due to bad lighting conditions or occlusions. However, the problem is complicated by severe drone ego-noise, which may result in negative signal-to-noise ratios in the recorded microphone signals. In this paper, we present our work on drone-embedded SSL using recordings from an 8-channel cube-shaped microphone array embedded in an unmanned aerial vehicle (UAV). As baselines, we use angular spectrum-based TDOA (time difference of arrival) estimation methods such as the generalized cross-correlation phase transform (GCC-PHAT) and minimum variance distortionless response (MVDR), which are state-of-the-art techniques for SSL. Although we improve the baseline methods by reducing ego-noise with a speed-correlated harmonics cancellation (SCHC) technique, our main focus is to apply deep learning to this challenging problem. We propose an end-to-end deep learning model, called DOANet, for SSL. DOANet is based on a one-dimensional dilated convolutional neural network that computes the azimuth and elevation angles of the target sound source from the raw audio signals. The advantage of DOANet is that it requires neither hand-crafted audio features nor ego-noise reduction for DOA estimation. We then evaluate the SSL performance of the proposed and baseline methods and find that DOANet shows promising results compared to the angular spectrum methods both with and without SCHC. To compare the different methods, we also introduce a well-known metric, the area under the curve (AUC) of cumulative histograms of angular deviations, as a performance indicator which, to our knowledge, has not previously been used for this type of problem.
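For context on the angular-spectrum baseline, below is a minimal sketch of GCC-PHAT time-delay estimation for a single microphone pair, using NumPy only. The sampling rate, interpolation factor, and the function name `gcc_phat` are illustrative assumptions rather than details taken from the paper, which operates on an 8-channel array and further combines pairwise delays into azimuth and elevation estimates.

```python
# Minimal sketch of GCC-PHAT TDOA estimation for one microphone pair,
# using NumPy only. Sampling rate and interpolation factor are assumptions.
import numpy as np

def gcc_phat(sig, ref, fs=16000, max_tau=None, interp=16):
    """Estimate the time delay of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.shape[0] + ref.shape[0]          # zero-pad to avoid circular wrap-around
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    # Phase transform: keep only the phase information, discard the magnitude.
    cross /= np.abs(cross) + 1e-15
    cc = np.fft.irfft(cross, n=interp * n)   # interpolate for finer delay resolution
    max_shift = interp * n // 2
    if max_tau is not None:
        max_shift = min(int(interp * fs * max_tau), max_shift)
    # Rearrange so negative lags come first, then locate the correlation peak.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / float(interp * fs)        # delay in seconds

# Usage sketch: a 440 Hz tone delayed by 10 samples (~0.625 ms at 16 kHz).
fs = 16000
t = np.arange(fs) / fs
ref = np.sin(2 * np.pi * 440 * t)
sig = np.roll(ref, 10)
print(gcc_phat(sig, ref, fs=fs))             # approximately 10 / 16000 s
```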

