Reverberant environment: recently published documents

Total documents: 157 (five years: 27)
H-index: 15 (five years: 2)
Sensors, 2022, Vol. 22 (2), pp. 592
Author(s): Deokgyu Yun, Seung Ho Choi

This paper proposes a deep-learning-based audio data augmentation method to improve dereverberation performance. Conventionally, audio data are augmented using a room impulse response that is generated artificially, for example by the image method. The proposed method instead estimates a reverberation-environment model with a deep neural network trained using clean and recorded audio data as inputs and outputs, respectively. A large augmented database of realistic reverberant data is then constructed with the trained reverberation model, and the dereverberation model is trained on this database. The augmentation model was verified by the log spectral distance and mean square error between the augmented data and the recorded data, and dereverberation experiments showed improved performance compared with the conventional method.
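The abstract evaluates the augmentation model with the log spectral distance (LSD) and mean square error between augmented and recorded data. A minimal NumPy sketch of those two metrics is below; the frame length, hop size, and dB-domain LSD definition are illustrative assumptions, since the paper's exact configuration is not given here.

```python
import numpy as np

def log_spectral_distance(ref, est, n_fft=512, hop=256, eps=1e-10):
    """Frame-averaged log-spectral distance (dB) between two equal-length
    signals: per frame, the RMS difference of the log power spectra."""
    def stft_mag(x):
        win = np.hanning(n_fft)
        frames = [np.abs(np.fft.rfft(x[s:s + n_fft] * win))
                  for s in range(0, len(x) - n_fft + 1, hop)]
        return np.array(frames)                      # (frames, bins)

    R, E = stft_mag(ref), stft_mag(est)
    log_diff = 10 * np.log10((R**2 + eps) / (E**2 + eps))
    return np.mean(np.sqrt(np.mean(log_diff**2, axis=1)))

def mse(ref, est):
    """Sample-wise mean square error between two signals."""
    return np.mean((np.asarray(ref) - np.asarray(est)) ** 2)
```

Both metrics are zero when the augmented signal matches the recording exactly, and grow as the estimated reverberation model diverges from the true room response.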


2021
Author(s): Wilfried Gallian, Filippo Maria Fazi, Carlo Tripodi, Nicolo Strozzi, Alessandro Costalunga

2021, Vol. 263 (6), pp. 348-354
Author(s): Bokai Du, Xiangyang Zeng, Haitao Wang

Multizone sound field reproduction aims to create different acoustic environments in separate regions without physical isolation. For a practical reproduction system, it is desirable both to improve performance and to reduce measurement effort. This paper investigates two-zone sound field reproduction with a proposed region control method. The conventional multipoint method controls the sound field only at a limited number of measurement points, whereas the proposed method controls the sound-field energy over the whole region. To accommodate diverse operating environments, different interpolation methods are applied within the proposed approach. Simulations are conducted under free-field and reverberant conditions to compare the method in depth with the conventional multipoint method and with a harmonic-domain method. The results show that the proposed method outperforms the conventional multipoint method in both free-field and reverberant environments. Moreover, the region control method is free from microphone-array geometry requirements, which makes it more convenient for practical applications.
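The conventional multipoint baseline mentioned above can be sketched in a few lines: loudspeaker weights are chosen by least squares so that the reproduced pressure matches a target (unit pressure in the bright zone, silence in the quiet zone) at discrete control points only. The geometry, frequency, and free-field monopole model below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def green_free_field(src, rcv, k):
    """Free-field Green's functions from source points to receiver points
    (monopole model), returned as an (n_rcv, n_src) transfer matrix."""
    d = np.linalg.norm(rcv[:, None, :] - src[None, :, :], axis=2)
    return np.exp(-1j * k * d) / (4 * np.pi * d)

rng = np.random.default_rng(1)
k = 2 * np.pi * 500 / 343.0                        # wavenumber at 500 Hz

# 12 loudspeakers on a circle of radius 2 m
ang = np.linspace(0, 2 * np.pi, 12, endpoint=False)
speakers = np.stack([2 * np.cos(ang), 2 * np.sin(ang)], axis=1)

# control points: bright zone around (0.5, 0), quiet zone around (-0.5, 0)
bright = np.array([0.5, 0.0]) + 0.1 * rng.standard_normal((8, 2))
quiet = np.array([-0.5, 0.0]) + 0.1 * rng.standard_normal((8, 2))
pts = np.vstack([bright, quiet])

G = green_free_field(speakers, pts, k)             # (16 points, 12 speakers)
p_des = np.concatenate([np.ones(8), np.zeros(8)])  # unit pressure / silence

q = np.linalg.lstsq(G, p_des, rcond=None)[0]       # loudspeaker weights
p_rep = G @ q                                      # reproduced pressures
```

The key limitation the abstract targets is visible here: the fit is enforced only at the 16 sampled points, so the field between them is uncontrolled, whereas the proposed region control method constrains the energy over the whole zone.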


2021, Vol. 11 (4), pp. 1519
Author(s): Ying Xu, Saeed Afshar, Runchun Wang, Gregory Cohen, Chetan Singh Thakur, ...

We present a biologically inspired sound localisation system for reverberant environments using the Cascade of Asymmetric Resonators with Fast-Acting Compression (CAR-FAC) cochlear model. The system exploits a CAR-FAC pair to pre-process binaural signals, which travel through the inherent delay line of the cascade structures, as each filter acts as a delay unit. Following the filtering, each cochlear channel is cross-correlated with all the channels of the other cochlea using a quantised instantaneous correlation function to form a 2-D instantaneous correlation matrix (correlogram). The correlogram contains both interaural time difference and spectral information. The generated correlograms are analysed using a regression neural network for localisation. We investigate the effect of the CAR-FAC nonlinearity on system performance by comparing it with a CAR-only version. To verify that the CAR/CAR-FAC front end and the quantised instantaneous correlation provide a suitable basis for sound localisation tasks, a linear regression, an extreme learning machine, and a convolutional neural network are trained to learn the azimuthal angle of the sound source from the correlogram. The system is evaluated using speech data recorded in a reverberant environment, and we compare the performance of the linear CAR and nonlinear CAR-FAC models with current sound localisation systems as well as with human performance.
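The channel-by-channel correlogram idea can be sketched as follows: every left-ear channel is correlated against every right-ear channel after a crude 1-bit (sign) quantisation, producing a 2-D matrix that mixes interaural time difference and spectral cues. A simple ideal FFT bandpass bank stands in for the CAR-FAC cascade here, and all parameters (sample rate, band edges, the sign quantiser itself) are illustrative assumptions rather than the paper's actual model.

```python
import numpy as np

def bandpass_bank(x, fs, edges):
    """Split x into channels with ideal FFT bandpass filters,
    a crude stand-in for a cochlear filter cascade."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    out = [np.fft.irfft(np.where((freqs >= lo) & (freqs < hi), X, 0),
                        n=len(x))
           for lo, hi in edges]
    return np.array(out)                           # (channels, samples)

def correlogram(left, right, fs, edges):
    """Correlate every left channel with every right channel after
    1-bit (sign) quantisation; entries lie in [-1, 1]."""
    L = np.sign(bandpass_bank(left, fs, edges))
    R = np.sign(bandpass_bank(right, fs, edges))
    return (L @ R.T) / len(left)                   # (L-chan, R-chan)

fs = 16000
t = np.arange(fs) / fs
src = np.sin(2 * np.pi * 800 * t)                  # 800 Hz test tone
left, right = src, np.roll(src, 8)                 # ~0.5 ms interaural delay
edges = [(100 * 2**i, 100 * 2**(i + 1)) for i in range(6)]  # 100 Hz-6.4 kHz
C = correlogram(left, right, fs, edges)
```

In the paper's system, a regression network then maps such correlograms to azimuth; the sketch above only illustrates why the matrix jointly encodes delay (off-diagonal structure within a band) and spectrum (which bands are active).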

