Enabling an anechoic U-Net based speech separation model for online and offline applications in reverberant conditions

2021 ◽  
Vol 179 ◽  
pp. 108039
Author(s):  
Sania Gul ◽  
Muhammad Salman Khan ◽  
Ata Ur Rehman ◽  
Syed Waqar Shah
2021 ◽  
Author(s):  
Sanyuan Chen ◽  
Yu Wu ◽  
Zhuo Chen ◽  
Jian Wu ◽  
Takuya Yoshioka ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Chun-Miao Yuan ◽  
Xue-Mei Sun ◽  
Hu Zhao

Speech is the most important means of human communication, and separating a target voice from a mixture of sound signals is a crucial task. This paper proposes a speech separation model that combines convolutional neural networks (CNNs) with an attention mechanism. The input is the high-dimensional magnitude spectrum of the mixed speech signal. Analysis of the two mechanisms shows that the CNN effectively extracts low-dimensional features and mines the spatiotemporal structure of the speech signal, while the attention mechanism reduces the loss of sequence information; combining the two improves separation accuracy. Compared with the typical speech separation model DRNN-2 + discrim, the proposed method achieves gains of 0.27 dB in GNSDR and 0.51 dB in GSIR, indicating that the model attains a good separation effect.
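The two ingredients the abstract combines can be sketched in a few lines of NumPy. The code below is a minimal illustration, not the authors' model: a single random convolutional filter extracts local features from a toy magnitude spectrogram, and scaled dot-product self-attention then relates the resulting time frames to each other. All shapes, the kernel, and the function names are assumptions for the sketch.

```python
import numpy as np

def conv2d_valid(x, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation) of a
    (freq, time) magnitude spectrogram with a small kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def attention(q, k, v):
    """Scaled dot-product self-attention over time frames.
    q, k, v: (time, dim) arrays."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
spec = rng.random((64, 40))            # toy magnitude spectrogram (freq x time)
kernel = rng.standard_normal((3, 3))   # one filter; learned in a real model

feat = conv2d_valid(spec, kernel)      # local spectro-temporal features
frames = feat.T                        # one feature vector per time frame
ctx = attention(frames, frames, frames)  # attend across the sequence

print(feat.shape)  # (62, 38)
print(ctx.shape)   # (38, 62)
```

In the paper's model the convolutional features would feed a learned attention layer whose output is used to estimate a separation mask; here the point is only that convolution captures local structure while attention lets every frame draw on the whole sequence.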


Author(s):  
Lu Yin ◽  
Ziteng Wang ◽  
Risheng Xia ◽  
Junfeng Li ◽  
Yonghong Yan