Single Channel Speech Enhancement for Mixed Non-stationary Noise Environments

Author(s): Sachin Singh, Manoj Tripathy, R. S. Anand
Author(s): Maximilian Strake, Bruno Defraene, Kristoff Fluyt, Wouter Tirry, Tim Fingscheidt

Abstract

Single-channel speech enhancement in highly non-stationary noise conditions is a very challenging task, especially when interfering speech is included in the noise. Deep learning-based approaches have notably improved the performance of speech enhancement algorithms under such conditions, but they still introduce speech distortions when strong noise suppression is to be achieved. We propose to address this problem with a two-stage approach: noise suppression is performed first and natural-sounding speech is subsequently restored, using neural network topologies and loss functions chosen specifically for each task. A mask-based long short-term memory (LSTM) network is employed for noise suppression, and speech restoration is performed via spectral mapping with a convolutional encoder-decoder network (CED). The proposed method improves speech quality (PESQ) over state-of-the-art single-stage methods by about 0.1 points for unseen highly non-stationary noise types, including interfering speech. Furthermore, it increases intelligibility in low-SNR conditions and consistently outperforms all reference methods.
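To make the two-stage pipeline concrete, the following is a minimal PyTorch sketch of the architecture the abstract describes: an LSTM that predicts a bounded spectral mask for noise suppression, followed by a small convolutional encoder-decoder that performs spectral mapping for speech restoration. All layer sizes, class names, and the dummy input shape are illustrative assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn

class MaskLSTM(nn.Module):
    # Stage 1: an LSTM predicts a bounded [0, 1] mask that is applied to the
    # noisy magnitude spectrum to suppress noise.
    def __init__(self, n_bins=256, hidden=256, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_bins, hidden, num_layers=num_layers, batch_first=True)
        self.proj = nn.Linear(hidden, n_bins)

    def forward(self, noisy_mag):                  # (batch, frames, bins)
        h, _ = self.lstm(noisy_mag)
        mask = torch.sigmoid(self.proj(h))         # bounded spectral mask
        return mask * noisy_mag                    # noise-suppressed spectrum

class RestorationCED(nn.Module):
    # Stage 2: a convolutional encoder-decoder maps the suppressed spectrum
    # to a restored spectrum (spectral mapping).
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, suppressed):                 # (batch, frames, bins)
        x = suppressed.unsqueeze(1)                # add a channel axis for Conv2d
        return self.dec(self.enc(x)).squeeze(1)

# Two-stage inference: suppress noise first, then restore the speech spectrum.
noisy = torch.rand(4, 100, 256)                    # dummy magnitude spectrogram
restored = RestorationCED()(MaskLSTM()(noisy))     # shape (4, 100, 256)

In the paper, each stage is trained with a loss function suited to its task; in this sketch, a simple mean-squared-error loss on the restored spectrum against the clean-speech target would be the most basic stand-in.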

