Interaural coherence induced ideal binary mask for binaural speech separation and dereverberation

Author(s):  
Yi-Ting Chen ◽  
Tzu-Hao Chen ◽  
Mao-Chang Huang ◽  
Tai-Shih Chi

This paper reports a subjective quality test of enhanced speech produced by different enhancement algorithms, for listeners with normal hearing (NH) as well as listeners with hearing impairment (HI). In the literature, subjective quality evaluation of speech enhancement methods mostly targets NH listeners; fewer attempts at subjective evaluation for HI listeners are observed. The algorithms evaluated come from four classes: spectral subtraction (SS), statistical model based (minimum mean square error), subspace (PKLT) and auditory (ideal binary mask using the STFT, ideal binary mask using a gammatone filterbank and ideal binary mask using a gammachirp filterbank). The algorithms are evaluated using four types of real-world noise recorded in Indian scenarios, namely cafeteria, traffic, station and train, at -5, 0, 5 and 10 dB SNR. The evaluation follows the ITU-T P.835 standard in terms of three parameters: the speech signal alone, the background noise and the overall quality. A noisy speech database developed in the Indian regional language Marathi at the four SNRs (-5, 0, 5 and 10 dB) is used for evaluation. Significant improvement is observed for the ideal binary mask algorithms in terms of overall quality and signal distortion ratings for both NH and HI listeners. The performance of the minimum mean square error method is also comparable to the ideal binary mask algorithms in some cases.
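The STFT-based ideal binary mask evaluated above can be sketched as follows. This is a minimal illustration under assumed parameters (16 kHz sampling rate, 512-sample frames, 0 dB local criterion), not the authors' implementation: each time-frequency unit is kept when its local SNR, computed from the separately known clean and noise signals, exceeds the local criterion, and zeroed otherwise.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(clean, noise, fs=16000, nperseg=512, lc_db=0.0):
    """STFT-domain ideal binary mask: 1 where the local SNR exceeds
    the local criterion lc_db, 0 elsewhere. Parameters are illustrative."""
    _, _, S = stft(clean, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    snr_db = 20.0 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
    return (snr_db > lc_db).astype(float)

def apply_mask(mixture, mask, fs=16000, nperseg=512):
    """Apply a T-F mask to the mixture STFT and resynthesize with the iSTFT."""
    _, _, X = stft(mixture, fs=fs, nperseg=nperseg)
    _, x_hat = istft(X * mask, fs=fs, nperseg=nperseg)
    return x_hat
```

In a real listening test the masked output would then be presented to listeners; gammatone or gammachirp variants replace the STFT analysis with the corresponding filterbank while keeping the same SNR-thresholding rule.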


2020 ◽  
Vol 17 (4) ◽  
pp. 507-514
Author(s):  
Sidra Sajid ◽  
Ali Javed ◽  
Aun Irtaza

Speech and music segregation from a single channel is a challenging task due to background interference and the intermingled voice and music signals. It is of immense importance due to its utility in a wide range of applications such as music information retrieval, singer identification, and lyrics recognition and alignment. This paper presents an effective method for speech and music segregation. Exploiting the repeating nature of music, we first detect the local repeating structures in the signal using a locally defined window for each segment. After detecting the repeating structures, we extract them and perform separation using a soft time-frequency mask. We then apply an ideal binary mask to enhance speech and music intelligibility. We evaluated the proposed method on mixtures at -5 dB, 0 dB and 5 dB from the Multimedia Information Retrieval-1000 clips (MIR-1K) dataset. Experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods in terms of Global Normalized Signal-to-Distortion Ratio (GNSDR).
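The repeating-structure idea can be sketched as a REPET-style soft mask. This is a hypothetical simplification, not the paper's method: the repeating period is assumed known in advance (the paper detects local repeating structures with a locally defined window), the spectrogram is segmented at that period, the element-wise median across segments serves as the repeating (music) model, and the soft mask is the ratio of the model to the mixture magnitude.

```python
import numpy as np
from scipy.signal import stft

def repeating_soft_mask(mixture, period_frames, fs=16000, nperseg=512):
    """REPET-style sketch with an assumed known period (in STFT frames):
    median across period-length segments gives the repeating model,
    from which a soft mask in [0, 1] for the repeating part is derived."""
    _, _, X = stft(mixture, fs=fs, nperseg=nperseg)
    V = np.abs(X)
    n_seg = V.shape[1] // period_frames
    V = V[:, :n_seg * period_frames]                      # whole segments only
    segs = V.reshape(V.shape[0], n_seg, period_frames)
    model = np.median(segs, axis=1)                       # one repeating period
    W = np.minimum(np.tile(model, n_seg), V)              # tile and clip to V
    return W / (V + 1e-12)                                # soft mask in [0, 1]
```

Thresholding this soft mask (e.g. at 0.5) yields a binary mask of the kind used in the enhancement stage; the complement of the mask attenuates the repeating music and leaves the non-repeating voice.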
