Computational Auditory Scene Analysis
Recently Published Documents


TOTAL DOCUMENTS: 62 (five years: 6)

H-INDEX: 12 (five years: 0)

2020
Author(s): Aaron Nicolson, Kuldip K. Paliwal

The estimation of the clean speech short-time magnitude spectrum (MS) is key for speech enhancement and separation. Moreover, an automatic speech recognition (ASR) system that employs a front-end relies on clean speech MS estimation to remain robust. Training targets for deep learning approaches to clean speech MS estimation fall into three main categories: computational auditory scene analysis (CASA), MS, and minimum mean-square error (MMSE) training targets. In this study, we aim to determine which training target produces enhanced/separated speech of the highest quality and intelligibility, and which is most suitable as a front-end for robust ASR. The training targets were evaluated using a temporal convolutional network (TCN) on the DEMAND Voice Bank and Deep Xi datasets, which include real-world non-stationary and coloured noise sources at multiple SNR levels. Seven objective measures were used, including the word error rate (WER) of the Deep Speech ASR system. We find that MMSE training targets produce the highest objective quality scores. We also find that CASA training targets, in particular the ideal ratio mask (IRM), produce the highest intelligibility scores and perform best as a front-end for robust ASR.
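The ideal ratio mask (IRM) highlighted above is a standard CASA training target. As a minimal sketch (one common definition from the literature, not necessarily the exact variant used in this study), it can be computed from the clean-speech and noise magnitude spectra as follows; the function name and the small epsilon are illustrative:

```python
import numpy as np

def ideal_ratio_mask(clean_mag, noise_mag, beta=0.5):
    """Ideal ratio mask (IRM), a common CASA training target.

    One widely used definition:
        IRM = (|S|^2 / (|S|^2 + |N|^2)) ** beta,  with beta = 0.5,
    where |S| and |N| are the clean-speech and noise magnitude spectra.
    A small epsilon guards against division by zero.
    """
    s2 = clean_mag ** 2
    n2 = noise_mag ** 2
    return (s2 / (s2 + n2 + 1e-12)) ** beta

# Toy example: 2 time frames x 3 frequency bins of made-up magnitudes.
clean = np.array([[1.0, 0.5, 0.0],
                  [2.0, 1.0, 0.1]])
noise = np.array([[1.0, 0.5, 1.0],
                  [0.0, 1.0, 0.1]])
mask = ideal_ratio_mask(clean, noise)
# Mask values lie in [0, 1]: near 1 where speech dominates, near 0 where
# noise dominates. At inference, an estimated mask is applied to the noisy
# mixture magnitude to obtain the enhanced MS.
```

The soft values in [0, 1] are what distinguish the IRM from the binary masks used in earlier CASA work, and are one reason ratio masks tend to score well on intelligibility measures.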



Author(s): Hongyan Li, Yue Wang, Rongrong Zhao, Xueying Zhang

On the basis of the theory of blind monaural speech separation using computational auditory scene analysis (CASA), this paper proposes a two-talker speech separation system that combines CASA with speaker recognition to separate speech from competing speech interference. First, a tandem algorithm is used to organize voiced speech; then, based on the clustering of gammatone frequency cepstral coefficients (GFCCs), an objective function is established to recognize the speaker, and the best grouping is found through exhaustive or beam search, so that voiced speech is organized sequentially. Second, unvoiced segments are generated by estimating onsets/offsets, and the unvoiced–voiced (U–V) and unvoiced–unvoiced (U–U) segments are then separated respectively: the U–V segments are assigned via the binary mask of the separated voiced speech, while the U–U segments are divided evenly between the talkers. In this way, the unvoiced segments are separated. Simulations and performance evaluations verify the feasibility and effectiveness of the proposed algorithm.
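The grouping step for U–V segments above can be sketched in a few lines. This is an illustrative reading of the described rule (function name and arrays are hypothetical, not the paper's implementation): a time–frequency unit of an unvoiced segment is assigned to the talker whose already-separated voiced speech dominates that unit, via a binary mask:

```python
import numpy as np

def assign_uv_by_binary_mask(unvoiced_mag, voiced1_mag, voiced2_mag):
    """Assign unvoiced-voiced (U-V) time-frequency units to two talkers
    using the binary mask derived from their separated voiced speech.

    A unit goes to talker 1 where talker 1's voiced magnitude dominates,
    otherwise to talker 2 (a sketch of the grouping rule described above).
    """
    mask1 = (voiced1_mag > voiced2_mag).astype(float)  # binary mask
    mask2 = 1.0 - mask1                                # complement
    return unvoiced_mag * mask1, unvoiced_mag * mask2

# Toy 2x2 spectrograms: talker 1 dominates units (0,0) and (1,1).
voiced1 = np.array([[2.0, 0.0], [1.0, 3.0]])
voiced2 = np.array([[1.0, 1.0], [2.0, 1.0]])
unvoiced = np.full((2, 2), 4.0)
to_talker1, to_talker2 = assign_uv_by_binary_mask(unvoiced, voiced1, voiced2)
```

The two outputs partition the unvoiced energy: every unit is given entirely to one talker, which is the defining behaviour of a binary (as opposed to ratio) mask.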

