Enhanced Automatic Speech Recognition System Based on Enhancing Power-Normalized Cepstral Coefficients

2019, Vol. 9 (10), pp. 2166
Author(s): Mohamed Tamazin, Ahmed Gouda, Mohamed Khedr

Many new consumer applications are based on automatic speech recognition (ASR) systems, such as voice command interfaces, speech-to-text applications, and data entry processes. Although ASR systems have improved remarkably in recent decades, recognition performance still degrades significantly in noisy environments. Developing a robust ASR system that can work under real-world noise and other acoustically distorting conditions is therefore an attractive research topic. Many advanced algorithms have been developed in the literature to address this problem; most of them model the behavior of the human auditory system when it perceives noisy speech. In this research, the power-normalized cepstral coefficient (PNCC) system is modified to increase robustness against different types of environmental noise, using a new technique that combines gammatone channel filtering with channel bias minimization to suppress noise effects. The TIDIGITS database is used to evaluate the performance of the proposed system against state-of-the-art techniques in the presence of additive white Gaussian noise (AWGN) and seven different types of environmental noise. The recognition task consists of identifying a single word from a set of only 11 possibilities. The experimental results show that the proposed method provides significant improvements in recognition accuracy at low signal-to-noise ratios (SNRs). In the case of subway noise at SNR = 5 dB, the proposed method outperforms the mel-frequency cepstral coefficient (MFCC) and relative spectral (RASTA)–perceptual linear predictive (PLP) methods by 55% and 47%, respectively. Moreover, the recognition rate of the proposed method is higher than those of the gammatone frequency cepstral coefficient (GFCC) and PNCC methods in the case of car noise: it is 40% higher than the GFCC method at SNR = 0 dB and 20% higher than the PNCC method at SNR = −5 dB.
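For readers who want to experiment with the kind of front end discussed above, the following Python sketch mixes additive white Gaussian noise into a clean signal at a chosen SNR and computes simplified gammatone-filterbank cepstral coefficients. It is a minimal illustration only, assuming ERB-spaced triangular filters, a 1/15 power-law nonlinearity, and common frame settings; these are generic PNCC-style defaults, not parameters from the paper, and the channel-bias-minimization step of the proposed method is not reproduced here.

```python
# Illustrative sketch only -- not the authors' exact pipeline. It shows (a) mixing
# additive white Gaussian noise into a clean signal at a chosen SNR and (b) a
# simplified gammatone-filterbank cepstral front end (ERB-spaced triangular
# approximation + power-law compression + DCT). Filter count, frame sizes, and the
# 1/15 exponent are common choices, not values taken from the paper.
import numpy as np
from scipy.fftpack import dct

def add_awgn(clean, snr_db):
    """Add white Gaussian noise so the mixture has the requested SNR (dB)."""
    signal_power = np.mean(clean ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=clean.shape)
    return clean + noise

def erb_space(low_hz, high_hz, n_filters):
    """Center frequencies spaced on the ERB-rate scale (Glasberg & Moore)."""
    ear_q, min_bw = 9.26449, 24.7
    return -(ear_q * min_bw) + np.exp(
        np.linspace(1, n_filters, n_filters)
        * (-np.log(high_hz + ear_q * min_bw) + np.log(low_hz + ear_q * min_bw))
        / n_filters
    ) * (high_hz + ear_q * min_bw)

def gammatone_cepstra(signal, fs, n_filters=40, n_ceps=13,
                      frame_len=0.025, frame_step=0.010):
    """Frame the signal, integrate spectral power in ERB-spaced bands,
    apply power-law compression, and decorrelate with a DCT."""
    win, step, n_fft = int(frame_len * fs), int(frame_step * fs), 512
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    centers = np.sort(erb_space(50.0, fs / 2.0, n_filters))

    # Crude triangular approximation of the gammatone magnitude responses.
    fbank = np.zeros((n_filters, freqs.size))
    for i, fc in enumerate(centers):
        bw = 24.7 * (4.37 * fc / 1000.0 + 1.0)        # ERB bandwidth at fc
        fbank[i] = np.maximum(0.0, 1.0 - np.abs(freqs - fc) / bw)

    frames = []
    for start in range(0, len(signal) - win + 1, step):
        frame = signal[start:start + win] * np.hamming(win)
        power = np.abs(np.fft.rfft(frame, n_fft)) ** 2
        band_energy = fbank @ power + 1e-10            # avoid zero before compression
        compressed = band_energy ** (1.0 / 15.0)       # PNCC-style power-law nonlinearity
        frames.append(dct(compressed, norm='ortho')[:n_ceps])
    return np.array(frames)

# Example: run the front end on a synthetic tone corrupted at SNR = 5 dB.
fs = 16000
t = np.arange(0, 1.0, 1.0 / fs)
clean = 0.5 * np.sin(2 * np.pi * 440.0 * t)
noisy = add_awgn(clean, snr_db=5.0)
features = gammatone_cepstra(noisy, fs)
print(features.shape)   # (number of frames, 13)
```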

Author(s): M. Petroni, C. Collet, N. Fumai, K. Roger, C. Yien, ...

Abstract: An automatic speech recognition system is being developed for a patient data management system (PDMS) for the pediatric intensive care unit (ICU) at the Montreal Children's Hospital. In this system, fourteen bedside monitors are linked by a local area network to a personal computer for real-time acquisition of vital-sign data and the graphical display of trends. The PDMS also allows for the manual input of data, such as fluid balance data, by means of a keyboard and a pointing device. This paper describes the multimodal human-computer interface of the bedside data entry system, focusing on the speech recognition and generation subsystems and their integration in the OS/2 Presentation Manager environment.
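As a purely hypothetical sketch of the data flow this abstract describes, the snippet below simulates several bedside monitors feeding vital-sign samples to one acquisition process that keeps a per-bed trend history and also accepts manual entries such as fluid balance. All names, the sample format, and the use of Python are illustrative assumptions; the original system ran under OS/2 with speech and pointing-device input.

```python
# Hypothetical illustration of the PDMS data flow: poll monitors, keep trends,
# accept manual entries. Monitor reads are simulated with random values.
import random
from collections import defaultdict
from datetime import datetime

NUM_BEDS = 14          # the PDMS connected fourteen bedside monitors

def read_monitor(bed_id):
    """Stand-in for a network read from one bedside monitor."""
    return {
        "bed": bed_id,
        "time": datetime.now().isoformat(timespec="seconds"),
        "heart_rate": random.randint(80, 160),   # simulated values
        "spo2": random.randint(90, 100),
    }

trends = defaultdict(list)      # per-bed history used for trend display

def acquisition_cycle():
    """One polling pass over all monitors, appending samples to the trends."""
    for bed in range(1, NUM_BEDS + 1):
        trends[bed].append(read_monitor(bed))

def manual_entry(bed_id, field, value):
    """Manual entry (keyboard, pointing device, or speech), e.g. fluid balance."""
    trends[bed_id].append({
        "bed": bed_id,
        "time": datetime.now().isoformat(timespec="seconds"),
        field: value,
    })

if __name__ == "__main__":
    acquisition_cycle()
    manual_entry(3, "fluid_balance_ml", 120)
    print(len(trends[3]), "records for bed 3")
```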

