Acoustic Modeling with Deep Belief Networks for Russian Speech Recognition

Author(s):  
Mikhail Zulkarneev ◽  
Ruben Grigoryan ◽  
Nikolay Shamraev
Author(s):  
Abdel-rahman Mohamed ◽  
George E. Dahl ◽  
Geoffrey Hinton
2012 ◽  
Vol 20 (1) ◽  
pp. 14-22

Author(s):  
Mahboubeh Farahat ◽  
Ramin Halavati

Most current speech recognition systems use Hidden Markov Models (HMMs) to deal with the temporal variability of speech and Gaussian mixture models (GMMs) to determine how well each state of each HMM fits a frame, or a short window of frames, of coefficients representing the acoustic input. In these systems the acoustic input is represented by Mel Frequency Cepstral Coefficients (MFCCs) computed over short temporal windows of the spectrogram known as frames. However, MFCCs are not robust to noise; consequently, when training and test conditions differ, the accuracy of speech recognition systems decreases. On the other hand, feeding GMMs the MFCCs of a larger window of frames requires more computational power. In this paper, Deep Belief Networks (DBNs) are used to extract discriminative information from a larger window of frames. Nonlinear transformations yield high-order, low-dimensional features that are robust to variation in the input speech. Multiple-speaker isolated-word recognition tasks with 100 and 200 words, in clean and noisy environments, have been used to test this method. The experimental results indicate that this new method of feature encoding results in much better word recognition accuracy.
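The feature-extraction idea in the abstract can be sketched as follows. This is not the authors' code: it is a minimal single-layer illustration of the DBN building block, a restricted Boltzmann machine trained with one-step contrastive divergence (CD-1), mapping a stacked context window of MFCC frames to a lower-dimensional hidden representation. The window size, layer sizes, learning rate, and random toy data are illustrative assumptions.

```python
# Minimal RBM sketch (illustrative, not the paper's implementation).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        # Small random weights; biases start at zero.
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        # Positive phase: hidden activations given the data.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # Negative phase: one Gibbs step (reconstruct, then re-infer).
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        # CD-1 gradient approximation.
        batch = v0.shape[0]
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / batch
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def make_windows(frames, context=5):
    """Stack 2*context+1 consecutive frames into one input vector each."""
    n, d = frames.shape
    return np.array([frames[t - context:t + context + 1].ravel()
                     for t in range(context, n - context)])

# Toy data standing in for 13-dimensional MFCC frames of one utterance.
frames = rng.random((100, 13))
windows = make_windows(frames, context=5)   # 11 frames * 13 coeffs per row
rbm = RBM(n_visible=windows.shape[1], n_hidden=40)
for _ in range(10):
    rbm.cd1_step(windows)
features = rbm.hidden_probs(windows)        # low-dimensional DBN-style features
```

In a full DBN, several such layers would be trained greedily, each on the hidden activations of the layer below, before the resulting features replace raw MFCC windows as input to the recognizer.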


Author(s):  
C.-H. Lee ◽  
E. Giachin ◽  
L. R. Rabiner ◽  
R. Pieraccini ◽  
A. E. Rosenberg
