Measuring information provided by language model and acoustic model in probabilistic speech recognition: Theory and experimental results

1990 ◽  
Vol 9 (5-6) ◽  
pp. 531-539 ◽  
Author(s):  
Marco Ferretti ◽  
Giulio Maltese ◽  
Stefano Scarci
2021 ◽  
Vol 336 ◽  
pp. 06016
Author(s):  
Taiben Suan ◽  
Rangzhuoma Cai ◽  
Zhijie Cai ◽  
Ba Zu ◽  
Baojia Gong

We built a language model based on the Transformer network architecture, which relies entirely on attention mechanisms, dispensing with recurrence and convolutions. Through the transliteration of Tibetan into the International Phonetic Alphabet (IPA), the language model was trained using the syllables and phonemes of Tibetan words as modeling units, predicting the corresponding Tibetan sentences from the contextual semantics of the IPA sequence. This language model was then combined with an acoustic model to form a Tibetan speech recognition system, which was compared against end-to-end Tibetan speech recognition.
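A minimal sketch of such a phoneme/syllable-level Transformer language model is given below (PyTorch). The vocabulary size, layer counts, and learned positional encoding are assumptions for illustration only, not the authors' exact configuration.

```python
# Sketch (not the authors' exact model): a Transformer encoder LM over IPA phoneme/
# syllable token ids that predicts the next token, assuming a hypothetical vocabulary
# and standard "Attention Is All You Need"-style defaults.
import torch
import torch.nn as nn

class PhonemeTransformerLM(nn.Module):
    def __init__(self, vocab_size=512, d_model=256, nhead=4, num_layers=4, max_len=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)            # learned positional encoding (assumption)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=1024, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.out = nn.Linear(d_model, vocab_size)            # next phoneme/syllable logits

    def forward(self, tokens):                               # tokens: (batch, seq_len) int64
        seq_len = tokens.size(1)
        pos_ids = torch.arange(seq_len, device=tokens.device)
        x = self.embed(tokens) + self.pos(pos_ids)
        # causal mask so each position attends only to earlier context
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len).to(tokens.device)
        h = self.encoder(x, mask=mask)
        return self.out(h)                                   # (batch, seq_len, vocab_size)
```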


Author(s):  
Vincent Elbert Budiman ◽  
Andreas Widjaja

Here, the development of an acoustic and language model is presented. A low Word Error Rate is an early sign of a good language and acoustic model. Although there are evaluation parameters other than Word Error Rate, our work focused on building a Bahasa Indonesia model with approximately 2000 common words and achieved the threshold of 25% Word Error Rate. Several experiments covered different cases, training data, and testing data, with Word Error Rate and testing ratio as the main points of comparison. The language and acoustic models were built using Sphinx4 from Carnegie Mellon University, with a Hidden Markov Model for the acoustic model and an ARPA model for the language model. The model configuration parameters, Beam Width and Force Alignment, directly correlate with Word Error Rate; they were set to 1e-80 for Beam Width and 1e-60 for Force Alignment to prevent underfitting or overfitting of the acoustic model. The goals of this research are to build continuous speech recognition for Bahasa Indonesia with a low Word Error Rate and to determine the optimal amounts of training and testing data that minimize the Word Error Rate.
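Since Word Error Rate is the central metric here, the sketch below shows the standard WER computation via Levenshtein alignment of word sequences; the example sentences are hypothetical and not taken from the actual test set.

```python
# Standard word error rate: (substitutions + deletions + insertions) / reference length,
# computed by Levenshtein alignment over word sequences.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example: one substituted word out of four reference words -> WER = 0.25,
# i.e. exactly the 25% threshold mentioned above.
print(word_error_rate("saya pergi ke pasar", "saya pergi ke kantor"))
```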


2012 ◽  
Vol 239-240 ◽  
pp. 1100-1103 ◽  
Author(s):  
Jing Yun ◽  
Zhi Qiang Ma ◽  
Yi La Su ◽  
Xiu Lan Xie

A triphone DDBHMM (Duration Distribution Based HMM) is presented as the acoustic model for Mongolian continuous speech recognition, and the Mongolian acoustic model is optimized by state-binding. The experiments compared the triphone DDBHMM, diphone DDBHMM, and triphone HMM on the HTK platform and analyzed their effects on the accuracy of the acoustic layer. The experimental results show that the triphone DDBHMM significantly improves the recognition performance of Mongolian continuous speech recognition.
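The sketch below illustrates the idea behind a duration-distribution-based HMM: an explicit duration term replaces the geometric duration implied by a state's self-loop. The Gaussian duration model and all numbers are assumptions for illustration, not the paper's HTK implementation.

```python
# Simplified illustration: scoring a run of d frames in one HMM state.
# A standard HMM implies a geometric duration via the self-loop probability;
# a DDBHMM scores the duration with an explicit model (Gaussian assumed here).
import math

def log_geometric_duration(d, self_loop_prob):
    """Standard HMM: stay d-1 times, then leave the state."""
    return (d - 1) * math.log(self_loop_prob) + math.log(1.0 - self_loop_prob)

def log_gaussian_duration(d, mean, var):
    """DDBHMM: explicit duration distribution estimated from data (Gaussian assumed)."""
    return -0.5 * (math.log(2 * math.pi * var) + (d - mean) ** 2 / var)

def segment_log_score(emission_logprobs, duration_logprob_fn, **dur_params):
    """Acoustic score of one state segment = emission terms + duration term."""
    d = len(emission_logprobs)
    return sum(emission_logprobs) + duration_logprob_fn(d, **dur_params)

# A 12-frame segment: the geometric model penalizes long stays more sharply than a
# Gaussian centered near the typical duration of the phone.
frames = [-2.0] * 12
print(segment_log_score(frames, log_geometric_duration, self_loop_prob=0.8))
print(segment_log_score(frames, log_gaussian_duration, mean=10.0, var=9.0))
```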


Author(s):  
Zhong Meng ◽  
Sarangarajan Parthasarathy ◽  
Eric Sun ◽  
Yashesh Gaur ◽  
Naoyuki Kanda ◽  
...  

2021 ◽  
Vol 11 (6) ◽  
pp. 2866
Author(s):  
Damheo Lee ◽  
Donghyun Kim ◽  
Seung Yun ◽  
Sanghun Kim

In this paper, we propose a new method for code-switching (CS) automatic speech recognition (ASR) in Korean. First, the phonetic variations of English words as pronounced by Korean speakers must be considered, so we sought a unified pronunciation model based on phonetic knowledge and deep learning. Second, we extracted CS sentences semantically similar to the target domain and applied language model (LM) adaptation to counter the bias toward Korean caused by the imbalanced training data. In this experiment, the training data were AI Hub (1033 h) in Korean and Librispeech (960 h) in English. Compared to the baseline, the proposed method achieved an error reduction rate (ERR) of up to 11.6% with phonetic variant modeling and 17.3% when semantically similar sentences were applied to the LM adaptation. Considering only English words, the word correction rate improved by up to 24.2% over the baseline. The proposed method thus appears to be very effective for CS speech recognition.
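The data-selection step can be pictured as ranking candidate code-switching sentences by similarity to target-domain text and keeping the most similar ones for LM adaptation. The sketch below uses TF-IDF cosine similarity as a stand-in; the paper's actual similarity measure is not specified here, and the toy sentences are hypothetical.

```python
# Hypothetical sketch of the data-selection idea: rank candidate code-switching sentences
# by cosine similarity to target-domain text and keep the top-N for LM adaptation.
# TF-IDF is used as a stand-in for whatever similarity model the authors actually used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_similar_sentences(target_domain_text, candidate_sentences, top_n=2):
    vectorizer = TfidfVectorizer()
    vectors = vectorizer.fit_transform([target_domain_text] + candidate_sentences)
    scores = cosine_similarity(vectors[0:1], vectors[1:]).ravel()
    ranked = sorted(zip(candidate_sentences, scores), key=lambda p: p[1], reverse=True)
    return [sentence for sentence, _ in ranked[:top_n]]

# Toy romanized example; in practice the candidates come from a large CS corpus.
target = "music streaming playlist app"
candidates = [
    "oneul playlist e saero chuga han song",    # mentions playlist / song
    "nalssi ga jota",                           # unrelated (weather)
    "streaming app eseo music jaesaeng",        # mentions streaming / music / app
]
print(select_similar_sentences(target, candidates))
```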

