Large vocabulary Russian speech recognition using syntactico-statistical language modeling

2014 · Vol 56 · pp. 213-228
Author(s): Alexey Karpov, Konstantin Markov, Irina Kipyatkova, Daria Vazhenina, Andrey Ronzhin
10.5772/6380 · 2008
Author(s): Ebru Arısoy, Mikko Kurimo, Murat Saraçlar, Teemu Hirsimäki, Janne Pylkkönen, ...

1989 · Vol 86 (S1) · pp. S75-S75
Author(s): D. O'Shaughnessy, V. Gupta, M. Lennig, F. Seitz, P. Mermelstein

2019 · Vol 2019 · pp. 1-8
Author(s): Edvin Pakoci, Branislav Popović, Darko Pekar

Serbian belongs to a group of highly inflective, morphologically rich languages that use many different word suffixes to express grammatical, syntactic, and semantic features. This behaviour typically causes numerous recognition errors, especially in large vocabulary systems: even when good acoustic matching leads the automatic speech recognition system to predict the correct lemma, a wrong word ending often appears and is counted as an error. The effect is more pronounced for contexts not present in the language model training corpus. This manuscript examines an approach that incorporates morphological categories of words into language modeling and presents the resulting gains in word error rate and perplexity. The categories include word type, word case, grammatical number, and gender, and each was assigned to the words in the system vocabulary where applicable. These additional word features produced significant improvements over the baseline system, both for n-gram-based and neural-network-based language models. The proposed system can help eliminate many such errors in large vocabulary applications such as dictation, both for Serbian and for other languages with similar characteristics.
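As a rough illustration of the idea described in the abstract, the sketch below shows one way morphological categories could be attached to vocabulary words before language model training. This is a minimal sketch, not the authors' implementation: the MORPH_LEXICON entries, the tag names, and the factor() helper are hypothetical, and the toy add-one-smoothed bigram counter merely stands in for the real n-gram or neural LM training step.

```python
# Minimal sketch (hypothetical): attach morphological categories
# (type, case, number, gender) to vocabulary words and train a toy
# bigram model over the factored tokens.
from collections import defaultdict

# Hypothetical morphological lexicon: word -> (type, case, number, gender).
MORPH_LEXICON = {
    "velika": ("ADJ", "NOM", "SG", "FEM"),
    "kuća":   ("NOUN", "NOM", "SG", "FEM"),
    "kuće":   ("NOUN", "GEN", "SG", "FEM"),
}

def factor(word):
    """Map a word to a factored token: word|type|case|number|gender."""
    feats = MORPH_LEXICON.get(word)
    if feats is None:
        return word  # words without lexicon coverage pass through unchanged
    return "|".join((word,) + feats)

def train_bigram(sentences):
    """Count bigrams over factored tokens (toy add-one-smoothed model)."""
    unigrams, bigrams = defaultdict(int), defaultdict(int)
    for sent in sentences:
        toks = ["<s>"] + [factor(w) for w in sent.split()] + ["</s>"]
        for a, b in zip(toks, toks[1:]):
            unigrams[a] += 1
            bigrams[(a, b)] += 1
    vocab_size = len(unigrams) + 1
    def prob(a, b):
        return (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
    return prob

if __name__ == "__main__":
    prob = train_bigram(["velika kuća stoji na brdu"])
    # Seen, morphologically consistent continuation ...
    print(prob(factor("velika"), factor("kuća")))
    # ... vs. an unseen, case-mismatched ending, which only gets smoothing mass.
    print(prob(factor("velika"), factor("kuće")))
```

In a real system the factored corpus and vocabulary would be fed to a standard n-gram toolkit or a neural network language model, so that words sharing morphological features can reinforce each other's contexts instead of being treated as unrelated surface forms.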

