Hybrid CTC-Attention Network-Based End-to-End Speech Recognition System for Korean Language

Author(s):  
Hosung Park ◽  
Changmin Kim ◽  
Hyunsoo Son ◽  
Soonshin Seo ◽  
Ji-Hwan Kim

In this study, an automatic end-to-end speech recognition system based on a hybrid CTC-attention network is proposed for the Korean language. Deep neural network/hidden Markov model (DNN/HMM)-based speech recognition systems have driven dramatic improvement in this area. However, it is difficult for non-experts to develop such systems for new applications. End-to-end approaches simplify the speech recognition system into a single-network architecture, so a system can be developed without expert knowledge. In this paper, we propose a hybrid CTC-attention network as an end-to-end speech recognition model for Korean. This model effectively utilizes a CTC objective function during attention model training, which improves both speech recognition accuracy and training speed. In most languages, end-to-end speech recognition uses characters as output labels. For Korean, however, character-based end-to-end speech recognition is not an efficient approach because the language has 11,172 possible characters (Hangul syllable blocks). This number is large compared to other languages: English has 26 characters, for example, and Japanese has 50. To address this problem, we utilize the 49 Korean graphemes as output labels. Experimental results show a 10.02% character error rate (CER) when 740 hours of Korean training data are used.
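The hybrid objective described above interpolates the CTC and attention losses during multitask training. A minimal sketch of the combination, assuming a mixing weight `lam` (an illustrative value; the abstract does not state the weight used in the paper):

```python
def hybrid_loss(ctc_loss, attention_loss, lam=0.3):
    """Multitask objective: lam * L_ctc + (1 - lam) * L_attention.

    ctc_loss and attention_loss are scalar losses from the two decoder
    branches, computed over the same shared encoder output. lam trades
    off the monotonic alignment constraint of CTC against the more
    flexible alignments learned by attention.
    """
    return lam * ctc_loss + (1.0 - lam) * attention_loss
```

At decoding time, the same interpolation is typically applied to the per-hypothesis log-probabilities of the two branches during beam search.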

10.2196/18677 ◽  
2020 ◽  
Vol 8 (6) ◽  
pp. e18677
Author(s):  
Weifeng Fu

Background: Speech recognition is a technology that enables machines to understand human language. Objective: In this study, speech recognition of isolated words from a small vocabulary was applied to the field of mental health counseling. Methods: A software platform was used to establish a human-machine chat for psychological counseling. The software uses speech recognition technology to decode the user's voice input. The system analyzes and processes the user's voice information against several internal related databases and then gives the user accurate feedback. For users who need psychological treatment, the system provides psychological education. Results: The speech recognition system included stages such as speech extraction, endpoint detection, feature extraction, training, and recognition. Conclusions: A hidden Markov model was adopted, based on multithreaded programming under a VC2005 compilation environment, to run the algorithm in parallel and improve the efficiency of speech recognition. After the design was completed, simulation debugging was performed in the laboratory. The experimental results showed that the designed program met the basic requirements of a speech recognition system.
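Endpoint detection, one of the stages listed above, is commonly implemented with a short-time energy threshold. A minimal sketch (the frame length and threshold here are illustrative assumptions, not values from the study):

```python
def short_time_energy(signal, frame_len=160):
    """Average energy per non-overlapping frame of the sampled signal."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, frame_len)]
    return [sum(s * s for s in f) / frame_len for f in frames]

def detect_endpoints(signal, frame_len=160, threshold=1e-3):
    """Return (first, last) frame indices whose energy exceeds the
    threshold, i.e. the start and end of the spoken word; None if the
    signal is all silence."""
    energy = short_time_energy(signal, frame_len)
    voiced = [i for i, e in enumerate(energy) if e > threshold]
    if not voiced:
        return None
    return voiced[0], voiced[-1]
```

Real front ends usually add a zero-crossing-rate check and hangover smoothing so that quiet fricatives at word boundaries are not clipped.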


2019 ◽  
Vol 8 (3) ◽  
pp. 7827-7831

Kannada is a regional language of India spoken in Karnataka. This paper presents the development of a continuous Kannada speech recognition system using monophone modelling and triphone modelling in HTK. The Mel frequency cepstral coefficient (MFCC) is used as the feature extractor; it exploits a cepstral representation on a perceptual frequency scale, which leads to good recognition accuracy. A hidden Markov model is used as the classifier. In this paper, Gaussian mixture splitting is performed to capture the variations of the phones. The paper presents the performance of the continuous Kannada automatic speech recognition (ASR) system with 2, 4, 8, 16, and 32 Gaussian mixtures under monophone and context-dependent triphone modelling. The experimental results show that context-dependent triphone modelling achieves better recognition accuracy than monophone modelling as the number of Gaussian mixtures is increased.
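The perceptual frequency scale underlying MFCC extraction is the mel scale. The standard Hz-to-mel conversion (the general formula, not a detail specific to this paper's HTK configuration) is:

```python
import math

def hz_to_mel(f_hz):
    """Map a frequency in Hz to the mel scale (O'Shaughnessy formula):
    mel = 2595 * log10(1 + f / 700)."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse mapping from mels back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```

MFCC front ends space their triangular filters uniformly on this scale, so low frequencies, where speech carries most phonetic detail, receive finer resolution than high frequencies.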


2020 ◽  
Author(s):  
Prithvi R.R. Gudepu ◽  
Gowtham P. Vadisetti ◽  
Abhishek Niranjan ◽  
Kinnera Saranu ◽  
Raghava Sarma ◽  
...  

Information ◽  
2021 ◽  
Vol 12 (2) ◽  
pp. 62 ◽  
Author(s):  
Eshete Derb Emiru ◽  
Shengwu Xiong ◽  
Yaxing Li ◽  
Awet Fesseha ◽  
Moussa Diallo

Out-of-vocabulary (OOV) words are among the most challenging problems in automatic speech recognition (ASR), especially for morphologically rich languages. Most end-to-end speech recognition systems operate at the word or character level of a language. Amharic is a poorly resourced but morphologically rich language. This paper proposes a hybrid connectionist temporal classification (CTC) with attention end-to-end architecture and a syllabification algorithm for an Amharic automatic speech recognition system (AASR) using phoneme-based subword units. The syllabification algorithm helps to insert the epenthetic vowel እ[ɨ], which is not covered by our grapheme-to-phoneme (G2P) conversion algorithm developed using consonant-vowel (CV) representations of Amharic graphemes. The proposed end-to-end model was trained on various Amharic subword units, namely characters, phonemes, character-based subwords, and phoneme-based subwords generated by the byte-pair-encoding (BPE) segmentation algorithm. Experimental results showed that context-dependent phoneme-based subwords tend to yield more accurate speech recognition systems than their character, phoneme, and character-based subword counterparts. Further improvement was obtained by combining the proposed phoneme-based subwords with the syllabification algorithm and the SpecAugment data augmentation technique. The word error rate (WER) reduction was 18.38% compared to a character-based acoustic model with a word-based recurrent neural network language model (RNNLM) baseline. These phoneme-based subword models may also be useful for machine and speech translation tasks.
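The BPE segmentation algorithm that generates the subword units above works by greedily merging the most frequent adjacent symbol pair. A toy sketch of the merge-learning step (an illustration, not the paper's implementation; production toolkits such as SentencePiece handle tie-breaking and word boundaries more carefully):

```python
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    """Learn BPE merge rules from a {word: frequency} dictionary.

    Each word starts as a sequence of characters; at every step the most
    frequent adjacent symbol pair across the corpus is merged into a new
    symbol. Returns the ordered list of learned merges.
    """
    # Represent each word as space-separated symbols, initially characters.
    vocab = {" ".join(w): f for w, f in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for seq, freq in vocab.items():
            syms = seq.split()
            for a, b in zip(syms, syms[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        merges.append(best)
        merged = "".join(best)
        # Naive textual replace; fine for this toy example, though it
        # assumes no symbol collides with the pair pattern as a substring.
        vocab = {seq.replace(" ".join(best), merged): f
                 for seq, f in vocab.items()}
    return merges
```

Applying the learned merges in order to unseen words segments them into the same subword inventory, which is what eliminates OOV tokens at decoding time.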


2011 ◽  
Vol 1 (1) ◽  
pp. 9-13
Author(s):  
Pavithra M ◽  
Chinnasamy G ◽  
Azha Periasamy

A speech recognition system requires a combination of various techniques and algorithms, each of which performs a specific task toward the main goal of the system. Speech recognition performance can be enhanced by selecting the proper acoustic model. In this work, feature extraction and matching are done with SKPCA together with an unsupervised learning algorithm and maximum probability. SKPCA represents a sparse solution for KPCA: the original data can be reduced by considering the weights, i.e., the weights identify the vectors that most influence the maximization. The unsupervised learning algorithm is implemented to find a suitable representation of the labels, and maximum probability is used to maximize the normalized acoustic likelihood of the most likely state sequences of the training data. The experimental results show the efficiency of the SKPCA technique with the proposed approach, and maximum probability produces strong performance in the speech recognition system.
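Standard (non-sparse) KPCA, the method that SKPCA sparsifies, projects data onto principal directions in a kernel-induced feature space. A minimal NumPy sketch, assuming an RBF kernel for illustration (the abstract does not state the paper's kernel choice):

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def kpca(X, n_components=2, gamma=1.0):
    """Project the training points onto the top kernel principal components."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    # Center the kernel matrix, i.e. center the data in feature space.
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)          # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Normalize so each component has unit norm in feature space.
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas                       # projections of training points
```

SKPCA additionally drives most of the expansion weights (the columns of `alphas` here) to zero, so only the most influential training vectors contribute to each component.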

