Pinyin as a Feature of Neural Machine Translation for Chinese Speech Recognition Error Correction

Author(s): Dagao Duan, Shaohu Liang, Zhongming Han, Weijie Yang

Audio–Visual Speech Recognition Based on Dual Cross-Modality Attentions with the Transformer Model

2020, Vol. 10 (20), pp. 7263
Author(s): Yong-Hyeok Lee, Dong-Won Jang, Jae-Bin Kim, Rae-Hong Park, Hyung-Min Park

Since the attention mechanism was introduced in neural machine translation, it has been combined with the long short-term memory (LSTM) network or has replaced the LSTM entirely in the Transformer model to overcome the limitations of LSTM-based sequence-to-sequence (seq2seq) models. In contrast to neural machine translation, audio–visual speech recognition (AVSR) can improve performance by learning the correlation between the audio and visual modalities. However, because the audio modality carries richer information than the video of the lips, it is difficult to train AVSR attention with balanced modalities. To raise the role of the visual modality to the level of the audio modality by fully exploiting the input information when learning attentions, we propose a dual cross-modality (DCM) attention scheme that utilizes both an audio context vector computed with the video query and a video context vector computed with the audio query. Furthermore, we introduce a connectionist temporal classification (CTC) loss in combination with our attention-based model to enforce the monotonic alignments required in AVSR. Recognition experiments on the LRS2-BBC and LRS3-TED datasets show that the proposed model with the DCM attention scheme and the hybrid CTC/attention architecture achieves a relative improvement of at least 7.3% on average in word error rate (WER) over competing methods based on the Transformer model.
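To make the abstract's two ideas concrete, below is a minimal PyTorch sketch of (a) a dual cross-modality attention layer in which each modality queries the other, and (b) the common hybrid CTC/attention objective L = λ·L_CTC + (1 − λ)·L_att. The module names, dimensions, and the weight λ = 0.3 are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualCrossModalityAttention(nn.Module):
    """Sketch of DCM attention: the audio and video streams exchange queries.

    Assumes both modality streams are already projected to d_model
    (a simplification; the paper's exact fusion layout may differ).
    """

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.attend_video = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.attend_audio = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, audio: torch.Tensor, video: torch.Tensor):
        # Video context vector using the audio query:
        # each audio frame attends over the video sequence.
        video_ctx, _ = self.attend_video(query=audio, key=video, value=video)
        # Audio context vector using the video query:
        # each video frame attends over the audio sequence.
        audio_ctx, _ = self.attend_audio(query=video, key=audio, value=audio)
        return audio_ctx, video_ctx

def hybrid_ctc_attention_loss(ctc_log_probs, ctc_targets, input_lengths,
                              target_lengths, dec_logits, dec_targets,
                              lam: float = 0.3):
    """Hybrid objective: lam * L_CTC + (1 - lam) * attention cross-entropy.

    ctc_log_probs: (T, N, C) log-softmax outputs of the CTC branch.
    dec_logits:    (N, L, C) logits of the attention decoder.
    lam = 0.3 is an assumed interpolation weight, not the paper's value.
    """
    ctc = F.ctc_loss(ctc_log_probs, ctc_targets, input_lengths,
                     target_lengths, blank=0, zero_infinity=True)
    att = F.cross_entropy(dec_logits.reshape(-1, dec_logits.size(-1)),
                          dec_targets.reshape(-1), ignore_index=-1)
    return lam * ctc + (1 - lam) * att

# Toy usage: 2 clips, 50 audio frames and 25 video frames, d_model = 256.
dcm = DualCrossModalityAttention(d_model=256)
audio = torch.randn(2, 50, 256)
video = torch.randn(2, 25, 256)
audio_ctx, video_ctx = dcm(audio, video)
print(audio_ctx.shape, video_ctx.shape)  # (2, 25, 256), (2, 50, 256)
```

The CTC term constrains the alignment to be monotonic while the attention decoder models flexible dependencies; interpolating the two losses is the standard hybrid CTC/attention recipe the abstract refers to.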


2017
Author(s): Nicholas Ruiz, Mattia Antonino Di Gangi, Nicola Bertoldi, Marcello Federico

2018, Vol. 25 (2), pp. 167-199
Author(s): Yusuke Oda, Philip Arthur, Graham Neubig, Koichiro Yoshino, Satoshi Nakamura
