Structured-Based Curriculum Learning for End-to-End English-Japanese Speech Translation

Author(s):  
Takatomo Kano ◽  
Sakriani Sakti ◽  
Satoshi Nakamura

2021 ◽
Author(s):  
Chen Xu ◽  
Xiaoqian Liu ◽  
Xiaowen Liu ◽  
Tiger Wang ◽  
Canan Huang ◽  
...  

Author(s):  
Jan Niehues ◽  
Elizabeth Salesky ◽  
Marco Turchi ◽  
Matteo Negri

Attention-Passing Models for Robust and Data-Efficient End-to-End Speech Translation
2019 ◽
Vol 7 ◽  
pp. 313-325 ◽  
Author(s):  
Matthias Sperber ◽  
Graham Neubig ◽  
Jan Niehues ◽  
Alex Waibel

Speech translation has traditionally been approached through cascaded models consisting of a speech recognizer trained on a corpus of transcribed speech, and a machine translation system trained on parallel texts. Several recent works have shown the feasibility of collapsing the cascade into a single, direct model that can be trained in an end-to-end fashion on a corpus of translated speech. However, experiments are inconclusive on whether the cascade or the direct model is stronger, and have only been conducted under the unrealistic assumption that both are trained on equal amounts of data, ignoring other available speech recognition and machine translation corpora. In this paper, we demonstrate that direct speech translation models require more data to perform well than cascaded models, and although they allow including auxiliary data through multi-task training, they are poor at exploiting such data, putting them at a severe disadvantage. As a remedy, we propose the use of end-to-end trainable models with two attention mechanisms, the first establishing source speech to source text alignments, the second modeling source to target text alignment. We show that such models naturally decompose into multi-task-trainable recognition and translation tasks and propose an attention-passing technique that alleviates error propagation issues in a previous formulation of a model with two attention stages. Our proposed model outperforms all examined baselines and is able to exploit auxiliary training data much more effectively than direct attentional models.
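To make the two-attention architecture concrete, here is a minimal PyTorch sketch, not the authors' implementation: the module layout, dimensions, and names (`n_mels`, `hid`, the LSTM/multi-head-attention choices) are illustrative assumptions. The first attention aligns source-text positions to speech frames for the ASR subtask; the second lets the translation decoder attend over the first stage's continuous, attention-augmented states, so no error-prone discrete transcript is ever passed between the stages.

```python
import torch
import torch.nn as nn

class AttentionPassingST(nn.Module):
    """Sketch of a two-attention, attention-passing ST model (assumed layout).

    Stage 1 decodes source text with attention over speech; stage 2 decodes
    target text with attention over stage 1's hidden states, so only
    continuous vectors flow between stages.
    """
    def __init__(self, n_mels=80, hid=256, src_vocab=1000, tgt_vocab=1000):
        super().__init__()
        self.speech_enc = nn.LSTM(n_mels, hid, batch_first=True)
        self.src_emb = nn.Embedding(src_vocab, hid)
        self.asr_dec = nn.LSTM(hid, hid, batch_first=True)
        self.attn1 = nn.MultiheadAttention(hid, 4, batch_first=True)  # speech -> source text
        self.asr_out = nn.Linear(2 * hid, src_vocab)
        self.tgt_emb = nn.Embedding(tgt_vocab, hid)
        self.mt_dec = nn.LSTM(hid, hid, batch_first=True)
        self.attn2 = nn.MultiheadAttention(hid, 4, batch_first=True)  # source -> target text
        self.mt_out = nn.Linear(2 * hid, tgt_vocab)

    def forward(self, speech, src_in, tgt_in):
        enc, _ = self.speech_enc(speech)             # (B, T, H) acoustic states
        # Stage 1 (ASR subtask): decode source text, attending over speech.
        h1, _ = self.asr_dec(self.src_emb(src_in))   # (B, S, H), teacher-forced
        ctx1, _ = self.attn1(h1, enc, enc)
        asr_logits = self.asr_out(torch.cat([h1, ctx1], dim=-1))
        # Stage 2 (MT subtask): attend over the attention-augmented stage-1
        # states rather than over predicted tokens -- the "attention passing".
        mem = h1 + ctx1
        h2, _ = self.mt_dec(self.tgt_emb(tgt_in))    # (B, U, H), teacher-forced
        ctx2, _ = self.attn2(h2, mem, mem)
        mt_logits = self.mt_out(torch.cat([h2, ctx2], dim=-1))
        return asr_logits, mt_logits                 # two multi-task losses
```

Under these assumptions, both logit streams can be trained jointly with cross-entropy losses (e.g. `model(torch.randn(2, 300, 80), src_tokens, tgt_tokens)` with teacher forcing), which is how the multi-task decomposition described in the abstract would be exercised, and the ASR branch can additionally be trained on speech-recognition-only corpora.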


Synchronous Speech Recognition and Speech-to-Text Translation with Interactive Decoding
2020 ◽
Vol 34 (05) ◽  
pp. 8417-8424 ◽  
Author(s):  
Yuchen Liu ◽  
Jiajun Zhang ◽  
Hao Xiong ◽  
Long Zhou ◽  
Zhongjun He ◽  
...  

Speech-to-text translation (ST), which translates source language speech into target language text, has attracted intensive attention in recent years. Compared to the traditional pipeline system, the end-to-end ST model has potential benefits of lower latency, smaller model size, and less error propagation. However, it is notoriously difficult to implement such a model without transcriptions as an intermediate representation. Existing works generally apply multi-task learning to improve translation quality by jointly training end-to-end ST along with automatic speech recognition (ASR). However, the different tasks in this method cannot utilize information from each other, which limits the improvement. Other works propose a two-stage model in which the second model can use the hidden states from the first one, but its cascaded manner greatly reduces the efficiency of the training and inference processes. In this paper, we propose a novel interactive attention mechanism which enables ASR and ST to be performed synchronously and interactively in a single model. Specifically, the generation of transcriptions and translations relies not only on each task's own previous outputs but also on the outputs predicted by the other task. Experiments on TED speech translation corpora show that our proposed model outperforms strong baselines on speech translation quality and achieves better speech recognition performance as well.
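As a rough illustration of what "synchronous and interactive" decoding means here, below is a hedged PyTorch sketch, not the paper's implementation: the LSTM cells, the greedy loop, and names such as `InteractiveST` and `bos` are assumptions. The key line is the step input `x`, which concatenates the embeddings of both tasks' previous predictions, so the transcription and translation hypotheses advance in lockstep and condition on each other.

```python
import torch
import torch.nn as nn

class InteractiveST(nn.Module):
    """Sketch of synchronous ASR+ST decoding: both decoders advance one step
    at a time, and each step conditions on the token the *other* task emitted
    at the previous step."""
    def __init__(self, n_mels=80, hid=256, asr_vocab=1000, st_vocab=1000):
        super().__init__()
        self.hid = hid
        self.enc = nn.LSTM(n_mels, hid, batch_first=True)
        self.asr_emb = nn.Embedding(asr_vocab, hid)
        self.st_emb = nn.Embedding(st_vocab, hid)
        # Each decoder cell consumes its own AND the partner's last embedding.
        self.asr_cell = nn.LSTMCell(2 * hid, hid)
        self.st_cell = nn.LSTMCell(2 * hid, hid)
        self.asr_attn = nn.MultiheadAttention(hid, 4, batch_first=True)
        self.st_attn = nn.MultiheadAttention(hid, 4, batch_first=True)
        self.asr_out = nn.Linear(2 * hid, asr_vocab)
        self.st_out = nn.Linear(2 * hid, st_vocab)

    @torch.no_grad()
    def greedy_decode(self, speech, steps=30, bos=1):
        B = speech.size(0)
        mem, _ = self.enc(speech)                    # (B, T, H) acoustic memory
        asr_tok = speech.new_full((B,), bos, dtype=torch.long)  # assumed BOS id
        st_tok = speech.new_full((B,), bos, dtype=torch.long)
        ha = speech.new_zeros(B, self.hid); ca = ha.clone()
        hs = ha.clone(); cs = ha.clone()
        asr_hyp, st_hyp = [], []
        for _ in range(steps):
            # Interactive step: input concatenates BOTH tasks' previous tokens.
            x = torch.cat([self.asr_emb(asr_tok), self.st_emb(st_tok)], dim=-1)
            ha, ca = self.asr_cell(x, (ha, ca))
            hs, cs = self.st_cell(x, (hs, cs))
            ctx_a, _ = self.asr_attn(ha.unsqueeze(1), mem, mem)
            ctx_s, _ = self.st_attn(hs.unsqueeze(1), mem, mem)
            asr_tok = self.asr_out(torch.cat([ha, ctx_a.squeeze(1)], -1)).argmax(-1)
            st_tok = self.st_out(torch.cat([hs, ctx_s.squeeze(1)], -1)).argmax(-1)
            asr_hyp.append(asr_tok); st_hyp.append(st_tok)
        return torch.stack(asr_hyp, 1), torch.stack(st_hyp, 1)  # (B, steps) each
```

A trained model would replace the greedy argmax with beam search over both hypotheses; the sketch only demonstrates the interaction pattern and tensor shapes under the stated assumptions.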

