Neural Machine Translation with Word Embedding Transferred from Language Model

2019 ◽  
Vol 20 (11) ◽  
pp. 2211-2216 ◽  
Author(s):  
Chanung Jeong ◽  
Heeyoul Choi

2020 ◽
Vol 8 ◽  
pp. 710-725
Author(s):  
Benjamin Marie ◽  
Atsushi Fujita

Neural machine translation (NMT) systems are usually trained on clean parallel data, so they perform very well when translating clean in-domain texts. However, as demonstrated by previous work, translation quality significantly worsens when translating noisy texts, such as user-generated texts (UGT) from online social media. Given the lack of parallel UGT data that could be used to train or adapt NMT systems, we synthesize parallel UGT data by exploiting monolingual UGT data through cross-lingual language model pre-training and zero-shot NMT systems. This paper presents two different but complementary approaches: one alters given clean parallel data into UGT-like parallel data, whereas the other generates translations from monolingual UGT data. On the MTNT translation tasks, we show that our synthesized parallel data lead to better NMT systems for UGT while making them more robust in translating texts from various domains and styles.
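
The second approach, generating translations from monolingual UGT data, follows a back-translation-style recipe. The following Python sketch illustrates that idea only under stated assumptions: the model object and its translate method are hypothetical placeholders, not the authors' actual zero-shot NMT system.

    # Minimal sketch: monolingual UGT sentences on the target side are
    # machine-translated back into the source language to form synthetic
    # (noisy-source, UGT-target) pairs. `model.translate` is a hypothetical
    # interface, used here only for illustration.
    def synthesize_ugt_parallel(ugt_sentences, model, batch_size=32):
        """Build synthetic parallel data from monolingual UGT text."""
        pairs = []
        for i in range(0, len(ugt_sentences), batch_size):
            batch = ugt_sentences[i:i + batch_size]
            # Back-translate the UGT text into the other language.
            back_translations = model.translate(batch)
            # Pair each synthetic translation with its original UGT sentence.
            pairs.extend(zip(back_translations, batch))
        return pairs

In this setup, the resulting synthetic pairs would be mixed with the clean parallel corpus before (re)training the NMT system.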


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yanping Ye

At the vocabulary level, neural machine translation produces unfaithful translations because it lacks an explicit vocabulary alignment structure. This paper proposes a framework that integrates a vocabulary alignment structure into neural machine translation at the vocabulary level. Under the proposed framework, the neural machine translation decoder receives external vocabulary alignment information at each decoding step to alleviate the missing-alignment problem. Specifically, the word alignment structure of statistical machine translation serves as the external vocabulary alignment information and is introduced into the decoding step of neural machine translation. The model is based on neural machine translation, with the statistical machine translation vocabulary alignment structure integrated on top of neural networks and continuous word representations. In the decoding stage, the statistical machine translation system provides appropriate vocabulary alignment information based on the decoding state of the neural machine translation model and recommends words from that alignment information, guiding the neural machine translation decoder toward more accurate estimates of the target-language vocabulary. Experiments compare data processing methods based on language models and sentence similarity, and evaluate the effectiveness of machine translation models built on this fusion principle. The comparative results show that the data processing method based on language models and sentence similarity effectively guarantees data quality and indirectly improves the performance of the machine translation model; the translation quality of the neural machine translation model that integrates the statistical machine translation vocabulary alignment structure is also compared against that of other models.
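
One simple way to realize this kind of guidance is to interpolate the decoder's output distribution with a recommendation distribution derived from an SMT lexical translation table. The sketch below is only an illustration of that interpolation; the table layout (lex_table), the vocabulary mapping (vocab), and the weight lam are assumptions, not the paper's exact formulation.

    # A minimal sketch, assuming the SMT alignment information is available as
    # a lexical translation table p(target_word | aligned_source_word).
    import numpy as np

    def guided_step(nmt_probs, source_word, lex_table, vocab, lam=0.3):
        """Interpolate NMT probabilities with SMT alignment-based recommendations."""
        rec = np.zeros_like(nmt_probs)
        for tgt_word, p in lex_table.get(source_word, {}).items():
            if tgt_word in vocab:
                rec[vocab[tgt_word]] = p
        if rec.sum() > 0:
            rec /= rec.sum()            # normalize the recommendation distribution
            return (1 - lam) * nmt_probs + lam * rec
        return nmt_probs                # no alignment info: fall back to NMT alone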


2021 ◽  
Author(s):  
Mengqi Miao ◽  
Fandong Meng ◽  
Yijin Liu ◽  
Xiao-Hua Zhou ◽  
Jie Zhou

2020 ◽  
Vol 30 (01) ◽  
pp. 2050001
Author(s):  
Takumi Maruyama ◽  
Kazuhide Yamamoto

Inspired by the machine translation task, recent text simplification approaches regard the task as monolingual text-to-text generation, and neural machine translation models have significantly improved the performance of simplification. Although such models require a large-scale parallel corpus, parallel corpora for text simplification are few in number and much smaller than those available for machine translation. Therefore, we attempt to facilitate the training of simplification rewritings by pre-training on a large-scale monolingual corpus such as Wikipedia articles. In addition, we propose a translation language model to seamlessly carry fine-tuning of text simplification over from language model pre-training. The experimental results show that the translation language model substantially outperforms a state-of-the-art model under a low-resource setting. Moreover, a pre-trained translation language model with only 3,000 supervised examples can achieve performance comparable to that of the state-of-the-art model trained on 30,000 supervised examples.
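
In a translation-language-model-style setup, each (complex, simple) pair can be concatenated into a single sequence so that a model pre-trained on monolingual text can be fine-tuned directly on simplification. The sketch below shows that formatting idea only; the special tokens and the model.train_step interface are illustrative assumptions, not the paper's implementation.

    # Minimal sketch of TLM-style fine-tuning data preparation for simplification.
    SEP, BOS, EOS = "<sep>", "<s>", "</s>"

    def format_tlm_example(complex_sent, simple_sent):
        """Concatenate source and target, as in translation language modeling."""
        return f"{BOS} {complex_sent} {SEP} {simple_sent} {EOS}"

    def fine_tune(model, pairs, epochs=3):
        for _ in range(epochs):
            for complex_sent, simple_sent in pairs:
                sequence = format_tlm_example(complex_sent, simple_sent)
                model.train_step(sequence)   # hypothetical training interface
        return model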


Author(s):  
Sho Takase ◽  
Jun Suzuki ◽  
Masaaki Nagata

This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams, building on research in word embedding construction (Wieting et al. 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
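
The core construction, building a word representation from its character n-gram embeddings and combining it with the ordinary word embedding, can be sketched as follows. The boundary markers, the summation of n-gram vectors, and the concatenation step are illustrative assumptions rather than the paper's exact architecture.

    # Minimal sketch: word vector from character n-gram embeddings plus the
    # ordinary word embedding. `word_emb` and `ngram_emb` are assumed to be
    # dictionaries mapping strings to vectors of dimension `dim`.
    import numpy as np

    def char_ngrams(word, n_min=2, n_max=4):
        padded = f"<{word}>"                  # mark word boundaries
        return [padded[i:i + n] for n in range(n_min, n_max + 1)
                for i in range(len(padded) - n + 1)]

    def word_vector(word, word_emb, ngram_emb, dim=300):
        ngrams = [g for g in char_ngrams(word) if g in ngram_emb]
        # Sum the character n-gram embeddings to get a subword-based vector.
        char_vec = np.sum([ngram_emb[g] for g in ngrams], axis=0) if ngrams \
            else np.zeros(dim)
        # Combine with the ordinary word embedding (here: concatenation).
        return np.concatenate([word_emb.get(word, np.zeros(dim)), char_vec])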


2017 ◽  
Vol 45 ◽  
pp. 137-148 ◽  
Author(s):  
Caglar Gulcehre ◽  
Orhan Firat ◽  
Kelvin Xu ◽  
Kyunghyun Cho ◽  
Yoshua Bengio
