A Relationship: Word Alignment, Phrase Table, and Translation Quality

2014
Vol 2014
pp. 1-13
Author(s):
Liang Tian
Derek F. Wong
Lidia S. Chao
Francisco Oliveira

In recent years, researchers have conducted several studies evaluating machine translation quality based on the relationship between word alignments and the phrase table. However, existing methods usually employ ad hoc heuristics without theoretical support, and no formula has yet been proposed to describe the relationship among word alignments, the phrase table, and machine translation performance. In this paper, we formulate such a relationship for estimating the number of extracted phrase pairs given one or more word alignment points, and we propose a corpus-motivated pruning technique to prune the default, overly large phrase table. Experiments show that the derived formula is sound: it can be used to predict the size of the phrase table, and it serves as a valuable reference for investigating how translation performance relates to phrase tables built from different word alignment links. The corpus-motivated pruning results show that nearly 98% of phrases can be removed without any significant loss in translation quality.
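
The formula itself is not reproduced in the abstract, but the quantity it predicts — the number of phrase pairs that can be extracted consistently with a given set of alignment points — can be counted directly. The sketch below is a minimal illustration of that consistency-based count (standard phrase extraction restricted to minimal target spans), not the authors' derivation; the function name and the length limit are assumptions.

```python
from typing import Set, Tuple

def count_consistent_phrase_pairs(src_len: int, tgt_len: int,
                                  alignment: Set[Tuple[int, int]],
                                  max_len: int = 7) -> int:
    """Count phrase pairs consistent with a word alignment.

    Consistency criterion: no alignment link may connect a word inside the
    candidate target span to a word outside the candidate source span.
    Simplification: only the minimal target span is counted; the full
    extraction heuristic would also extend over unaligned target words.
    """
    count = 0
    for i1 in range(src_len):
        for i2 in range(i1, min(src_len, i1 + max_len)):
            # Target positions linked to the source span [i1, i2].
            linked = [j for (i, j) in alignment if i1 <= i <= i2]
            if not linked:
                continue
            j1, j2 = min(linked), max(linked)
            if j2 - j1 + 1 > max_len:
                continue
            # Reject if any link leaves the candidate block.
            if any(j1 <= j <= j2 and not (i1 <= i <= i2)
                   for (i, j) in alignment):
                continue
            count += 1
    return count

# Toy example: 3-word source, 3-word target, monotone 1:1 alignment.
print(count_consistent_phrase_pairs(3, 3, {(0, 0), (1, 1), (2, 2)}))  # -> 6
```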

Electronics
2021
Vol 10 (13)
pp. 1589
Author(s):  
Yongkeun Hwang
Yanghoon Kim
Kyomin Jung

Neural machine translation (NMT) is a text generation task that has achieved significant improvements with the rise of deep neural networks. However, language-specific problems, such as handling the translation of honorifics, have received little attention. In this paper, we propose a context-aware NMT model to improve the translation of Korean honorifics. By exploiting information such as the relationship between speakers from the surrounding sentences, our proposed model effectively manages the use of honorific expressions. Specifically, we utilize a novel encoder architecture that can represent the contextual information of the given input sentences. Furthermore, a context-aware post-editing (CAPE) technique is adopted to refine a set of inconsistent sentence-level honorific translations. Demonstrating the efficacy of the proposed method requires honorific-labeled test data, so we also design a heuristic that labels Korean sentences as honorific or non-honorific in style. Experimental results show that our proposed method outperforms sentence-level NMT baselines both in overall translation quality and in honorific translations.
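
The paper's exact labeling rules are not given in the abstract; as a rough illustration of how a sentence-final-ending heuristic might separate honorific from non-honorific Korean style, the sketch below checks a few common polite endings. The ending list and the function name are assumptions, not the authors' heuristic.

```python
import re

# A few common Korean polite/formal sentence endings (illustrative, not exhaustive).
HONORIFIC_ENDINGS = ("습니다", "ㅂ니다", "습니까", "세요", "십시오", "어요", "아요")

def label_honorific(sentence: str) -> str:
    """Label a Korean sentence as 'honorific' or 'non-honorific'
    based on its sentence-final ending (a crude heuristic)."""
    # Strip trailing punctuation and whitespace before inspecting the ending.
    core = re.sub(r"[.?!~\s]+$", "", sentence)
    return "honorific" if core.endswith(HONORIFIC_ENDINGS) else "non-honorific"

print(label_honorific("안녕하세요?"))   # honorific
print(label_honorific("빨리 와."))      # non-honorific
```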


2020
Vol 34 (05)
pp. 8886-8893
Author(s):  
Kai Song
Kun Wang
Heng Yu
Yue Zhang
Zhongqiang Huang
...  

We investigate the task of constraining NMT with pre-specified translations, which has practical significance for a number of research and industrial applications. Existing works impose pre-specified translations as lexical constraints during decoding, which are based on word alignments derived from target-to-source attention weights. However, multiple recent studies have found that word alignment derived from generic attention heads in the Transformer is unreliable. We address this problem by introducing a dedicated head in the multi-head Transformer architecture to capture external supervision signals. Results on five language pairs show that our method is highly effective in constraining NMT with pre-specified translations, consistently outperforming previous methods in translation quality.
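
A minimal sketch of the idea of supervising one dedicated attention head with external alignment signals, assuming PyTorch; the tensor shapes and the cross-entropy-style loss are assumptions, not the paper's exact formulation.

```python
import torch

def alignment_supervision_loss(attn: torch.Tensor,
                               gold_align: torch.Tensor,
                               eps: float = 1e-9) -> torch.Tensor:
    """Supervise a single cross-attention head with external alignments.

    attn:       (batch, tgt_len, src_len) attention weights of the dedicated head.
    gold_align: (batch, tgt_len, src_len) 0/1 matrix of pre-specified alignment links.
    Returns a cross-entropy-style loss pushing the head's probability mass
    onto the externally supplied links.
    """
    # Normalize the gold links for each target position into a distribution.
    gold = gold_align / gold_align.sum(dim=-1, keepdim=True).clamp(min=1.0)
    loss = -(gold * (attn + eps).log()).sum(dim=-1)   # (batch, tgt_len)
    mask = gold_align.sum(dim=-1) > 0                 # only supervised positions
    return loss[mask].mean() if mask.any() else attn.new_zeros(())

# Toy usage: 1 sentence, 2 target tokens, 3 source tokens.
attn = torch.softmax(torch.randn(1, 2, 3), dim=-1)
gold = torch.tensor([[[1., 0., 0.], [0., 0., 1.]]])
print(alignment_supervision_loss(attn, gold))
```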


2019
Vol 9 (10)
pp. 2036
Author(s):  
Jinyi Zhang
Tadahiro Matsumoto

The translation quality of Neural Machine Translation (NMT) systems depends strongly on the training data size. Sufficient amounts of parallel data are, however, not available for many language pairs. This paper presents a corpus augmentation method with two variations: one for all language pairs and one specific to the Chinese-Japanese language pair. The method uses both source and target sentences of the existing parallel corpus and generates multiple pseudo-parallel sentence pairs from a long parallel sentence pair containing punctuation marks as follows: (1) split the sentence pair into parallel partial sentences; (2) back-translate the target partial sentences; and (3) replace each partial sentence in the source sentence with the back-translated target partial sentence to generate pseudo-source sentences. The word alignment information used to determine the split points is adjusted with the "shared Chinese character rates" of segments in the sentence pairs. Experimental results on Japanese-Chinese and Chinese-Japanese translation with ASPEC-JC (Asian Scientific Paper Excerpt Corpus, Japanese-Chinese) show that the method substantially improves translation performance. We also supply the code (see Supplementary Materials) that can reproduce our proposed method.
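
A rough sketch of the three augmentation steps, assuming punctuation-delimited split points and a placeholder back_translate function (hypothetical, standing in for a reverse-direction NMT model); the alignment-based choice of split points and the shared-Chinese-character adjustment are not reproduced.

```python
import re
from typing import Callable, List, Tuple

def augment_pair(src: str, tgt: str,
                 back_translate: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Generate pseudo-parallel pairs from one long sentence pair:
    (1) split both sides at punctuation into partial sentences,
    (2) back-translate each target partial sentence,
    (3) splice the back-translation into the source side.
    Punctuation is re-inserted crudely; a real implementation would
    preserve the original marks."""
    src_parts = [p for p in re.split(r"[，、,。.；;]", src) if p]
    tgt_parts = [p for p in re.split(r"[，、,。.；;]", tgt) if p]
    if len(src_parts) != len(tgt_parts) or len(src_parts) < 2:
        return []  # only split when both sides segment the same way
    pseudo_pairs = []
    for k, tgt_part in enumerate(tgt_parts):
        pseudo_src_parts = list(src_parts)
        pseudo_src_parts[k] = back_translate(tgt_part)  # replace one segment
        pseudo_pairs.append(("，".join(pseudo_src_parts), tgt))
    return pseudo_pairs

# Toy usage with an identity "back-translator" standing in for a real model.
print(augment_pair("一句话，另一句话。", "一つの文、もう一つの文。", lambda s: s))
```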


2019
Vol 64 (2)
pp. 231-261
Author(s):  
Daša Munková
Oľga Wrede
Jakub Absolon

The aim of the study is to compare translation quality and effectiveness (translator productivity) using automatic measures of machine translation evaluation. The examined text was a legal text translated from Slovak (the translators' mother tongue) into German. We distinguish human translation (HT), machine translation (MT), and post-edited MT (PEMT). For the evaluation we used our own tool, in which the automatic MT evaluation metrics were implemented. Analysis of variance and multiple comparisons (Tukey's HSD test) were applied to the textual data. The results show that, in terms of productivity and given the expert and terminological knowledge of human translators, post-editing of MT is more effective than HT. On the other hand, in terms of quality (sentence structure or stylistics), regardless of time, HT is of higher quality than MT, but similar to PEMT. The post-editors were influenced by the grammar and stylistics of the MT output, but they post-edited a larger volume of text than the human translators translated.
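
As an illustration of the statistical comparison described above, the sketch below runs a one-way ANOVA followed by Tukey's HSD over per-sentence scores for the three conditions (HT, MT, PEMT), assuming scipy and statsmodels are available; the scores are made up and the underlying automatic metric is left abstract.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical per-sentence quality scores for the three conditions.
ht   = np.array([0.62, 0.71, 0.68, 0.74, 0.66])
mt   = np.array([0.48, 0.55, 0.51, 0.58, 0.50])
pemt = np.array([0.60, 0.69, 0.64, 0.72, 0.63])

# One-way ANOVA across the three groups.
f_stat, p_value = f_oneway(ht, mt, pemt)
print(f"ANOVA: F={f_stat:.2f}, p={p_value:.4f}")

# Tukey's HSD for pairwise comparisons between conditions.
scores = np.concatenate([ht, mt, pemt])
groups = ["HT"] * len(ht) + ["MT"] * len(mt) + ["PEMT"] * len(pemt)
print(pairwise_tukeyhsd(scores, groups))
```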


2020
Vol 8
pp. 710-725
Author(s):  
Benjamin Marie
Atsushi Fujita

Neural machine translation (NMT) systems are usually trained on clean parallel data. They can perform very well for translating clean in-domain texts. However, as demonstrated by previous work, the translation quality significantly worsens when translating noisy texts, such as user-generated texts (UGT) from online social media. Given the lack of parallel data of UGT that can be used to train or adapt NMT systems, we synthesize parallel data of UGT, exploiting monolingual data of UGT through crosslingual language model pre-training and zero-shot NMT systems. This paper presents two different but complementary approaches: One alters given clean parallel data into UGT-like parallel data whereas the other generates translations from monolingual data of UGT. On the MTNT translation tasks, we show that our synthesized parallel data can lead to better NMT systems for UGT while making them more robust in translating texts from various domains and styles.
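
The synthesis pipeline itself relies on cross-lingual language-model pre-training and zero-shot NMT, which a short snippet cannot reproduce; as a minimal, hedged illustration of the first idea — altering clean text toward UGT-like style — the sketch below applies a few surface noising operations. The operations and their rates are assumptions, not the authors' method.

```python
import random

def uglify(sentence: str, rng: random.Random,
           p_lower: float = 0.5, p_drop_punct: float = 0.5,
           p_stretch: float = 0.1) -> str:
    """Make a clean sentence look more like user-generated text."""
    if rng.random() < p_lower:
        sentence = sentence.lower()                  # casual casing
    if rng.random() < p_drop_punct:
        sentence = sentence.rstrip(".!?")            # dropped end punctuation
    words = sentence.split()
    for i, w in enumerate(words):
        if w.isalpha() and rng.random() < p_stretch:
            words[i] = w + w[-1] * 2                 # "soooo"-style stretching
    return " ".join(words)

rng = random.Random(0)
print(uglify("This restaurant was really great!", rng))
```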


2021
Vol 30 (1)
pp. 980-987
Author(s):  
Jinfeng Xue

Based on neural machine translation, this article introduces the ConvS2S and Transformer systems, designs a semantic-sharing combined Transformer system to improve translation quality, and compares the three systems on the NIST dataset. The results showed that the semantic-sharing combined Transformer had the highest operation speed, reaching 3934.27 words per second, while the ConvS2S system had the smallest BLEU value, followed by the Transformer and the semantic-sharing combined Transformer. Taking NIST08 as an example, the BLEU value of the designed system was 4.74 and 1.49 points higher than those of the other two systems, respectively. The analysis of examples showed that the semantic-sharing combined Transformer produced translations of higher quality. The experimental results show that the proposed system is reliable for English content translation and can be further promoted and applied in practice.


2021
pp. 1-11
Author(s):  
Quan Du
Kai Feng
Chen Xu
Tong Xiao
Jingbo Zhu

Recently, many efforts have been devoted to speeding up neural machine translation models. Among them, the non-autoregressive translation (NAT) model is promising because it removes the sequential dependence on previously generated tokens and parallelizes the generation of the entire sequence. On the other hand, the autoregressive translation (AT) model generally achieves higher translation accuracy than its NAT counterpart. A natural idea is therefore to fuse the AT and NAT models to seek a trade-off between inference speed and translation quality. This paper proposes an ARF-NAT model (NAT with auxiliary representation fusion) to introduce the merits of a shallow AT model into an NAT model. Three functions are designed to fuse the auxiliary representation into the decoder of the NAT model. Experimental results show that ARF-NAT outperforms the NAT baseline by 5.26 BLEU points on the WMT'14 German-English task with a significant speedup (7.58 times) over several strong AT baselines.
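
The paper designs three fusion functions; as a minimal sketch of the general idea, the module below gates element-wise between the NAT decoder state and the auxiliary AT representation (assuming PyTorch; the gating form is an assumption, not one of the paper's three functions verbatim).

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse an auxiliary AT representation into the NAT decoder state
    with a learned element-wise gate."""
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, nat_state: torch.Tensor, at_repr: torch.Tensor) -> torch.Tensor:
        # nat_state, at_repr: (batch, seq_len, d_model)
        g = torch.sigmoid(self.gate(torch.cat([nat_state, at_repr], dim=-1)))
        return g * nat_state + (1.0 - g) * at_repr

fusion = GatedFusion(d_model=8)
out = fusion(torch.randn(2, 5, 8), torch.randn(2, 5, 8))
print(out.shape)  # torch.Size([2, 5, 8])
```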


2015
Vol 2015
pp. 1-13
Author(s):  
Jae-Hoon Kim
Hong-Seok Kwon
Hyeong-Won Seo

A pivot-based approach to bilingual lexicon extraction relies on the similarity of context vectors represented over the words of a pivot language such as English. In this paper, in order to show the validity and usability of the pivot-based approach, we evaluate it with two different methods for estimating context vectors: one estimates them from two parallel corpora based on word association between source words (resp., target words) and pivot words, and the other estimates them from two parallel corpora based on word alignment tools for statistical machine translation. Empirical results on two language pairs (Korean-Spanish and Korean-French) show that the pivot-based approach is very promising for resource-poor languages, confirming its validity and usability. Furthermore, our method also performs well for words with low frequency.
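
A small sketch of the core comparison: source-word and target-word context vectors are expressed over the same pivot (English) vocabulary and compared with cosine similarity. The vocabulary and counts below are made up; in the paper the vectors are estimated from parallel corpora via word association or word alignment.

```python
import numpy as np

# Pivot (English) vocabulary defining the shared vector space.
pivot_vocab = ["house", "buy", "red", "water"]

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Hypothetical co-occurrence counts of one Korean word and two Spanish
# candidates with the pivot words (estimated from two parallel corpora).
ko_house_vec = np.array([9.0, 2.0, 1.0, 0.0])   # Korean "집"
es_candidates = {
    "casa": np.array([8.0, 3.0, 0.0, 1.0]),
    "agua": np.array([0.0, 1.0, 0.0, 9.0]),
}

best = max(es_candidates, key=lambda w: cosine(ko_house_vec, es_candidates[w]))
print(best)  # "casa" is the closest candidate in the pivot space
```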


Target
2016
Vol 28 (2)
pp. 206-221
Author(s):  
Aljoscha Burchardt
Arle Lommel
Lindsay Bywood
Kim Harris
Maja Popović

The volume of Audiovisual Translation (AVT) is increasing to meet the rising demand for data that needs to be accessible around the world. Machine Translation (MT) is one of the most innovative technologies to be deployed in the field of translation, but it is still too early to predict how it can support the creativity and productivity of professional translators in the future. Currently, MT is more widely used in (non-AV) text translation than in AVT. In this article, we discuss MT technology and demonstrate why its use in AVT scenarios is particularly challenging. We also present some potentially useful methods and tools for measuring MT quality that have been developed primarily for text translation. The ultimate objective is to bridge the gap between the tech-savvy AVT community, on the one hand, and researchers and developers in the field of high-quality MT, on the other.


2007
Vol 33 (3)
pp. 293-303
Author(s):  
Alexander Fraser
Daniel Marcu

Automatic word alignment plays a critical role in statistical machine translation. Unfortunately, the relationship between alignment quality and statistical machine translation performance has not been well understood. In the recent literature, the alignment task has frequently been decoupled from the translation task and assumptions have been made about measuring alignment quality for machine translation which, it turns out, are not justified. In particular, none of the tens of papers published over the last five years has shown that significant decreases in alignment error rate (AER) result in significant increases in translation performance. This paper explains this state of affairs and presents steps towards measuring alignment quality in a way which is predictive of statistical machine translation performance.
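
For reference, alignment error rate compares a hypothesized alignment A against gold sure links S and possible links P (with S ⊆ P): AER = 1 − (|A∩S| + |A∩P|) / (|A| + |S|). A minimal sketch (the toy alignments are made up):

```python
from typing import Set, Tuple

Link = Tuple[int, int]

def aer(hyp: Set[Link], sure: Set[Link], possible: Set[Link]) -> float:
    """Alignment error rate: 1 - (|A∩S| + |A∩P|) / (|A| + |S|),
    where sure links are a subset of possible links."""
    possible = possible | sure  # ensure S ⊆ P
    return 1.0 - (len(hyp & sure) + len(hyp & possible)) / (len(hyp) + len(sure))

hyp = {(0, 0), (1, 1), (2, 3)}
sure = {(0, 0), (1, 1)}
possible = {(0, 0), (1, 1), (2, 2)}
print(f"AER = {aer(hyp, sure, possible):.3f}")  # AER = 0.200
```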

