Source side pre-ordering using recurrent neural networks for English-Myanmar machine translation

Author(s):  
May Kyi Nyein ◽  
Khin Mar Soe

Word reordering remains one of the challenging problems in machine translation when translating between language pairs with different word orders, e.g., English and Myanmar. Without reordering between these languages, a source sentence may be translated directly with a similar word order, and the translation cannot be meaningful. Myanmar is a subject-object-verb (SOV) language, so effective reordering is essential for translation. In this paper, we applied a pre-ordering approach using recurrent neural networks to pre-order the words of the source English sentence into the target Myanmar-like word order. This neural pre-ordering model is derived automatically from parallel word-aligned data, using syntactic and lexical features based on dependency parse trees of the source sentences. The model can generate arbitrary, possibly non-local permutations of the sentence and can be integrated into English-Myanmar machine translation. We exploited the model to reorder English sentences into Myanmar-like word order as a preprocessing stage for machine translation, obtaining quality improvements comparable to a baseline rule-based pre-ordering approach on the Asian Language Treebank (ALT) corpus.
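The core idea above — traversing a source-side dependency tree and emitting words in target-like order — can be illustrated with a toy sketch. In the paper a trained recurrent model scores the placement of each head's dependents; here an invented hand-written rule (put subjects, objects, and modifiers before their head, i.e., head-final order) stands in for the learned model, and the tree structure is a hypothetical minimal example.

```python
# Toy sketch of source-side pre-ordering toward SOV (Myanmar-like)
# word order by reordering each head's children in a dependency tree.
# The placement rule below is a stand-in for a trained scoring model.

from dataclasses import dataclass, field

@dataclass
class Node:
    word: str
    label: str                      # dependency relation to the head
    children: list = field(default_factory=list)

def preorder(node):
    """Emit words head-finally: dependents such as subjects and
    objects are placed before the head word itself."""
    before, after = [], []
    for child in node.children:
        # A learned model would predict placement; this toy rule
        # moves core arguments and modifiers in front of their head.
        target = before if child.label in {"nsubj", "obj", "obl", "advmod"} else after
        target.append(child)
    words = []
    for c in before:
        words.extend(preorder(c))
    words.append(node.word)
    for c in after:
        words.extend(preorder(c))
    return words

# "She reads books" (SVO) is pre-ordered to "She books reads" (SOV-like).
root = Node("reads", "root", [Node("She", "nsubj"), Node("books", "obj")])
print(" ".join(preorder(root)))  # She books reads
```

The recursion makes the permutation potentially non-local: moving a verb after its object also moves the object's entire subtree, which a window-based reordering rule could not do.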

2013 ◽  
Vol 411-414 ◽  
pp. 1923-1929
Author(s):  
Ren Fen Hu ◽  
Yun Zhu ◽  
Yao Hong Jin ◽  
Jia Yong Chen

This paper presents a rule-based model for the long-distance reordering of Chinese special sentences. In this model, we first identify special prepositions and their syntactic levels. Sentences are then parsed and transformed with reordering rules to be much closer to English word order. We evaluate our method within a patent MT system, where it shows a clear advantage over statistical reordering methods. With the presented reordering model, the performance of patent machine translation for Chinese special sentences is effectively improved.
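A rule-based reordering pass of this kind can be sketched as pattern-and-permutation rules applied over a tagged sentence. The rule table below is an invented toy example (the paper's actual rules operate on parsed Chinese special sentences and their preposition levels, which are not reproduced here).

```python
# Toy sketch of rule-based reordering: each rule matches a window of
# part-of-speech tags and permutes the matched tokens. The single
# rule below is a hypothetical stand-in for the paper's rule set.

RULES = [
    # (pattern of POS tags, new order of matched positions)
    (("P", "NN", "VV"), (2, 0, 1)),   # move the verb before the PP
]

def apply_rules(tokens, tags):
    tokens, tags = list(tokens), list(tags)
    for pattern, order in RULES:
        n = len(pattern)
        i = 0
        while i + n <= len(tags):
            if tuple(tags[i:i + n]) == pattern:
                tokens[i:i + n] = [tokens[i + j] for j in order]
                tags[i:i + n] = [tags[i + j] for j in order]
            i += 1
    return tokens

print(apply_rules(["in", "park", "run"], ["P", "NN", "VV"]))
# ['run', 'in', 'park']
```

Because the rules are deterministic and inspectable, errors can be traced to a specific rule, which is one reason such systems can outperform statistical reordering on narrow domains like patents.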


Author(s):  
Richard Socher ◽  
Andrej Karpathy ◽  
Quoc V. Le ◽  
Christopher D. Manning ◽  
Andrew Y. Ng

Previous work on Recursive Neural Networks (RNNs) shows that these models can produce compositional feature vectors for accurately representing and classifying sentences or images. However, the sentence vectors of previous models cannot accurately represent visually grounded meaning. We introduce the DT-RNN model, which uses dependency trees to embed sentences into a vector space in order to retrieve images described by those sentences. Unlike previous RNN-based models, which use constituency trees, DT-RNNs naturally focus on the action and agents in a sentence and are better able to abstract away from the details of word order and syntactic expression. DT-RNNs outperform other recursive and recurrent neural networks, kernelized CCA, and a bag-of-words baseline on the tasks of finding an image that fits a sentence description and vice versa. They also assign more similar representations to sentences that describe the same image.
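The compositional step at the heart of such tree-structured models can be sketched as follows: each node's vector combines its word embedding with the vectors of its dependents, and the root vector represents the sentence. This is a minimal illustration of recursive composition over a dependency tree, not the paper's exact DT-RNN architecture; the dimensions, weights, embeddings, and tree are all toy assumptions.

```python
# Minimal sketch of recursive composition over a dependency tree:
# a node's vector mixes its own word embedding with its children's
# vectors through learned matrices (here randomly initialized).

import numpy as np

rng = np.random.default_rng(0)
d = 4                                     # toy embedding dimension
W_word = rng.standard_normal((d, d)) * 0.1
W_child = rng.standard_normal((d, d)) * 0.1

def compose(word_vec, child_vecs):
    """Combine a word's embedding with its dependents' vectors."""
    total = W_word @ word_vec
    for h in child_vecs:
        total = total + W_child @ h
    return np.tanh(total)

# Dependency tree for "She reads books": reads(She, books)
emb = {w: rng.standard_normal(d) for w in ["She", "reads", "books"]}
h_she = compose(emb["She"], [])
h_books = compose(emb["books"], [])
sentence_vec = compose(emb["reads"], [h_she, h_books])
print(sentence_vec.shape)  # (4,)
```

Because the verb is the root and its arguments are its direct children, the composition naturally emphasizes the action and agents, regardless of where they sit in the surface word order.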


2019 ◽  
Vol 277 ◽  
pp. 02004
Author(s):  
Middi Venkata Sai Rishita ◽  
Middi Appala Raju ◽  
Tanvir Ahmed Harris

Machine Translation is the translation of text or speech by a computer with no human involvement. It is a popular research topic, with different methods having been developed, such as rule-based, statistical, and example-based machine translation. Neural networks have enabled a leap forward in machine translation. This paper discusses the building of a deep neural network that functions as part of an end-to-end translation pipeline. The completed pipeline accepts English text as input and returns the French translation. The project has three main parts: preprocessing, creation of models, and running the model on English text.
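The three stages named above can be sketched as a minimal pipeline skeleton. To keep the sketch self-contained and runnable, the trained encoder-decoder network is stubbed out with a hypothetical word-level lookup table; in the actual project this stage would be a deep neural network.

```python
# Sketch of the three pipeline stages: preprocessing, a model
# (stubbed by a lookup table standing in for a trained network),
# and running the model on English text to produce French output.

def preprocess(text):
    """Preprocessing stage: lowercase and tokenize the input."""
    return text.lower().split()

def model(tokens):
    """Model stage: stand-in for a trained encoder-decoder network."""
    en2fr = {"the": "le", "cat": "chat", "sleeps": "dort"}
    return [en2fr.get(t, t) for t in tokens]

def run(text):
    """Run stage: chain the pipeline end to end on English text."""
    return " ".join(model(preprocess(text)))

print(run("The cat sleeps"))  # le chat dort
```

Keeping the stages as separate functions mirrors the project structure: the model stage can later be swapped for a real sequence-to-sequence network without touching the preprocessing or running code.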


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 135284-135295
Author(s):  
Zhao Duan ◽  
Taiping Zhang ◽  
Jin Tan ◽  
Xiaoliu Luo
