Inflection rules for Marathi to English in rule-based machine translation

Author(s):  
Namrata G Kharate ◽  
Varsha H Patil

Machine translation is an important application of natural language processing. It translates a sentence from a source language into a target language while preserving its meaning. A large amount of research is going on in the area of machine translation. However, this research remains highly localized to particular source and target language pairs, since languages differ syntactically and morphologically. Appropriate inflections result in correct translation. This paper elaborates the rules for inflecting parts-of-speech and implements inflection for Marathi to English translation. The inflection of nouns, pronouns, verbs, and adjectives is carried out on the basis of the semantics of the sentence. The results are discussed with examples.
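To make the idea concrete, here is a minimal sketch (not the authors' implementation) of rule-based inflection of English target words driven by grammatical features extracted from the Marathi source; the feature names and the few orthographic rules below are illustrative only.

```python
# Minimal sketch: inflect English target lemmas using features (number,
# person, tense) that a Marathi analyzer would supply. Rules are toy-sized.

def inflect_noun(lemma, number):
    """Pluralize an English noun with simple orthographic rules."""
    if number == "sg":
        return lemma
    if lemma.endswith(("s", "sh", "ch", "x")):
        return lemma + "es"
    if lemma.endswith("y") and lemma[-2] not in "aeiou":
        return lemma[:-1] + "ies"
    return lemma + "s"

def inflect_verb(lemma, person, number, tense):
    """Inflect an English verb for a few regular cases."""
    if tense == "past":
        return lemma + ("d" if lemma.endswith("e") else "ed")
    if tense == "present" and person == 3 and number == "sg":
        return inflect_noun(lemma, "pl")  # -s/-es/-ies rules are shared
    return lemma

# e.g. a plural feature extracted from a Marathi noun suffix -> "boys"
print(inflect_noun("boy", "pl"))                  # boys
print(inflect_verb("carry", 3, "sg", "present"))  # carries
```

A full system would key such rules off the semantic and part-of-speech analysis the abstract describes, with far larger rule tables and exception lists.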

2020 ◽  
Vol 13 (4) ◽  
pp. 1
Author(s):  
Mohammed M. Abu Shquier

Translation from/to Arabic has been widely studied recently. This study focuses on the translation of Arabic as a source language (SL) to Malay as a target language (TL). The proposed prototype maps the SL "meaning" onto the most equivalent translation in the TL. In this paper, we investigate Arabic-Malay machine translation features (i.e., syntax, semantics, and morphology); our proposed method aims at building a robust lexical machine translation prototype, namely AMMT. The paper describes ongoing research toward a successful Arabic-Malay MT engine. Human judgment and BLEU evaluation have been used for evaluation purposes. The result of the first experiment shows that our system (AMMT) outperformed several well-regarded MT systems by an average of 98, while the second experiment shows average 1-gram, 2-gram, and 3-gram scores of 0.90, 0.87, and 0.88 respectively. This result can be considered a contribution to the domain of natural language processing (NLP).
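The 1-/2-/3-gram figures above come from BLEU-style n-gram matching. As a sketch, the modified n-gram precision at the heart of BLEU can be computed as follows; a real evaluation (e.g. sacreBLEU) additionally applies a brevity penalty and geometric averaging over n-gram orders.

```python
# Sketch of modified (clipped) n-gram precision, the quantity behind
# per-order BLEU scores such as the 1-/2-/3-gram numbers reported above.
from collections import Counter

def ngram_precision(candidate, reference, n):
    cand = candidate.split()
    ref = reference.split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # clip each candidate n-gram count by its count in the reference
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = max(sum(cand_ngrams.values()), 1)
    return overlap / total

cand = "the cat sat on the mat"
ref = "the cat is on the mat"
print(ngram_precision(cand, ref, 1))  # 5/6: only "sat" is unmatched
```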


2017 ◽  
Vol 108 (1) ◽  
pp. 257-269 ◽  
Author(s):  
Nasser Zalmout ◽  
Nizar Habash

Tokenization is very helpful for Statistical Machine Translation (SMT), especially when translating from morphologically rich languages. Typically, a single tokenization scheme is applied to the entire source-language text, regardless of the target language. In this paper, we evaluate the hypothesis that SMT performance may benefit from different tokenization schemes for different words within the same text, and also for different target languages. We apply this approach to Arabic as a source language, with five target languages of varying morphological complexity: English, French, Spanish, Russian, and Chinese. Our results show that different target languages indeed require different source-language schemes, and that a context-variable tokenization scheme can outperform a context-constant scheme with a statistically significant performance enhancement of about 1.4 BLEU points.
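The core mechanism, choosing a tokenization scheme per word rather than per corpus, can be sketched as follows. The toy "schemes" here only strip a hypothetical conjunction prefix; real Arabic schemes (such as the D1-D3 or ATB schemes in the literature) perform much richer morphological segmentation, and the scheme chooser would be learned rather than a length heuristic.

```python
# Illustrative sketch of context-variable tokenization: each source word
# may receive a different tokenization scheme, selected by a predictor.

def scheme_none(word):          # keep the word whole
    return [word]

def scheme_split_conj(word):    # split a (toy) leading conjunction "w+"
    return ["w+", word[1:]] if word.startswith("w") and len(word) > 2 else [word]

def tokenize_variable(words, choose):
    """Apply a per-word scheme chosen by `choose(word) -> scheme`."""
    out = []
    for w in words:
        out.extend(choose(w)(w))
    return out

# toy chooser: split conjunctions only on long words, keep short words whole
choose = lambda w: scheme_split_conj if len(w) > 4 else scheme_none
print(tokenize_variable(["wktAb", "wlm"], choose))  # ['w+', 'ktAb', 'wlm']
```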


2019 ◽  
Vol 277 ◽  
pp. 02004
Author(s):  
Middi Venkata Sai Rishita ◽  
Middi Appala Raju ◽  
Tanvir Ahmed Harris

Machine translation is the translation of text or speech by a computer with no human involvement. It is a popular research topic, with different methods being created, such as rule-based, statistical, and example-based machine translation. Neural networks have brought a leap forward in machine translation. This paper discusses the building of a deep neural network that functions as part of an end-to-end translation pipeline. The completed pipeline accepts English text as input and returns the French translation. The project has three main parts: preprocessing, model creation, and running the model on English text.
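Of the three parts, preprocessing is the most mechanical; a stdlib-only sketch of what it typically involves (tokenize, build a vocabulary, integer-encode, pad) is shown below. The exact scheme is an assumption, not taken from the paper.

```python
# Sketch of the preprocessing stage of a neural translation pipeline:
# turn raw sentences into fixed-length integer sequences for the model.

PAD, UNK = 0, 1

def build_vocab(sentences):
    vocab = {"<pad>": PAD, "<unk>": UNK}
    for s in sentences:
        for tok in s.lower().split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(sentence, vocab, max_len):
    """Map tokens to ids, truncating/padding to max_len."""
    ids = [vocab.get(t, UNK) for t in sentence.lower().split()]
    return (ids + [PAD] * max_len)[:max_len]

corpus = ["new jersey is sometimes quiet", "paris is never quiet"]
vocab = build_vocab(corpus)
print(encode("paris is quiet", vocab, 5))  # [7, 4, 6, 0, 0]
```

The resulting id sequences would then feed the embedding layer of the seq2seq model in the "model creation" step.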


2018 ◽  
Vol 14 (1) ◽  
pp. 17-27
Author(s):  
Vimal Kumar K. ◽  
Divakar Yadav

Corpus-based natural language processing has emerged with great success in recent years. It is used not only for languages like English, French, Spanish, and Hindi but also widely for languages like Tamil and Telugu. This paper focuses on increasing the accuracy of machine translation from Hindi to Tamil by considering the word's sense as well as its part of speech. The system performs word-by-word translation from Hindi to Tamil, making use of additional information such as the preceding words, the current word's part of speech, and the word's sense itself. Such a translation system requires the frequency of words occurring in the corpus, the tagging of the input words, and the probability of the preceding word of the tagged words. WordNet is used to identify the various synonyms of the words in the source language. Among these, the one most relevant to the source-language word is chosen for translation into the target language. The introduction of additional information such as the part-of-speech tag, preceding-word information, and semantic analysis has greatly improved the accuracy of the system.
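The candidate-selection step can be sketched as scoring each candidate target word by how well it fits the preceding target word and the source word's part-of-speech tag. All word lists and probabilities below are invented for illustration; the real system derives them from corpus frequencies and WordNet.

```python
# Toy sketch of sense selection: pick, among candidate translations, the
# one maximizing a bigram score with the previous target word times a
# POS-compatibility score. Numbers and Tamil forms are purely illustrative.

bigram_prob = {("naan", "padikkiren"): 0.6, ("naan", "vaasikkiren"): 0.3}
pos_prob = {("padikkiren", "VERB"): 0.7, ("vaasikkiren", "VERB"): 0.5}

def pick_translation(candidates, prev_target, pos):
    def score(c):
        # small floor so unseen pairs are not impossible
        return bigram_prob.get((prev_target, c), 0.01) * pos_prob.get((c, pos), 0.01)
    return max(candidates, key=score)

# a Hindi verb with two Tamil candidate translations
print(pick_translation(["padikkiren", "vaasikkiren"], "naan", "VERB"))
```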


2016 ◽  
Vol 55 ◽  
pp. 209-248 ◽  
Author(s):  
Jörg Tiedemann ◽  
Zeljko Agić

How do we parse languages for which no treebanks are available? This contribution addresses the cross-lingual viewpoint on statistical dependency parsing, in which we attempt to make use of resource-rich source-language treebanks to build and adapt models for under-resourced target languages. We outline the benefits and indicate the drawbacks of the current major approaches. We emphasize synthetic treebanking: the automatic creation of target-language treebanks by means of annotation projection and machine translation. We present competitive results in cross-lingual dependency parsing using a combination of various techniques that contribute to the overall success of the method. We further include a detailed discussion of the impact of part-of-speech label accuracy on parsing results, which provides guidance for practical applications of cross-lingual methods to truly under-resourced languages.
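Annotation projection, one of the two synthetic-treebanking routes mentioned, can be sketched in its simplest form: dependency heads from a parsed source sentence are carried over to the target sentence through a word alignment. This sketch assumes a one-to-one alignment; real projection must handle unaligned tokens and many-to-many links.

```python
# Sketch of dependency annotation projection through a word alignment.

def project_heads(src_heads, alignment):
    """src_heads[i] = head index of source token i (-1 for the root).
    alignment: dict source index -> target index (assumed one-to-one).
    Returns target head indices; a head maps to None if it is unaligned."""
    tgt_heads = {}
    for s, t in alignment.items():
        h = src_heads[s]
        tgt_heads[t] = -1 if h == -1 else alignment.get(h)
    return tgt_heads

# 2-token source ("She sings": token 1 is root, heads token 0),
# aligned monotonically to a 2-token target
print(project_heads([1, -1], {0: 0, 1: 1}))  # {0: 1, 1: -1}
```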


Author(s):  
Zhenpeng Chen ◽  
Sheng Shen ◽  
Ziniu Hu ◽  
Xuan Lu ◽  
Qiaozhu Mei ◽  
...  

Sentiment classification typically relies on a large amount of labeled data. In practice, the availability of labels is highly imbalanced among different languages. To tackle this problem, cross-lingual sentiment classification approaches aim to transfer knowledge learned from one language that has abundant labeled examples (i.e., the source language, usually English) to another language with fewer labels (i.e., the target language). The source and the target languages are usually bridged through off-the-shelf machine translation tools. Through such a channel, cross-language sentiment patterns can be successfully learned from English and transferred into the target languages. This approach, however, often fails to capture sentiment knowledge specific to the target language. In this paper, we employ emojis, which are widely available in many languages, as a new channel to learn both the cross-language and the language-specific sentiment patterns. We propose a novel representation learning method that uses emoji prediction as an instrument to learn respective sentiment-aware representations for each language. The learned representations are then integrated to facilitate cross-lingual sentiment classification.
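The emoji-prediction instrument relies on emojis acting as free distant labels: strip the emoji from a sentence and treat it as the prediction target, yielding sentiment-bearing training pairs with no manual annotation. A minimal sketch of that pair-extraction step (the emoji set and policy of keeping the first emoji are assumptions):

```python
# Sketch: mine (text, emoji) training pairs for emoji prediction,
# the auxiliary task used to learn sentiment-aware representations.

EMOJIS = {"😂", "😍", "😭", "😡"}

def emoji_training_pairs(sentences):
    pairs = []
    for s in sentences:
        found = [ch for ch in s if ch in EMOJIS]
        if found:
            text = "".join(ch for ch in s if ch not in EMOJIS).strip()
            pairs.append((text, found[0]))  # predict the first emoji
    return pairs

data = ["great game 😍", "no emoji here", "so sad 😭😭"]
print(emoji_training_pairs(data))
# [('great game', '😍'), ('so sad', '😭')]
```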


2002 ◽  
Vol 01 (02) ◽  
pp. 349-366 ◽  
Author(s):  
FUJI REN ◽  
HONGCHI SHI

One of the most difficult problems in dialogue machine translation is correctly translating the irregular expressions found in natural conversations, such as ungrammatical, incomplete, or ill-formed sentences. However, most existing machine translation systems reject utterances that include irregular expressions. In this paper, we present a dialogue machine translation approach based on a cooperative distributed natural language processing model to attack this complex machine translation problem. In this approach, different types of translation processors are used in the analysis of the original language and the generation of the target language. The idea of combining multiple machine translation engines provides an effective new way to increase the success rate and quality of dialogue machine translation. A dialogue machine translation using multiple processors (DMTMP) system has been built using the following machine translation processors: (i) Robust Parser based Translation Processor, (ii) Example based Translation Processor, (iii) Family Model based Translation Processor, and (iv) Super Function based Translation Processor. DMTMP is used in a practical machine translation environment called SWKJC. Experiments show that the approach presented in this paper is effective in implementing robust dialogue machine translation systems.
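One simple way to combine multiple translation processors, sketched below, is a confidence-based fallback: try each engine in order and accept the first output whose confidence clears a threshold. This is an illustration of the multi-engine idea, not the paper's actual control strategy; the engines and scores are stand-ins.

```python
# Sketch of multi-engine combination with a confidence fallback.

def translate_multi(utterance, engines, threshold=0.5):
    """Each engine maps utterance -> (translation, confidence)."""
    best = None
    for engine in engines:
        out, conf = engine(utterance)
        if conf >= threshold:
            return out              # first sufficiently confident engine wins
        if best is None or conf > best[1]:
            best = (out, conf)
    return best[0]                  # otherwise, the most confident attempt

# stand-in engines: a robust parser that hedges, an example-based matcher
robust = lambda u: ("[parse] " + u, 0.4)
example = lambda u: ("[ebmt] " + u, 0.8)
print(translate_multi("kore wa pen desu", [robust, example]))
# [ebmt] kore wa pen desu
```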


2014 ◽  
Vol 102 (1) ◽  
pp. 47-56 ◽  
Author(s):  
Rudolf Rosa

We present Depfix, an open-source system for automatic post-editing of phrase-based machine translation outputs. Depfix employs a range of natural language processing tools to obtain analyses of the input sentences and uses a set of rules to correct common or serious errors in machine translation outputs. Depfix is currently implemented only for the English-to-Czech translation direction, but extending it to other languages is planned.
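A toy example of the rule-based post-editing idea (not an actual Depfix rule): given a morphological analysis of the MT output, enforce subject-verb number agreement by regenerating the verb's form. The mini-lexicon and analyses below are invented.

```python
# Sketch of one agreement-fixing rule over analyzed MT output tokens.

forms = {("byl", "pl"): "byli", ("byl", "sg"): "byl"}  # toy form lexicon

def fix_agreement(tokens, analyses):
    """analyses[i] = {'role': ..., 'number': ...}; make the predicate's
    number agree with the subject's number."""
    subj_num = next((a["number"] for a in analyses if a["role"] == "subj"), None)
    out = []
    for tok, a in zip(tokens, analyses):
        if a["role"] == "pred" and subj_num and a["number"] != subj_num:
            tok = forms.get((tok, subj_num), tok)
        out.append(tok)
    return out

tokens = ["muži", "byl", "doma"]        # "men was home": agreement error
analyses = [{"role": "subj", "number": "pl"},
            {"role": "pred", "number": "sg"},
            {"role": "adv", "number": None}]
print(fix_agreement(tokens, analyses))  # ['muži', 'byli', 'doma']
```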


2021 ◽  
Vol 11 (18) ◽  
pp. 8737
Author(s):  
Jiun Oh ◽  
Yong-Suk Choi

This work uses sequence-to-sequence (seq2seq) models pre-trained on monolingual corpora for machine translation. We pre-train two seq2seq models with monolingual corpora for the source and target languages, then combine the encoder of the source-language model and the decoder of the target-language model, i.e., the cross-connection. Since the two modules are pre-trained completely independently, we add an intermediate layer between the pre-trained encoder and decoder to help them map to each other. These monolingual pre-trained models can act as a multilingual pre-trained model, because one model can be cross-connected with another model pre-trained on any other language, while their capacity is not affected by the number of languages. We demonstrate that our method improves translation performance significantly over the random baseline. Moreover, we analyze the appropriate choice of the intermediate layer, the importance of each part of a pre-trained model, and how performance changes with the size of the bitext.
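Structurally, the cross-connection composes three pieces: a source-language encoder, a freshly initialized intermediate (adapter) layer, and a target-language decoder. The sketch below shows only that wiring; the "models" are stand-ins and the linear adapter is a toy.

```python
# Structural sketch of cross-connecting independently pre-trained modules.
import random

class Linear:
    """A toy dense layer standing in for the trainable intermediate layer."""
    def __init__(self, dim):
        random.seed(0)
        self.w = [[random.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(dim)]
    def __call__(self, vec):
        return [sum(w_ij * x for w_ij, x in zip(row, vec)) for row in self.w]

def cross_connect(encoder, decoder, dim):
    """Return translate(src): encoder -> intermediate layer -> decoder."""
    adapter = Linear(dim)  # trained on bitext; encoder/decoder stay pre-trained
    def translate(src_tokens):
        hidden = encoder(src_tokens)      # pre-trained on source monolingual data
        return decoder(adapter(hidden))   # pre-trained on target monolingual data
    return translate

# stand-in "pre-trained" modules with a 4-dimensional hidden state
encoder = lambda toks: [float(len(toks))] * 4
decoder = lambda h: ["tok%d" % i for i, _ in enumerate(h)]
translate = cross_connect(encoder, decoder, 4)
print(translate(["hello", "world"]))  # ['tok0', 'tok1', 'tok2', 'tok3']
```

Because any encoder can be paired with any decoder of matching dimension, the same pattern extends to new language pairs without retraining the pre-trained modules.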


2020 ◽  
pp. 3-17
Author(s):  
Peter Nabende

Natural language processing for under-resourced languages is now a mainstream research area. However, there are limited studies on natural language processing applications for many indigenous East African languages. As a contribution to covering this gap in knowledge, this paper evaluates the application of well-established machine translation methods to one heavily under-resourced indigenous East African language, Lumasaaba. Specifically, we review the most common machine translation methods in the context of Lumasaaba, including both rule-based and data-driven methods. We then apply a state-of-the-art data-driven machine translation method to learn models for automating translation between Lumasaaba and English using a very limited data set of parallel sentences. Automatic evaluation results show that a transformer-based neural machine translation model architecture leads to consistently better BLEU scores than recurrent neural network-based models. Moreover, the automatically generated translations can be comprehended to a reasonable extent and are usually associated with the source-language input.

