Bodo to English Machine Translation through Transliteration

Machine Translation (MT) is a technique that automatically translates text from one natural language to another using a machine such as a computer. Machine Transliteration (MTn) is a related technique that converts the script of a text from the source language to the target language without changing the pronunciation of the source text. Both MT and MTn are challenging research tasks in the fields of Natural Language Processing (NLP) and Computational Linguistics (CL) globally. English is a high-resource natural language, whereas Bodo is a low-resource natural language. Although Bodo is a recognized language of India, not much research work has been done on MT and MTn systems for it because of this lack of resources. The primary objective of this paper is to develop a Bodo to English Machine Translation system with the help of a Bodo to English Machine Transliteration system. The Bodo to English MT system has been developed using the phrase-based Statistical Machine Translation technique on General- and News-domain Bodo-English parallel text corpora. The Bodo to English MTn system has been developed using a hybrid technique on General- and News-domain Bodo-English parallel transliterated words/terms. The translation accuracy of the MT system has been evaluated using the BLEU technique.
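
The abstract reports BLEU-based evaluation. As a rough illustration (not the paper's exact setup), sentence-level BLEU combines clipped n-gram precisions with a brevity penalty; below is a minimal single-reference sketch without smoothing:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU: geometric mean of clipped n-gram precisions
    times a brevity penalty. Toy sketch: single reference, no smoothing."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand = Counter(ngrams(candidate, n))
        ref = Counter(ngrams(reference, n))
        clipped = sum(min(c, ref[g]) for g, c in cand.items())
        total = max(sum(cand.values()), 1)
        if clipped == 0:
            return 0.0  # without smoothing, any empty n-gram match zeroes BLEU
        log_precisions.append(math.log(clipped / total))
    bp = 1.0 if len(candidate) > len(reference) else \
        math.exp(1 - len(reference) / max(len(candidate), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

Published evaluations typically use corpus-level BLEU with smoothing, as implemented in toolkits such as NLTK or sacreBLEU.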

2021
pp. 139-149

Language, as the carrier of information, has become the most significant means for humans to communicate. However, it has also been a barrier to communication between people from different countries. Translating between languages quickly and efficiently has become a problem of common concern for humanity. In fact, the demand for language translation has greatly increased in recent times due to the growth of cross-regional communication and the need for information exchange. A great deal of material needs to be translated, including scientific and technical documentation, instruction manuals, legal documents, textbooks, publicity leaflets, newspaper reports, etc. The task is challenging and difficult, but mostly it is tedious and repetitive and requires consistency and accuracy. It is becoming difficult for professional translators to meet these increasing demands. In such a situation, machine translation can be used as a substitute. Machine Translation is the process of converting text in a natural source language into another natural target language by computer. It is a branch of natural language processing with close ties to computational linguistics and natural language understanding. With the rapid development of the Internet and the integration of the world economy, overcoming the language barrier has become a common problem for the international community. This paper offers an overview of Machine Translation (MT), including the history of MT, linguistic problems of MT, the problem of multiple meanings in MT, syntactic transformations in MT, and the translation of phraseological combinations in MT systems.


Author(s):  
Sunita Warjri
Partha Pakray
Saralin A. Lyngdoh
Arnab Kumar Maji

Part-of-speech (POS) tagging is one of the challenging research fields in natural language processing (NLP). It requires good knowledge of a particular language, with large amounts of data or corpora for feature engineering, to achieve good tagger performance. Our main contribution in this research work is the designed Khasi POS corpus. To date, no Khasi corpus of any kind had been formally developed. In the present Khasi POS corpus, each word is tagged manually using the designed tagset. Deep learning methods have been used to experiment on the corpus. POS taggers based on BiLSTM, BiLSTM combined with CRF, and character-based embedding with BiLSTM are presented. The main challenges that computational linguistics must confront in understanding and handling natural language are also anticipated. In the present corpus, we have tried to resolve ambiguities of words with respect to their context of use, as well as the orthographic problems that arise in the designed POS corpus. The Khasi corpus comprises around 96,100 tokens and 6,616 distinct words. Initially, when running the first subset of around 41,000 tokens in our experiment, the taggers were found to yield considerably accurate results. When the corpus was increased to 96,100 tokens, accuracy rose further and the analyses became more pertinent. As a result, accuracies of 96.81% for the BiLSTM method, 96.98% for BiLSTM with CRF, and 95.86% for the character-based method with LSTM are achieved. To place this work within NLP research on Khasi, we also review some existing POS taggers and other NLP work on the Khasi language for comparison.
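
The reported figures (e.g. 96.98% for BiLSTM with CRF) are token-level accuracies against the manually tagged gold corpus. A minimal sketch of that metric, with invented tag sequences standing in for the paper's Khasi tagset:

```python
def tagging_accuracy(gold, predicted):
    """Token-level accuracy: fraction of tokens whose predicted POS tag
    matches the manually assigned gold tag."""
    assert len(gold) == len(predicted)
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)

# Hypothetical tag sequences; the actual Khasi tagset is defined in the paper.
gold = ["N", "V", "DET", "N", "ADJ"]
pred = ["N", "V", "DET", "ADJ", "ADJ"]
print(tagging_accuracy(gold, pred))  # 0.8
```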


2020
Vol 13 (4)
pp. 407-435
Author(s):  
Jagroop Kaur
Jaswinder Singh

Purpose: Normalization is an important step in all natural language processing applications that handle social media text. Text from social media poses kinds of problems that are not present in regular text. Recently, a considerable amount of work has been done in this direction, but mostly for the English language. People who do not speak English code-mix the text with their native language and post it on social media using the Roman script. This kind of text further aggravates the normalization problem. This paper discusses the concept of normalization with respect to code-mixed social media text, and a model is proposed to normalize such text.
Design/methodology/approach: The system is divided into two phases: candidate generation and most-probable-sentence selection. Candidate generation is treated as a machine translation task in which Roman text is the source language and Gurmukhi text is the target language. A character-based translation system is proposed to generate candidate tokens. Once candidates are generated, the second phase uses beam search to select the most probable sentence based on a hidden Markov model.
Findings: Character error rate (CER) and bilingual evaluation understudy (BLEU) score are reported. The proposed system has been compared with the Akhar software and the RB_R2G system, which are also capable of transliterating Roman text to Gurmukhi, and it outperforms Akhar. The CER and BLEU scores are 0.268121 and 0.6807939, respectively, for ill-formed text.
Research limitations/implications: It was observed that the system produces dialectal variations of a word, or words with minor errors such as missing diacritics. A spell checker could improve the output by correcting these minor errors. Extensive experimentation is needed to optimize the language identifier, which will further improve the output. The language model also merits further exploration; in particular, the inclusion of wider context from social media text deserves further investigation.
Practical implications: The practical implications of this study are: (1) development of a parallel dataset containing Roman and Gurmukhi text; (2) development of a dataset annotated with language tags; (3) development of the normalization system, which is the first of its kind and proposes a translation-based solution for normalizing noisy social media text from Roman to Gurmukhi, and which can be extended to any pair of scripts; and (4) use of the proposed system for better analysis of social media text. Theoretically, this study aids understanding of text normalization in the social media context and opens the door to further research in multilingual social media text normalization.
Originality/value: Existing research work focuses on normalizing monolingual text. This study contributes towards the development of a normalization system for multilingual text.
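
The CER reported here is conventionally the Levenshtein edit distance between the system output and the reference, normalised by the reference length; a minimal sketch (the paper's exact normalisation details are not restated here):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming over two rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def cer(hypothesis, reference):
    """Character error rate: edit distance normalised by reference length."""
    return edit_distance(hypothesis, reference) / max(len(reference), 1)
```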


2019
Vol 8 (4)
pp. 11099-11106

In recent times, all kinds of service-based companies and business organizations need customer feedback. Nowadays, many customers share their opinions online about products or services, which informs customers' decision making and also helps make the business model more robust. These customer reviews may help businesses expand and gain the trust of customers. In order to analyze customer feedback about their products and customer intent, most businesses perform "Market Basket Analysis". Several existing techniques have ignored the essence of capturing and analyzing customer reviews for each purchased product, even though a customer may switch over to another product belonging to the same category; the existing techniques do not take this switch-over into account. The Apriori algorithm alone may not accurately predict which other products a person would buy along with a specified product based solely on the basket data. Sentiment analysis refers to the use of natural language processing (NLP), text analysis and computational linguistics to systematically identify, extract, quantify and study affective states and subjective information. The proposed research work combines product review analysis with Apriori-based rule mining to determine implicit associations using sentiment analysis.
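
As a toy illustration of the Apriori side of this proposal, the sketch below mines frequent item pairs and X -> Y rules from basket data using support and confidence thresholds; the sentiment-weighting step described in the abstract is not modelled, and the baskets are invented:

```python
from itertools import combinations

def apriori_rules(baskets, min_support=0.5, min_confidence=0.6):
    """Toy association-rule mining: find frequent singletons and pairs,
    then emit X -> Y rules that meet the confidence threshold."""
    n = len(baskets)
    items = {i for b in baskets for i in b}
    support = {}
    for size in (1, 2):
        for itemset in combinations(sorted(items), size):
            count = sum(set(itemset) <= b for b in baskets)
            if count / n >= min_support:
                support[itemset] = count / n
    rules = []
    for (x, y), s in [(k, v) for k, v in support.items() if len(k) == 2]:
        for a, b in ((x, y), (y, x)):
            conf = s / support[(a,)]  # singletons are frequent by Apriori property
            if conf >= min_confidence:
                rules.append((a, b, s, conf))
    return rules

baskets = [{"bread", "milk"}, {"bread", "butter"},
           {"bread", "milk", "butter"}, {"milk"}]
for lhs, rhs, sup, conf in apriori_rules(baskets):
    print(f"{lhs} -> {rhs}  support={sup:.2f} confidence={conf:.2f}")
```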


2018
Vol 14 (1)
pp. 17-27
Author(s):  
Vimal Kumar K.
Divakar Yadav

Corpus-based natural language processing has emerged with great success in recent years. It is used not only for languages like English, French, Spanish, and Hindi but also, widely, for languages like Tamil, Telugu, etc. This paper focuses on increasing the accuracy of machine translation from Hindi to Tamil by considering a word's sense as well as its part of speech. The system performs word-by-word translation from Hindi to Tamil, making use of additional information such as the preceding words, the current word's part of speech, and the word's sense itself. Such a translation system requires the frequency of words occurring in the corpus, the tagging of the input words, and the probability of the word preceding the tagged words. WordNet is used to identify the various synonyms of the words in the source language. Among these, the one most relevant to the source-language word is chosen for translation into the target language. The introduction of additional information such as the part-of-speech tag, preceding-word information, and semantic analysis has greatly improved the accuracy of the system.
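
A minimal sketch of the selection step described above, with an invented lexicon and bigram counts standing in for WordNet synonyms and corpus statistics (the romanised words are placeholders, not the paper's data):

```python
# Hypothetical lexicon: Hindi word -> candidate Tamil translations with POS,
# and bigram counts standing in for corpus frequencies. All invented.
lexicon = {"accha": [("nalla", "ADJ"), ("nanru", "ADV")]}
bigram_count = {("oru", "nalla"): 8, ("oru", "nanru"): 2}

def translate_word(source_word, prev_target, expected_pos):
    """Pick the candidate whose POS matches the tagger's output and whose
    bigram with the previously emitted target word is most frequent."""
    candidates = [(w, pos) for w, pos in lexicon[source_word]
                  if pos == expected_pos]
    if not candidates:  # fall back to all senses if no POS match
        candidates = lexicon[source_word]
    return max(candidates,
               key=lambda c: bigram_count.get((prev_target, c[0]), 0))[0]

print(translate_word("accha", "oru", "ADJ"))  # nalla
```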


Webology
2021
Vol 18 (Special Issue 02)
pp. 208-222
Author(s):  
Vikas Pandey
Dr. M.V. Padmavati
Dr. Ramesh Kumar

Machine Translation is a subfield of Natural Language Processing (NLP) that is used to translate a source language into a target language. In this paper an attempt has been made to build a Hindi to Chhattisgarhi machine translation system based on the statistical approach. In the state of Chhattisgarh there is a long-awaited need for a Hindi to Chhattisgarhi machine translation system, especially for non-Chhattisgarhi-speaking people. To develop the Hindi-Chhattisgarhi statistical machine translation system, the open-source software Moses is used. Moses is a statistical machine translation toolkit used to automatically train the translation model for the Hindi-Chhattisgarhi language pair from a parallel corpus; a corpus is a collection of structured text used to study linguistic properties. This machine translation system works on a parallel corpus of 40,000 Hindi-Chhattisgarhi bilingual sentences, extracted from various domains such as stories, novels, textbooks and newspapers. To overcome translation problems related to proper nouns and unknown words, a transliteration system is also embedded in it. The system was tested on 1,000 sentences to check the grammatical correctness of the output, and an accuracy of 75% was achieved.
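
A minimal sketch of the kind of transliteration fallback described for proper nouns and unknown words, using an invented, partial Devanagari-to-Roman table (a real system needs full script coverage and schwa-handling rules):

```python
# Toy Devanagari-to-Roman mapping covering only a few characters;
# purely illustrative, not the embedded system from the paper.
DEV2ROMAN = {
    "र": "r", "म": "m", "न": "n", "क": "k",
    "ा": "aa", "ी": "ii", "े": "e",
}

def transliterate(word):
    """Map each character through the table; characters missing from the
    table are passed through unchanged (no inherent-vowel handling)."""
    return "".join(DEV2ROMAN.get(ch, ch) for ch in word)

print(transliterate("राम"))  # raam
```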


Author(s):  
Kyunghyun Cho

Deep learning has rapidly gained huge popularity among researchers in natural-language processing and computational linguistics in recent years. This chapter gives a comprehensive and detailed overview of recent deep-learning-based approaches to challenging problems in natural-language processing, focusing specifically on document classification, language modelling, and machine translation. At the end of the chapter, new opportunities in natural-language processing made possible by deep learning are discussed, namely multilingual and larger-context modelling.


Terminology
1994
Vol 1 (1)
pp. 61-95
Author(s):  
Blaise Nkwenti-Azeh

Special-language term formation is characterised, inter alia, by the frequent reuse of certain lexical items in the formation of new syntagmatic units and by conceptually motivated restrictions on the position which certain elements can occupy within a compound term. This paper describes how the positional and combinational features of the terminology of a given domain can be identified from relevant existing term lists and used as part of a corpus-based, automatic term-identification strategy within a natural-language processing (e.g., machine-translation) system. The methodology described is exemplified and supported with data from the field of satellite communications.
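
A minimal sketch of extracting the positional features described above from an existing term list: counting how often each lexical item recurs in modifier (non-final) versus head (final) position within compound terms. The example terms are illustrative, not the paper's data:

```python
from collections import Counter

def positional_profile(term_list):
    """Count occurrences of each lexical item in modifier (non-final)
    versus head (final) position across a list of compound terms."""
    modifier, head = Counter(), Counter()
    for term in term_list:
        words = term.split()
        head[words[-1]] += 1
        modifier.update(words[:-1])
    return modifier, head

# Illustrative satellite-communications terms (invented examples).
terms = ["earth station", "earth segment", "space segment", "ground station"]
mod, head = positional_profile(terms)
print(mod["earth"], head["station"])  # 2 2
```

Profiles like these can then flag new word sequences as likely terms when their elements occupy their usual positions, which is the corpus-based identification strategy the abstract describes.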


2013
Vol 8 (3)
pp. 908-912
Author(s):  
Sumita Rani
Dr. Vijay Luxmi

Machine Translation is an important area in Natural Language Processing. A Direct MT system is based upon the exploitation of syntactic and vocabulary similarities between two or more closely related natural languages. The relation between such languages derives from their common parent language; the similarity between Punjabi and Hindi is due to their parent language, Sanskrit. Punjabi and Hindi are closely related languages with many similarities in syntax and vocabulary. In the present paper, a Direct Machine Translation System from Punjabi to Hindi has been developed, and its output is evaluated in order to assess the suitability of the system.
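
A toy sketch of the word-for-word substitution that direct MT between closely related languages rests on, with an invented Punjabi-to-Hindi dictionary (romanised placeholders); a real direct system additionally needs morphological and word-order adjustments:

```python
# Invented Punjabi -> Hindi dictionary; closely related languages share
# much vocabulary, so many entries map a word to itself.
pa2hi = {"ghar": "ghar", "vich": "mein", "kitaab": "kitaab"}

def direct_translate(sentence):
    """Replace each source word with its dictionary equivalent, preserving
    word order; out-of-vocabulary words pass through unchanged."""
    return " ".join(pa2hi.get(w, w) for w in sentence.split())

print(direct_translate("kitaab ghar vich"))  # kitaab ghar mein
```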


2020
Vol 13 (4)
pp. 1
Author(s):  
Mohammed M. Abu Shquier

Translation from/to Arabic has been widely studied recently. This study focuses on the translation of Arabic as a source language (SL) to Malay as a target language (TL). The proposed prototype maps the SL "meaning" to the most equivalent translation in the TL. In this paper, we investigate Arabic-Malay Machine Translation features (i.e., syntactic, semantic, and morphological); our proposed method aims at building a robust lexical Machine Translation prototype named AMMT. The paper proposes ongoing research towards a successful Arabic-Malay MT engine. Human judgment and BLEU evaluation have been used for evaluation purposes. The result of the first experiment proves that our system (AMMT) has outperformed several well-regarded MT systems by an average of 98, while the second experiment shows average 1-gram, 2-gram and 3-gram scores of 0.90, 0.87 and 0.88, respectively. This result could be considered a contribution to the domain of natural language processing (NLP).

