Context-based machine translation with recurrent neural network for English–Amharic translation

Author(s):  
Yeabsira Asefa Ashengo ◽  
Rosa Tsegaye Aga ◽  
Surafel Lemma Abebe
Molecules ◽  
2017 ◽  
Vol 22 (10) ◽  
pp. 1732 ◽  
Author(s):  
Renzhi Cao ◽  
Colton Freitas ◽  
Leong Chan ◽  
Miao Sun ◽  
Haiqing Jiang ◽  
...  

Author(s):  
N Revathi

Abstract: Language is a primary mode of communication, and translation is a critical tool for understanding information in a foreign language. Machine translation allows users to absorb unfamiliar linguistic material without the help of human translators. The main goal of this project is to create a practical language translation system from English to Hindi. Given its relevance and potential for English–Hindi translation, machine translation is an efficient way to turn content into a new language without employing people. Among the available approaches, Neural Machine Translation (NMT) is one of the most effective. We therefore employ Sequence to Sequence modeling, which builds on the Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and Encoder-Decoder methods. An understanding of Deep Neural Networks (DNN) and the principles of deep learning as applied to machine translation is presented within the field of Natural Language Processing (NLP), where DNNs play a crucial role in machine learning techniques. Keywords: Sequence to Sequence, Encoder-Decoder, Recurrent Neural Network, Long Short-Term Memory, Deep Neural Network.
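The abstract names the Seq2Seq encoder-decoder pattern without detail. As a minimal sketch (not the authors' implementation), the core idea is a recurrence h_t = tanh(W_x·x_t + W_h·h_{t-1} + b) that folds a source sentence into a fixed-size context vector, which a decoder then unrolls into the target sentence; all sizes and weights below are illustrative toy values:

```python
import math

def rnn_cell(x, h_prev, W_x, W_h, b):
    """One step of a vanilla RNN: h_t = tanh(W_x*x_t + W_h*h_{t-1} + b).
    Vectors and matrices are plain Python lists; an LSTM would add gates."""
    h = []
    for i in range(len(b)):
        s = b[i]
        s += sum(W_x[i][j] * x[j] for j in range(len(x)))
        s += sum(W_h[i][j] * h_prev[j] for j in range(len(h_prev)))
        h.append(math.tanh(s))
    return h

def encode(sequence, W_x, W_h, b):
    """Encoder half of Seq2Seq: fold the source token vectors into one
    fixed-size context vector, which the decoder would unroll step by step."""
    h = [0.0] * len(b)  # initial hidden state
    for x in sequence:
        h = rnn_cell(x, h, W_x, W_h, b)
    return h  # the "context" handed to the decoder
```

A decoder is the same recurrence run in reverse: it starts from this context vector and emits one target-language token per step until an end-of-sentence symbol.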


Author(s):  
Adebimpe Esan ◽  
John Oladosu ◽  
Christopher Oyeleye ◽  
Ibrahim Adeyanju ◽  
Olatayo Olaniyan ◽  
...  

Author(s):  
Sho Takase ◽  
Jun Suzuki ◽  
Masaaki Nagata

This paper proposes a novel Recurrent Neural Network (RNN) language model that takes advantage of character information. We focus on character n-grams based on research in the field of word embedding construction (Wieting et al. 2016). Our proposed method constructs word embeddings from character n-gram embeddings and combines them with ordinary word embeddings. We demonstrate that the proposed method achieves the best perplexities on the language modeling datasets: Penn Treebank, WikiText-2, and WikiText-103. Moreover, we conduct experiments on application tasks: machine translation and headline generation. The experimental results indicate that our proposed method also positively affects these tasks.
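The abstract does not spell out how the character n-gram embeddings are combined; one common construction in the Wieting et al. line of work is to sum the vectors of a word's character n-grams and then combine the result with the ordinary word vector. The sketch below assumes simple summation for both steps (the paper's exact combination may differ), with hypothetical toy embedding tables:

```python
def char_ngrams(word, n=3):
    """Character n-grams with boundary markers, e.g. 'cat' -> ['#ca', 'cat', 'at#']."""
    padded = "#" + word + "#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def compose_embedding(word, ngram_emb, word_emb, dim=4):
    """Sum the character n-gram vectors, then add the ordinary word vector.
    Unknown n-grams or words fall back to zero vectors, so rare words still
    get a representation from whatever n-grams are known."""
    vec = [0.0] * dim
    for g in char_ngrams(word):
        for i, v in enumerate(ngram_emb.get(g, [0.0] * dim)):
            vec[i] += v
    base = word_emb.get(word, [0.0] * dim)
    return [a + b for a, b in zip(vec, base)]
```

The practical payoff is on out-of-vocabulary words: a word absent from `word_emb` still receives a nonzero embedding from its character n-grams.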


In this era of globalization, it is quite likely to come across people or communities who do not share a common language with us. To address the problems this causes, machine translation systems are being developed. Developers at several reputed organizations, such as Google LLC, have been working on algorithms that support machine translation using machine learning techniques such as Artificial Neural Networks (ANN). Several neural machine translation systems have been developed in this regard, but the Recurrent Neural Network (RNN) has seen comparatively little use in this field. In our work, we apply RNNs to machine translation in order to demonstrate their benefits over plain ANNs. The results show that the RNN is able to perform machine translation with good accuracy.


2021 ◽  
Vol 7 (3) ◽  
pp. 488
Author(s):  
Wahyu Gunawan ◽  
Herry Sujaini ◽  
Tursina Tursina

In Indonesia, machine translation is still largely implemented with statistical approaches, particularly in experiments on translating regional languages. In recent years, neural machine translation has achieved remarkable success and has become the new method of choice in machine translation practice. This study applies the attention mechanisms of Bahdanau and Luong to translation from Indonesian to Ketapang Malay, using a parallel corpus of 5,000 sentence pairs. Testing with consistent increments in the number of epochs yielded BLEU scores of 35.96% without out-of-vocabulary (OOV) tokens for Bahdanau attention at 40 epochs, and 26.19% without OOV for Luong attention at 30 epochs. Under k-fold cross-validation, the highest average accuracy was 40.25% without OOV for Bahdanau attention and 30.38% without OOV for Luong attention, while manual evaluation by two linguists gave accuracy scores of 78.17% and 72.53%.
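The two attention variants compared above differ mainly in how they score a decoder state against each encoder state: Bahdanau attention is additive, v·tanh(W_d·h_dec + W_e·h_enc), while Luong's simplest variant is a plain dot product. A minimal sketch of both scoring functions (toy dimensions, hand-rolled math, not the study's code):

```python
import math

def luong_score(h_dec, h_enc):
    """Luong 'dot' attention: score = h_dec . h_enc."""
    return sum(a * b for a, b in zip(h_dec, h_enc))

def bahdanau_score(h_dec, h_enc, W_d, W_e, v):
    """Bahdanau additive attention: score = v . tanh(W_d*h_dec + W_e*h_enc)."""
    hidden = []
    for i in range(len(v)):
        s = sum(W_d[i][j] * h_dec[j] for j in range(len(h_dec)))
        s += sum(W_e[i][j] * h_enc[j] for j in range(len(h_enc)))
        hidden.append(math.tanh(s))
    return sum(vi * hi for vi, hi in zip(v, hidden))

def softmax(scores):
    """Turn raw scores over encoder positions into attention weights."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

In either case the softmax-normalized weights are used to average the encoder states into a context vector for the next decoder step; the extra learned parameters (W_d, W_e, v) are what make Bahdanau's variant more expressive, and plausibly relate to the score gap reported above.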

