Recurrent Neural Network Techniques: Emphasis on Use in Neural Machine Translation

Informatica ◽  
2021 ◽  
Vol 45 (7) ◽  
Author(s):  
Dima Suleiman ◽  
Wael Etaiwi ◽  
Arafat Awajan
Molecules ◽  
2017 ◽  
Vol 22 (10) ◽  
pp. 1732 ◽  
Author(s):  
Renzhi Cao ◽  
Colton Freitas ◽  
Leong Chan ◽  
Miao Sun ◽  
Haiqing Jiang ◽  
...  

Author(s):  
N Revathi

Abstract: Language is the main mode of communication, and translation is a critical tool for understanding information in a foreign language. Machine translation allows users to absorb unfamiliar linguistic material without the help of human translators. The main goal of this project is to create a practical English-to-Hindi language translation system. Given its relevance and potential for English-Hindi translation, machine translation is an efficient way to render content in a new language without employing people. Among the available approaches, Neural Machine Translation (NMT) is one of the most effective. We therefore employ sequence-to-sequence modeling, which combines a Recurrent Neural Network (RNN), Long Short-Term Memory (LSTM), and the encoder-decoder method. Deep Neural Network (DNN) fundamentals and principles of deep learning, as applied to machine translation, are presented within the field of Natural Language Processing (NLP); DNNs play a crucial role in machine learning techniques. Keywords: Sequence to Sequence, Encoder-Decoder, Recurrent Neural Network, Long Short-Term Memory, Deep Neural Network.
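The encoder-decoder pipeline the abstract names can be sketched with a vanilla RNN in NumPy. This is a minimal illustration, not the authors' system: all vocabulary sizes, dimensions, and parameter names are toy assumptions, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)
V_src, V_tgt, H = 10, 12, 16  # toy source/target vocab sizes and hidden size

# Embeddings, recurrent weights, and output projection (randomly initialized)
E_src = rng.normal(0, 0.1, (V_src, H))
E_tgt = rng.normal(0, 0.1, (V_tgt, H))
W_enc, U_enc = rng.normal(0, 0.1, (H, H)), rng.normal(0, 0.1, (H, H))
W_dec, U_dec = rng.normal(0, 0.1, (H, H)), rng.normal(0, 0.1, (H, H))
W_out = rng.normal(0, 0.1, (H, V_tgt))

def encode(src_ids):
    """Consume source tokens left to right into one fixed-size vector."""
    h = np.zeros(H)
    for t in src_ids:
        h = np.tanh(E_src[t] @ W_enc + h @ U_enc)
    return h

def decode(h, max_len=5, bos=0):
    """Greedily emit target tokens, conditioning on the encoder state."""
    out, y = [], bos
    for _ in range(max_len):
        h = np.tanh(E_tgt[y] @ W_dec + h @ U_dec)
        y = int(np.argmax(h @ W_out))  # pick the most probable next word
        out.append(y)
    return out

tgt = decode(encode([1, 4, 7, 2]))  # a toy "translation" of 4 source tokens
```

In a real system the same forward pass would use LSTM cells instead of the plain tanh recurrence, and the weights would be learned from a parallel corpus.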


In this era of globalization, it is quite likely that we will come across people or communities who do not share our language of communication. To address the problems this causes, machine translation systems are being developed. Developers at several reputed organizations, such as Google LLC, have been working on algorithms that support machine translation using machine learning models such as the Artificial Neural Network (ANN). Several neural machine translation systems have been developed in this regard, but the Recurrent Neural Network (RNN) has not been explored as much in this field. In our work, we apply RNNs to machine translation in order to demonstrate the benefits of RNNs over plain ANNs. The results show that the RNN is able to perform machine translation with reasonable accuracy.


2021 ◽  
Vol 7 (3) ◽  
pp. 488
Author(s):  
Wahyu Gunawan ◽  
Herry Sujaini ◽  
Tursina Tursina

In Indonesia, machine translation is still mostly implemented with statistical approaches, particularly in experiments on translating regional languages. In recent years, neural machine translation has achieved remarkable success and has become the new method of choice in machine translation practice. This study applies the Bahdanau and Luong attention mechanisms to translation from Indonesian to Ketapang Malay, using a parallel corpus of 5,000 sentence pairs. Testing by consistently increasing the number of epochs yielded BLEU scores of 35.96% with no out-of-vocabulary (OOV) words for Bahdanau attention at 40 epochs, and 26.19% with no OOV for Luong attention at 30 epochs. Testing with k-fold cross-validation gave the highest average accuracy of 40.25% without OOV for Bahdanau attention and 30.38% without OOV for Luong attention, while manual evaluation by two linguists produced accuracy scores of 78.17% and 72.53%.
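The two attention mechanisms compared above differ only in how the decoder state is scored against each encoder state: Bahdanau attention is additive, Luong attention (in its "general" form) is multiplicative. A minimal NumPy sketch of both scoring rules, with toy dimensions and random stand-in vectors rather than the study's trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
H, T = 8, 5                      # hidden size and source length (toy values)
enc = rng.normal(size=(T, H))    # encoder states h_1..h_T
dec = rng.normal(size=H)         # current decoder state s

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Bahdanau (additive): score_j = v^T tanh(W_a s + U_a h_j)
W_a, U_a, v = rng.normal(size=(H, H)), rng.normal(size=(H, H)), rng.normal(size=H)
bahdanau = softmax(np.array([v @ np.tanh(W_a @ dec + U_a @ h) for h in enc]))

# Luong (general/multiplicative): score_j = s^T W_l h_j
W_l = rng.normal(size=(H, H))
luong = softmax(enc @ W_l @ dec)

# Either weighting yields a context vector as a weighted sum of encoder states
ctx_b = bahdanau @ enc
ctx_l = luong @ enc
```

Both variants produce a probability distribution over source positions; the context vector is then combined with the decoder state to predict the next target word.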


2017 ◽  
Vol 108 (1) ◽  
pp. 37-48 ◽  
Author(s):  
Praveen Dakwale ◽  
Christof Monz

Abstract: Neural machine translation is a recently proposed approach that has shown results competitive with traditional MT approaches. Standard neural MT is an end-to-end neural network in which the source sentence is encoded by a recurrent neural network (RNN) called the encoder and the target words are predicted by another RNN known as the decoder. Recently, various models have been proposed which replace the RNN encoder with a convolutional neural network (CNN). In this paper, we propose to augment the standard RNN encoder in NMT with additional convolutional layers in order to capture wider context in the encoder output. Experiments on English-to-German translation demonstrate that our approach achieves significant improvements over a standard RNN-based baseline.
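The idea of widening the encoder's context with convolution can be sketched as a 1-D convolution over the sequence of RNN encoder states. This is an illustrative NumPy sketch under assumed toy dimensions, not the paper's exact architecture; the residual connection is one plausible way to combine the two signals.

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, K = 6, 8, 3  # source length, hidden size, conv kernel width (assumed)

enc_states = rng.normal(size=(T, H))  # stand-in for RNN encoder outputs

# One 1-D convolutional layer over time, same-padded so length is preserved
W = rng.normal(0, 0.1, size=(K, H, H))  # kernel: K positions, H -> H channels
b = np.zeros(H)
pad = K // 2
padded = np.vstack([np.zeros((pad, H)), enc_states, np.zeros((pad, H))])

conv_out = np.empty_like(enc_states)
for t in range(T):
    window = padded[t:t + K]  # each output mixes K neighbouring states
    conv_out[t] = np.tanh(np.einsum('kh,khj->j', window, W) + b)

# Residual connection keeps the original recurrent information
augmented = enc_states + conv_out  # wider-context encoder output, shape (T, H)
```

Each position in `augmented` now depends on a window of K source positions in addition to the left-to-right history the RNN already captured.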


2021 ◽  
Vol 2021 (1) ◽  
pp. 935-946
Author(s):  
Muhammad Yusuf Aristyanto ◽  
Robert Kurniawan

As social beings who always want to relate to one another, humans must communicate. This is where language becomes critically important: with a shared language, it is easy to understand what others wish to convey. Media are therefore needed to help people understand the world's many languages, one of which is the machine translator. One method that can be used to build a machine translator is Neural Machine Translation (NMT). Existing NMT systems still have various shortcomings and need further development, among them the problem of overfitting, which prevents the model from generalizing to other test data. Many factors affect NMT performance, including the hyperparameter settings and the model architecture used, yet there is no definitive recipe that produces the best-performing model. This study therefore aims to develop an NMT model architecture and to run simulations over the neural-network hyperparameters and architecture settings, including batch size, epochs, optimizer, activation function, and dropout rate. The resulting model overcomes the overfitting problem of the previous model, achieving 72.24% accuracy and a BLEU score of 45.83% on held-out test data.
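A simulation over the hyperparameters listed above amounts to enumerating a search space and training one model per configuration. A minimal stdlib sketch follows; the candidate values are assumptions for illustration, not the study's actual grid.

```python
from itertools import product

# Hypothetical search space mirroring the hyperparameters named in the abstract
space = {
    "batch_size": [32, 64],
    "epochs": [20, 40],
    "optimizer": ["adam", "rmsprop"],
    "activation": ["tanh", "relu"],
    "dropout_rate": [0.2, 0.5],
}

def configurations(space):
    """Yield every combination of the candidate hyperparameter values."""
    keys = list(space)
    for values in product(*(space[k] for k in keys)):
        yield dict(zip(keys, values))

configs = list(configurations(space))
# 2**5 = 32 candidate configurations, each to be trained and scored by BLEU
```

In practice each configuration would be evaluated on a validation set, with dropout and early stopping on epochs acting as the main defenses against the overfitting the abstract describes.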


Author(s):  
Ren Qing-Dao-Er-Ji ◽  
Yila Su ◽  
Nier Wu

With the development of natural language processing and neural machine translation, end-to-end (E2E) neural network models have gradually become the focus of machine translation research because of their high translation accuracy and strong semantic coherence. However, problems such as limited vocabulary and low translation fidelity remain. In this paper, a discriminant method and a Conditional Random Field (CRF) model were used to segment and label Mongolian stems and affixes in the preprocessing stage of the Mongolian-Chinese bilingual corpus. To address the low translation fidelity problem, a decoding model combining a Convolutional Neural Network (CNN) and a Gated Recurrent Unit (GRU) was constructed, with target-language decoding performed by the GRU. A global attention model was used to obtain the bilingual word-alignment information during bilingual word-alignment processing. Finally, translation quality was evaluated with Bilingual Evaluation Understudy (BLEU) and Perplexity (PPL) values. The improved model yields a BLEU value of 25.13 and a PPL value of [Formula: see text]. The experimental results show that the E2E Mongolian-Chinese neural machine translation model improves translation quality and reduces semantic confusion compared with traditional statistical methods and machine translation models based on Recurrent Neural Networks (RNN).
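One decoder step with a GRU followed by global attention, as the abstract describes, can be sketched in NumPy. All dimensions, parameter names, and the dot-product scoring rule are toy assumptions standing in for the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(3)
H, T = 8, 5                    # hidden size and source length (toy values)
enc = rng.normal(size=(T, H))  # encoder states attended over globally
s = rng.normal(size=H)         # previous decoder state
x = rng.normal(size=H)         # embedding of the previous target word

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# GRU parameters: update gate z, reset gate r, candidate state
Wz, Uz = rng.normal(0, 0.1, (H, H)), rng.normal(0, 0.1, (H, H))
Wr, Ur = rng.normal(0, 0.1, (H, H)), rng.normal(0, 0.1, (H, H))
Wh, Uh = rng.normal(0, 0.1, (H, H)), rng.normal(0, 0.1, (H, H))

def gru_step(x, s):
    """One GRU update: gates decide how much of the old state survives."""
    z = sigmoid(Wz @ x + Uz @ s)
    r = sigmoid(Wr @ x + Ur @ s)
    h_tilde = np.tanh(Wh @ x + Uh @ (r * s))
    return (1 - z) * s + z * h_tilde

s_new = gru_step(x, s)

# Global attention: weight ALL encoder states by similarity to the new state
scores = enc @ s_new
weights = np.exp(scores - scores.max())
weights /= weights.sum()
context = weights @ enc  # context vector fed to the output/softmax layer
```

The attention weights double as soft word-alignment information between the source and target sides, which is how the model extracts bilingual alignments.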


2020 ◽  
Vol 69 ◽  
pp. 343-418
Author(s):  
Felix Stahlberg

The field of machine translation (MT), the automatic translation of written text from one natural language into another, has experienced a major paradigm shift in recent years. Statistical MT, which mainly relies on various count-based models and which used to dominate MT research for decades, has largely been superseded by neural machine translation (NMT), which tackles translation with a single neural network. In this work we will trace back the origins of modern NMT architectures to word and sentence embeddings and earlier examples of the encoder-decoder network family. We will conclude with a short survey of more recent trends in the field.

