Text Summarization on Telugu e-News based on Long-Short Term Memory with Rectified Adam Optimizer

2022, Vol. 11 (1), pp. 355-368
Author(s): Kishore Kumar Mamidala, Suresh Kumar Sanampudi
Symmetry, 2019, Vol. 11 (10), pp. 1290
Author(s): Rahman, Siddiqui

Abstractive text summarization, which generates a summary by paraphrasing a long text, remains a significant open problem in natural language processing. In this paper, we present an abstractive text summarization model, the multi-layered attentional peephole convolutional LSTM (long short-term memory), MAPCoL, which automatically generates a summary from a long text. We optimize the parameters of MAPCoL using a central composite design (CCD) in combination with response surface methodology (RSM), which yields the highest accuracy in terms of summary generation. We measure the accuracy of MAPCoL on the CNN/DailyMail dataset and compare it with that of state-of-the-art models in different experimental settings. MAPCoL also outperforms traditional LSTM-based models in terms of the semantic coherence of the output summary.
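
As a rough illustration of the CCD/RSM tuning the abstract describes, the sketch below lays out a two-factor central composite design over two hypothetical hyperparameters (learning rate and LSTM width), fits the standard second-order response-surface model to the observed scores, and solves for the stationary point. `train_and_score` is a placeholder objective, not the authors' training pipeline, and the factor ranges are assumptions.

```python
# A minimal sketch, not the authors' code, of tuning two hyperparameters
# with a central composite design (CCD) and a quadratic response-surface fit.
import numpy as np

def train_and_score(lr, units):
    # Placeholder objective standing in for "train MAPCoL, return ROUGE".
    return -((np.log10(lr) + 3) ** 2) - ((units - 256) / 128) ** 2

# CCD in coded units: factorial corners, axial points (alpha = sqrt(2)), center.
a = np.sqrt(2.0)
coded = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-a, 0], [a, 0], [0, -a], [0, a], [0, 0]])
lr = 10.0 ** (-3 + coded[:, 0])       # decode: learning rate in [1e-4, 1e-2]
units = 256 + 128 * coded[:, 1]       # decode: LSTM width around 256

y = np.array([train_and_score(l, u) for l, u in zip(lr, units)])

# Second-order RSM model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1^2 + b22*x2^2.
x1, x2 = coded[:, 0], coded[:, 1]
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted surface (candidate optimum in coded units).
A = np.array([[2 * beta[4], beta[3]], [beta[3], 2 * beta[5]]])
x_star = np.linalg.solve(A, -beta[1:3])
print("optimum (coded units):", np.round(x_star, 3))
```

With the toy objective above, the fitted surface recovers its maximum at the design center, i.e. a learning rate of 1e-3 and 256 units.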


2022, Vol. 2022, pp. 1-14
Author(s): Y.M. Wazery, Marwa E. Saleh, Abdullah Alharbi, Abdelmgeid A. Ali

Text summarization (TS) is considered one of the most difficult tasks in natural language processing (NLP) and remains a challenge even for modern computing systems. Much of the literature addresses extractive summarization, while comparatively little work targets abstractive summarization, especially for Arabic, owing to the language's complexity. In this paper, an abstractive Arabic text summarization system based on a sequence-to-sequence model is proposed. This model consists of two components, an encoder and a decoder. Our aim is to build the sequence-to-sequence model with several deep neural network architectures to investigate which of them achieves the best performance. Different layers of Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM) have been used to develop the encoder and the decoder. In addition, the global attention mechanism has been used because it provides better results than the local attention mechanism. Furthermore, AraBERT preprocessing has been applied in the data preprocessing stage, which helps the model understand Arabic words and achieve state-of-the-art results. Moreover, a comparison between the skip-gram and continuous bag-of-words (CBOW) word2vec word-embedding models has been made. These models were built with the Keras library and run on Google Colab Jupyter notebooks. Finally, the proposed system is evaluated with the ROUGE-1, ROUGE-2, ROUGE-L, and BLEU metrics. The experimental results show that three layers of BiLSTM hidden states at the encoder achieve the best performance. In addition, the proposed system outperforms other recent studies. The results also show that abstractive summarization models using the skip-gram word2vec model outperform those using the CBOW word2vec model.
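
A minimal Keras sketch of the best-performing configuration the abstract reports: a three-layer BiLSTM encoder, an LSTM decoder trained with teacher forcing, and global (dot-product) attention. All dimensions (vocabulary size, sequence lengths, layer widths) are assumptions, and this illustrates the described architecture rather than reproducing the authors' code.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB, SRC_LEN, TGT_LEN, EMB, UNITS = 30_000, 300, 30, 128, 256

# Encoder: three stacked BiLSTM layers over the source article.
enc_in = layers.Input(shape=(SRC_LEN,), dtype="int32")
e = layers.Embedding(VOCAB, EMB)(enc_in)
for _ in range(3):
    e = layers.Bidirectional(layers.LSTM(UNITS, return_sequences=True))(e)

# Decoder: consumes the shifted target summary (teacher forcing).
dec_in = layers.Input(shape=(TGT_LEN,), dtype="int32")
d = layers.Embedding(VOCAB, EMB)(dec_in)
d = layers.LSTM(2 * UNITS, return_sequences=True)(d)

# Global attention: every decoder step attends over all encoder states.
ctx = layers.Attention()([d, e])      # query = decoder states, value = encoder states
out = layers.Dense(VOCAB, activation="softmax")(layers.Concatenate()([d, ctx]))

model = Model([enc_in, dec_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```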
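The skip-gram vs. CBOW comparison can be reproduced in outline with gensim's Word2Vec, where `sg=1` selects skip-gram and `sg=0` selects CBOW; the two-sentence corpus below is a toy stand-in for the Arabic training articles.

```python
from gensim.models import Word2Vec

corpus = [["summary", "of", "the", "article"],
          ["the", "article", "is", "summarized"]]

skipgram = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=1)
cbow = Word2Vec(corpus, vector_size=100, window=5, min_count=1, sg=0)

# Either model's vectors can initialize an embedding layer, e.g.:
emb_matrix = skipgram.wv.vectors      # shape: (vocab, 100)
```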


2019, Vol. 6 (4), pp. 377
Author(s): Kasyfi Ivanedra, Metty Mustikasari

<p>Text Summarization atau peringkas text merupakan salah satu penerapan Artificial Intelligence (AI) dimana komputer dapat meringkas text pada suatu kalimat atau artikel menjadi lebih sederhana dengan tujuan untuk mempermudah manusia dalam mengambil kesimpulan dari artikel yang panjang tanpa harus membaca secara keseluruhan. Peringkasan teks secara otomatis dengan menggunakan teknik Abstraktif memiliki kemampuan meringkas teks lebih natural sebagaimana manusia meringkas dibandingkan dengan teknik ekstraktif yang hanya menyusun kalimat berdasarkan frekuensi kemunculan kata. Untuk dapat menghasilkan sistem peringkas teks dengan metode abstraktif, membutuhkan metode Recurrent Neural Network (RNN) yang memiliki sistematika perhitungan bobot secara berulang. RNN merupakan bagian dari Deep Learning dimana nilai akurasi yang dihasilkan dapat lebih baik dibandingkan dengan jaringan saraf tiruan sederhana karena bobot yang dihitung akan lebih akurat mendekati persamaan setiap kata. Jenis RNN yang digunakan adalah LSTM (Long Short Term Memory) untuk menutupi kekurangan pada RNN yang tidak dapat menyimpan memori untuk dipilah dan menambahkan mekanisme Attention agar setiap kata dapat lebih fokus pada konteks. Penelitian ini menguji performa sistem menggunakan Precision, Recall, dan F-Measure dengan membandingan hasil ringkasan yang dihasilkan oleh sistem dan ringkasan yang dibuat oleh manusia. Dataset yang digunakan adalah data artikel berita dengan jumlah total artikel sebanyak 4515 buah artikel. Pengujian dibagi berdasarkan data dengan menggunakan Stemming dan dengan teknik Non-stemming. Nilai rata-rata recall artikel berita non-stemming adalah sebesar 41%, precision sebesar 81%, dan F-measure sebesar 54,27%. Sedangkan nilai rata-rata recall artikel berita dengan teknik stemming sebesar 44%, precision sebesar 88%, dan F-measure sebesar 58,20 %.</p><p><em><strong>Abstract</strong></em></p><p class="Judul2"><em>Text Summarization is the application of Artificial Intelligence (AI) where the computer can summarize text of article to make it easier for humans to draw conclusions from long articles without having to read entirely. Abstractive techniques has ability to summarize the text more naturally as humans summarize. The summary results from abstractive techinques are more in context when compared to extractive techniques which only arrange sentences based on the frequency of occurrence of the word. To be able to produce a text summarization system with an abstractive techniques, it is required Deep Learning by using the Recurrent Neural Network (RNN) rather than simple Artificial Neural Network (ANN) method which has a systematic calculation of weight repeatedly in order to improve accuracy. The type of RNN used is LSTM (Long Short Term Memory) to cover the shortcomings of the RNN which cannot store memory to be sorted and add an Attention mechanism so that each word can focus more on the context.This study examines the performance of Precision, Recall, and F-Measure from the comparison of the summary results produced by the system and summaries made by humans. The dataset used is news article data with 4515 articles. Testing was divided based on data using Stemming and Non-stemming techniques.</em> <em>The average recall value of non-stemming news articles is 41%, precision is 81%, and F-measure is 54.27%. While the average value of recall of news articles with stemming technique is 44%, precision is 88%, and F-measure is 58.20%.</em></p><p><em><strong><br /></strong></em></p>


2020
Author(s): Abdolreza Nazemi, Johannes Jakubik, Andreas Geyer-Schulz, Frank J. Fabozzi
