Multi‐dimensional long short‐term memory networks for artificial Arabic text recognition in news video

2018 ◽  
Vol 12 (5) ◽  
pp. 710-719 ◽  
Author(s):  
Oussama Zayene ◽  
Sameh Masmoudi Touj ◽  
Jean Hennebert ◽  
Rolf Ingold ◽  
Najoua Essoukri Ben Amara
SpringerPlus ◽  
2016 ◽  
Vol 5 (1) ◽  
Author(s):  
Saeeda Naz ◽  
Arif Iqbal Umar ◽  
Riaz Ahmed ◽  
Muhammad Imran Razzak ◽  
Sheikh Faisal Rashid ◽  
...  

2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Y.M. Wazery ◽  
Marwa E. Saleh ◽  
Abdullah Alharbi ◽  
Abdelmgeid A. Ali

Text summarization (TS) is considered one of the most difficult tasks in natural language processing (NLP) and remains a challenge for modern computer systems despite their many advances. Many papers and research studies in the literature address extractive summarization, but few tackle abstractive summarization, especially for Arabic, owing to the language's complexity. In this paper, an abstractive Arabic text summarization system is proposed, based on a sequence-to-sequence model. This model works through two components, an encoder and a decoder. Our aim is to build the sequence-to-sequence model with several deep artificial neural networks and investigate which of them achieves the best performance. Different layers of Gated Recurrent Units (GRU), Long Short-Term Memory (LSTM), and Bidirectional Long Short-Term Memory (BiLSTM) have been used to construct the encoder and the decoder. In addition, the global attention mechanism has been used because it yields better results than the local attention mechanism. Furthermore, AraBERT preprocessing has been applied in the data preprocessing stage, which helps the model understand Arabic words and achieve state-of-the-art results. Moreover, a comparison between the skip-gram and continuous bag-of-words (CBOW) word2vec word-embedding models has been made. We built these models with the Keras library and ran them seamlessly in a Google Colab Jupyter notebook. Finally, the proposed system is evaluated with the ROUGE-1, ROUGE-2, ROUGE-L, and BLEU metrics. The experimental results show that three layers of BiLSTM hidden states at the encoder achieve the best performance. In addition, our proposed system outperforms other recent studies. The results also show that abstractive summarization models using the skip-gram word2vec model outperform those using the CBOW word2vec model.
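The skip-gram versus CBOW comparison above comes down to how the two word2vec variants frame their training examples: skip-gram uses each center word to predict every word in its context window, while CBOW uses the whole context window jointly to predict the center word. A minimal sketch of that difference (hypothetical helper functions for illustration; the paper itself uses trained word2vec embeddings, not this pair-generation code):

```python
# Illustrative sketch: how skip-gram and CBOW word2vec models
# frame training examples from the same token sequence.

def skipgram_pairs(tokens, window=2):
    """Skip-gram: each center word predicts each context word separately."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

def cbow_pairs(tokens, window=2):
    """CBOW: the full context window jointly predicts the center word."""
    pairs = []
    for i, center in enumerate(tokens):
        context = tuple(tokens[j]
                        for j in range(max(0, i - window),
                                       min(len(tokens), i + window + 1))
                        if j != i)
        pairs.append((context, center))
    return pairs

sentence = ["the", "model", "summarizes", "arabic", "text"]
print(skipgram_pairs(sentence, window=1))
print(cbow_pairs(sentence, window=1))
```

Skip-gram therefore produces many more (smaller) training examples per sentence, which is often credited for its better handling of rare words and may relate to its stronger downstream results reported here.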


Author(s):  
Zouhaira Noubigh ◽  
Anis Mezghani ◽  
Monji Kherallah

In recent years, deep neural networks (DNNs) have achieved great success in sequence modeling, and several deep models have been used to enhance Handwriting Text Recognition (HTR). Among these models, Convolutional Neural Networks (CNNs) and Recurrent Neural Networks, especially Long Short-Term Memory (LSTM) networks, achieve state-of-the-art recognition accuracy. Recognition methods for Arabic text lines have been widely applied in many specific tasks. However, challenges remain, such as the scarcity of large, publicly available Arabic text recognition datasets and the characteristics of the Arabic script itself. To address these challenges, we propose an end-to-end recognition method based on convolutional recurrent neural networks (CRNNs), which adds a feature-reuse network component to the basic CRNN. The model is trained and tested on two Arabic text recognition datasets, KHATT and AHTID/MW. The experimental results demonstrate that the proposed method achieves better performance than other methods in the literature.
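In a CRNN, the convolutional stage turns a text-line image into a width-wise sequence of feature vectors that the recurrent stage then reads one column per timestep. A minimal shape-tracing sketch of that handoff (the layer counts and channel width below are illustrative assumptions, not the authors' actual architecture):

```python
# Sketch: trace how a text-line image becomes the feature sequence
# consumed by the recurrent layers of a CRNN.
# Layer sizes here are illustrative assumptions, not the paper's design.

def conv_out(size, kernel=3, stride=1, pad=1):
    """Spatial size after a convolution (standard floor formula)."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after max pooling."""
    return (size - kernel) // stride + 1

def crnn_sequence_shape(height, width):
    """Return (timesteps, features) as seen by the recurrent layers."""
    h, w = height, width
    for _ in range(3):                     # three conv+pool stages (assumption)
        h, w = conv_out(h), conv_out(w)    # 3x3 conv, pad 1: size preserved
        h, w = pool_out(h), pool_out(w)    # 2x2 pool: size halved
    channels = 128                         # final feature maps (assumption)
    # Collapse height and channels into one feature vector per column;
    # each remaining width position becomes one LSTM timestep.
    return w, h * channels

print(crnn_sequence_shape(32, 256))  # -> (32, 512)
```

The key design point this illustrates is that the image width, not its height, determines the sequence length, which is why text-line images are usually height-normalized before training.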


2020 ◽  
Author(s):  
Abdolreza Nazemi ◽  
Johannes Jakubik ◽  
Andreas Geyer-Schulz ◽  
Frank J. Fabozzi
