Text Summarization using ML and NLP

Around 2.5 quintillion bytes of data are produced every day, and no one has the time to go through all of it. With the growth of technology and digital media, people have become reluctant to read at length and expect information in a smarter, more condensed form; few will read every line of an article or newspaper. To address this, an automatic text summarizer built with Machine Learning (ML) and Natural Language Processing (NLP) in the Python programming language has been introduced. The summarizer generates a concise and meaningful summary of text from resources such as textbooks, articles, and messages using a text-ranking algorithm. The input text is split into sentences, and each sentence is converted into a vector. The pairwise similarities between these vectors are stored in a similarity matrix, and sentence rankings are assigned based on these similarities. The highest-ranked sentences form the final summary of the given input text.
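A minimal sketch of the sentence-ranking pipeline described above (split into sentences, vectorize, build a similarity matrix, rank, keep the top sentences). The library choices here (nltk, scikit-learn, networkx) and the PageRank-style scoring are illustrative assumptions, not the authors' exact implementation:

```python
# Sketch of the sentence-ranking summarizer described above.
# Library choices are assumptions, not the authors' implementation.
import networkx as nx
from nltk.tokenize import sent_tokenize          # assumes nltk's punkt data is installed
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def summarize(text: str, top_n: int = 3) -> str:
    sentences = sent_tokenize(text)                       # 1. split the input into sentences
    vectors = TfidfVectorizer().fit_transform(sentences)  # 2. convert sentences to vectors
    sim_matrix = cosine_similarity(vectors)               # 3. pairwise similarity matrix
    graph = nx.from_numpy_array(sim_matrix)               # 4. similarity graph over sentences
    scores = nx.pagerank(graph)                           # 5. TextRank-style sentence scores
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen = sorted(ranked[:top_n])                       # keep original sentence order
    return " ".join(sentences[i] for i in chosen)
```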

2020 · Vol. 34 (05) · pp. 7740-7747
Author(s): Xiyan Fu, Jun Wang, Jinghan Zhang, Jinmao Wei, Zhenglu Yang

Automatic text summarization focuses on distilling summary information from texts. This research field has been explored considerably over the past decades because of its significant role in many natural language processing tasks; however, two challenging issues block its further development: (1) how to build a summarization model that embeds topic inference rather than relying on a pre-trained topic model and (2) how to merge the latent topics into diverse granularity levels. In this study, we propose a variational hierarchical model, dubbed VHTM, to address both issues holistically. Unlike previous work assisted by a pre-trained, single-grained topic model, VHTM is the first attempt to jointly accomplish summarization and topic inference via a variational encoder-decoder and to merge topics into multi-grained levels through topic embedding and attention. Comprehensive experiments validate the superior performance of VHTM compared with the baselines, along with semantically consistent topics.
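As a rough illustration of the "topic embedding and attention" merge step, the sketch below shows a decoder hidden state attending over a set of latent topic vectors to obtain a topic-aware context vector. It is a heavily simplified, hypothetical PyTorch fragment, not the VHTM architecture itself:

```python
# Simplified, hypothetical sketch of merging latent topics into a decoder
# state via attention; not the authors' VHTM code.
import torch
import torch.nn as nn


class TopicAttention(nn.Module):
    def __init__(self, hidden_dim: int, topic_dim: int, n_topics: int):
        super().__init__()
        self.topic_emb = nn.Parameter(torch.randn(n_topics, topic_dim))  # latent topic embeddings
        self.query = nn.Linear(hidden_dim, topic_dim)

    def forward(self, dec_state: torch.Tensor) -> torch.Tensor:
        # dec_state: (batch, hidden_dim) decoder hidden state
        q = self.query(dec_state)                           # (batch, topic_dim)
        attn = torch.softmax(q @ self.topic_emb.T, dim=-1)  # attention weights over topics
        return attn @ self.topic_emb                        # (batch, topic_dim) topic context


# usage: context = TopicAttention(512, 128, 50)(torch.randn(4, 512))
```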


2019 · Vol. 9 (21) · pp. 4701
Author(s): Qicai Wang, Peiyu Liu, Zhenfang Zhu, Hongxia Yin, Qiuyue Zhang, ...

As a core task of natural language processing and information retrieval, automatic text summarization is widely applied in many fields. At present, there are two main approaches to the text summarization task: abstractive and extractive. On this basis, we propose a novel hybrid extractive-abstractive model that combines BERT (Bidirectional Encoder Representations from Transformers) word embeddings with reinforcement learning. Firstly, we convert the human-written abstractive summaries into ground-truth extraction labels. Secondly, we use BERT word embeddings as the text representation and pre-train the two sub-models separately. Finally, the extraction network and the abstraction network are bridged by reinforcement learning. To verify the performance of the model, we compare it with currently popular automatic text summarization models on the CNN/Daily Mail dataset and use the ROUGE (Recall-Oriented Understudy for Gisting Evaluation) metrics as the evaluation method. Extensive experimental results show that the accuracy of the model is noticeably improved.
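The first step, turning human-written abstractive summaries into extractive ground-truth labels, is commonly done by greedily adding the source sentences that most improve ROUGE overlap with the reference. The sketch below illustrates that idea with a simple unigram (ROUGE-1-style) recall; it is an assumption about the labelling scheme, not the authors' exact procedure:

```python
# Greedy construction of extractive labels from an abstractive reference:
# repeatedly add the source sentence that most improves unigram recall.
# Illustrative only; not the authors' exact labelling scheme.
from collections import Counter


def rouge1_recall(candidate: str, reference: str) -> float:
    cand, ref = Counter(candidate.lower().split()), Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    return overlap / max(sum(ref.values()), 1)


def extractive_labels(sentences: list[str], reference: str, max_sents: int = 3) -> list[int]:
    selected: list[int] = []
    while len(selected) < max_sents:
        current = " ".join(sentences[i] for i in selected)
        base = rouge1_recall(current, reference)
        best_i, best_gain = None, 0.0
        for i, sent in enumerate(sentences):
            if i in selected:
                continue
            gain = rouge1_recall(current + " " + sent, reference) - base
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:          # no remaining sentence improves recall
            break
        selected.append(best_i)
    return sorted(selected)         # indices of sentences labelled as "extract"
```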


2021 · Vol. 20 (Number 3) · pp. 329-352
Author(s): Suraya Alias, Mohd Shamrie Sainin, Siti Khaotijah Mohammad

In the Automatic Text Summarization domain, a Sentence Compression (SC) technique is applied to summary sentences to remove unnecessary words or phrases. The purpose of SC is to preserve the important information in the sentence and to remove the unnecessary parts without sacrificing the sentence's grammaticality. The development of Malay Natural Language Processing (NLP) tools is still ongoing, with limited open access. The issue is the lack of a benchmark dataset in the Malay language with which to evaluate the quality of the summaries and to validate the compressed sentences produced by the summarizer model. Hence, our paper outlines a Syntactic-based Sentence Validation technique for Malay sentences that refers to the Malay Grammar Pattern. In this work, we propose a newly derived set of Syntactic Rules based on the main Malay Word Classes to validate a Malay sentence that undergoes the SC procedure. We experimented on a Malay dataset of 100 news articles covering the Natural Disaster and Events domain to find the optimal compression rate and its effect on the summary content. An automatic evaluation using ROUGE (Recall-Oriented Understudy for Gisting Evaluation) produced an average F-measure of 0.5826 and an average Recall of 0.5925 at an optimum compression rate with a Confidence (Conf) value of 0.5. Furthermore, a manual evaluation by a group of Malay experts gave the compressed summary sentences a grammaticality score of 4.11 and a readability score of 4.12 out of 5. This shows the reliability of the proposed technique for validating Malay sentences, with promising summary content and readability results.
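As an illustration of syntactic-pattern validation, the sketch below matches the POS-tag sequence of a compressed sentence against allowed grammar patterns. The tag set, the patterns, and the example tagger output are hypothetical placeholders; the paper's actual rules, derived from the Malay main word classes, are not reproduced here:

```python
# Sketch of rule-based validation of a compressed sentence: its POS-tag
# sequence must match one of the allowed grammar patterns. Tags and
# patterns below are hypothetical placeholders, not the paper's rules.
import re

ALLOWED_PATTERNS = [
    r"^(NOUN )+(VERB )(NOUN |ADJ |ADV )*$",   # e.g. noun phrase + verb phrase
    r"^(NOUN )+(ADJ )+$",                     # e.g. noun phrase + adjective
]


def is_valid(tagged_sentence: list[tuple[str, str]]) -> bool:
    """tagged_sentence: list of (word, POS tag) pairs from a Malay POS tagger."""
    tag_string = " ".join(tag for _, tag in tagged_sentence) + " "
    return any(re.match(p, tag_string) for p in ALLOWED_PATTERNS)


# usage: is_valid([("pelajar", "NOUN"), ("membaca", "VERB"), ("buku", "NOUN")]) -> True
```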


2020 · Vol. 9 (2) · pp. 342
Author(s): Amal Alkhudari

Due to the widespread availability of information and the diversity of its sources, there is a need to produce an accurate text summary with the least time and effort. This summary must preserve the key information content and overall meaning of the original text. Text summarization is one of the most important applications of Natural Language Processing (NLP). The goal of automatic text summarization is to create summaries that are similar to human-created ones. However, in many cases the readability of the created summaries is not satisfactory, because the summaries do not consider the meaning of the words and do not cover all the semantically relevant aspects of the data. In this paper we use syntactic and semantic analysis to propose an automatic summarization system for Arabic texts. This system is capable of understanding the meaning of the information and retrieves only the relevant parts. The effectiveness of the proposed work is demonstrated and evaluated on the EASC corpus using the ROUGE measure. The generated summaries are compared against human-written summaries and those of previous research.
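One simple way to realize the "retrieve only the relevant parts" idea is to score sentences by their similarity to the document as a whole. The sketch below uses a TF-IDF centroid for this purpose; it is only an illustrative baseline, not the author's Arabic-specific syntactic and semantic pipeline:

```python
# Illustrative baseline: rank sentences by similarity to the document
# centroid. Not the author's Arabic-specific system.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def rank_by_relevance(sentences: list[str]) -> list[int]:
    vectors = TfidfVectorizer().fit_transform(sentences)
    centroid = np.asarray(vectors.mean(axis=0))            # document centroid vector
    scores = cosine_similarity(vectors, centroid).ravel()  # relevance of each sentence
    return sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
```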


2020 · Vol. 17 (9) · pp. 4368-4374
Author(s): Perpetua F. Noronha, Madhu Bhan

Huge amounts of digital data are being generated persistently, at an unparalleled and exponential rate. In this digital era, where the internet is the prime source of an incredible amount of information, it is vital to develop better means of mining the available information rapidly and capably. Manual extraction of salient information from large text documents is a time-consuming and inefficient task; in this fast-moving world it is difficult to read all the text content and derive insights from it, so automatic methods are required. Searching for relevant documents among the large number of available sources and extracting apt information from them is challenging and is the need of the hour. Automatic text summarization can be used to generate relevant, high-quality information in less time. Text summarization condenses the source text into a brief summary while maintaining its salient information and readability. Generating summaries automatically is in great demand in order to cope with the ever-increasing amount of text data available online, to mark out the significant information, and to consume it faster. Text summarization is becoming extremely popular with the advancement of Natural Language Processing (NLP) and deep learning methods. The most important gain of automatic text summarization is that it reduces analysis time. In this paper we focus on key approaches to automatic text summarization and on their efficiency and limitations.


Author(s): Hui Lin, Vincent Ng

The focus of automatic text summarization research has exhibited a gradual shift from extractive methods to abstractive methods in recent years, owing in part to advances in neural methods. Originally developed for machine translation, neural methods provide a viable framework for obtaining an abstract representation of the meaning of an input text and generating informative, fluent, and human-like summaries. This paper surveys existing approaches to abstractive summarization, focusing on the recently developed neural approaches.
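For readers who want to try the abstractive, neural seq2seq style of summarization this survey covers, the snippet below uses a publicly available pretrained model via Hugging Face transformers. The model name and generation parameters are examples, not a system from the paper:

```python
# Example of abstractive neural summarization with a pretrained seq2seq
# model; model name and parameters are illustrative choices.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
text = "..."  # placeholder: the article to be summarized
result = summarizer(text, max_length=120, min_length=30, do_sample=False)
print(result[0]["summary_text"])
```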


2022 · Vol. 15 (1) · pp. 1-18
Author(s): Krishnaveni P., Balasundaram S. R.

The day-to-day growth of online information necessitates intensive research in automatic text summarization (ATS). ATS software produces a summary by extracting important information from the original text. With the help of summaries, users can easily read and understand the documents of interest. Most approaches to ATS use only local properties of the text; moreover, the large number of properties makes sentence selection difficult and complicated. This article therefore uses graph-based summarization to exploit the structural and global properties of the text. It introduces the maximal clique based sentence selection (MCBSS) algorithm to select important, non-redundant sentences that cover all concepts of the input text for the summary. The MCBSS algorithm finds novel information using maximal cliques (MCs). Experimental results with ROUGE (Recall-Oriented Understudy for Gisting Evaluation) on the Timeline dataset show that the proposed work outperforms the existing graph algorithms Bushy Path (BP), Aggregate Similarity (AS), and TextRank (TR).
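The sketch below illustrates the maximal-clique idea behind MCBSS: build a sentence-similarity graph, keep edges above a threshold, and enumerate maximal cliques as groups of mutually similar sentences. The threshold and the library choices (scikit-learn, networkx) are illustrative assumptions, not the authors' exact algorithm:

```python
# Sketch of maximal-clique extraction over a sentence-similarity graph;
# threshold and libraries are assumptions, not the authors' algorithm.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def maximal_cliques(sentences: list[str], threshold: float = 0.2) -> list[list[int]]:
    sim = cosine_similarity(TfidfVectorizer().fit_transform(sentences))
    graph = nx.Graph()
    graph.add_nodes_from(range(len(sentences)))
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            if sim[i, j] >= threshold:                  # connect mutually similar sentences
                graph.add_edge(i, j)
    return [sorted(c) for c in nx.find_cliques(graph)]  # maximal cliques of sentence indices
```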


Author(s): Hans Christian, Mikhael Pramodana Agus, Derwin Suhartono

The increasing availability of online information has triggered intensive research in the area of automatic text summarization within Natural Language Processing (NLP). Text summarization reduces the text by removing less useful information, which helps the reader find the required information quickly. Many kinds of algorithms can be used to summarize text; one of them is TF-IDF (Term Frequency-Inverse Document Frequency). This research aimed to produce an automatic text summarizer implemented with the TF-IDF algorithm and to compare it with various other online automatic text summarizers. To evaluate the summary produced by each summarizer, the F-measure was used as the standard comparison value. The proposed summarizer achieves 67% accuracy on three data samples, which is higher than the other online summarizers.
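A minimal sketch of a TF-IDF-based extractive summarizer in the spirit of the approach described above: each sentence is scored by the sum of the TF-IDF weights of its terms, and the highest-scoring sentences are kept. Details such as the tokenizer and the number of sentences kept are assumptions:

```python
# Sketch of TF-IDF sentence scoring for extractive summarization;
# tokenizer and summary length are illustrative assumptions.
from nltk.tokenize import sent_tokenize                    # assumes nltk's punkt data
from sklearn.feature_extraction.text import TfidfVectorizer


def tfidf_summary(text: str, top_n: int = 3) -> str:
    sentences = sent_tokenize(text)
    weights = TfidfVectorizer().fit_transform(sentences)   # sentence-term TF-IDF matrix
    scores = weights.sum(axis=1).A1                        # per-sentence sum of term weights
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return " ".join(sentences[i] for i in sorted(ranked[:top_n]))
```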

