Text Summarization Using Natural Language Processing

Author(s):  
G. Sreenivasulu ◽  
N. Thulasi Chitra ◽  
B. Sujatha ◽  
K. Venu Madhav
Author(s):  
Janjanam Prabhudas ◽  
C. H. Pradeep Reddy

The enormous increase in information, together with the computational abilities of modern machines, has created innovative applications of natural language processing built on machine learning models. This chapter surveys the trends of natural language processing by employing machine learning and its models in the context of text summarization. The chapter is organized to help researchers understand the technical perspectives on feature representations and models to consider before applying them to language-oriented tasks. Further, it reviews the primary models of deep learning, their applications, and their performance in the context of language processing. The primary focus of this chapter is to present the technical research findings and gaps in deep-learning-based text summarization (TS), along with state-of-the-art deep learning models for TS.


Author(s):  
Pankaj Kailas Bhole ◽  
A. J. Agrawal

Text summarization is an old challenge in text mining, but one in dire need of researchers' attention in the areas of computational intelligence, machine learning, and natural language processing. We extract a set of features from each sentence that helps identify its importance in the document. Reading a full text every time is time consuming, and a clustering approach is useful for deciding which types of data are present in a document. In this paper we introduce k-means clustering for natural language processing of text for word matching, and, in order to extract meaningful information from a large set of offline documents, data mining document clustering algorithms are adopted.
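The clustering idea above can be sketched as follows: represent each sentence as a term-frequency vector, cluster the sentences with k-means, and keep the sentence closest to each centroid as the extractive summary. This is a minimal standard-library illustration of the general technique, not the paper's implementation; the initialization, distance, and toy input are assumptions.

```python
import math
import re
from collections import Counter

def sentence_vectors(sentences, vocab):
    """Term-frequency vector for each sentence over a shared vocabulary."""
    vecs = []
    for s in sentences:
        counts = Counter(re.findall(r"[a-z']+", s.lower()))
        vecs.append([counts.get(w, 0) for w in vocab])
    return vecs

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans_summary(sentences, k=2, iters=10):
    vocab = sorted({w for s in sentences for w in re.findall(r"[a-z']+", s.lower())})
    vecs = sentence_vectors(sentences, vocab)
    centroids = vecs[:k]  # simple deterministic initialization
    for _ in range(iters):
        # assignment step: each sentence joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for i, v in enumerate(vecs):
            clusters[min(range(k), key=lambda c: dist(v, centroids[c]))].append(i)
        # update step: recompute each non-empty centroid as the cluster mean
        for c, members in enumerate(clusters):
            if members:
                centroids[c] = [sum(vecs[i][d] for i in members) / len(members)
                                for d in range(len(vocab))]
    # one representative sentence per cluster: the one closest to its centroid
    picks = [min(members, key=lambda i: dist(vecs[i], centroids[c]))
             for c, members in enumerate(clusters) if members]
    return [sentences[i] for i in sorted(picks)]
```

With k representatives chosen this way, the summary covers each topical cluster of the document rather than only the highest-scoring sentences.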


2018 ◽  
Vol 7 (4.5) ◽  
pp. 728
Author(s):  
Rasmita Rautray ◽  
Lopamudra Swain ◽  
Rasmita Dash ◽  
Rajashree Dash

In the present scenario, text summarization is a popular and active field of research in both the Information Retrieval (IR) and Natural Language Processing (NLP) communities. Summarization is important for IR because it is a means of identifying useful information by efficiently condensing documents drawn from a large corpus of data. In this study, different text summarization methods are presented, together with the strengths, limitations, and gaps within each method.


2020 ◽  
Vol 4 (1) ◽  
pp. 18-43
Author(s):  
Liuqing Li ◽  
Jack Geissinger ◽  
William A. Ingram ◽  
Edward A. Fox

Abstract — Natural language processing (NLP) covers a large number of topics and tasks related to data and information management, leading to a complex and challenging teaching process. Meanwhile, problem-based learning is a teaching technique specifically designed to motivate students to learn efficiently, work collaboratively, and communicate effectively. With this aim, we developed a problem-based learning course to teach NLP to both undergraduate and graduate students. We provided student teams with big data sets, basic guidelines, cloud computing resources, and other aids to help the teams summarize two types of big collections: Web pages related to events, and electronic theses and dissertations (ETDs). Student teams then deployed different libraries, tools, methods, and algorithms to solve the task of big data text summarization. Summarization is an ideal problem for learning NLP, since it involves all levels of linguistics as well as many of the tools and techniques used by NLP practitioners. The evaluation results showed that all teams generated coherent and readable summaries. Many summaries were of high quality and accurately described their corresponding events or ETD chapters, and the teams produced them, along with NLP pipelines, in a single semester. Further, both undergraduate and graduate students gave statistically significant positive feedback relative to other courses in the Department of Computer Science. Accordingly, we encourage educators in the data and information management field to use our approach or similar methods in their teaching, and we hope that other researchers will also use our data sets and synergistic solutions to approach the new and challenging tasks we addressed.


2019 ◽  
Vol 7 ◽  
pp. 581-596
Author(s):  
Yumo Xu ◽  
Mirella Lapata

In this paper we introduce domain detection as a new natural language processing task. We argue that the ability to detect textual segments that are domain-heavy (i.e., sentences or phrases that are representative of and provide evidence for a given domain) could enhance the robustness and portability of various text classification applications. We propose an encoder-detector framework for domain detection and bootstrap classifiers with multiple instance learning. The model is hierarchically organized and suited to multilabel classification. We demonstrate that despite learning with minimal supervision, our model can be applied to text spans of different granularities, languages, and genres. We also showcase the potential of domain detection for text summarization.
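The multiple instance learning (MIL) setup the paragraph describes can be illustrated in miniature: only document-level (bag) domain labels are supervised, yet sentence-level (instance) scores are produced, and the document's multilabel prediction aggregates its sentence scores (here with max). The keyword-based "detector" below is a stand-in assumption for illustration only, not the paper's encoder-detector model.

```python
# Toy domain lexicons standing in for a learned sentence-level detector.
DOMAIN_KEYWORDS = {
    "sports":  {"match", "goal", "team", "season"},
    "finance": {"stock", "market", "shares", "profit"},
}

def sentence_score(sentence, domain):
    """Instance-level score: fraction of the domain's keywords present."""
    words = set(sentence.lower().split())
    return len(words & DOMAIN_KEYWORDS[domain]) / len(DOMAIN_KEYWORDS[domain])

def document_domains(sentences, threshold=0.25):
    """Bag-level multilabel prediction: max over instance scores per domain."""
    labels = []
    for domain in DOMAIN_KEYWORDS:
        if max(sentence_score(s, domain) for s in sentences) >= threshold:
            labels.append(domain)
    return labels
```

Because the bag score is a max over instances, a single domain-heavy sentence is enough to label the whole document, which is exactly the property that lets weak document-level supervision yield segment-level detection.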


Text summarization is one of those applications of Natural Language Processing (NLP) that is bound to have a huge impact on our lives. Broadly, text summarization can be divided into two categories, extractive summarization and abstractive summarization; we implement a seq2seq model for summarization of textual data using TensorFlow/Keras and demonstrate it on Amazon and social response reviews, issues, and news stories. Text summarization is a subdomain of Natural Language Processing that deals with extracting summaries from huge chunks of text. There are two fundamental kinds of techniques used for text summarization: NLP-based techniques and deep-learning-based techniques. Accordingly, our aim is to compare the spaCy, gensim, and NLTK summarization frameworks on the same input requirements. This paper presents a simple NLP-based technique for text summarization, implemented using Python's NLTK library.
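The simple NLP-based technique alluded to above is typically a frequency-based extractive summarizer. A minimal sketch of that idea follows, written with only the standard library so it runs without NLTK, spaCy, or gensim installed; the stopword list and scoring are simplified assumptions. Sentences are scored by the summed frequency of their content words, and the top-scoring sentences are returned in document order.

```python
import re
from collections import Counter

# Tiny illustrative stopword list; real systems use a much larger one.
STOPWORDS = {"the", "a", "an", "is", "are", "of", "and", "to", "in", "it"}

def summarize(text, n=2):
    # naive sentence split on terminal punctuation followed by whitespace
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # document-wide content-word frequencies
    words = [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]
    freq = Counter(words)

    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower())
                   if w not in STOPWORDS)

    # take the n highest-scoring sentences, then restore document order
    top = sorted(sorted(sentences, key=score, reverse=True)[:n],
                 key=sentences.index)
    return " ".join(top)
```

The same scoring scheme is what NLTK-based tutorials commonly implement by hand, while gensim and spaCy pipelines substitute their own tokenizers and ranking functions.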


2020 ◽  
Vol 7 (1) ◽  
pp. 54-60
Author(s):  
Falia Amalia ◽  
Moch Arif Bijaksana

Abstract — The Qur'an is a subject of linguistic research that has not been studied by many experts in the field, so it has not yet gained a popular place. Yet the Qur'an contains very many words that can be researched, especially with Natural Language Processing techniques such as text classification, document clustering, and text summarization, among them semantic similarity and the Distributional Semantic Model. The purpose of this work is to create an evaluation dataset for distributional semantic models in Bahasa Indonesia with two word classes, nouns and verbs, obtaining similarity and relatedness values for the 500 word pairs provided. We hope that, through this work, the semantic study of the Qur'an will grow, especially for the translation of the Qur'an into the Indonesian language. This research also creates datasets in the manner of previously conducted studies, in the hope that future research focusing on other discussions can use this dataset. The study uses 6236 verses, from which the system obtains 2193 nouns and 1733 verbs. These are processed using the Sim-Rel vector method, a questionnaire given to 15 respondents, and a gold standard; performance is measured using the Spearman rank correlation, yielding a correlation of 0.909. Keywords — Natural Language Processing; Distributional Semantic Model; Sim-Rel Vector; Spearman Rank
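The evaluation above compares system similarity scores against human gold ratings with Spearman rank correlation, which is the Pearson correlation of the two rank vectors. A small self-contained sketch of that metric follows (ties are handled by averaging ranks); the inputs in the usage note are made-up illustrations, not the paper's data.

```python
def ranks(values):
    """Average 1-based ranks, averaging over tie groups."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j to the end of the current tie group
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank of the tie group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

For example, any two score lists that rank the word pairs in the same order give rho = 1.0 even if their scales differ, which is why rank correlation suits comparing raw model scores against questionnaire ratings.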

