Related Blogs’ Summarization With Natural Language Processing

2020 ◽  
Author(s):  
Niyati Baliyan ◽  
Aarti Sharma

Abstract There is a plethora of information on the web on any given topic, in different forms, i.e., blogs, articles, websites, etc. However, not all of this information is useful. Reading through all of it to gain an understanding of the topic is a tiresome and time-consuming task, and most of the time we end up investing effort in content that we later realize was not important to us. Because humans cannot grasp vast quantities of information, relevant and crisp summaries are always desirable. Therefore, in this paper, we focus on generating a new blog entry containing the summary of multiple blogs on the same topic. Different approaches to clustering, modelling, content generation and summarization are applied to reach the intended goal. The system also eliminates repetitive content, saving time and reading volume and thereby making learning more comfortable and effective. Overall, the proposed novel methodology yields a significant reduction in the number of words in the blog generated by the system.
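
A minimal sketch of the idea described above, assuming TF-IDF features and k-means clustering as stand-ins for the paper's actual pipeline: sentences pooled from several blogs are clustered, and one representative sentence per cluster is kept, so repetitive content collapses into a single cluster by construction.

    # Illustrative sketch: cluster sentences from several blogs and keep one
    # representative per cluster. TF-IDF and k-means are assumptions here,
    # not the paper's exact method.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    def summarize_blogs(blogs, n_clusters=5):
        # Naive sentence splitting; pool sentences across all input blogs.
        sentences = [s.strip() for blog in blogs
                     for s in blog.split(".") if s.strip()]
        X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
        km = KMeans(n_clusters=min(n_clusters, len(sentences)), n_init=10).fit(X)
        summary = []
        for c in range(km.n_clusters):
            idx = np.where(km.labels_ == c)[0]
            # The sentence nearest the centroid represents the whole cluster.
            dists = np.linalg.norm(X[idx].toarray() - km.cluster_centers_[c], axis=1)
            summary.append(sentences[idx[np.argmin(dists)]])
        return ". ".join(summary) + "."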

Designs ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 42
Author(s):  
Eric Lazarski ◽  
Mahmood Al-Khassaweneh ◽  
Cynthia Howard

In recent years, disinformation and “fake news” have been spreading across the internet at rates never seen before. This has led fact-checking organizations, groups that seek out claims and comment on their veracity, to emerge worldwide to stem the tide of misinformation. However, even with the many human-powered fact-checking organizations currently in operation, disinformation continues to run rampant throughout the Web, and the existing organizations are unable to keep up. This paper discusses in detail recent advances in computer science that use natural language processing to automate fact checking. It follows the entire automated fact-checking process, from claim detection through verification to outputting results. In summary, automated fact checking works well in some cases, though generalized fact checking still needs improvement prior to widespread use.
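
As a hedged illustration of the first stage of such a pipeline, claim detection, the sketch below trains a toy sentence classifier to flag check-worthy claims; the training sentences, features, and classifier are illustrative assumptions rather than any system surveyed in the paper.

    # Toy claim detector: flags sentences likely worth fact checking before
    # they reach evidence retrieval. Training data here is illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_sents = [
        "The unemployment rate fell to 3.5% last year.",   # check-worthy
        "The new law doubled the national debt.",          # check-worthy
        "Good morning everyone, thanks for coming.",       # not a claim
        "I really enjoyed the conference dinner.",         # not a claim
    ]
    labels = [1, 1, 0, 0]

    claim_detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
    claim_detector.fit(train_sents, labels)

    # Sentences predicted as 1 would be passed on to the fact-checking stage.
    print(claim_detector.predict(["GDP grew by 7% in 2019.", "See you tomorrow!"]))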


Author(s):  
Kiran Raj R

Today, everyone has a personal device to access the web, and every user tries to obtain the knowledge they require through the internet. Most of this knowledge is stored in databases, and a user with limited knowledge of databases will have difficulty accessing their data. Hence, there is a need for a system that allows users to access the information in a database. The proposed method is to develop a system that takes natural language as input and produces an SQL query, which is then used to query the database and retrieve the information with ease. Tokenization, parts-of-speech tagging, lemmatization, parsing and mapping are the steps involved in the process. The proposed project demonstrates the use of Natural Language Processing (NLP) to map English queries, matched against regular expressions, to SQL.
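
A minimal sketch of the final mapping step, assuming hypothetical table and column names: an English question matching a known regular-expression pattern is rewritten as an SQL query. In the full system, tokenization, parts-of-speech tagging, lemmatization, and parsing would precede this step.

    # Hypothetical pattern-to-SQL mapping; table/column names are made up.
    import re

    PATTERNS = [
        # "show all students whose age is 20" -> SELECT ... WHERE age = '20'
        (re.compile(r"show all (\w+) whose (\w+) is (\w+)", re.I),
         lambda m: f"SELECT * FROM {m.group(1)} WHERE {m.group(2)} = '{m.group(3)}';"),
        # "how many rows are in orders" -> SELECT COUNT(*) FROM orders
        (re.compile(r"how many rows are in (\w+)", re.I),
         lambda m: f"SELECT COUNT(*) FROM {m.group(1)};"),
    ]

    def nl_to_sql(question):
        for pattern, build in PATTERNS:
            m = pattern.search(question)
            if m:
                return build(m)
        return None  # no known pattern matched

    print(nl_to_sql("Show all students whose age is 20"))
    # -> SELECT * FROM students WHERE age = '20';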


2017 ◽  
Vol 1 (2) ◽  
pp. 89 ◽  
Author(s):  
Azam Orooji ◽  
Mostafa Langarizadeh

It is estimated that each year many people worldwide, most of them teenagers and young adults, die by suicide. Suicide receives special attention, with many countries developing national prevention strategies. Since most medical information is available as text, preventing the growing trend of suicide in communities requires analyzing various textual resources, such as patient records, information on the web, or questionnaires. To this end, this study systematically reviews recent studies on the use of natural language processing techniques concerning the health of people who have died by suicide or are at risk. After electronically searching the PubMed and ScienceDirect databases and screening the articles by two reviewers, 21 articles matched the inclusion criteria. This study revealed that, if a suitable data set is available, natural language processing techniques are well suited to various types of suicide-related research.


2013 ◽  
Vol 19 (1) ◽  
pp. 67-88 ◽  
Author(s):  
Suyada Dansuwan ◽  
Kikuko Nishina ◽  
Kanji Akahori ◽  
Yasutaka Shimizu

Author(s):  
Steve Legrand ◽  
JRG Pulido

While HTML provides the Web with a standard format for information presentation, XML has become a standard for information structuring on the Web. The mission of the Semantic Web now is to provide meaning to the Web. Apart from building on existing Web technologies, this requires tools from other areas of science. This chapter shows how natural language processing methods and technologies, together with ontologies and a neural algorithm, can be used to help add meaning to the Web, thus making it a better platform for knowledge management in general.
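
As a hedged sketch of one sub-task such a combination could handle, the snippet below links a term extracted from text to its closest ontology concept by vector similarity; the toy four-concept ontology and the use of TF-IDF character n-grams (rather than the chapter's neural algorithm) are assumptions made purely for illustration.

    # Link an extracted term to its nearest ontology concept by similarity.
    # The tiny "ontology" and character n-gram features are assumptions.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    ontology_concepts = ["Person", "Organization", "Publication", "Conference"]

    def link_term(term):
        vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))
        X = vec.fit_transform(ontology_concepts + [term])
        n = len(ontology_concepts)
        sims = cosine_similarity(X[n], X[:n]).ravel()
        return ontology_concepts[sims.argmax()]

    print(link_term("conferences"))  # -> "Conference"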


2020 ◽  
Vol 26 (3) ◽  
pp. 103-107
Author(s):  
Ilie Cristian Dorobăţ ◽  
Vlad Posea

Abstract The continuous expansion of the semantic web and of the linked open data cloud has meant that more semantic data are available for querying from endpoints all over the web. We propose extending a standard SPARQL interface with UI and Natural Language Processing features to allow easier and more intelligent querying. The paper describes some usage scenarios for easy querying and opens a discussion on the advantages of such an implementation.
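
A minimal sketch of what such a natural-language front end to a SPARQL endpoint might look like, assuming a toy pattern-to-query mapping and the public DBpedia endpoint; the interface actually proposed in the paper is richer.

    # Toy natural-language front end to SPARQL (requires the SPARQLWrapper
    # package). The "who wrote X" pattern and the choice of the dbo:author
    # property are illustrative assumptions.
    import re
    from SPARQLWrapper import SPARQLWrapper, JSON

    def ask(question):
        m = re.match(r"who wrote (.+)", question.strip(), re.I)
        if not m:
            raise ValueError("pattern not recognized")
        resource = m.group(1).strip().replace(" ", "_")
        sparql = SPARQLWrapper("https://dbpedia.org/sparql")
        sparql.setQuery(f"""
            PREFIX dbr: <http://dbpedia.org/resource/>
            PREFIX dbo: <http://dbpedia.org/ontology/>
            SELECT ?author WHERE {{ dbr:{resource} dbo:author ?author . }}
        """)
        sparql.setReturnFormat(JSON)
        results = sparql.query().convert()
        return [b["author"]["value"] for b in results["results"]["bindings"]]

    print(ask("who wrote The Hobbit"))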


2008 ◽  
Vol 23 (5) ◽  
pp. 16-17 ◽  
Author(s):  
Dragomir Radev ◽  
Mirella Lapata

2014 ◽  
Vol 687-691 ◽  
pp. 1149-1152
Author(s):  
Jing Peng ◽  
Hong Min Sun

The number of biomedical publications is growing rapidly, and biomedical literature mining is becoming essential. An approach to article processing in text preprocessing is proposed in order to improve the performance of biomedical literature mining. The approach combines Web counts with corpus counts in order to overcome the noise in Web data. Experiments show that the combination models perform best compared to the pure Web and pure corpus models, achieving a precision of 89.1% on all article forms and 88.7% on the article-loss class.
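
A minimal sketch of the count-combination idea, assuming linear interpolation of relative frequencies: each candidate article form is scored by blending noisy Web evidence with cleaner corpus evidence. The counts, totals, and 0.5 weight below are made-up illustrations, not the paper's figures.

    # Score a candidate article form by interpolating Web and corpus relative
    # frequencies; all numbers here are illustrative assumptions.
    def combined_score(web_count, corpus_count, web_total, corpus_total, lam=0.5):
        return lam * (web_count / web_total) + (1 - lam) * (corpus_count / corpus_total)

    candidates = {
        "a experiment":  combined_score(120_000, 3, 1e9, 1e6),
        "an experiment": combined_score(2_500_000, 410, 1e9, 1e6),
    }
    print(max(candidates, key=candidates.get))  # -> "an experiment"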

