System for monitoring natural disasters using natural language processing in the social network Twitter

Author(s):  
Miguel Maldonado ◽  
Darwin Alulema ◽  
Derlin Morocho ◽  
Marida Proano


Author(s):  
Sarojini Yarramsetti ◽  
Anvar Shathik J ◽  
Renisha. P.S.

In today's digital world, sharing experiences, exploring knowledge, posting thoughts, and related social activities are common to every individual, and social networks such as Facebook and Twitter play a vital role in them. Many schemes for extracting sentiment features from social networks already exist, and researchers have worked in this domain for several years, but that work has largely been limited to estimating the opinions and sentiments expressed in the text of tweets and posts that users publish on a social network or other web interface. Many social networks also allow users to post voice tweets and voice messages, and these may contain harmful content as well as ordinary or important content. This paper proposes a new methodology, the Intensive Deep Learning based Voice Estimation Principle (IDLVEP), which identifies the content of voice messages and extracts features using Natural Language Processing (NLP). Combining deep learning with NLP yields an efficient and powerful data-processing model for identifying sentiment features in social media, and the hybrid design supports sentiment feature estimation for both text-based and voice-based tweets. The NLP component of IDLVEP extracts the spoken content of an input message and produces raw text; the deep learning component then classifies the message as harmful or normal. Incoming tweets are first divided into two categories, voice tweets and text tweets: voice tweets are transcribed by the NLP component, and the resulting text, together with native text tweets, is classified by the deep learning component. Social networks have two faces: they support development, but they can equally be exploited for harmful purposes. IDLVEP therefore identifies harmful content in user tweets and removes it intelligently using the proposed classification strategy. This paper concentrates on identifying sentiment features from user tweets and providing a harm-free social network environment for society.
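
For illustration, a minimal Python sketch of the two-branch routing described above (voice tweets transcribed to text, then classified as harmful or normal). The transcribe_voice stub, the toy training data, and the linear classifier standing in for the deep model are assumptions, not the authors' IDLVEP implementation.

```python
# Sketch of the two-branch tweet-screening pipeline described in the abstract.
# The speech-to-text step and the harmful/normal classifier are placeholders
# (hypothetical names), not the authors' IDLVEP components.
from dataclasses import dataclass
from typing import Optional

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline


@dataclass
class Tweet:
    text: Optional[str] = None      # present for text tweets
    audio: Optional[bytes] = None   # present for voice tweets


def transcribe_voice(audio: bytes) -> str:
    """Placeholder for the NLP speech-to-text step (plug in an ASR system)."""
    raise NotImplementedError("attach an automatic speech recognition model here")


# Toy training data standing in for a labelled tweet corpus.
train_texts = ["have a great day everyone", "you are wonderful",
               "I will hurt you", "threatening violent message"]
train_labels = ["normal", "normal", "harmful", "harmful"]

# A simple TF-IDF + logistic regression classifier stands in for the deep model.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)


def screen_tweet(tweet: Tweet) -> str:
    """Route voice tweets through transcription, then classify the text."""
    if tweet.audio is not None:
        text = transcribe_voice(tweet.audio)   # voice branch
    else:
        text = tweet.text or ""                # text branch
    return classifier.predict([text])[0]


print(screen_tweet(Tweet(text="have a great day everyone")))  # -> "normal"
```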


Author(s):  
Uma Maheswari Sadasivam ◽  
Nitin Ganesan

Fake news is the talk of the day, whether the topic is an election, the COVID-19 pandemic, or social unrest. Many social websites have started to fact-check the news and articles posted on them, because fake news creates confusion and chaos and misleads the community and society. In this cyber era, citizen journalism is increasingly common: citizens themselves collect, report, disseminate, and analyse news and information. This means anyone can publish news on social websites, which from the readers' point of view can lead to unreliable information. To keep every nation a safe place to live, hold fair and square elections, stop the spread of hatred based on race, religion, caste, or creed, maintain reliable information about COVID-19, and avoid social unrest, we need to keep a tab on fake news. This chapter presents a way to detect fake news using deep learning techniques and natural language processing.
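
As a rough illustration of the kind of pipeline such a detector might use (not the chapter's implementation), the sketch below trains a tiny Keras text classifier on invented headlines; the corpus, vocabulary size, and layer sizes are placeholders.

```python
# Minimal sketch of a deep-learning fake-news classifier of the kind the
# chapter describes; the tiny corpus and hyperparameters are illustrative only.
import tensorflow as tf

texts = ["official results confirm turnout figures",
         "miracle cure eliminates virus overnight, experts silenced",
         "health agency publishes vaccination schedule",
         "secret memo proves election was decided in advance"]
labels = [0, 1, 0, 1]  # 0 = credible, 1 = fake

# Map raw strings to padded integer sequences inside the model.
vectorizer = tf.keras.layers.TextVectorization(max_tokens=1000,
                                               output_sequence_length=16)
vectorizer.adapt(texts)

model = tf.keras.Sequential([
    vectorizer,
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(16)),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of "fake"
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(tf.constant(texts), tf.constant(labels), epochs=10, verbose=0)

print(model.predict(tf.constant(["experts silenced over miracle cure"]), verbose=0))
```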


2020 ◽  
Vol 12 (20) ◽  
pp. 8441
Author(s):  
Robert G. Boutilier ◽  
Kyle Bahr

Dealing with the social and political impacts of large complex projects requires monitoring and responding to concerns from an ever-evolving network of stakeholders. This paper describes the use of text analysis algorithms to identify stakeholders' concerns across the project life cycle. The social license (SL) concept has been used to monitor the level of social acceptance of a project. That acceptance can be assessed from the texts produced by stakeholders, in sources ranging from social media to personal interviews. The same texts also contain information on the substance of stakeholders' concerns. Until recently, extracting that information required manual coding by humans, a method too slow to be useful in time-sensitive projects. Using natural language processing algorithms, we designed a program that assesses the SL level and identifies stakeholders' concerns within a few hours. To validate the program, we compared it to human coding of interview texts from a Bolivian mining project from 2009 to 2018. The program's estimate of the annual average SL was significantly correlated with rating scale measures. The topics of concern identified by the program matched the most-mentioned categories defined by the human coders and identified the same temporal trends.
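
A rough sketch of the concern-identification half of such a program, using off-the-shelf topic modelling in scikit-learn; the stakeholder texts are invented and this is not the authors' program (which also scores the SL level itself, not attempted here).

```python
# Illustrative topic extraction over stakeholder texts: a small LDA model
# surfaces candidate "topics of concern". Texts and topic count are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

stakeholder_texts = [
    "the mine has contaminated our river and nobody consulted the community",
    "the company created jobs but dust from the trucks is a health problem",
    "we want a formal agreement on water monitoring before expansion",
    "royalty payments to the municipality have improved the local school",
]

# Document-term matrix, then LDA to group co-occurring terms into topics.
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(stakeholder_texts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"concern topic {i}: {top}")
```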


2014 ◽  
Vol 21 (1) ◽  
pp. 1-2
Author(s):  
Ruslan Mitkov

The Journal of Natural Language Engineering (JNLE) has enjoyed another very successful year. Two years after being accepted into the Thomson Reuters Citation Index and being indexed in many of its products (including both the Science and the Social Sciences editions of the Journal Citation Reports (JCR)), the journal has further established itself as a leading forum for high-quality articles covering all aspects of Natural Language Processing research, including, but not limited to, the engineering of natural language methods and applications. I am delighted to report an increased number of submissions, reaching a total of 92 between January and September 2014.


10.2196/21383 ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. e21383
Author(s):  
Vadim Osadchiy ◽  
Tommy Jiang ◽  
Jesse Nelson Mills ◽  
Sriram Venkata Eleswarapu

Background: Despite the results of the Testosterone Trials, physicians remain uncomfortable treating men with hypogonadism. Discouraged, men increasingly turn to social media to discuss medical concerns. Objective: The goal of the research was to apply natural language processing (NLP) techniques to social media posts for identification of themes of discussion regarding low testosterone and testosterone replacement therapy (TRT) in order to inform how physicians may better evaluate and counsel patients. Methods: We retrospectively extracted posts from the Reddit community r/Testosterone from December 2015 through May 2019. We applied an NLP technique called the meaning extraction method with principal component analysis (MEM/PCA) to computationally derive discussion themes. We then performed a prospective analysis of Twitter data (tweets) that contained the terms "low testosterone," "low T," and "testosterone replacement" from June through September 2019. Results: A total of 199,335 Reddit posts and 6659 tweets were analyzed. MEM/PCA revealed dominant themes of discussion: symptoms of hypogonadism, seeing a doctor, results of laboratory tests, derogatory comments and insults, TRT medications, and cardiovascular risk. More than 25% of Reddit posts contained the term "doctor," and more than 5% "urologist." Conclusions: This study represents the first NLP evaluation of the social media landscape surrounding hypogonadism and TRT. Although physicians traditionally limit their practices to within their clinic walls, the ubiquity of social media demands that physicians understand what patients discuss online. Physicians may do well to bring up online discussions during clinic consultations for low testosterone to pull back the curtain and dispel myths.
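
A bare-bones sketch of the MEM/PCA idea on invented posts: frequent content words become binary per-post indicators, and principal components over that matrix are read off as candidate themes. A full meaning extraction analysis typically also involves component rotation and manual labelling of themes, which are omitted here.

```python
# Sketch of the meaning extraction method with PCA (MEM/PCA) on toy posts:
# binary word-occurrence matrix -> principal components -> candidate themes.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import PCA

posts = [
    "my doctor ordered labs and my total testosterone came back low",
    "started trt injections and my doctor wants labs again in six weeks",
    "urologist says my testosterone is normal but the symptoms persist",
    "worried about cardiovascular risk on long term trt, asking my urologist",
]

# Binary indicators for content words appearing in at least two posts.
vec = CountVectorizer(binary=True, min_df=2, stop_words="english")
X = vec.fit_transform(posts).toarray()

pca = PCA(n_components=2).fit(X)
terms = vec.get_feature_names_out()
for i, comp in enumerate(pca.components_):
    top = [terms[j] for j in comp.argsort()[-3:][::-1]]
    print(f"theme {i}: {top}")
```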


2015 ◽  
Author(s):  
Αθανάσιος Παπαοικονόμου

This dissertation proposes techniques for social network analysis, with particular emphasis on networks in which users can express trust or distrust toward one another. The analysis of such trust graphs is an interesting problem with a wide range of applications, such as the analysis of geopolitical relations and the discovery of user communities. The first three chapters examine the problem of predicting one user's disposition toward another, drawing on techniques from three different fields. First, classical and widely used techniques from Social Network Analysis are applied to investigate the mechanisms by which positive and negative opinions propagate through the network. Next, we incorporate techniques from the field of biostatistics in order to analyse large social networks from a microscopic perspective. We then show, using deep learning techniques, that a trust graph can be "constructed" from data seemingly unrelated to that purpose, such as users' reviews of various products. In the final chapter, we present an algorithm for community detection in social networks that builds on recent advances in Natural Language Processing. The order of the chapters reflects the chronological order of the experiments I carried out, but each chapter is written so that it has no significant dependencies on the preceding ones and can be read on its own.


2019 ◽  
Vol 43 (4) ◽  
pp. 676-690
Author(s):  
Zehra Taskin ◽  
Umut Al

Purpose: With recent developments in information technologies, natural language processing (NLP) practices have made tasks in many areas easier and more practical, and nowadays, when big data are used in most research, NLP provides fast and easy methods for processing these data. The purpose of this paper is to identify subfields of library and information science (LIS) where NLP can be used and to provide a guide, based on bibliometrics and social network analyses, for researchers who intend to study this subject. Design/methodology/approach: Within the scope of this study, 6,607 publications in the field of LIS that involve NLP methods are examined and visualized with social network analysis methods. Findings: After evaluating the obtained results, the subject categories of the publications, the keywords frequently used in them, and the relationships between these words are revealed. Finally, the core journals and articles are classified thematically for researchers who work in the field of LIS and plan to apply NLP in their research. Originality/value: The results of this paper draw a general framework for the LIS field and guide researchers toward new techniques that may be useful in the field.
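
As an illustration of the kind of keyword co-occurrence network that underlies such a social network analysis (the records below are invented, not drawn from the 6,607 publications analysed in the paper), a short networkx sketch:

```python
# Toy keyword co-occurrence network: nodes are author keywords, edge weights
# count how many (hypothetical) publications mention the two keywords together.
from itertools import combinations
import networkx as nx

records = [
    ["natural language processing", "text mining", "digital libraries"],
    ["natural language processing", "information retrieval"],
    ["bibliometrics", "text mining", "information retrieval"],
    ["natural language processing", "bibliometrics"],
]

G = nx.Graph()
for keywords in records:
    for a, b in combinations(sorted(set(keywords)), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# Rank keywords by weighted degree as a rough importance measure.
for node, deg in sorted(G.degree(weight="weight"), key=lambda x: -x[1]):
    print(node, deg)
```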

