FRIDZ: A Framework for Real-time Identification of Disaster Zones

2019 ◽  
Author(s):  
Abhisek Chowdhury

Social media feeds are rapidly emerging as a novel avenue for the contribution and dissemination of geographic information. Among them, Twitter, a popular micro-blogging service, has recently gained tremendous attention for its real-time nature. For instance, during floods people tweet about the event, which enables prompt detection of flooding by observing the Twitter feed. In this paper, we propose a framework to investigate the real-time interplay between a catastrophic event and people's reaction to it, such as a flood and the tweets it generates, in order to identify disaster zones. We demonstrate our approach using tweets posted during a flood in the state of Bihar, India, in 2017 as a case study. We construct a classifier for semantic analysis of the tweets in order to classify them into flood and non-flood categories. Subsequently, we apply natural language processing methods to extract information on flood-affected areas and use elevation maps to identify potential disaster zones.
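As a rough illustration of the classification step described above, the following sketch trains a simple flood/non-flood tweet classifier. The file name, column names and model choice are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of a flood / non-flood tweet classifier of the kind described
# above; file path and column names are hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical labelled data: one tweet per row, label 1 = flood, 0 = non-flood.
tweets = pd.read_csv("labelled_tweets.csv")  # columns: text, label
X_train, X_test, y_train, y_test = train_test_split(
    tweets["text"], tweets["label"], test_size=0.2, random_state=42)

# Bag-of-words features with TF-IDF weighting.
vectorizer = TfidfVectorizer(max_features=10_000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Simple linear classifier; flood-classified tweets could then be passed on to
# location extraction and elevation-based filtering.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train_vec, y_train)
print(classification_report(y_test, clf.predict(X_test_vec)))
```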

2021 ◽  
pp. 016555152110077
Author(s):  
Sulong Zhou ◽  
Pengyu Kan ◽  
Qunying Huang ◽  
Janet Silbernagel

Natural disasters cause significant damage, casualties and economic losses. Twitter has been used to support prompt disaster response and management because people tend to communicate and spread information on public social media platforms during disaster events. To retrieve real-time situational awareness (SA) information from tweets, the most effective way to mine the text is natural language processing (NLP). Among advanced NLP models, supervised approaches can classify tweets into different categories to gain insight and leverage useful SA information from social media data. However, high-performing supervised models require domain knowledge to specify categories and involve costly labelling tasks. This research proposes a guided latent Dirichlet allocation (LDA) workflow to investigate temporal latent topics in tweets during a recent disaster event, the 2020 Hurricane Laura. Integrating prior knowledge, a coherence model, LDA topic visualisation and validation against official reports, our guided approach reveals that most tweets contained several latent topics during the 10-day period of Hurricane Laura. This result indicates that state-of-the-art supervised models have not fully utilised tweet information because they assign each tweet only a single label. In contrast, our model can not only identify emerging topics during different disaster events but also provide multilabel references for the classification schema. In addition, our results can help to quickly identify and extract SA information for responders, stakeholders and the general public so that they can adopt timely response strategies and allocate resources wisely during hurricane events.
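A workflow of this general shape can be approximated with a standard topic-modelling toolkit. The sketch below uses gensim's LdaModel and CoherenceModel on toy tokenised tweets; the seed-word guidance of the published workflow is not reproduced, and the data are invented for illustration.

```python
# Sketch of an LDA-plus-coherence workflow of the kind described above, using
# gensim on toy tokenised tweets; plain LDA stands in for the guided variant.
from gensim import corpora
from gensim.models import LdaModel, CoherenceModel

docs = [["hurricane", "laura", "evacuate", "shelter"],
        ["power", "outage", "lake", "charles"],
        ["donate", "relief", "red", "cross"]]  # toy tokenised tweets

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)

# Coherence score, used in the workflow above to assess the topic model.
coherence = CoherenceModel(model=lda, texts=docs,
                           dictionary=dictionary, coherence="c_v")
print("coherence:", coherence.get_coherence())

# Each tweet receives a distribution over topics rather than a single label.
for bow in corpus:
    print(lda.get_document_topics(bow))
```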


2018 ◽  
Vol 15 (1) ◽  
pp. 56-62
Author(s):  
Aleksandra Laucuka

Abstract Despite the initial function of hashtags as tools for sorting and aggregating information by topic, social media currently witnesses a diversity of uses diverging from that original purpose. The aim of this article is to investigate the communicative functions of hashtags through a combined approach of literature review, field study and case study. Different uses of hashtags were subjected to semantic analysis in order to disclose generalizable trends. As a result, ten communicative functions were identified: topic-marking, aggregation, socializing, excuse, irony, providing metadata, expressing attitudes, initiating movements, propaganda and brand marketing. These findings help to better understand modern online discourse and show that hashtags should be considered a meaningful part of the message. A limitation of this study is its restricted volume.


2020 ◽  
Vol 9 (2) ◽  
pp. 136
Author(s):  
Tengfei Yang ◽  
Jibo Xie ◽  
Guoqing Li ◽  
Naixia Mou ◽  
Cuiju Chen ◽  
...  

The abnormal change in the global climate has increased the chance of urban rainstorm disasters, which greatly threaten people's daily lives, especially public travel. Timely and effective disaster data sources and analysis methods are essential for disaster reduction. With the popularity of mobile devices and the development of network infrastructure, social media has attracted widespread attention as a new source of disaster data: its rich disaster information, near real-time transmission channels, and low-cost data production have been favored by many researchers. These researchers have used different methods to study disaster reduction based on the different dimensions of information contained in social media, including time, location and content. However, current research is not sufficient and rarely combines specific road condition information with public emotional information to detect traffic impact areas and assess their spatiotemporal influence. Thus, in this paper, we used various methods, including natural language processing and deep learning, to extract the fine-grained road condition information and public emotional information contained in social media text and to comprehensively detect and analyze traffic impact areas during a rainstorm disaster. Furthermore, we proposed a model to evaluate the spatiotemporal influence of these detected traffic impact areas. The heavy rainstorm event in Beijing, China, in 2018 was selected as a case study to verify the validity of the proposed disaster reduction method.
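A heavily simplified stand-in for the extraction step described above is sketched below: keyword rules map posts to road conditions, and a crude negativity count serves as a proxy for public emotion. The term lists and scoring are illustrative assumptions; the paper's deep-learning models are not reproduced here.

```python
# Much simplified stand-in for the road-condition and emotion extraction step
# described above; term lists and scoring are illustrative assumptions.
import re

ROAD_TERMS = {"flooded": "impassable", "waterlogged": "impassable",
              "closed": "closed", "traffic jam": "congested"}
NEGATIVE_WORDS = {"stuck", "terrible", "dangerous", "stranded", "help"}

def extract_road_condition(text: str):
    """Return (condition, sentiment) extracted from one post, or None."""
    text_lower = text.lower()
    for term, condition in ROAD_TERMS.items():
        if term in text_lower:
            # Crude sentiment: count negative words as a proxy for public emotion.
            negativity = sum(w in NEGATIVE_WORDS
                             for w in re.findall(r"\w+", text_lower))
            return condition, -negativity
    return None

print(extract_road_condition("Third Ring Road is flooded, cars stranded, please help!"))
# -> ('impassable', -2)
```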


Author(s):  
Helen Clough ◽  
Karen Foley

The Open University (UK) Library supports its distance-learning students with interactive, real-time events on social media. In this chapter the authors take a case study approach and concentrate on the examples of Facebook and Livestream to illustrate how live engagement events on social media have helped to build communities of learners in spaces they already occupy, raise the visibility of the library's services and resources with staff and students, and foster collaboration with other departments, while also being effective mechanisms for instruction. The chapter concludes with the library's plans for the future and recommendations for other academic libraries wishing to run live engagement events on social media.


2021 ◽  
Vol 72 (2) ◽  
pp. 319-329
Author(s):  
Aleksei Dobrov ◽  
Maria Smirnova

Abstract This article presents the current results of an ongoing study of the possibilities of fine-tuning automatic morphosyntactic and semantic annotation by improving the underlying formal grammar and ontology, using one Tibetan text as an example. The ultimate purpose of the work at this stage was to improve linguistic software developed for natural language processing and understanding, so as to achieve complete annotation of a specific text and a state of the formal model in which all linguistic phenomena observed in the text are explained. This purpose includes the following tasks: analysis of error cases in the annotation of the text from the corpus; elimination of these errors in automatic annotation; and development of the formal grammar and updating of the dictionaries. Along with morphosyntactic analysis, the current approach involves simultaneous semantic analysis. The article describes the semantic annotation of the corpus, required for grammar revision and development, which was carried out with the use of a computer ontology. The work is performed on one of the corpus texts, the grammatical poetic treatise Sum-cu-pa (7th century).
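A minimal sketch of the annotation-coverage loop implied above is given below, with a simple dictionary lookup standing in for the actual morphosyntactic analyser and formal grammar; all names and data are hypothetical.

```python
# Sketch of an annotation-coverage loop: tokens the current grammar and
# dictionary cannot analyse are collected so the formal model can be extended.
# The analyser and lexicon here are hypothetical placeholders.
from collections import Counter

def analyse(token: str, lexicon: dict) -> list:
    """Stand-in for the morphosyntactic analyser: return candidate analyses."""
    return lexicon.get(token, [])

lexicon = {"sum": ["NUM"], "cu": ["NUM"], "pa": ["NOMINALISER"]}
corpus_tokens = ["sum", "cu", "pa", "gsum"]  # toy tokenised text

unanalysed = Counter(t for t in corpus_tokens if not analyse(t, lexicon))
print("tokens needing grammar or dictionary updates:", unanalysed.most_common())
```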


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Jens A. de Bruijn ◽  
Hans de Moel ◽  
Brenden Jongman ◽  
Marleen C. de Ruiter ◽  
Jurjen Wagemaker ◽  
...  

Abstract Early event detection and response can significantly reduce the societal impact of floods. Currently, early warning systems rely on gauges, radar data, models and informal local sources. However, the scope and reliability of these systems are limited. Recently, the use of social media for detecting disasters has shown promising results, especially for earthquakes. Here, we present a new database for detecting floods in real time on a global scale using Twitter. The method was developed using 88 million tweets, from which we derived over 10,000 flood events (i.e., flooding occurring in a country or first-order administrative subdivision) across 176 countries in 11 languages in just over four years. Using strict parameters, validation shows that approximately 90% of the events were correctly detected. In countries where the first official language is included, our algorithm detected 63% of the events in the NatCatSERVICE disaster database at admin-1 level. Moreover, a large number of flood events not included in NatCatSERVICE were detected. All results are publicly available at www.globalfloodmonitor.org.
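The event-detection idea can be illustrated with a simple burst-detection sketch: a flood event is flagged for an administrative region when the count of flood-classified tweets in a time window rises well above that region's baseline. The thresholds and data structures below are assumptions, not the published method.

```python
# Illustrative burst-detection logic for a real-time flood event database;
# thresholds and baselines are assumptions for the sketch, not the paper's values.
from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
MIN_TWEETS = 10        # minimum flood tweets in the window before flagging
BURST_FACTOR = 5.0     # how far above the regional baseline counts must rise

windows = defaultdict(deque)           # region -> timestamps of recent flood tweets
baselines = defaultdict(lambda: 1.0)   # long-run average tweets per window

def register_tweet(region: str, ts: datetime) -> bool:
    """Record one flood-classified tweet and return True if an event is detected."""
    q = windows[region]
    q.append(ts)
    while q and ts - q[0] > WINDOW:    # drop tweets outside the time window
        q.popleft()
    return len(q) >= MIN_TWEETS and len(q) >= BURST_FACTOR * baselines[region]

print(register_tweet("IN-BR", datetime(2017, 8, 15, 12, 0)))  # False until the window fills
```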


Author(s):  
Deeptanshu Jha ◽  
Rahul Singh

Abstract
Motivation: Substance abuse and addiction constitute a significant contemporary health crisis. Modeling their epidemiology and designing effective interventions require real-time data analysis along with the means to contextualize addiction patterns across the individual-to-community scale. In this context, social media platforms have begun to receive significant attention as a novel source of real-time user-reported information. However, the ability of epidemiologists to use such information is significantly stymied by the lack of publicly available algorithms and software for addiction information extraction, analysis and modeling.
Results: SMARTS is a public, open-source, web-based application that addresses the aforementioned deficiency. SMARTS is designed to analyze data from two popular social media forums, Reddit and Twitter, and can be used to study the effects of various intoxicants, including opioids, weed, kratom, alcohol and cigarettes. The SMARTS software analyzes social media posts using natural language processing and machine learning to characterize drug use at both the individual and population levels. Included in SMARTS is a predictive modeling functionality that can, with high accuracy, identify individuals open to addiction recovery interventions. SMARTS also supports extraction, analysis and visualization of a number of key informational and demographic characteristics, including post topics and sentiment, drug- and recovery-term usage, geolocation and age. Finally, the distributions of these characteristics derived from a set of 170 097 drug users are provided as part of SMARTS and can be used by researchers as a reference.
Availability and implementation: The SMARTS web server and source code are available at http://haddock9.sfsu.edu/.
Contact: [email protected]
Supplementary information: Supplementary data are available at Bioinformatics online.
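As a hedged illustration of the kind of drug- and recovery-term profiling described above, the sketch below aggregates term usage across one user's posts; the term lists, scoring and interface are assumptions rather than the SMARTS implementation.

```python
# Sketch of per-user drug- and recovery-term profiling; the term lists and the
# "recovery_share" score are illustrative assumptions, not SMARTS code.
from collections import Counter
import re

DRUG_TERMS = {"opioid", "heroin", "kratom", "weed", "alcohol", "cigarette"}
RECOVERY_TERMS = {"sober", "rehab", "relapse", "recovery", "quit"}

def profile_user(posts: list) -> dict:
    """Aggregate drug- and recovery-term usage across one user's posts."""
    counts = Counter()
    for post in posts:
        for token in re.findall(r"[a-z]+", post.lower()):
            if token in DRUG_TERMS:
                counts["drug"] += 1
            elif token in RECOVERY_TERMS:
                counts["recovery"] += 1
    total = sum(counts.values()) or 1
    # Users with a high recovery share might be candidates for intervention
    # outreach in a downstream predictive model.
    return {"counts": dict(counts), "recovery_share": counts["recovery"] / total}

print(profile_user(["Trying to quit kratom, three days sober", "relapse sucks"]))
```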


Author(s):  
Erica Briscoe ◽  
Scott Appling ◽  
Edward Clarkson ◽  
Nikolay Lipskiy ◽  
James Tyson ◽  
...  

Objective: The objective of this analysis is to leverage recent advances in natural language processing (NLP) to develop new methods and system capabilities for processing social media (Twitter messages) for situational awareness (SA), syndromic surveillance (SS), and event-based surveillance (EBS). Specifically, we evaluated the use of human-in-the-loop semantic analysis to assist public health (PH) SA stakeholders in SS and EBS using massive amounts of publicly available social media data.
Introduction: Social media messages are often short, informal, and ungrammatical. They frequently involve text, images, audio, or video, which makes the identification of useful information difficult. This complexity reduces the efficacy of standard information extraction techniques [1]. However, recent advances in NLP, especially methods tailored to social media [2], have shown promise in improving real-time PH surveillance and emergency response [3]. Surveillance data derived from semantic analysis, combined with traditional surveillance processes, has the potential to improve event detection and characterization. The CDC Office of Public Health Preparedness and Response (OPHPR), Division of Emergency Operations (DEO) and the Georgia Tech Research Institute have collaborated on the advancement of PH SA through the development of new approaches to using semantic analysis for social media.
Methods: To understand how computational methods may benefit SS and EBS, we studied an iterative refinement process in which the data user actively cultivated text-based topics ("semantic culling") in a semi-automated SS process. This 'human-in-the-loop' process was critical for creating accurate and efficient extraction functions in large, dynamic volumes of data. The general process involved identifying a set of expert-supplied keywords, which were used to collect an initial set of social media messages. For the purposes of this analysis, researchers applied topic modeling to categorize related messages into clusters. Topic modeling uses statistical techniques to semantically cluster messages and automatically determine salient aggregations. A user then semantically culled messages according to their PH relevance. In June 2016, researchers collected 7,489 worldwide English-language Twitter messages (tweets) and compared three sampling methods: a baseline random sample (C1, n=2700), a keyword-based sample (C2, n=2689), and one gathered after semantically culling C2 topics of irrelevant messages (C3, n=2100). Researchers utilized a software tool, Luminoso Compass [4], to sample and perform topic modeling using its real-time modeling and Twitter integration features. For C2 and C3, researchers sampled tweets that the Luminoso service matched to both clinical and layman definitions of Rash, Gastro-Intestinal syndromes [5], and Zika-like symptoms. Layman terms were derived from clinical definitions from plain-language medical thesauri. ANOVA statistics were calculated using SPSS software. Post-hoc pairwise comparisons were completed using Tukey's honest significant difference (HSD) test.
Results: The ANOVA found the following mean relevance values: 3% (+/- 0.01%), 24% (+/- 6.6%) and 27% (+/- 9.4%), respectively, for C1, C2, and C3. Post-hoc pairwise comparison tests showed that the percentages of discovered messages related to the event tweets using the C2 and C3 methods were significantly higher than for the C1 method (random sampling) (p<0.05). This indicates that the human-in-the-loop approach provides benefits in filtering social media data for SS and EBS; notably, this increase is on the basis of a single iteration of semantic culling, and subsequent iterations could be expected to increase the benefits.
Conclusions: This work demonstrates the benefits of incorporating non-traditional data sources into SS and EBS. It was shown that an NLP-based extraction method in combination with human-in-the-loop semantic analysis may enhance the potential value of social media (Twitter) for SS and EBS. It also supports the claim that advanced analytical tools for processing non-traditional SA, SS, and EBS sources, including social media, have the potential to enhance disease detection, risk assessment, and decision support by reducing the time it takes to identify public health events.
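The semantic-culling loop can be illustrated with a small sketch: keyword-sampled messages are clustered by a topic model, a reviewer flags irrelevant topics, and only messages from the remaining topics move forward. The code below uses scikit-learn's LDA as a stand-in for the Luminoso-based workflow; the data and the flagged topic are assumptions.

```python
# Illustrative semantic-culling loop: topic-model the messages, let a reviewer
# flag irrelevant topics, and keep only messages from the remaining topics.
# scikit-learn's LDA stands in for the Luminoso-based workflow used above.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

messages = ["rash and fever after the trip",
            "new zika cases reported downtown",
            "stomach bug going around the office",
            "great deal on rash guards for surfing"]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(messages)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
doc_topics = lda.transform(counts).argmax(axis=1)

# A public-health reviewer inspects the topics and marks irrelevant ones
# (here topic 1 is assumed to be the off-topic "rash guard" cluster).
irrelevant_topics = {1}
culled = [m for m, t in zip(messages, doc_topics) if t not in irrelevant_topics]
print(culled)
```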

