Crowdsourcing Incident Information for Emergency Response using Open Data Sources in Smart Cities

Author(s):  
Fan Zuo ◽  
Abdullah Kurkcu ◽  
Kaan Ozbay ◽  
Jingqin Gao

Emergency events affect human security and safety as well as the integrity of local infrastructure. Emergency response officials must make decisions with limited information and time. During emergency events, people post updates, such as tweets, to social media networks containing information about their status, help requests, incident reports, and other useful details. In this research project, the Latent Dirichlet Allocation (LDA) model is used to automatically classify incident-related tweets and incident types from Twitter data. Unlike previous social media information models proposed in the related literature, LDA is an unsupervised learning model that can be applied directly, without prior knowledge or data preparation, to save time during emergencies. Twitter data, including messages and geolocation information from two recent events in New York City, the Chelsea explosion and Hurricane Sandy, are used as case studies to test the accuracy of the LDA model in extracting incident-related tweets and labeling them by incident type. Results showed that the model could extract and classify emergency events for both small- and large-scale events, and that the model's hyper-parameters can be shared across similar language environments to save model training time. Furthermore, the list of keywords generated by the model can serve as prior knowledge for emergency event classification and for training supervised classifiers such as support vector machines and recurrent neural networks.

2020 ◽  
Author(s):  
Tasmiah Nuzhath ◽  
Samia Tasnim ◽  
Rahul Kumar Sanjwal ◽  
Nusrat Fahmida Trisha ◽  
Mariya Rahman ◽  
...  

Background: The coronavirus disease (COVID-19) pandemic has caused a significant burden of mortality and morbidity. A vaccine will be the most effective global preventive strategy to end the pandemic. Studies have maintained that exposure to negative sentiments related to vaccination on social media increases vaccine hesitancy and refusal. Despite the influence social media has on vaccination behavior, there is a lack of studies exploring the public's exposure to misinformation, conspiracy theories, and concerns on Twitter regarding a potential COVID-19 vaccination. Objective: The study aims to identify the major thematic areas about a potential COVID-19 vaccination based on the contents of Twitter data. Method: We retrieved 1,286,659 publicly available tweets posted between July 19, 2020, and August 19, 2020, using the Twint package. Following the extraction, we used Latent Dirichlet Allocation for topic modelling and identified 20 topics discussed in the tweets. We selected 4,868 tweets with the highest probability of belonging to a specific cluster and manually labeled them as positive, negative, neutral, or irrelevant. The negative tweets were further assigned to a theme and subtheme based on their content. Results: The negative tweets were further categorized into 7 major themes: "safety and effectiveness," "misinformation," "conspiracy theories," "mistrust of scientists and governments," "lack of intent to get a COVID-19 vaccine," "freedom of choice," and "religious beliefs." Negative tweets predominantly consisted of misleading statements (n=424) that immunization against coronavirus is unnecessary because the survival rate is high. The second most prevalent theme comprised tweets expressing safety and effectiveness concerns (n=276) about the side effects of a potential vaccine developed at unprecedented speed.
Conclusion: Our findings suggest a need to formulate a large-scale vaccine communication plan that will address the safety concerns and debunk the misinformation and conspiracy theories spreading across social media platforms, increasing the public's acceptance of a COVID-19 vaccination.
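The selection step in this study, picking the tweets most strongly associated with each of the 20 LDA topics for manual labeling, can be sketched with a synthetic document-topic matrix. The Dirichlet-sampled matrix and top-5 cutoff below are stand-ins for the study's real outputs:

```python
import numpy as np

# Hypothetical document-topic matrix: rows are tweets, columns are
# topic probabilities (the study fit 20 topics over ~1.29M tweets).
rng = np.random.default_rng(0)
doc_topic = rng.dirichlet(np.ones(20), size=1000)

# For each topic, keep the tweets most strongly assigned to it,
# mimicking the selection of high-probability tweets for manual labeling.
top_n = 5
selected = {}
for k in range(doc_topic.shape[1]):
    order = np.argsort(doc_topic[:, k])[::-1]   # descending by probability
    selected[k] = order[:top_n].tolist()

print(len(selected), len(selected[0]))
```

The selected tweet indices per topic would then go to human annotators for the positive/negative/neutral/irrelevant labeling described above.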


2021 ◽  
Author(s):  
Myeong Gyu Kim ◽  
Jae Hyun Kim ◽  
Kyungim Kim

BACKGROUND Garlic-related misinformation is prevalent whenever a virus outbreak occurs. Once again, with the outbreak of coronavirus disease 2019 (COVID-19), garlic-related misinformation is spreading through social media sites, including Twitter. Machine learning-based approaches can be used to detect misinformation from the vast number of tweets. OBJECTIVE This study aimed to develop machine learning algorithms for detecting misinformation about garlic and COVID-19 on Twitter. METHODS This study used 5,929 original tweets mentioning garlic and COVID-19. Tweets were manually labeled as misinformation, accurate information, or others. We tested the following algorithms: k-nearest neighbors; random forest; support vector machine (SVM) with linear, radial, and polynomial kernels; and neural network. Features for machine learning included user-based features (verified account, user type, number of followers, and follower rate) and text-based features (uniform resource locator, negation, sentiment score, Latent Dirichlet Allocation topic probability, number of retweets, and number of favorites). The model with the highest accuracy on the training dataset (70% of the overall dataset) was tested using a test dataset (30% of the overall dataset). Predictive performance was measured using overall accuracy, sensitivity, specificity, and balanced accuracy. RESULTS The SVM with the polynomial kernel showed the highest accuracy of 0.670. The model also showed a balanced accuracy of 0.757, sensitivity of 0.819, and specificity of 0.696 for misinformation. Important features in the misinformation and accurate information classes included topic 4 (common myths), topic 13 (garlic-specific myths), number of followers, topic 11 (misinformation on social media), and follower rate. Topic 3 (cooking recipes) was the most important feature in the others class. CONCLUSIONS Our SVM model showed good performance in detecting misinformation.
The results of our study will help detect misinformation related to garlic and COVID-19. It could also be applied to prevent misinformation related to dietary supplements in the event of a future outbreak of a disease other than COVID-19.
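A minimal sketch of the best-performing setup, an SVM with a polynomial kernel evaluated by balanced accuracy on a 70/30 split, using synthetic stand-ins for the study's user- and text-based features. The feature dimensions and the labeling rule are invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for features like follower count, sentiment score,
# LDA topic probability, retweets, etc. (real features are listed above)
n = 600
X = rng.normal(size=(n, 6))
# Hypothetical rule linking two features to a misinformation label
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

# 70% train / 30% test, mirroring the study's split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="poly", degree=3)   # polynomial kernel, as in the study
clf.fit(X_tr, y_tr)
bal_acc = balanced_accuracy_score(y_te, clf.predict(X_te))
print(round(bal_acc, 3))
```

Balanced accuracy averages per-class recall, which is why the study reports it alongside overall accuracy for the imbalanced misinformation class.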


Shock Waves ◽  
2020 ◽  
Vol 30 (6) ◽  
pp. 671-675 ◽  
Author(s):  
S. E. Rigby ◽  
T. J. Lodge ◽  
S. Alotaibi ◽  
A. D. Barr ◽  
S. D. Clarke ◽  
...  

Abstract Rapid, accurate assessment of the yield of a large-scale urban explosion will assist in implementing emergency response plans, will facilitate better estimates of areas at risk of high damage and casualties, and will provide policy makers and the public with more accurate information about the event. On 4 August 2020, an explosion occurred in the Port of Beirut, Lebanon. Shortly afterwards, a number of videos were posted to social media showing the moment of detonation and propagation of the resulting blast wave. In this article, we present a method to rapidly calculate explosive yield based on analysis of 16 videos with a clear line-of-sight to the explosion. The time of arrival of the blast is estimated at 38 distinct positions, and the results are correlated with well-known empirical laws in order to estimate explosive yield. The best estimate and reasonable upper limit of the 2020 Beirut explosion determined from this method are 0.50 kt TNT and 1.12 kt TNT, respectively.
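The yield-estimation idea, matching observed blast arrival times against a scaled empirical relation, can be sketched as a one-parameter fit. The scaled time-of-arrival power law and its coefficients below are placeholders for illustration only, not the empirical laws the paper actually uses:

```python
import numpy as np

# Placeholder for an empirical scaled time-of-arrival curve,
# t_a / W^(1/3) = f(R / W^(1/3)) (Hopkinson-Cranz scaling).
# The power law and coefficients here are hypothetical.
def scaled_toa(z):
    return 0.34 * z ** 1.4

def arrival_time(r, w):
    z = r / w ** (1 / 3)           # scaled distance
    return w ** (1 / 3) * scaled_toa(z)

# Synthetic "video" observations generated with a true yield of 0.5 kt,
# at 38 distinct positions, as in the paper's analysis
true_w = 0.5
radii = np.linspace(200.0, 2000.0, 38)
t_obs = arrival_time(radii, true_w)

# One-parameter grid search over candidate yields, minimising misfit
candidates = np.linspace(0.1, 2.0, 400)
errors = [np.sum((arrival_time(radii, w) - t_obs) ** 2) for w in candidates]
best_w = candidates[int(np.argmin(errors))]
print(round(best_w, 2))
```

With real, noisy arrival times the misfit curve would be flatter, which is why the paper reports both a best estimate and a reasonable upper limit.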


2020 ◽  
pp. 927-965 ◽
Author(s):  
Kimberly Young-McLear ◽  
Thomas A. Mazzuchi ◽  
Shahram Sarkani

This chapter provides readers with an overview of how social media has enhanced large-scale natural disaster response at the Department of Homeland Security and its partners. The authors of this chapter present the history of the Federal Emergency Management Agency and how its successes and failures have shaped how the Department of Homeland Security has managed trends in increased community participation and information technology. Concepts from Systems Engineering frame the discussion around resilience engineering, network analysis, information systems, and human systems integration as they pertain to how social media can be integrated more effectively in large-scale disaster response. Examples of social media in disaster response are presented including a more in-depth case study on the use of social media during the 2012 Hurricane Sandy response. The chapter concludes with a proposed framework of a decision support system which integrates the benefits of social media while mitigating its risks.


2016 ◽  
Vol 25 (4) ◽  
pp. 550-563 ◽  
Author(s):  
Karlene S. Tipler ◽  
Ruth A. Tarrant ◽  
David M. Johnston ◽  
Keith F. Tuffin

Purpose – The purpose of this paper is to identify lessons learned by schools from their involvement in the 2012 New Zealand ShakeOut nationwide earthquake drill. Design/methodology/approach – The results from a survey conducted with 514 schools were collated to identify the emergency preparedness lessons learned by schools through their participation in the ShakeOut exercise. Findings – Key findings indicated that: schools were likely to do more than the minimum when presented with a range of specific emergency preparedness activities; drills for emergency events require specific achievement objectives to be identified in order to be most effective in preparing schools; and large-scale initiatives, such as the ShakeOut exercise, encourage schools and students to engage in emergency preparedness activities. Practical implications – Based on the findings, six recommendations are made to assist schools to develop effective emergency response procedures. Originality/value – The present study contributes to the ongoing efforts of emergency management practitioners and academics to enhance the efficacy of school-based preparedness activities and, ultimately, to increase overall community resilience.


Author(s):  
S. Shen ◽  
T. Zhang ◽  
Y. Zhao ◽  
Z. Wang ◽  
F. Qian

Abstract. Benggang are characterized by deeply incised slopes of various shapes and depressions on the vast weathered crust slopes of southern China. Their gully heads continuously collapse and erode, forming chair-like erosion landforms. Benggang develop rapidly and produce large amounts of erosion, damaging land resources, destroying basic farmland, and degrading the ecological environment. To study and manage Benggang, the primary task is to locate them. Traditional methods rely on local in-situ investigations, which are not only labour-intensive but also inefficient, and cannot meet the needs of large-scale Benggang surveys. This paper proposes a method for automatic Benggang recognition based on an Ultra-High Resolution (UHR) DOM (Digital Orthophoto Map) and DSM (Digital Surface Model) obtained from a UAV (Unmanned Aerial Vehicle) survey. The method adopts a Bag of Visual-Topographical Words (BoV-TW) model: local features extracted from the DOM and DSM are represented with BoV-TW and fused by Latent Dirichlet Allocation (LDA). Finally, a Support Vector Machine (SVM) is adopted as a supervised classifier to achieve high-precision automatic Benggang recognition. Experimental results show that the total accuracy of our method remains at about 95%, with recall and precision above 80% (the highest being 97.22% and 94.44%, respectively), significantly higher than methods using only DOM local features or only BoV-TW.
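The pipeline described here (local features → visual-word histograms → LDA fusion → supervised SVM) can be sketched on synthetic patch descriptors. The vocabulary size, topic count, and two-class synthetic data below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Stand-ins for local descriptors of image (DOM) and terrain (DSM)
# patches; two synthetic classes ("Benggang" vs. background).
def make_patch(label):
    center = 1.0 if label else -1.0
    return rng.normal(center, 1.0, size=(30, 8))   # 30 descriptors per patch

labels = np.array([i % 2 for i in range(80)])
patches = [make_patch(l) for l in labels]

# 1) Quantise all descriptors into a visual-word vocabulary
kmeans = KMeans(n_clusters=16, n_init=5, random_state=0)
kmeans.fit(np.vstack(patches))

# 2) Bag-of-words histogram per patch
def histogram(p):
    return np.bincount(kmeans.predict(p), minlength=16)

H = np.array([histogram(p) for p in patches])

# 3) Fuse histograms into compact topic vectors with LDA
lda = LatentDirichletAllocation(n_components=4, random_state=0)
T = lda.fit_transform(H)

# 4) Supervised SVM on the topic representation
clf = SVC().fit(T[:60], labels[:60])
acc = accuracy_score(labels[60:], clf.predict(T[60:]))
print(round(acc, 2))
```

The LDA step compresses the fused visual-topographical histograms into a low-dimensional representation before the supervised classifier, mirroring the fusion role it plays in the paper.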


Human relationships today are increasingly maintained through social media networks, and traditional modes of association are becoming obsolete. To stay connected, share ideas, and exchange knowledge, we use social media networking sites such as Twitter, Facebook, and LinkedIn. On Twitter, users share their opinions, interests, and knowledge with others through messages. At the same time, some users misguide the genuine users. The genuine users are also called solicited users, and the users who misguide them are called spammers. Spammers post unwanted information to non-spam users, who may retweet it to others and follow the spammers. To counter these spam messages, we propose a methodology based on machine learning algorithms. Our approach uses a set of content-based features. The spam detection model employs the Support Vector Machine (SVM) algorithm and the Naive Bayes classification algorithm, and model performance is measured using the precision, recall, and F-measure metrics.
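A minimal sketch of the proposed setup: training both a Naive Bayes and an SVM classifier on content-based (bag-of-words) features and reporting precision, recall, and F-measure. The toy tweets and labels are invented for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import precision_recall_fscore_support

# Toy tweets; 1 = spam, 0 = legitimate (illustrative labels only)
texts = [
    "win a free iphone click this link now",
    "limited offer claim your prize today",
    "free followers visit our site",
    "click here for cheap loans guaranteed",
    "meeting moved to 3pm see you there",
    "great article on machine learning thanks",
    "happy birthday hope you have a good one",
    "the weather in town is lovely today",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]

# Content-based features via bag-of-words, then each classifier in turn
for model in (MultinomialNB(), LinearSVC()):
    clf = make_pipeline(CountVectorizer(), model)
    clf.fit(texts, labels)
    pred = clf.predict(texts)
    p, r, f, _ = precision_recall_fscore_support(
        labels, pred, average="binary")
    print(type(model).__name__, round(p, 2), round(r, 2), round(f, 2))
```

A real evaluation would of course score held-out tweets rather than the training set; this sketch only wires the two classifiers to the three metrics named above.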


2018 ◽  
Vol 46 (9) ◽  
pp. 1724-1740 ◽
Author(s):  
Travis R Meyer ◽  
Daniel Balagué ◽  
Miguel Camacho-Collados ◽  
Hao Li ◽  
Katie Khuu ◽  
...  

Gaining a complete picture of the activity in a city using vast data sources is challenging yet potentially very valuable. One such source of data is Twitter, which generates millions of short, spatio-temporally localized messages that, as a collection, carry information on city regions and many forms of city activity. The quantity of data, however, necessitates summarization in a way that makes consumption by an observer efficient, accurate, and comprehensive. We present a two-step process for analyzing geotagged Twitter data within a localized urban environment. The first step involves an efficient form of latent Dirichlet allocation, using expectation maximization, for topic content summarization of the text information in the tweets. The second step involves spatial and temporal analysis of information within each topic using two complementary metrics. These proposed metrics characterize the distributional properties of tweets in time and space for all topics. We integrate the second step into a graphical user interface that enables the user to adeptly navigate through the space of hundreds of topics. We present results of a case study of the city of Madrid, Spain, for the year 2011, in which both large-scale protests and elections occurred. Our data analysis methods identify these important events, as well as other classes of more mundane routine activity and their associated locations in Madrid.
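The second step, distributional metrics over each topic's tweets in space and time, can be sketched with a synthetic document-topic matrix. The two metrics below (topic-weighted standard distance and hourly entropy) are plausible stand-ins, not necessarily the paper's exact definitions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical output of the first step: a document-topic matrix for
# geotagged tweets, plus each tweet's (x, y) location and hour of day.
n_tweets, n_topics = 500, 5
theta = rng.dirichlet(np.ones(n_topics), size=n_tweets)
xy = rng.normal(size=(n_tweets, 2))
hour = rng.integers(0, 24, size=n_tweets)

# Spatial metric: topic-weighted standard distance from the weighted
# centroid (small values = geographically concentrated topic).
def standard_distance(theta_k):
    w = theta_k / theta_k.sum()
    centroid = (w[:, None] * xy).sum(axis=0)
    return float(np.sqrt((w * ((xy - centroid) ** 2).sum(axis=1)).sum()))

# Temporal metric: entropy of the topic's hourly activity distribution
# (small values = activity concentrated in a few hours of the day).
def hourly_entropy(theta_k):
    mass = np.bincount(hour, weights=theta_k, minlength=24)
    p = mass / mass.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

for k in range(n_topics):
    print(k, round(standard_distance(theta[:, k]), 3),
          round(hourly_entropy(theta[:, k]), 3))
```

Topics whose metric values deviate sharply from the bulk (spatially tight or temporally bursty) are exactly the protest- and election-like events a summarization interface would surface.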

