A TASK-DRIVEN DISASTER DATA LINK APPROACH

Author(s):  
L. Y. Qiu ◽  
Q. Zhu ◽  
J. Y. Gu ◽  
Z. Q. Du

With the rapid development of sensor networks and Earth observation technology, a large quantity of disaster-related data is available, such as remotely sensed data, historic data, case data, simulation data, and disaster products. However, efficient data management and service has become an increasingly serious problem due to the variety of tasks and the heterogeneity of the data. For emergency task-oriented applications, data searching mainly relies on human experience applied to simple metadata indexes, whose high time consumption and low accuracy cannot satisfy the velocity and veracity requirements of disaster products. In this paper, a task-oriented linking method is proposed for efficient disaster data management and intelligent service, with the objectives of 1) putting forward ontologies of disaster tasks and data to unify the different semantics of multi-source information, 2) identifying the semantic mapping from emergency tasks to multiple sources on the basis of the uniform description in 1), and 3) linking task-related data automatically and calculating the degree of correlation between each data set and a target task. The method breaks through the traditional static management of disaster data and establishes a basis for intelligent retrieval and active push of disaster information. The case study presented in this paper illustrates the use of the method with a flood emergency relief task.
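As an illustration only (not the authors' implementation), the sketch below shows how a task ontology might be mapped to candidate data sources through shared concepts; the task names, concept sets, and data-source entries are hypothetical.

```python
# Illustrative sketch: mapping an emergency task to candidate disaster data
# sources via shared ontology concepts. All task names, concepts, and data
# sources below are hypothetical examples, not the authors' ontology.

# Concepts attached to each emergency task in a toy task ontology.
TASK_ONTOLOGY = {
    "flood_emergency_relief": {"flood_extent", "population", "shelter", "road_network"},
    "earthquake_damage_assessment": {"building_damage", "population", "aftershock"},
}

# Concepts attached to each registered data source in a toy data ontology.
DATA_ONTOLOGY = {
    "sar_flood_map": {"flood_extent", "water_level"},
    "census_grid": {"population"},
    "osm_roads": {"road_network", "bridge"},
    "seismic_catalog": {"aftershock", "magnitude"},
}

def map_task_to_sources(task: str) -> list[str]:
    """Return data sources sharing at least one concept with the task."""
    task_concepts = TASK_ONTOLOGY[task]
    return [name for name, concepts in DATA_ONTOLOGY.items()
            if task_concepts & concepts]

if __name__ == "__main__":
    print(map_task_to_sources("flood_emergency_relief"))
    # ['sar_flood_map', 'census_grid', 'osm_roads']
```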

Author(s):  
Q. Linyao ◽  
D. Zhiqiang ◽  
Z. Qing

With the rapid development of sensor networks and Earth observation technology, a large quantity of disaster-related data is available, such as remotely sensed data, historic data, case data, simulated data, and disaster products. However, efficient data management and service has become increasingly difficult due to the variety of tasks and the heterogeneity of the data. For emergency task-oriented applications, data searches primarily rely on human experience applied to simple metadata indices, whose high time consumption and low accuracy cannot satisfy the speed and veracity requirements for disaster products. In this paper, a task-oriented correlation method is proposed for efficient disaster data management and intelligent service, with the objectives of 1) putting forward a disaster task ontology and a data ontology to unify the different semantics of multi-source information, 2) identifying the semantic mapping from emergency tasks to multiple data sources on the basis of the uniform description in 1), and 3) linking task-related data automatically and calculating the correlation between each data set and a given task. The method goes beyond the traditional static management of disaster data and establishes a basis for intelligent retrieval and active dissemination of disaster information. The case study presented in this paper illustrates the use of the method on an example flood emergency relief task.
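To complement the mapping idea sketched earlier, here is a minimal, hypothetical way to score how strongly a data set correlates with a task, using a simple Jaccard overlap between concept sets; the paper's actual correlation measure is not specified in the abstract, so this is only an illustration.

```python
# Hypothetical correlation score between a task and a data set, based on
# Jaccard similarity of their ontology concepts. This is an illustration,
# not the correlation measure defined in the paper.

def correlation_degree(task_concepts: set[str], data_concepts: set[str]) -> float:
    """Jaccard overlap in [0, 1]; higher means more task-relevant data."""
    if not task_concepts or not data_concepts:
        return 0.0
    shared = task_concepts & data_concepts
    return len(shared) / len(task_concepts | data_concepts)

# Example: a flood relief task versus a SAR-derived flood map product.
task = {"flood_extent", "population", "shelter"}
data = {"flood_extent", "water_level"}
print(round(correlation_degree(task, data), 2))  # 0.25
```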


Big Data ◽  
2016 ◽  
pp. 588-614 ◽  
Author(s):  
Katarina Grolinger ◽  
Emna Mezghani ◽  
Miriam A. M. Capretz ◽  
Ernesto Exposito

Decision-making in disaster management requires information gathering, sharing, and integration by means of collaboration on a global scale and across governments, industries, and communities. A large volume of heterogeneous data is available; however, current data management solutions offer few or no integration capabilities and limited potential for collaboration. Moreover, recent advances in NoSQL, cloud computing, and Big Data open the door for new solutions in disaster data management. This chapter presents a Knowledge as a Service (KaaS) framework for disaster cloud data management (Disaster-CDM), with the objectives of facilitating information gathering and sharing, storing large amounts of disaster-related data, facilitating search, and supporting interoperability and integration. In the Disaster-CDM approach, NoSQL data stores provide storage reliability and scalability, while a service-oriented architecture achieves flexibility and extensibility. The contribution of Disaster-CDM is demonstrated through its integration capabilities, using full-text search and querying services as examples.
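The following is a minimal sketch of the kind of full-text search service the chapter describes, using a plain-Python inverted index over document text; it illustrates the idea only and is not the Disaster-CDM implementation, which relies on NoSQL data stores and a service-oriented architecture.

```python
# Illustrative full-text indexing and search over disaster-related documents.
# A real Disaster-CDM deployment would back this with a NoSQL store and expose
# it as a service; this toy version only shows the indexing/search idea.
from collections import defaultdict

def build_index(documents: dict[str, str]) -> dict[str, set[str]]:
    """Map each lowercased token to the set of document ids containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def search(index: dict[str, set[str]], query: str) -> set[str]:
    """Return ids of documents containing every query token (AND semantics)."""
    tokens = query.lower().split()
    if not tokens:
        return set()
    results = index.get(tokens[0], set()).copy()
    for token in tokens[1:]:
        results &= index.get(token, set())
    return results

docs = {
    "plan_01": "flood evacuation plan for riverside district",
    "report_07": "post earthquake damage report",
}
idx = build_index(docs)
print(search(idx, "flood plan"))  # {'plan_01'}
```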


2013 ◽  
Vol 462-463 ◽  
pp. 405-409 ◽  
Author(s):  
Yi Qi Shao ◽  
Ren Kui Liu ◽  
Fu Tian Wang ◽  
Ming Dian Chen

With the rapid development of high-speed railways, equipment life-cycle management data are generated at a large scale throughout production, operation, and maintenance, falling within the scope of Big Data. There is broad recognition of the value of these data and of the information obtained by analyzing them. The exponential growth in the amount of railway-related data means that revolutionary measures are needed for data management, analysis, and accessibility, and the promise of data-driven decision-making is now being recognized broadly. How to store big data efficiently, reliably, and cheaply is an important research topic. This paper proposes a framework for managing high-speed railway equipment data, in which cloud computing provides a feasible technical solution, combined with the MapReduce programming model on the Hadoop platform. The framework takes into account the characteristics of the data and the processing demands of high-speed railway equipment management. Finally, we summarize the challenges and opportunities of Big Data for applications on China's railways and point out that much work remains to be done.
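As a rough illustration of the MapReduce idea the paper builds on (the actual framework runs on Hadoop), the sketch below simulates the map and reduce phases in plain Python to count maintenance events per equipment type; the record format is invented for the example.

```python
# Toy simulation of a MapReduce job over high-speed railway equipment records.
# On Hadoop the map and reduce functions would run distributed across the
# cluster; here they run locally. Record contents are made up for illustration.
from collections import defaultdict

records = [
    {"equipment": "turnout", "event": "maintenance"},
    {"equipment": "catenary", "event": "inspection"},
    {"equipment": "turnout", "event": "maintenance"},
]

def map_phase(record: dict):
    """Emit (equipment type, 1) for each maintenance event, else nothing."""
    return (record["equipment"], 1) if record["event"] == "maintenance" else None

def reduce_phase(pairs):
    """Sum the counts emitted for each equipment type."""
    totals: dict[str, int] = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

mapped = [kv for kv in (map_phase(r) for r in records) if kv is not None]
print(reduce_phase(mapped))  # {'turnout': 2}
```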


Author(s):  
Nikifor Ostanin

The coastal zone of the eastern Gulf of Finland is subject to substantial natural and anthropogenic impact, with abrasion and accumulation being the predominant processes. Because some coastal protection structures are old and ruined, monitoring and coastal management are pressing problems. Remotely sensed data are an important component of the geospatial information used in coastal environment research. The rapid development of modern satellite remote sensing techniques and data processing algorithms has made such data essential for monitoring and management. Multispectral imagers on modern high-resolution satellites make it possible to perform advanced image processing, such as relative water-depth estimation, sea-bottom classification, and detection of changes in the shallow-water environment. Within the framework of a project to develop a new coast protection plan for the Kurortny District of St. Petersburg, a series of archival and modern satellite images was collected and analyzed. As a result, several schemes of the underwater parts of the coastal zone and relative bathymetry schemes for the key areas were produced. The comparative analysis of multi-temporal images allows us to reveal trends of environmental change in the study areas. This information, compared with field observations, shows that remotely sensed data are useful and efficient for geospatial planning and the development of a new coast protection scheme.
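One commonly used approach to relative water-depth estimation from multispectral imagery is a log-ratio of the blue and green bands (after Stumpf et al., 2003); the sketch below applies that ratio to synthetic reflectance values and is not tied to the specific imagery or processing chain used in this study.

```python
# Relative (uncalibrated) water depth from a blue/green log-ratio transform,
# in the spirit of Stumpf et al. (2003). Values are only proportional to depth;
# calibration against soundings is needed for absolute bathymetry.
import numpy as np

def relative_depth(blue: np.ndarray, green: np.ndarray, n: float = 1000.0) -> np.ndarray:
    """Return ln(n * blue) / ln(n * green); larger values suggest deeper water."""
    return np.log(n * blue) / np.log(n * green)

# Synthetic water-leaving reflectance values for two pixels (made-up numbers).
blue = np.array([0.012, 0.020])
green = np.array([0.030, 0.025])
print(relative_depth(blue, green))
```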


2021 ◽  
Vol 13 (5) ◽  
pp. 124
Author(s):  
Jiseong Son ◽  
Chul-Su Lim ◽  
Hyoung-Seop Shim ◽  
Ji-Sun Kang

Despite the development of various technologies and systems that use artificial intelligence (AI) to solve disaster-related problems, difficult challenges are still being encountered. Data are the foundation for solving diverse disaster problems using AI, big data analysis, and related methods, so we must focus on these data. Disaster data are domain-specific by disaster type, heterogeneous, and lack interoperability. In particular, open data related to disasters raise several issues: the source and format of the data differ because the data are collected by different organizations, and the vocabularies used in each domain are inconsistent. This study proposes a knowledge graph to resolve the heterogeneity among various disaster data and to provide interoperability among domains. Among disaster domains, we describe a knowledge graph for flooding disasters built from Korean open datasets and cross-domain knowledge graphs. Furthermore, the proposed knowledge graph is used to assist in solving and managing disaster problems.
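As a minimal illustration of the knowledge-graph idea (the entities and predicates below are hypothetical, not taken from the Korean open datasets used in the paper), flood-related facts can be represented as subject-predicate-object triples and queried by pattern matching:

```python
# Toy triple store illustrating a flooding-disaster knowledge graph.
# Entity and predicate names are invented for illustration only.
triples = [
    ("flood:Event_2020_08", "occurredIn", "region:Seoul"),
    ("flood:Event_2020_08", "hadRainfall_mm", "350"),
    ("region:Seoul", "hasShelter", "shelter:Gangnam_01"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Which shelters are linked to the region where the flood occurred?
region = query("flood:Event_2020_08", "occurredIn")[0][2]
print(query(region, "hasShelter"))  # [('region:Seoul', 'hasShelter', 'shelter:Gangnam_01')]
```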


Mathematics ◽  
2021 ◽  
Vol 9 (12) ◽  
pp. 1448
Author(s):  
Xuan Liu ◽  
Jianbao Chen

Along with the rapid development of geographic information systems, high-dimensional spatially heterogeneous data have emerged, bringing theoretical and computational challenges to statistical modeling and analysis. As a result, effective dimensionality reduction and spatial effect identification have become very important. This paper focuses on variable selection in the spatial autoregressive model with autoregressive disturbances (SARAR), which accommodates a more comprehensive spatial effect. The variable selection procedure is based on the so-called penalized quasi-likelihood approach. Under suitable regularity conditions, we obtain the rate of convergence and the asymptotic normality of the estimators. The theoretical results ensure that the proposed method can effectively identify the spatial effects of the dependent variable, detect spatial heterogeneity in the error terms, reduce the dimension, and estimate the unknown parameters simultaneously. Based on a step-by-step transformation, a feasible iterative algorithm is developed to realize spatial effect identification, variable selection, and parameter estimation. In finite-sample settings, Monte Carlo studies and a real data analysis demonstrate that the proposed penalized method performs well and is consistent with the theoretical results.
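For context, the SARAR specification referred to above and a generic penalized quasi-likelihood objective can be sketched as follows; here W_1 and W_2 are spatial weight matrices, and the concrete penalty p_τ and tuning used in the paper are not reproduced (SCAD-type penalties are a common choice).

```latex
% SARAR: spatial lag in the response and autoregressive disturbances.
\[
\begin{aligned}
  y &= \lambda W_1 y + X\beta + u,\\
  u &= \rho W_2 u + \varepsilon, \qquad \varepsilon \sim (0,\ \sigma^2 I_n).
\end{aligned}
\]

% Penalized quasi-likelihood: quasi-log-likelihood minus a generic penalty
% p_tau that shrinks small coefficients toward zero.
\[
Q(\theta) \;=\; \ell_n(\lambda,\rho,\beta,\sigma^2)
          \;-\; n \sum_{j=1}^{p} p_{\tau}\bigl(|\beta_j|\bigr),
\qquad \theta = (\lambda,\rho,\beta^{\top},\sigma^2)^{\top}.
\]
```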


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yusheng Lu ◽  
Jiantong Zhang

Purpose: The digital revolution, and the use of big data (BD) in particular, has important applications in the construction industry, where massive amounts of heterogeneous data need to be analyzed to improve on-site efficiency. This article presents a systematic review and identifies future research directions, presenting conclusions derived from rigorous bibliometric tools. The results of this study may provide guidelines for construction engineering and global policymaking to change the current low efficiency of construction sites.
Design/methodology/approach: This study identifies research trends from 1,253 peer-reviewed papers, using general statistics, keyword co-occurrence analysis, critical review, and qualitative-bibliometric techniques in two rounds of search.
Findings: The number of studies in this area increased rapidly from 2012 to 2020. A significant number of publications originated in the UK, China, the US, and Australia, and the smallest output among these four countries is more than twice the largest output among the remaining countries. Keyword co-occurrence falls into three clusters: BD application scenarios, emerging technology in BD, and BD management. Currently developing approaches in BD analytics include machine learning, data mining, and heuristic optimization algorithms such as graph convolutional and recurrent neural networks and natural language processing (NLP). Studies have focused on safety management, energy reduction, and cost prediction. Blockchain integrated with BD is a promising means of managing construction contracts.
Research limitations/implications: The study of BD is in a stage of rapid development, and this bibliometric analysis is only part of the necessary practical analysis.
Practical implications: National policies, temporal and spatial distribution, and BD flows are interpreted, and the results may provide guidelines for policymakers. Overall, this work may develop the body of knowledge, producing a reference point and identifying future developments.
Originality/value: To our knowledge, this is the first bibliometric review of BD in the construction industry. This study can also benefit construction practitioners by providing a focused perspective on BD for emerging practices in the construction industry.
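As an illustration of the keyword co-occurrence analysis behind the clustering reported in the findings (the keyword lists below are invented), pairwise co-occurrence counts can be accumulated over per-paper keyword sets as follows:

```python
# Toy keyword co-occurrence counting, the basic quantity behind keyword
# co-occurrence maps in bibliometric tools. Keyword lists are made up.
from collections import Counter
from itertools import combinations

papers = [
    {"big data", "construction", "machine learning"},
    {"big data", "construction", "blockchain"},
    {"big data", "safety management"},
]

cooccurrence = Counter()
for keywords in papers:
    for pair in combinations(sorted(keywords), 2):
        cooccurrence[pair] += 1

print(cooccurrence[("big data", "construction")])  # 2
```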


Author(s):  
Chamnan Kumsap ◽  
Somsarit Sinnung ◽  
Suriyawate Boonthalarath

"This article addresses the establishment of a mesh communication backbone to facilitate a near real-time and seamless communications channel for disaster data management at its proof of concept stage. A complete function of the data communications is aimed at the input in near real-time of texts, photos, live HD videos of the incident to originate the disaster data management of a military unit responsible for prevention and solving disaster problems and in need of a communication backbone that links data from a Response Unit to an Incident Command Station. The functions of data flow were tested in lab and at fields. Texts encompassing registered name, latitude, longitude, sent time were sent from concurrent 6 responders. Photos and full HD live videos were successfully sent to a laptop Incident Command Station. However, a disaster database management system was needed to store data sent by the Response Unit. Quantitative statistics were suggested for a more substantial proof of concept and subject to further studies."

