InundatEd: A Large-scale Flood Risk Modeling System on a Big-data – Discrete Global Grid System Framework

2021 · Vol. 14 (6) · pp. 3295–3315
Author(s): Chiranjib Chaudhuri, Annie Gray, Colin Robertson

Abstract. Despite the high historical losses attributed to flood events, Canadian flood mitigation efforts have been hindered by a dearth of current, accessible flood extent/risk models and maps. Such resources often entail large datasets and high computational requirements. This study presents a novel, computationally efficient flood inundation modeling framework ("InundatEd") using the height above nearest drainage (HAND)-based solution for Manning's equation, implemented in a big-data discrete global grid system (DGGS)-based architecture with a web-GIS (Geographic Information Systems) platform. Specifically, this study aimed to develop, present, and validate InundatEd through binary classification comparisons against recently observed flood extents. The framework is divided into multiple swappable modules, including GIS pre-processing, regional regression, inundation modeling, and web-GIS visualization. Extent testing and processing speed results indicate the value of a DGGS-based architecture alongside a simple conceptual inundation model and a dynamic user interface.
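The HAND formulation reduces inundation mapping to a per-cell comparison once a water stage has been estimated for each reach (in InundatEd, via a regional regression feeding Manning's equation). Below is a minimal Python sketch of those two ingredients; the toy arrays, the roughness coefficient n = 0.035, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def manning_discharge(area, hydraulic_radius, slope, n=0.035):
    """Manning's equation: Q = (1/n) * A * R^(2/3) * sqrt(S)."""
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * np.sqrt(slope)

def hand_inundation(hand, stage):
    """A cell floods when the water stage exceeds its HAND value; depth = stage - HAND."""
    depth = np.clip(stage - hand, 0.0, None)
    return depth > 0.0, depth

hand = np.array([[0.2, 1.5, 3.0],
                 [0.0, 0.8, 2.2]])   # metres above nearest drainage (toy values)
flooded, depth = hand_inundation(hand, stage=1.0)
print(flooded)   # [[ True False False] [ True  True False]]
print(depth)     # [[0.8 0.  0. ] [1.  0.2 0. ]]
print(manning_discharge(area=50.0, hydraulic_radius=1.2, slope=0.002))
```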


2020 · Vol. 1 · pp. 1–23
Author(s): Majid Hojati, Colin Robertson

Abstract. With new forms of digital spatial data driving new applications for monitoring and understanding environmental change, there are growing demands on traditional GIS tools for spatial data storage, management, and processing. Discrete Global Grid Systems (DGGS) are methods to tessellate the globe into multiresolution grids; they represent a global spatial fabric capable of storing heterogeneous spatial data and offer improved performance in data access, retrieval, and analysis. While DGGS-based GIS may hold potential for next-generation big-data GIS platforms, few studies have tried to implement them as a framework for operational spatial analysis. Cellular Automata (CA) is a classic dynamic modeling framework that has been used with the traditional raster data model for various environmental models, such as wildfire spread and urban expansion. The main objectives of this paper are to (i) investigate the possibility of using DGGS for running dynamic spatial analysis, (ii) evaluate CA as a generic data model for modeling dynamic phenomena within a DGGS data model, and (iii) evaluate an in-database approach for CA modeling. To do so, a case study in wildfire spread modeling is developed. Results demonstrate that a DGGS data model not only provides the ability to integrate different data sources but also provides a framework for spatial analysis without geometry-based computation. This results in a simplified architecture and a common spatial fabric supporting the development of a wide array of spatial algorithms. While considerable work remains to be done, CA modeling within a DGGS-based GIS is a robust and flexible modeling framework for big-data GIS analysis in an environmental monitoring context.
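To make the CA-on-DGGS idea concrete, here is a minimal Python sketch of one CA transition over a DGGS-like cell graph. Because DGGS cells are addressed by identifiers rather than raster row/column offsets, the neighbourhood is a lookup table; the cell ids, states, and spread probability are toy assumptions, not the paper's wildfire model.

```python
import random

# Toy DGGS neighbourhood: each cell id maps to the ids of its adjacent cells.
# In a real DGGS this table would come from the grid's hierarchical indexing.
neighbours = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["B", "C"],
}
state = {"A": "burning", "B": "fuel", "C": "fuel", "D": "fuel"}

def step(state, p_spread=0.6):
    """One CA transition: burning cells burn out and may ignite neighbouring fuel cells."""
    new = dict(state)
    for cell, s in state.items():
        if s == "burning":
            new[cell] = "burned"
            for nb in neighbours[cell]:
                if state[nb] == "fuel" and random.random() < p_spread:
                    new[nb] = "burning"
    return new

for _ in range(3):
    state = step(state)
print(state)
```

The same transition rule can be expressed as a self-join on a cell/neighbour table, which is what makes an in-database implementation attractive.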


2020 · Vol. 493 (4) · pp. 5972–5986
Author(s): Sungryong Hong, Donghui Jeong, Ho Seong Hwang, Juhan Kim, Sungwook E Hong, ...

ABSTRACT By utilizing large-scale graph-analytic tools implemented on the modern big-data platform Apache Spark, we investigate the topological structure of gravitational clustering in five different universes produced by cosmological N-body simulations with varying parameters: (1) a WMAP 5-yr compatible ΛCDM cosmology, (2) two different dark energy equation of state variants, and (3) two different cosmic matter density variants. For the big-data calculations, we use a custom-built standalone Spark/Hadoop cluster at the Korea Institute for Advanced Study and the Dataproc Compute Engine on the Google Cloud Platform, with sample sizes ranging from 7 million to 200 million. We find that, among the many possible graph-topological measures, three simple ones can effectively discriminate among the five model universes: (1) the average number of neighbours (the so-called average vertex degree) α, (2) the closed-to-connected triple fraction (the so-called transitivity) $\tau_\Delta$, and (3) the cumulative number density $n_{s \ge 5}$ of subgraphs with connected component size s ≥ 5. Since these graph-topological measures are directly related to the usual n-point correlation functions of the cosmic density field, graph-topological statistics powered by big-data computational infrastructure open a new, intuitive, and computationally efficient window into the dark Universe.
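The three discriminating measures are standard graph statistics, so a small in-memory sketch conveys what the Spark pipeline computes at scale. The networkx example below is a stand-in under toy assumptions: the edge list represents pairs of points closer than some linking length, and the published cumulative number density $n_{s \ge 5}$ would further divide the component count by the survey volume.

```python
import networkx as nx

# Toy "universe": an edge links two points closer than a chosen linking length.
G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 8),
              (4, 5), (5, 6), (6, 7), (7, 4), (4, 6)])

# (1) average vertex degree: alpha = 2E / V
alpha = 2 * G.number_of_edges() / G.number_of_nodes()

# (2) transitivity: fraction of connected triples that close into triangles
tau = nx.transitivity(G)

# (3) count of connected components with size s >= 5
n_ge5 = sum(1 for c in nx.connected_components(G) if len(c) >= 5)

print(alpha, tau, n_ge5)
```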


2019 · Vol. 12 (1) · pp. 62
Author(s): Xiaochuang Yao, Guoqing Li, Junshi Xia, Jin Ben, Qianqian Cao, ...

In the era of big data, the explosive growth of Earth observation data and the rapid advancement of cloud computing technology make global-scale spatiotemporal data simulation possible. These dual developments also provide advantageous conditions for discrete global grid systems (DGGS). DGGS are designed to portray real-world phenomena by providing a spatiotemporally unified framework on a standard discrete geospatial data structure, along with theoretical support for addressing the challenges of big-data storage, processing, analysis, visualization, and sharing. In this paper, the trinity of big Earth observation data (BEOD), cloud computing, and DGGS is proposed, and based on this trinity, we explore the opportunities and challenges of handling BEOD from two aspects: information technology and a unified data framework. Our focus is on how cloud computing and DGGS can provide an effective solution for enabling big Earth observation data. Firstly, we describe the current status and characteristics of Earth observation data, which indicate the arrival of the big-data era in the Earth observation domain. Subsequently, we review cloud computing technology and the DGGS framework, especially the work and contributions made in the field of BEOD, including spatial cloud computing, mainstream big-data platforms, DGGS standards, data models, and applications. From this general introduction, research opportunities and challenges are enumerated and discussed, including EO data management, data fusion, and grid encoding, which concern the analysis models and processing performance of big Earth observation data with discrete global grid systems in the cloud environment.
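One reason DGGS pair well with big-data platforms is that hierarchical cell codes turn spatial containment into string-prefix matching, which distributed stores index cheaply. The sketch below is a toy quadtree-style encoder, an illustrative assumption rather than any particular DGGS standard: a cell's parent at level k-1 is simply the first k-1 digits of its level-k code.

```python
def encode(lat, lon, level):
    """Toy hierarchical cell code: one quadrant digit per refinement level."""
    code = ""
    lat_lo, lat_hi, lon_lo, lon_hi = -90.0, 90.0, -180.0, 180.0
    for _ in range(level):
        lat_mid, lon_mid = (lat_lo + lat_hi) / 2, (lon_lo + lon_hi) / 2
        quad = (2 if lat >= lat_mid else 0) + (1 if lon >= lon_mid else 0)
        code += str(quad)  # parent code is always a prefix of the child code
        lat_lo, lat_hi = (lat_mid, lat_hi) if lat >= lat_mid else (lat_lo, lat_mid)
        lon_lo, lon_hi = (lon_mid, lon_hi) if lon >= lon_mid else (lon_lo, lon_mid)
    return code

cell = encode(45.42, -75.70, 8)
print(cell, cell[:7])  # the level-7 parent cell is the 7-digit prefix
```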


Author(s): B. Aparna, S. Madhavi, G. Mounika, P. Avinash, S. Chakravarthi

We propose a new design for large-scale multimedia content protection systems. Our design leverages cloud infrastructures to provide cost efficiency, rapid deployment, scalability, and elasticity to accommodate varying workloads. The proposed system can be used to protect different multimedia content types, including videos, images, audio clips, songs, and music clips. The system can be deployed on private and/or public clouds. Our system has two novel components: (i) a method to create signatures of videos, and (ii) a distributed matching engine for multimedia objects. The signature method creates robust and representative signatures of videos that capture the depth signals in these videos; these signatures are computationally efficient to compute and compare and require little storage. The distributed matching engine achieves high scalability and is designed to support different multimedia objects. We implemented the proposed system and deployed it on two clouds: the Amazon cloud and our private cloud. Our experiments with more than 11,000 videos and 1 million images show the high accuracy and scalability of the proposed system. In addition, we compared our system to the protection system used by YouTube; the results show that the YouTube protection system fails to detect most copies of videos, while our system detects more than 98% of them.
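As a rough illustration of the signature-and-match idea (the paper's actual signatures are built from depth signals, which are not reproduced here), the following Python sketch computes a compact block-mean signature per frame and matches by mean absolute distance; the tile count, threshold, and test frames are arbitrary assumptions.

```python
import numpy as np

def block_signature(frame, blocks=4):
    """Coarse signature: mean value of each tile in a blocks x blocks grid, flattened."""
    h, w = frame.shape
    frame = frame[: h - h % blocks, : w - w % blocks]  # crop to a multiple of the grid
    tiles = frame.reshape(blocks, frame.shape[0] // blocks,
                          blocks, frame.shape[1] // blocks)
    return tiles.mean(axis=(1, 3)).ravel()

def match(sig_a, sig_b, threshold=10.0):
    """Declare a copy when the mean absolute signature distance is small."""
    return float(np.mean(np.abs(sig_a - sig_b))) < threshold

original = np.add.outer(np.linspace(0, 255, 240), np.linspace(0, 255, 320)) / 2
copy = original + np.random.default_rng(0).normal(0, 2, original.shape)  # distorted copy
other = np.flipud(original)                                              # different content

print(match(block_signature(original), block_signature(copy)))   # True
print(match(block_signature(original), block_signature(other)))  # False
```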


2020
Author(s): Anusha Ampavathi, Vijaya Saradhi T

Big data and its approaches are generally helpful to the healthcare and biomedical sectors for predicting disease. For trivial symptoms, the difficulty is that doctors cannot be consulted in the hospital at any time; big data can thus provide essential information regarding diseases on the basis of a patient's symptoms. For several medical organizations, disease prediction is important for making the best feasible health care decisions. Conversely, the conventional medical care model offers structured input, which calls for more accurate and consistent prediction. This paper develops multi-disease prediction using an improvised deep learning concept. Here, different datasets pertaining to "Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease" are gathered from the benchmark UCI repository for conducting the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring every attribute's range to a common level. Further, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to enlarge the deviations among attribute values. Here, the weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization (JA-MVO) algorithm. The optimally extracted features are fed to hybrid deep learning algorithms, namely a "Deep Belief Network (DBN)" and a "Recurrent Neural Network (RNN)". As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, a comparative evaluation of the proposed prediction model against existing models certifies its effectiveness through various performance measures.
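The normalization and weighting steps are simple to state concretely. The sketch below is a minimal Python illustration under assumed toy data: min-max normalization brings each attribute into [0, 1], and a weight vector (which JA-MVO would optimize in the paper) rescales each attribute; the placeholder weights here are not optimized values.

```python
import numpy as np

def normalize(X):
    """Min-max normalization so every attribute lies in [0, 1]."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn + 1e-12)  # small epsilon guards constant columns

def weighted_features(X_norm, w):
    """Multiply each normalized attribute by its (optimizer-chosen) weight."""
    return X_norm * w

# Toy data: 5 patients x 3 attributes (e.g., systolic BP, heart rate, a lab value).
X = np.array([[120, 80, 1.2],
              [140, 95, 2.1],
              [110, 70, 0.9],
              [160, 100, 3.0],
              [130, 85, 1.5]], dtype=float)

w = np.array([0.7, 0.2, 0.9])  # placeholder weights; JA-MVO would search for these
print(weighted_features(normalize(X), w).round(3))
```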

