A deep learning technique-based data-driven model for accurate and rapid flood prediction

2022 ◽  
Author(s):  
Qianqian Zhou ◽  
Shuai Teng ◽  
Xiaoting Liao ◽  
Zuxiang Situ ◽  
Junman Feng ◽  
...  

Abstract. An accurate and rapid urban flood prediction model is essential to support decision-making on flood management, especially under the increasing extreme precipitation conditions driven by climate change and urbanization. This study developed a deep learning technique-based data-driven flood prediction model that integrates an LSTM network with Bayesian optimization. A case study in north China was used to test the model performance, and the results clearly showed that the model can accurately predict flood maps for various hyetograph inputs, with substantial improvements in computation time. The model predicted flood maps 19,585 times faster than the physics-based hydrodynamic model and achieved a mean relative error of 9.5%. In retrieving the spatial patterns of water depths, the degree of similarity of the flood maps was very high. In the best case, the difference between the ground truth and the model prediction was only 0.76%, and the spatial distributions of inundated paths and areas were almost identical. The proposed model showed robust generalizability and high computational efficiency, and can potentially replace and/or complement conventional hydrodynamic models for urban flood assessment and management, particularly in applications of real-time control, optimization, and emergency design and planning.
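As an illustration of the recurrent core such a surrogate rests on, the sketch below steps a single LSTM cell over a toy hyetograph in NumPy. The dimensions, weights, and data are invented for demonstration and are not the authors' trained model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: compute all four gates from input x and previous state (h, c)."""
    n = h.shape[0]
    z = W @ x + U @ h + b                  # stacked pre-activations for the 4 gates
    i = 1 / (1 + np.exp(-z[:n]))           # input gate
    f = 1 / (1 + np.exp(-z[n:2*n]))        # forget gate
    o = 1 / (1 + np.exp(-z[2*n:3*n]))      # output gate
    g = np.tanh(z[3*n:])                   # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 3, 4                         # toy sizes: rainfall features -> hidden state
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
hyetograph = rng.random((6, n_in))         # 6 rainfall time steps (random stand-in data)
for x in hyetograph:
    h, c = lstm_step(x, h, c, W, U, b)
# h now summarises the rainfall sequence; a dense head would map it to a flood map
```

In a full surrogate of this kind, the final hidden state would feed a dense output layer producing the gridded water depths, and Bayesian optimization would tune hyperparameters such as `n_hid`.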

2021 ◽  
Author(s):  
Chris Onof ◽  
Yuting Chen ◽  
Li-Pen Wang ◽  
Amy Jones ◽  
Susana Ochoa Rodriguez

<p>In this work a two-stage (rainfall nowcasting + flood prediction) analogue model for real-time urban flood forecasting is presented. The proposed approach accounts for the complexities of urban rainfall nowcasting while avoiding the expensive computational requirements of real-time urban flood forecasting.</p><p>The model has two consecutive stages:</p><ul><li><strong>(1) Rainfall nowcasting: </strong>0-6 h lead-time ensemble rainfall nowcasting is achieved by means of an analogue method, based on the assumption that similar climate conditions will produce similar patterns of temporal evolution of the rainfall. The framework uses the NORA analogue-based forecasting tool (Panziera et al., 2011), consisting of two layers. In the <strong>first layer, </strong>the 120 historical atmospheric (forcing) conditions most similar to the current atmospheric conditions are extracted, with the historical database consisting of ERA5 reanalysis data from the ECMWF and the current conditions derived from the US Global Forecast System (GFS). In the <strong>second layer</strong>, the twelve historical radar images most similar to the current one are extracted from amongst the historical radar images linked to the aforementioned 120 forcing analogues. Lastly, for each of the twelve analogues, the rainfall fields (at a resolution of 1 km / 5 min) observed after the present time are taken as one ensemble member. Note that principal component analysis (PCA) and uncorrelated multilinear PCA methods were tested for image feature extraction prior to applying the nearest-neighbour technique for analogue selection.</li> <li><strong>(2) Flood prediction: </strong>we predict flood extent using the high-resolution rainfall forecast from Stage 1, along with a database of pre-run flood maps at 1x1 km<sup>2</sup> resolution from 157 catalogued historical flood events. A deterministic flood prediction is obtained from the twelve flood maps associated with the twelve ensemble rainfall nowcasts, where for each gridded area the median value is adopted (assuming the flood maps are equiprobable). A probabilistic flood prediction is obtained by generating a quantile-based flood map. Note that the flood maps were generated through rolling-ball-based mapping of the flood volumes predicted at each node of the InfoWorks ICM sewer model of the pilot area.</li></ul><p>The Minworth catchment in the UK (~400 km<sup>2</sup>) was used to demonstrate the proposed model. Cross-assessment was undertaken for each of the 157 flooding events by leaving one event out of training in each iteration and using it for evaluation. With a focus on the spatial replication of flood/non-flood patterns, the predicted flood maps were converted to binary (flood/non-flood) maps, and quantitative assessment was undertaken by means of a contingency table. An average accuracy rate (i.e. proportion of correct predictions out of all test events) of 71.4% was achieved, with individual accuracy rates ranging from 57.1% to 78.6%. Further testing is needed to confirm these initial findings, and refinement of the flood mapping will be pursued.</p><p>The proposed model is fast, easy and relatively inexpensive to operate, making it suitable for direct use by local authorities, who often lack the expertise and/or capabilities for flood modelling and forecasting.</p><p><strong>References: </strong>Panziera et al. 2011. NORA–Nowcasting of Orographic Rainfall by means of Analogues. Quarterly Journal of the Royal Meteorological Society. 137, 2106-2123.</p>
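The analogue-selection and map-combination steps can be sketched roughly as below. The arrays are random stand-ins for radar images and pre-run flood maps; only the component count, ensemble size of twelve, and per-cell median follow the text, and nothing here reflects any released code:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: 120 historical "radar images" flattened to vectors,
# each linked to a pre-run flood map on a small grid.
images = rng.random((120, 64))             # historical radar images (8x8, flattened)
flood_maps = rng.random((120, 5, 5))       # pre-run flood maps for those events
current = rng.random(64)                   # current radar image

# PCA via SVD for image feature extraction, as in the analogue-selection step.
mean = images.mean(axis=0)
X = images - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)
components = Vt[:10]                       # keep 10 principal components (arbitrary)
feats = X @ components.T
cur_feat = (current - mean) @ components.T

# Nearest-neighbour search: pick the 12 most similar historical analogues.
dists = np.linalg.norm(feats - cur_feat, axis=1)
analogues = np.argsort(dists)[:12]

# Deterministic prediction: per-cell median over the 12 analogue flood maps,
# treating the ensemble members as equiprobable.
pred = np.median(flood_maps[analogues], axis=0)
```

A probabilistic map would replace the median with other quantiles (`np.quantile` over the same axis).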


2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Khaled Akkad

Remaining useful life (RUL) estimation is one of the most important aspects of prognostics and health management (PHM). Various deep learning (DL) based techniques have been developed and applied for RUL estimation. One limitation of DL is the lack of physical interpretation, as DL models are purely data-driven. Another limitation is the need for an exceedingly large amount of data to reach acceptable pattern recognition performance for RUL estimation. This research aims to overcome these limitations by developing physics-based DL techniques for RUL prediction and validating the method with real run-to-failure datasets. The contribution of the research lies in creating hybrid DL-based techniques as well as combining physics-based approaches with DL techniques for effective RUL prediction.
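One common pattern for such hybrids, sketched here as a hypothetical illustration rather than the method this research develops, is to fit an assumed physical degradation law and let a simple learned model absorb the residual error; the degradation law, failure threshold, and data below are all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy run-to-failure data: a health index decays exponentially with noise.
t = np.linspace(0, 10, 50)
true_rul = 10 - t                          # unit fails at t = 10
health = np.exp(-0.3 * t) + rng.normal(0, 0.01, t.size)

# Physics stage: fit the assumed degradation law h(t) = exp(-k t) and invert it
# for a physics-based RUL estimate (failure when health reaches h_fail).
k = -np.polyfit(t, np.log(np.clip(health, 1e-6, None)), 1)[0]
h_fail = np.exp(-0.3 * 10)                 # health index at failure (assumed known)
t_fail = -np.log(h_fail) / k
rul_physics = t_fail - t

# Data-driven stage: a least-squares model (standing in for a DL network)
# learns the residual between the physics estimate and the observed RUL.
A = np.vstack([t, np.ones_like(t)]).T
coef, *_ = np.linalg.lstsq(A, true_rul - rul_physics, rcond=None)
rul_hybrid = rul_physics + A @ coef
```

In a real pipeline the residual learner would be a neural network trained on sensor features, but the division of labour (physics supplies the trend, data supplies the correction) is the same.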


2020 ◽  
Author(s):  
Ryosuke Kojima ◽  
Shoichi Ishida ◽  
Masateru Ohta ◽  
Hiroaki Iwata ◽  
Teruki Honma ◽  
...  

<div>Deep learning is developing as an important technology for performing various tasks in cheminformatics. In particular, graph convolutional neural networks (GCNs) have been reported to perform well in many types of prediction tasks related to molecules. Although GCNs exhibit considerable potential in various applications, appropriately utilizing this resource to obtain reasonable and reliable prediction results requires a thorough understanding of GCNs and programming. To leverage the power of GCNs to benefit various users, from chemists to cheminformaticians, an open-source GCN tool, kGCN, is introduced. To support users with various levels of programming skill, kGCN includes three interfaces: a graphical user interface (GUI) employing KNIME for users with limited programming skills, such as chemists, as well as command-line and Python library interfaces for users with advanced programming skills, such as cheminformaticians. To support the three steps required for building a prediction model, i.e., pre-processing, model tuning, and interpretation of results, kGCN includes functions for typical pre-processing, Bayesian optimization for automatic model tuning, and visualization of the atomic contributions to a prediction for interpretation of results. kGCN supports three types of approaches: single-task, multi-task, and multimodal predictions. The prediction of compound-protein interaction for four matrix metalloproteases, MMP-3, -9, -12 and -13, in inhibition assays is performed as a representative case study using kGCN. Additionally, kGCN provides visualization of the atomic contributions to the prediction. Such visualization is useful for validating prediction models and designing molecules based on them, realizing “explainable AI” for understanding the factors affecting AI predictions. kGCN is available at https://github.com/clinfo/kGCN.</div>
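For readers unfamiliar with the underlying operation, a single graph-convolution layer of the kind such tools build on can be written in a few lines of NumPy. The toy adjacency matrix, features, and weights are invented for illustration and do not reflect kGCN's internals:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric degree normalisation
    return np.maximum(A_norm @ H @ W, 0)      # aggregate neighbours, then ReLU

# Toy "molecule": 3 atoms in a chain, with one-hot atom features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)        # bonds as an adjacency matrix
H = np.eye(3)                                 # initial per-atom features
W = np.array([[1.0, -1.0], [0.5, 0.5], [0.0, 2.0]])  # learned weights (made up)
H1 = gcn_layer(A, H, W)
graph_repr = H1.sum(axis=0)                   # sum-pool atoms to a molecule vector
```

Stacking such layers and feeding the pooled vector to a dense head yields the single-task or multi-task predictors the abstract describes.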


Author(s):  
Hiroki MINAKAWA ◽  
Issaku AZECHI ◽  
Masaomi KIMURA ◽  
Naoto OKUMURA ◽  
Nobuaki KIMURA ◽  
...  

2021 ◽  
Author(s):  
Aravind Nair ◽  
K S S Sai Srujan ◽  
Sayali Kulkarni ◽  
Kshitij Alwadhi ◽  
Navya Jain ◽  
...  

<div><div><div><p>Tropical cyclones (TCs) are the most destructive weather systems that form over the tropical oceans, with around 90 storms forming globally every year. The timely detection and tracking of TCs are important for providing advance warning to the affected regions. As these storms form over the open oceans far from the continents, remote sensing plays a crucial role in detecting them. Here we present automated TC detection from satellite images based on a novel deep learning technique. In this study, we propose a multi-staged deep learning framework for the detection of TCs, including (i) a detector, Mask Region-Convolutional Neural Network (R-CNN); (ii) a wind speed filter; and (iii) a classifier, CNN. The hyperparameters of the entire pipeline are optimized using Bayesian optimization. Results indicate that the proposed approach yields high precision (97.10%), specificity (97.59%), and accuracy (86.55%) on test images.</p></div></div></div>
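A minimal sketch of such a staged pipeline, with made-up detections and thresholds and a simple rule standing in for the CNN classifier (none of which are the authors' actual components or cut-offs), might look like:

```python
# Hypothetical three-stage pipeline: candidate detections from a detector stage
# are filtered by a wind-speed threshold, then labelled by a classifier stage.

def wind_speed_filter(detections, threshold_kmh=62.0):
    """Keep only candidates whose estimated wind speed reaches storm strength."""
    return [d for d in detections if d["wind_kmh"] >= threshold_kmh]

def classify(detection):
    """Stand-in for the CNN classifier stage (here: a simple wind-speed rule)."""
    return "TC" if detection["wind_kmh"] >= 118 else "TS"

# Toy candidate detections, as a Mask R-CNN-style detector might emit them.
candidates = [
    {"bbox": (10, 10, 40, 40), "wind_kmh": 45.0},   # too weak -> filtered out
    {"bbox": (60, 20, 95, 55), "wind_kmh": 130.0},  # cyclone-strength system
    {"bbox": (5, 70, 30, 95),  "wind_kmh": 80.0},   # tropical-storm strength
]
kept = wind_speed_filter(candidates)
labels = [classify(d) for d in kept]
```

The staged design means Bayesian optimization can tune each stage's hyperparameters (detector thresholds, filter cut-off, classifier settings) jointly over the end-to-end detection score.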


2021 ◽  
Author(s):  
Hemlata Jain ◽  
Ajay Khunteta ◽  
Sumit Private Shrivastav

Abstract Machine learning and deep learning classification have become important topics in the area of telecom churn prediction. Researchers have devised efficient experiments for churn prediction and have given the telecommunication industry a new direction for retaining its customers. Companies are eagerly developing models for predicting churn and putting effort into saving potential churners. Therefore, for a better churn prediction model, finding the factors of churn is very important. This study aims to find the factors behind users' churn by evaluating their past service usage details. For this purpose, the study takes advantage of feature importance, feature normalisation, feature correlation and feature extraction. After feature selection and extraction, this study performs seven different experiments on the dataset to bring out the best results and compare the techniques. The first experiment uses a hybrid model of decision tree and logistic regression; the second combines PCA with logistic regression and LogitBoost; the third uses a deep learning technique, CNN-VAE (convolutional neural network with variational autoencoder); and the fourth, fifth, sixth and seventh experiments use logistic regression, LogitBoost, XGBoost and random forest, respectively. The first three experiments are hybrid models and the rest use standalone techniques. The Orange dataset, which has 3333 subscribers' entries and 21 features, was used. These experiments are also compared with existing models developed in the literature. Performance was evaluated using accuracy, precision, recall, F-measure, confusion matrix, macro average and weighted average. This study obtained better results than the earlier models: random forest outperformed the rest, achieving 95% accuracy, and all other experiments also produced very good results.
The study demonstrates the importance of data mining techniques for churn prediction and provides a comparison in which standalone machine learning techniques, a deep learning technique and hybrid models with feature extraction are all evaluated on the same dataset, allowing their performance to be assessed on equal footing.
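The evaluation metrics named above all derive from a confusion matrix; a small worked example with invented counts (churn as the positive class):

```python
# Churn-evaluation metrics computed from a 2x2 confusion matrix.
# The counts below are made up for illustration only.
tp, fp, fn, tn = 80, 10, 20, 190          # churn = positive class

accuracy  = (tp + tn) / (tp + fp + fn + tn)         # 270/300 = 0.90
precision = tp / (tp + fp)                          # of predicted churners, how many churned
recall    = tp / (tp + fn)                          # of actual churners, how many were caught
f1        = 2 * precision * recall / (precision + recall)

# Macro average: unweighted mean of the per-class F1 scores.
prec_neg = tn / (tn + fn)
rec_neg  = tn / (tn + fp)
f1_neg   = 2 * prec_neg * rec_neg / (prec_neg + rec_neg)
macro_f1 = (f1 + f1_neg) / 2
```

A weighted average would instead weight each class's F1 by its support (100 churners vs 200 non-churners here), which matters when classes are imbalanced, as churn data usually is.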

