CO2 Leakage Rate Forecasting Using Optimized Deep Learning

2021 ◽  
Author(s):  
Xupeng He ◽  
Weiwei Zhu ◽  
Ryan Santoso ◽  
Marwa Alsinan ◽  
Hyung Kwak ◽  
...  

Abstract Geologic CO2 Sequestration (GCS) is a promising engineering technology for reducing global greenhouse gas emissions. Real-time forecasting of CO2 leakage rates is an essential aspect of large-scale GCS deployment. This work introduces a data-driven, physics-featuring surrogate model based on deep-learning techniques for CO2 leakage rate forecasting. The workflow for developing the surrogate model includes three steps: 1) Dataset Generation: We first identify the uncertainty parameters that affect the objective of interest (i.e., CO2 leakage rates). For the identified uncertainty parameters, various realizations are then generated using Latin Hypercube Sampling (LHS). High-fidelity simulations based on a two-phase black-oil solver within MRST are performed to generate the objective functions. Datasets comprising inputs (i.e., the uncertainty parameters) and outputs (CO2 leakage rates) are collected. 2) Surrogate Development: In this step, a time-series surrogate model using long short-term memory (LSTM) is constructed to map the nonlinear relationship between the uncertainty parameters as inputs and the CO2 leakage rates as outputs. We perform Bayesian optimization to automate the tuning of hyperparameters and network architecture, replacing the traditional trial-and-error tuning process. 3) Uncertainty Analysis: This step performs Monte Carlo (MC) simulations with the trained surrogate model to explore uncertainty propagation. The sampled realizations are collected as distributions from which the probabilistic forecasts of the P10, P50, and P90 percentiles are evaluated. We propose a data-driven, physics-featuring surrogate model based on LSTM for CO2 leakage rate forecasting and demonstrate its accuracy and efficiency by comparison with ground-truth solutions. The proposed deep-learning workflow shows promising potential and could be readily implemented in commercial-scale GCS for real-time monitoring applications.
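Below is a minimal sketch of the sampling-and-propagation steps described above: Latin Hypercube Sampling of the uncertainty parameters and a Monte Carlo sweep through a trained surrogate to obtain P10/P50/P90 leakage-rate forecasts. The parameter names, bounds, and the `surrogate` callable are illustrative placeholders, not the authors' setup.

```python
# Minimal sketch (assumptions, not the authors' code): LHS of the uncertainty
# parameters, then Monte Carlo propagation through a trained surrogate.
import numpy as np
from scipy.stats import qmc

# Hypothetical uncertainty parameters: leaky-well permeability, aquifer
# porosity, injection rate (units and ranges are placeholders).
l_bounds = [1e-15, 0.05, 0.5]
u_bounds = [1e-12, 0.35, 2.0]

sampler = qmc.LatinHypercube(d=3, seed=0)
realizations = qmc.scale(sampler.random(n=2000), l_bounds, u_bounds)

def surrogate(params):
    """Placeholder for the trained LSTM surrogate: maps one parameter
    realization to a CO2 leakage-rate time series (here, 50 time steps)."""
    t = np.linspace(0.0, 1.0, 50)
    return params[2] * params[1] * np.log1p(1e14 * params[0]) * t  # dummy response

# Monte Carlo propagation through the surrogate.
leakage = np.array([surrogate(p) for p in realizations])   # (n_samples, n_steps)

# Probabilistic forecast at each time step.
p10, p50, p90 = np.percentile(leakage, [10, 50, 90], axis=0)
print(p10[-1], p50[-1], p90[-1])
```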

2021 ◽  
Author(s):  
Xupeng He ◽  
Weiwei Zhu ◽  
Ryan Santoso ◽  
Marwa Alsinan ◽  
Hyung Kwak ◽  
...  

Abstract The permeability of fractures, both natural and hydraulic, is an essential parameter for the modeling of fluid flow in conventional and unconventional fractured reservoirs. However, traditional analytical cubic-law-based (CL-based) models used to estimate fracture permeability show unsatisfactory performance when dealing with the different dynamic complexities of fractures. This work presents a data-driven, physics-included model based on machine learning as an alternative to traditional methods. The workflow for the development of the data-driven model includes four steps. Step 1: Identify uncertain parameters and perform Latin Hypercube Sampling (LHS). We first identify the uncertain parameters which affect the fracture permeability. We then generate training samples using LHS. Step 2: Perform training simulations and collect inputs and outputs. In this step, high-resolution simulations of the Navier-Stokes equations (NSEs) with parallel computing are run for each of the training samples. We then collect the inputs and outputs from the simulations. Step 3: Construct an optimized data-driven surrogate model. A data-driven model based on machine learning is then built to model the nonlinear mapping between the inputs and outputs collected from Step 2. Herein, an Artificial Neural Network (ANN) coupled with a Bayesian optimization algorithm is implemented to obtain the optimized surrogate model. Step 4: Validate the proposed data-driven model. In this step, we conduct blind validation of the proposed model against high-fidelity simulations. We further test the developed surrogate model with newly generated fracture cases covering a broad range of roughness and tortuosity under different Reynolds numbers. We then compare its performance to the reference NSEs solutions. Results show that the developed data-driven model delivers accuracy exceeding 90% for all training, validation, and test samples. This work introduces an integrated workflow for developing a data-driven, physics-included model using machine learning to estimate fracture permeability under complex physics (e.g., inertial effects). To our knowledge, this technique is introduced for the first time for the upscaling of rock fractures. The proposed model offers an efficient and accurate alternative to the traditional upscaling methods that can be readily implemented in reservoir characterization and modeling workflows.
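As a rough illustration of Step 3, the sketch below tunes an ANN regressor with Bayesian optimization over its hyperparameters using scikit-optimize. The feature names, search ranges, and dummy dataset are assumptions; the actual model maps fracture descriptors to Navier-Stokes-derived permeability.

```python
# Minimal sketch (assumptions, not the paper's implementation): ANN surrogate
# tuned with Bayesian optimization of its hyperparameters via scikit-optimize.
import numpy as np
from sklearn.neural_network import MLPRegressor
from skopt import BayesSearchCV
from skopt.space import Real, Categorical

rng = np.random.default_rng(0)
# Hypothetical inputs: mean aperture, roughness, tortuosity, Reynolds number (scaled).
X = rng.uniform(size=(500, 4))
y = np.log(X[:, 0] ** 3 + 1e-3) + 0.1 * rng.normal(size=500)  # dummy log-permeability targets

search = BayesSearchCV(
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
    {
        "alpha": Real(1e-6, 1e-1, prior="log-uniform"),               # L2 penalty
        "learning_rate_init": Real(1e-4, 1e-1, prior="log-uniform"),
        "activation": Categorical(["relu", "tanh"]),
    },
    n_iter=25,
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```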


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive the environments they are navigating in. This perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a high cost in time and computational resources. Collecting large input datasets, pre-training processes such as labeling the training data, and the need for a high-performance computer for training are some of the challenges that supervised deep learning poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and a lightweight deep neural network architecture capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes, with 10,000 different images in each class, were used as input data; 80% of the images were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented, which has the advantage of handling redundancy in the data more efficiently than other algorithms.
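A minimal sketch of the kind of lightweight network and sequential (stochastic) gradient descent training described above, with an 80/20 train/validation split; the architecture, image size, and synthetic data are assumptions rather than the authors' design.

```python
# Minimal sketch (assumptions): lightweight CNN for 10-class classification
# trained with SGD on an 80/20 train/validation split.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, random_split

class LightweightCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Placeholder dataset standing in for the augmented SVO images (10 classes).
images = torch.randn(1000, 3, 64, 64)
labels = torch.randint(0, 10, (1000,))
dataset = TensorDataset(images, labels)
n_train = int(0.8 * len(dataset))
train_set, val_set = random_split(dataset, [n_train, len(dataset) - n_train])

model = LightweightCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    model.train()
    for xb, yb in DataLoader(train_set, batch_size=32, shuffle=True):
        optimizer.zero_grad()
        loss_fn(model(xb), yb).backward()
        optimizer.step()
```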


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Xianglin Zhu ◽  
Khalil Ur Rehman ◽  
Wang Bo ◽  
Muhammad Shahzad ◽  
Ahmad Hassan

Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 8072
Author(s):  
Yu-Bang Chang ◽  
Chieh Tsai ◽  
Chang-Hong Lin ◽  
Poki Chen

As autonomous driving techniques become increasingly valued and widespread, real-time semantic segmentation has become a popular and challenging topic in deep learning and computer vision in recent years. However, to deploy deep learning models on the edge devices that accompany sensors on vehicles, we need to design a structure with the best trade-off between accuracy and inference time. In previous works, several methods sacrificed accuracy to obtain a faster inference time, while others aimed to find the best accuracy under the constraint of real-time operation. Nevertheless, the accuracies of previous real-time semantic segmentation methods still lag well behind those of general semantic segmentation methods. As a result, we propose a network architecture based on a dual encoder and a self-attention mechanism. Compared with preceding works, we achieve 78.6% mIoU at 39.4 FPS on 1024 × 2048 resolution in a Cityscapes test submission.
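For illustration, the sketch below shows a generic spatial self-attention block of the kind used to fuse features from two encoder branches; channel sizes and the fusion scheme are assumptions, not the paper's exact architecture.

```python
# Minimal sketch (an illustration, not the paper's architecture): a spatial
# self-attention block applied to fused features from two encoder branches.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """Non-local style self-attention over spatial positions of a feature map."""
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C//8)
        k = self.key(x).flatten(2)                     # (B, C//8, HW)
        v = self.value(x).flatten(2)                   # (B, C, HW)
        attn = torch.softmax(q @ k / (q.shape[-1] ** 0.5), dim=-1)   # (B, HW, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

# Example: fuse features from a detail branch and a context branch (hypothetical).
detail = torch.randn(1, 128, 64, 128)
context = torch.randn(1, 128, 64, 128)
fused = SelfAttention2d(128)(detail + context)
print(fused.shape)
```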


2020 ◽  
Vol 196 ◽  
pp. 02007
Author(s):  
Vladimir Mochalov ◽  
Anastasia Mochalova

In this paper, the previously obtained results on recognition of ionograms using deep learning are extended to the prediction of ionospheric parameters. After the ionospheric parameters have been identified on the ionogram using deep learning in real time, we can predict the parameters some time ahead on the basis of the newly obtained data. Examples of predicting the ionospheric parameters using a long short-term memory (LSTM) recurrent neural network architecture are given. The place of the ionospheric-parameter prediction block within the system for analyzing ionospheric data using deep learning methods is shown.
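A minimal sketch of one-step-ahead forecasting of a single ionospheric parameter with an LSTM, of the kind described above; the synthetic foF2-like series, window length, and network size are assumptions.

```python
# Minimal sketch (assumptions): LSTM forecasting of an ionospheric parameter.
import torch
import torch.nn as nn

# Synthetic stand-in for an hourly foF2 time series with a diurnal cycle.
t = torch.arange(0, 2000, dtype=torch.float32)
series = 6 + 2 * torch.sin(2 * torch.pi * t / 24) + 0.2 * torch.randn_like(t)

window = 48  # use the last 48 samples to predict the next value
X = torch.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, window)
        out, _ = self.lstm(x.unsqueeze(-1))    # (batch, window, hidden)
        return self.head(out[:, -1]).squeeze(-1)

model = LSTMForecaster()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))
```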


2021 ◽  
Author(s):  
Andrey Gavrilov ◽  
Aleksei Seleznev ◽  
Dmitry Mukhin ◽  
Alexander Feigin

The problem of modeling interaction between processes with different time scales is very important in geoscience. In this report, we propose a new form of empirical evolution operator model based on the analysis of multiple time series representing processes with different time scales. We assume that the time series are given on the same time interval.

To construct the model, we extend the previously developed general form of nonlinear stochastic model based on artificial neural networks and designed for the case of time series with a constant sampling interval [1]. This sampling interval is related to the main time scale of the process under consideration, which is described by the deterministic component of the model, while the faster time scales are modeled by its stochastic component, possibly depending on the system's state. The model also includes slower processes in the form of weak time-dependence, as well as external forcing. The structure of the model is optimized using a Bayesian approach [1]. The model has proven its efficiency in a number of applications [2-4].

The idea of modeling time series with different time scales is to formulate the above-described model individually for each time scale, and then to include the parameterized influence of the other time scales in it. In particular, the influence of "slower" time series is included in the form of parameter trends, and the influence of "faster" time series is included by time-averaging their statistics. The algorithm and first results of comparison between the new model and the model without cross-interactions will be discussed.

The work was supported by the Russian Science Foundation (Grant No. 20-62-46056).

1. Gavrilov, A., Loskutov, E., & Mukhin, D. (2017). Bayesian optimization of empirical model with state-dependent stochastic forcing. Chaos, Solitons & Fractals, 104, 327–337. http://doi.org/10.1016/j.chaos.2017.08.032
2. Mukhin, D., Kondrashov, D., Loskutov, E., Gavrilov, A., Feigin, A., & Ghil, M. (2015). Predicting Critical Transitions in ENSO models. Part II: Spatially Dependent Models. Journal of Climate, 28(5), 1962–1976. http://doi.org/10.1175/JCLI-D-14-00240.1
3. Gavrilov, A., Seleznev, A., Mukhin, D., Loskutov, E., Feigin, A., & Kurths, J. (2019). Linear dynamical modes as new variables for data-driven ENSO forecast. Climate Dynamics, 52(3–4), 2199–2216. http://doi.org/10.1007/s00382-018-4255-7
4. Mukhin, D., Gavrilov, A., Loskutov, E., Kurths, J., & Feigin, A. (2019). Bayesian Data Analysis for Revealing Causes of the Middle Pleistocene Transition. Scientific Reports, 9(1), 7328. http://doi.org/10.1038/s41598-019-43867-3
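As a hedged illustration of the general model form extended here (cf. [1]), the sketch below implements a nonlinear stochastic evolution operator with state-dependent forcing, x_{n+1} = f(x_n) + g(x_n)·ξ_n, with f and g as small (here untrained) neural networks; the state dimension and layer sizes are assumptions.

```python
# Minimal sketch (not the authors' model): nonlinear stochastic evolution
# operator with state-dependent stochastic forcing, f and g as small nets.
import torch
import torch.nn as nn

state_dim = 3

# Deterministic component (main time scale) and state-dependent noise amplitude
# (faster time scales); in practice both would be fitted to the data.
f = nn.Sequential(nn.Linear(state_dim, 16), nn.Tanh(), nn.Linear(16, state_dim))
g = nn.Sequential(nn.Linear(state_dim, 16), nn.Tanh(), nn.Linear(16, state_dim), nn.Softplus())

def step(x):
    """One step of the empirical evolution operator on the main time scale."""
    xi = torch.randn_like(x)
    return f(x) + g(x) * xi

# Generate a short stochastic trajectory from an arbitrary initial state.
x = torch.zeros(1, state_dim)
trajectory = [x]
for _ in range(100):
    x = step(x)
    trajectory.append(x)
print(torch.cat(trajectory).shape)  # (101, 3)
```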


2018 ◽  
Vol 2017 (1) ◽  
pp. 238-247 ◽  
Author(s):  
Usman T. Khan ◽  
Jianxun He ◽  
Caterina Valeo

Abstract Urban floods are among the most devastating natural disasters globally, and improved flood prediction is essential for better flood management. Today, high-resolution real-time datasets for flood-related variables are widely available. These data can be used to create data-driven models for improved real-time flood prediction. However, data-driven models carry uncertainty stemming from a number of issues: the selection of input data, the optimisation of the model architecture, the estimation of model parameters, and the model output. Addressing these sources of uncertainty will improve flood prediction. In this research, a fuzzy neural network is proposed to predict peak flow in an urban river. The network uses fuzzy numbers to account for the uncertainty in the output and model parameters. An algorithm based on possibility theory is used to train the network. An adaptation of the automated neural pathway strength feature selection (ANPSFS) method is used to select the input features. A search and optimisation algorithm is used to select the network architecture. Data for the Bow River in Calgary, Canada, are used to train and test the network.
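As a stand-in illustration (not the possibility-theory training used in the paper), the sketch below shows a network that outputs a triangular fuzzy number (lower, mode, upper) for peak flow; here it is trained with a quantile-style loss purely for demonstration, and the inputs and data are hypothetical.

```python
# Minimal sketch (assumptions): network with a triangular fuzzy output
# (lower, mode, upper), ordering enforced by softplus increments.
import torch
import torch.nn as nn

class FuzzyOutputNet(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 3))

    def forward(self, x):
        raw = self.body(x)
        mode = raw[:, 0]
        lower = mode - nn.functional.softplus(raw[:, 1])   # lower <= mode
        upper = mode + nn.functional.softplus(raw[:, 2])   # upper >= mode
        return lower, mode, upper

def pinball(pred, target, q):
    """Quantile (pinball) loss, used here as an illustrative training signal."""
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

# Hypothetical inputs: rainfall, antecedent flow, temperature, snowmelt index.
X = torch.randn(256, 4)
y = X[:, 0] * 2 + X[:, 1] + 0.3 * torch.randn(256)  # dummy peak-flow target

model = FuzzyOutputNet(4)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    lo, mode, hi = model(X)
    loss = pinball(lo, y, 0.1) + pinball(mode, y, 0.5) + pinball(hi, y, 0.9)
    loss.backward()
    opt.step()
```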

