Physics-Aware Deep-Learning-Based Proxy Reservoir Simulation Model Equipped with State and Well Output Prediction

2021 ◽  
Author(s):  
Emilio J. R. Coutinho ◽  
Marcelo Dall’Aqua and Eduardo Gildin

Abstract Physics-aware machine learning (ML) techniques have been used to endow data-driven proxy models with features closely related to the ones encountered in nature. Examples span from material balance to conservation laws. Physics-based and data-driven reduced-order models, or a combination thereof (hybrid models), can lead to fast, reliable, and interpretable simulations used in many reservoir management workflows. We built on a recently developed deep-learning-based reduced-order modeling framework by adding a new step related to information on the input-output behavior (e.g., well rates) of the reservoir, and not only on matching the states (e.g., pressure and saturation). A combination of data-driven model-reduction strategies and machine learning (deep neural networks, NN) is used here to achieve state and input-output matching simultaneously. In Jin, Liu and Durlofsky (2020), the authors use an NN architecture in which the evolution of the state variables can be predicted after training an autoencoder coupled with a control-system approach (Embed to Control - E2C) and adding some physical components (loss functions) to the neural network training procedure. In this paper, we extend this idea by adding the simulation model output, e.g., well bottom-hole pressure and well flow rates, as data to be used in the training procedure. Additionally, we added a new neural network to the E2C transition model to handle the connections between state variables and model outputs. By doing this, it is possible to estimate the evolution in time of both the state variables and the output variables simultaneously. The proposed method provides a fast and reliable proxy for the simulation output, which can be applied to a well-control optimization problem. Such a non-intrusive method, like other data-driven models, does not need access to the reservoir simulator's internal structure, so it can be easily applied to commercial reservoir simulators. 
We view this as an analogous step to system identification, whereby mappings related to state dynamics, inputs (controls), and measurements (outputs) are obtained. The proposed method is applied to an oil-water model with heterogeneous permeability, four injectors, and five producer wells. We used 300 sampled well control sets to train the autoencoder and another set to validate the obtained autoencoder parameters.
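The E2C-style transition with an added output head can be illustrated as a minimal linear sketch: a latent state z is advanced under controls u, and an output mapping predicts well responses from the same latent code. All matrices and dimensions below are illustrative stand-ins for the trained networks, not the authors' architecture.

```python
import numpy as np

# Illustrative linearized sketch: an encoder (not shown) maps reservoir states
# to a latent code z; a transition advances z under well controls u; the added
# output head predicts well quantities y (e.g., BHP, rates) from the same z.
rng = np.random.default_rng(0)

latent_dim, control_dim, output_dim = 4, 2, 3
A = rng.standard_normal((latent_dim, latent_dim)) * 0.1   # latent transition
B = rng.standard_normal((latent_dim, control_dim)) * 0.1  # control influence
C = rng.standard_normal((output_dim, latent_dim))         # output head (the new component, linearized)

def step(z, u):
    """Advance the latent state one timestep: z' = A z + B u."""
    return A @ z + B @ u

def well_outputs(z):
    """Predict well outputs from the latent state: y = C z."""
    return C @ z

z = rng.standard_normal(latent_dim)
controls = rng.standard_normal((5, control_dim))  # a 5-step control schedule

trajectory, outputs = [], []
for u in controls:
    z = step(z, u)
    trajectory.append(z)
    outputs.append(well_outputs(z))

trajectory = np.array(trajectory)
outputs = np.array(outputs)
```

Rolling the transition forward yields states and well outputs simultaneously, which is the behavior the extended proxy is trained to reproduce.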

Author(s):  
Emilio Jose Rocha Coutinho ◽  
Marcelo Dall’Aqua ◽  
Eduardo Gildin

Data-driven methods have been revolutionizing the way physicists and engineers handle complex and challenging problems, even when the physics is not fully understood. However, these models very often lack interpretability. Physics-aware machine learning (ML) techniques have been used to endow proxy models with features closely related to the ones encountered in nature; examples span from material balance to conservation laws. In this study, we propose a hybrid approach that incorporates physical constraints (physics-based) and yet is driven by input/output data (data-driven), leading to fast, reliable, and interpretable reservoir simulation models. To this end, we built on a recently developed deep learning–based reduced-order modeling framework by adding a new step related to information on the input–output behavior (e.g., well rates) of the reservoir, and not only on matching the states (e.g., pressure and saturation). A deep neural network (DNN) architecture is used to predict the evolution of the state variables after training an autoencoder coupled with a control system approach (Embed to Control—E2C), along with the addition of some physical components (loss functions) to the neural network training procedure. Here, we extend this idea by adding the simulation model output, for example, well bottom-hole pressure and well flow rates, as data to be used in the training procedure. Additionally, we introduce a new architecture to the E2C transition model by adding a new neural network component to handle the connections between state variables and model outputs. By doing this, it is possible to estimate the evolution in time of both the state and output variables simultaneously. Such a non-intrusive data-driven method does not need access to the reservoir simulator's internal structure, so it can be easily applied to commercial reservoir simulators. 
The proposed method is applied to an oil–water model with heterogeneous permeability, including four injectors and five producer wells. We used 300 sampled well control sets to train the autoencoder and another set to validate the obtained autoencoder parameters. We show our proxy’s accuracy and robustness by running two different neural network architectures (propositions 2 and 3), and we compare our results with the original E2C framework developed for reservoir simulation.
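The training procedure described above adds a well-output misfit term to the usual state-reconstruction objective. A minimal sketch of such a combined loss, with illustrative weighting factors (the lambdas are assumptions, not the authors' values):

```python
import numpy as np

# Hedged sketch of a combined training objective: a well-output misfit term is
# added to the state-reconstruction term. Weights are illustrative assumptions.
def combined_loss(x_true, x_pred, y_true, y_pred, lam_state=1.0, lam_output=0.5):
    state_loss = np.mean((x_true - x_pred) ** 2)   # pressure/saturation misfit
    output_loss = np.mean((y_true - y_pred) ** 2)  # BHP / flow-rate misfit
    return lam_state * state_loss + lam_output * output_loss

# Toy example: a 10x10 state field predicted with a uniform 0.1 error, and
# exactly matched well outputs.
x_t = np.ones((10, 10)); x_p = np.ones((10, 10)) * 1.1
y_t = np.array([1.0, 2.0]); y_p = np.array([1.0, 2.0])
loss = combined_loss(x_t, x_p, y_t, y_p)
```

Driving both terms down simultaneously is what forces the latent model to match states and well outputs at once.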


Energies ◽  
2021 ◽  
Vol 14 (16) ◽  
pp. 5150
Author(s):  
Shiza Mushtaq ◽  
M. M. Manjurul Islam ◽  
Muhammad Sohaib

This paper presents a comprehensive review of the developments made over the past decade in fault diagnosis of rolling bearings, a crucial component of rotary machines. A data-driven fault diagnosis framework consists of data acquisition, feature extraction/feature learning, and decision making based on shallow/deep learning algorithms. In this review paper, various signal processing techniques, classical machine learning approaches, and deep learning algorithms used for bearing fault diagnosis are discussed. Moreover, highlights of the public datasets that have been widely used in bearing fault diagnosis experiments, such as Case Western Reserve University (CWRU), Paderborn University Bearing, PRONOSTIA, and Intelligent Maintenance Systems (IMS), are discussed. A comparison is presented of machine learning techniques, such as support vector machines, k-nearest neighbors, and artificial neural networks, and deep learning algorithms, such as the deep convolutional neural network (CNN), auto-encoder-based deep neural network (AE-DNN), deep belief network (DBN), deep recurrent neural network (RNN), and other deep learning methods that have been utilized for diagnosing rotary machine bearing faults.
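The feature-extraction stage of a classical (shallow-learning) diagnosis pipeline can be sketched with common time-domain vibration statistics. The statistics below follow common practice in the bearing-diagnosis literature; the signals are synthetic, not from any of the datasets named above.

```python
import numpy as np

# Sketch of hand-crafted time-domain features for a vibration signal, as used
# before a shallow classifier (SVM, k-NN, ANN). Feature choice is illustrative.
def time_domain_features(signal):
    rms = np.sqrt(np.mean(signal ** 2))
    peak = np.max(np.abs(signal))
    kurtosis = np.mean((signal - signal.mean()) ** 4) / np.var(signal) ** 2
    crest_factor = peak / rms
    return {"rms": rms, "peak": peak, "kurtosis": kurtosis, "crest": crest_factor}

t = np.linspace(0, 1, 1000, endpoint=False)
healthy = np.sin(2 * np.pi * 50 * t)                    # smooth shaft harmonic
faulty = healthy + (np.arange(1000) % 100 == 0) * 5.0   # periodic impact train
f_h = time_domain_features(healthy)
f_f = time_domain_features(faulty)
```

The periodic impacts typical of a localized bearing defect raise the peak and kurtosis values sharply, which is why such statistics are popular discriminative features.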


Minerals ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. 102 ◽  
Author(s):  
Tao Sun ◽  
Hui Li ◽  
Kaixing Wu ◽  
Fei Chen ◽  
Zhong Zhu ◽  
...  

Predictive modelling of mineral prospectivity, a critical but challenging procedure for delineating undiscovered prospective targets in mineral exploration, has been spurred by recent advancements in spatial modelling techniques and machine learning algorithms. In this study, a set of machine learning methods, including random forest (RF), support vector machine (SVM), artificial neural network (ANN), and a deep learning convolutional neural network (CNN), were employed to conduct a data-driven W prospectivity modelling of the southern Jiangxi Province, China. A total of 118 known W occurrences, derived from long-term exploration of this brownfield area, and eight evidential layers of multi-source geoscience information related to W mineralization constituted the input datasets, providing a data-rich foundation for training the machine learning models. The optimal configuration of model parameters was determined by a grid search procedure and validated by 10-fold cross-validation. The resulting predictive models were comprehensively assessed using a confusion matrix, the receiver operating characteristic curve, and the success-rate curve. The modelling results indicate that the CNN model achieves the best classification performance, with an accuracy of 92.38%, followed by the RF model (87.62%). However, the RF model outperforms the other ML models in overall predictive performance and predictive efficiency, characterized by the highest area under the curve and the steepest slope of the success-rate curve, and was therefore chosen as the optimal model for mineral prospectivity mapping in this region. The prospective zones delineated by the prospectivity map occupy 9% of the study area and capture 66.95% of the known mineral occurrences. The geological interpretation of the model reveals that previously neglected Mn anomalies are significant indicators. 
This implies that enrichment of ore-forming material in the host rocks may play an important role in the formation process of wolframite and can represent an innovative exploration criterion for further exploration in this area.
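The model-assessment step described above rests on the confusion matrix. A minimal sketch for a binary prospectivity labelling (1 = prospective, 0 = barren), with toy labels rather than the study's data:

```python
import numpy as np

# Sketch of confusion-matrix-based assessment for a binary classifier, the
# basis for the accuracy figures quoted above. Labels here are toy data.
def confusion_matrix(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))  # prospective, predicted prospective
    tn = np.sum((y_true == 0) & (y_pred == 0))  # barren, predicted barren
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false alarm
    fn = np.sum((y_true == 1) & (y_pred == 0))  # missed occurrence
    return tp, tn, fp, fn

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
tp, tn, fp, fn = confusion_matrix(y_true, y_pred)
accuracy = (tp + tn) / len(y_true)
```

The same counts feed the ROC curve (true- vs. false-positive rates across thresholds) used to compare the RF, SVM, ANN, and CNN models.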


2021 ◽  
Author(s):  
Katherine Cosburn ◽  
Mousumi Roy

<p>The ability to accurately and reliably obtain images of shallow subsurface anomalies within the Earth is important for hazard monitoring at many geologic structures, such as volcanic edifices. In recent years, the use of machine learning as a novel, data-driven approach to addressing complex inverse problems in the geosciences has gained increasing attention, particularly in the field of seismology. Here we present a physics-based machine learning method to integrate disparate geophysical datasets for shallow subsurface imaging. We develop a methodology for imaging static density variations at a volcano with well-characterized topography by pairing synthetic cosmic-ray muon and gravity datasets. We use an artificial neural network (ANN) to interpret noisy synthetic datasets generated using theoretical knowledge of the forward kernels that relate these datasets to density. The deep learning model is trained with synthetic data from a suite of possible anomalous density structures, and its accuracy is determined by comparing against the known forward calculation.</p><p>In essence, we have converted a traditional inversion problem into a pattern recognition tool, where the ANN learns to predict discrete anomalous patterns within a target structure. Given a comprehensive suite of possible patterns and an appropriate amount of added noise to the synthetic data, the ANN can then interpolate the best-fit anomalous pattern given data it has never seen before, such as data obtained from field measurements. The power of this approach is its generality, and our methodology may be applied to a range of observables, such as seismic travel times and electrical conductivity. Our method relies on physics-based forward kernels that connect observations to physical parameters, such as density, temperature, composition, porosity, and saturation. 
The key benefit of a physics-based approach, as opposed to a purely data-driven one, is the ability to obtain accurate predictions in cases where data may be too sparse or too difficult to acquire to reliably train a neural network. We compare our approach to a traditional inversion, where appropriate, and highlight the (dis)advantages of the deep learning model.</p>
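The forward-kernel idea at the heart of this approach can be sketched as a linear operator G mapping a discretized density anomaly to surface gravity observations; synthetic training pairs are then (density pattern, G applied to it plus noise). The geometry, grid, and noise level below are illustrative assumptions, not the authors' setup.

```python
import numpy as np

# Sketch of a physics-based forward kernel: vertical gravity of buried density
# cells observed at surface stations. The ANN in the text is trained on pairs
# (rho, G @ rho + noise); geometry and constants here are illustrative.
G_CONST = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

cells_x = np.linspace(-50.0, 50.0, 5)     # cell centers, buried at 100 m depth
stations = np.linspace(-100.0, 100.0, 7)  # surface station x-positions
depth = 100.0
cell_volume = 20.0 ** 3                   # 20 m cubic cells

# Vertical point-mass kernel for each (station, cell) pair:
# g_z = G * m * depth / r^3, with m = rho * V factored out into rho below.
dx = stations[:, None] - cells_x[None, :]
r2 = dx ** 2 + depth ** 2
G = G_CONST * cell_volume * depth / r2 ** 1.5  # shape (7 stations, 5 cells)

rng = np.random.default_rng(1)
rho = np.zeros(5); rho[2] = 300.0          # one anomalous cell, +300 kg/m^3
gz_clean = G @ rho                          # noiseless forward response
gz_noisy = gz_clean + rng.normal(0.0, 1e-9, size=gz_clean.shape)
```

Because the kernel is known analytically, an arbitrary number of (pattern, observation) training pairs can be generated, which is exactly what makes the synthetic-training strategy viable when field data are scarce.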


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects. One popular technique is machine learning. This study applies deep learning, a branch of machine learning, to detect and evaluate the severity of combined rail defects, namely settlement and dipped joints. The features used to detect and evaluate the severity of the combined defects are axle-box accelerations simulated using D-Track, a verified rolling stock dynamic behavior simulation. A total of 1650 simulations are run to generate numerical data. The deep learning techniques used in the study are the deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For the simplified data, features are extracted from the raw data: the weight of the rolling stock, its speed, and the three peak and three bottom accelerations from each of the two wheels, giving 14 features in total for developing the DNN model. For the raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or data extraction. Hyperparameter tuning using grid search is performed to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches: the first uses one model to detect settlement and dipped joints together, and the second uses two models to detect settlement and dipped joints separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joints. To evaluate the severity of the combined defects, the study applies classification and regression concepts. 
Classification is used to evaluate severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of defects. From the study, the CNN model is suitable for evaluating dipped-joint severity, with an accuracy of 84% and a mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity, with an accuracy of 99% and an MAE of 1.58 mm.
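The 14-feature "simplified data" vector described above (weight, speed, plus three peak and three bottom accelerations per wheel) can be sketched as follows; function and argument names are illustrative.

```python
import numpy as np

# Sketch of building the 14-feature "simplified data" vector: rolling-stock
# weight and speed plus the three largest peaks and three deepest troughs of
# the axle-box acceleration from each of two wheels. Names are illustrative.
def simplified_features(weight, speed, accel_wheel1, accel_wheel2):
    feats = [weight, speed]
    for accel in (accel_wheel1, accel_wheel2):
        peaks = np.sort(accel)[-3:]    # three largest accelerations
        bottoms = np.sort(accel)[:3]   # three smallest (bottom) accelerations
        feats.extend(peaks)
        feats.extend(bottoms)
    return np.array(feats)

rng = np.random.default_rng(2)
a1 = rng.normal(0, 1, 500)   # synthetic axle-box acceleration, wheel 1
a2 = rng.normal(0, 1, 500)   # synthetic axle-box acceleration, wheel 2
x = simplified_features(weight=60.0, speed=80.0, accel_wheel1=a1, accel_wheel2=a2)
```

The CNN and RNN models skip this step and consume the raw time-domain acceleration directly.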


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1694
Author(s):  
Mathew Ashik ◽  
A. Jyothish ◽  
S. Anandaram ◽  
P. Vinod ◽  
Francesco Mercaldo ◽  
...  

Malware is one of the most significant threats in today's computing world, as the number of websites distributing malware is increasing rapidly. Malware analysis and prevention methods are increasingly necessary for computer systems connected to the Internet. Such software exploits a system's vulnerabilities to steal valuable information without the user's knowledge and stealthily send it to remote servers controlled by attackers. Traditionally, anti-malware products use signatures for detecting known malware. However, the signature-based method does not scale to detecting obfuscated and packed malware. Since the cause of a problem is often best understood by studying the structural aspects of a program, such as mnemonics, instruction opcodes, and API calls, in this paper we investigate the relevance of these features in unpacked malicious and benign executables for classifying an executable. Prominent features are extracted using Minimum Redundancy and Maximum Relevance (mRMR) and Analysis of Variance (ANOVA). Experiments were conducted on four datasets using machine learning and deep learning approaches, including Support Vector Machine (SVM), Naïve Bayes, J48, Random Forest (RF), and XGBoost. In addition, we evaluate the performance of a collection of deep neural networks, namely a deep dense network, a one-dimensional convolutional neural network (1D-CNN), and a CNN-LSTM, in classifying unknown samples, and we observed promising results using APIs and system calls. On combining APIs/system calls with static features, a marginal performance improvement was attained compared with models trained only on dynamic features. Moreover, to improve accuracy, we implemented our solution using distinct deep learning methods and demonstrated a fine-tuned deep neural network that achieved F1-scores of 99.1% and 98.48% on Dataset-2 and Dataset-3, respectively.
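ANOVA-based feature selection, one of the two ranking methods named above, scores each feature by the ratio of between-class to within-class variance. A minimal sketch on toy opcode-frequency-like data (the data and dimensions are illustrative):

```python
import numpy as np

# Sketch of one-way ANOVA F-statistic feature ranking, as used to pick
# prominent static features (opcode/API frequencies). Toy data, not a dataset
# from the paper.
def anova_f(X, y):
    classes = np.unique(y)
    grand_mean = X.mean(axis=0)
    # between-class sum of squares, per feature
    ssb = sum((y == c).sum() * (X[y == c].mean(axis=0) - grand_mean) ** 2
              for c in classes)
    # within-class sum of squares, per feature
    ssw = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
              for c in classes)
    df_b, df_w = len(classes) - 1, len(y) - len(classes)
    return (ssb / df_b) / (ssw / df_w)

rng = np.random.default_rng(3)
y = np.array([0] * 20 + [1] * 20)        # 0 = benign, 1 = malware
X = rng.normal(0, 1, (40, 3))            # three candidate features
X[y == 1, 0] += 3.0                      # feature 0 actually separates classes
scores = anova_f(X, y)
```

Features with the highest F scores are retained; mRMR additionally penalizes redundancy between the selected features.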


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3068
Author(s):  
Soumaya Dghim ◽  
Carlos M. Travieso-González ◽  
Radim Burget

The use of image processing tools, machine learning, and deep learning approaches has become very useful and robust in recent years. This paper introduces the detection of Nosema disease, which is considered one of the most economically significant diseases today. This work presents a solution for recognizing and identifying Nosema cells among the other objects present in a microscopic image. Two main strategies are examined. The first strategy uses image processing tools to extract the most valuable information and features from the dataset of microscopic images; machine learning methods, such as an artificial neural network (ANN) and a support vector machine (SVM), are then applied to detect and classify the Nosema disease cells. The second strategy explores deep learning and transfer learning: several approaches were examined, including a convolutional neural network (CNN) classifier and several transfer learning methods (AlexNet, VGG-16, and VGG-19), which were fine-tuned and applied to the object sub-images in order to distinguish Nosema images from the other object images. The best accuracy, 96.25%, was reached by the pre-trained VGG-16 network.
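The first strategy (hand-crafted features plus a simple classifier) can be illustrated with a nearest-centroid rule standing in for the ANN/SVM of the paper; the two features (area, mean intensity) and all data below are illustrative assumptions.

```python
import numpy as np

# Illustration of strategy one in spirit: per-object features from segmented
# microscope images fed to a simple classifier. A nearest-centroid rule stands
# in for the paper's ANN/SVM; features and data are synthetic.
def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, X):
    labels = list(centroids)
    dists = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return np.array([labels[i] for i in np.argmin(dists, axis=0)])

rng = np.random.default_rng(4)
nosema = rng.normal([5.0, 2.0], 0.3, (30, 2))  # (area, intensity) of Nosema cells
other = rng.normal([2.0, 4.0], 0.3, (30, 2))   # other microscopic objects
X = np.vstack([nosema, other])
y = np.array([1] * 30 + [0] * 30)

centroids = fit_centroids(X, y)
pred = predict(centroids, X)
accuracy = (pred == y).mean()
```

The second strategy replaces the hand-crafted feature step entirely, letting the fine-tuned convolutional layers learn the representation from the sub-images.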


Soft Matter ◽  
2020 ◽  
Author(s):  
Ulices Que-Salinas ◽  
Pedro Ezequiel Ramirez-Gonzalez ◽  
Alexis Torres-Carbajal

In this work we implement a machine learning method to predict the thermodynamic state of a liquid using only its microscopic structure provided by the radial distribution function (RDF). The...
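The radial distribution function g(r) that serves as the model input can be computed from a particle configuration by histogramming pair distances and normalizing by the ideal-gas expectation. A minimal sketch, with illustrative system size and density:

```python
import numpy as np

# Sketch of computing the radial distribution function g(r) from particle
# positions in a periodic cubic box, normalized against the ideal-gas shell
# count. System parameters are illustrative.
def radial_distribution(positions, box, n_bins=50, r_max=None):
    n = len(positions)
    r_max = r_max if r_max is not None else box / 2
    # minimum-image pair displacements and distances
    diff = positions[:, None, :] - positions[None, :, :]
    diff -= box * np.round(diff / box)
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    dist = dist[np.triu_indices(n, k=1)]          # unique pairs only
    hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
    rho = n / box ** 3
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = rho * shell * n / 2                    # expected ideal-gas pair count
    r = 0.5 * (edges[:-1] + edges[1:])
    return r, hist / ideal

rng = np.random.default_rng(5)
pos = rng.uniform(0.0, 10.0, (500, 3))  # ideal-gas-like random configuration
r, g = radial_distribution(pos, box=10.0)
```

For a random (ideal-gas-like) configuration g(r) hovers around 1; in a real liquid the peaks and their decay encode the structural information the ML model maps to a thermodynamic state.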


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject's motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features, whose extraction is a difficult task due to the high non-stationarity of EEG signals and a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network (CNN); and (3) a recurrent convolutional neural network (RCNN). Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on the existing 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with the deep learning models than with state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
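The spectrogram input of the CNN model can be sketched as a short-time Fourier transform of a raw EEG channel. Window length, hop, and the test signal below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# Sketch of a spectrogram (time-frequency image) of a raw EEG channel, the
# kind of input a spectrogram-based CNN consumes. Parameters are illustrative.
def spectrogram(signal, fs, win=64, hop=32):
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop: i * hop + win] * window
                       for i in range(n_frames)])
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power per frame
    freqs = np.fft.rfftfreq(win, d=1.0 / fs)
    return freqs, spec.T                              # (freq, time) image

fs = 250.0                            # a typical EEG sampling rate
t = np.arange(0.0, 2.0, 1.0 / fs)
eeg = np.sin(2 * np.pi * 10.0 * t)    # a mu-band-like 10 Hz oscillation
freqs, spec = spectrogram(eeg, fs)
peak_freq = freqs[np.argmax(spec.mean(axis=1))]
```

Motor imagery modulates power in the mu and beta bands, so a 2-D power image like this gives the convolutional layers a natural representation to learn from.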

