Model Regeneration Scheme using a Deep Learning Algorithm for Reliable Uncertainty Quantification of Channel Reservoirs

2021 ◽  
pp. 1-20
Author(s):  
Youjun Lee ◽  
Byeongcheol Kang ◽  
Joonyi Kim ◽  
Jonggeun Choe

Reservoir characterization is one of the essential procedures for decision making. However, conventional inversion methods for history matching suffer from several inevitable issues, losing geological information and performing poorly when applied to channel reservoirs. Therefore, we propose a model regeneration scheme for reliable uncertainty quantification of channel reservoirs that does not rely on conventional model inversion methods. The proposed method consists of three parts: feature extraction, model selection, and model generation. In the feature extraction part, drainage area localization and the discrete cosine transform are adopted to extract channel features in the near-wellbore area. In the model selection part, K-means clustering and an ensemble ranking method are utilized to select models with characteristics similar to those of the true reservoir. In the last part, deep convolutional generative adversarial networks (DCGAN) and transfer learning are applied to generate new models similar to the selected ones. After the generation, the model selection process is repeated to choose final models from among the selected and generated models. These final models are used to quantify the uncertainty of a channel reservoir by predicting their future production. Applied to three different channel fields, the proposed scheme provides reliable models for production forecasts with reduced uncertainty. The analyses show that the scheme can effectively characterize channel features and increase the probability that models similar to the true model are present in the ensemble.
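Below is a minimal sketch, not the authors' implementation, of how the feature-extraction and model-selection steps could fit together: low-frequency 2D-DCT coefficients are taken from near-wellbore windows of each model, and K-means groups the models, keeping one representative per cluster. The window size, number of retained coefficients, and cluster count are illustrative assumptions, and the ensemble ranking against observed data is omitted.

```python
# A minimal sketch of the feature-extraction and model-selection steps, assuming
# `models` is a list of 2D facies/permeability maps and `wells` holds (row, col)
# well locations; window size, retained DCT coefficients, and cluster count are
# illustrative choices, not the authors' settings.
import numpy as np
from scipy.fft import dctn
from sklearn.cluster import KMeans

def near_well_dct_features(model, wells, half=8, keep=10):
    """Extract low-frequency 2D-DCT coefficients around each well (drainage-area proxy)."""
    feats = []
    for (r, c) in wells:
        patch = model[r - half:r + half, c - half:c + half]
        coeffs = dctn(patch, norm="ortho")
        feats.append(coeffs[:keep, :keep].ravel())   # keep only low-frequency terms
    return np.concatenate(feats)

def select_models(models, wells, n_clusters=10):
    """Group models by K-means on DCT features and pick one representative per cluster."""
    X = np.array([near_well_dct_features(m, wells) for m in models])
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    reps = []
    for k in range(n_clusters):
        idx = np.where(km.labels_ == k)[0]
        # representative = member closest to the cluster centroid
        d = np.linalg.norm(X[idx] - km.cluster_centers_[k], axis=1)
        reps.append(idx[np.argmin(d)])
    return reps
```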

Energies ◽  
2021 ◽  
Vol 14 (6) ◽  
pp. 1557
Author(s):  
Amine Tadjer ◽  
Reidar B. Bratvold

Carbon capture and storage (CCS) is increasingly seen as a promising strategy to reduce CO2 emissions and meet the Paris Agreement's climate targets. To ensure that CCS is safe and successful, an efficient monitoring program must be implemented to prevent leakage from the storage reservoir and contamination of drinking water in groundwater aquifers. However, the geological properties of geologic CO2 sequestration (GCS) sites are not known with certainty, which makes it difficult to predict the behavior of the injected gases, CO2/brine leakage rates through wellbores, and CO2 plume migration. Significant effort is required to observe how CO2 behaves in reservoirs. A key question is: will the CO2 injection and storage behave as expected, and can we anticipate leakages? History matching of reservoir models can mitigate uncertainty and support a predictive strategy, but it can prove challenging to develop a set of history-matched models that preserve geological realism. A Bayesian evidential learning (BEL) protocol for uncertainty quantification was recently introduced in the literature as an alternative to model-space inversion in the history-matching approach. In this approach, an ensemble of prior geological models is generated by Monte Carlo simulation from the prior distribution, followed by direct forecasting (DF) for joint uncertainty quantification. The goal of this work is to use the prior models to identify a statistical relationship between the data variables and the prediction variables, without any explicit model inversion. The paper also introduces a new DF implementation using an ensemble smoother and shows that the new implementation is computationally more robust than the standard method. The Utsira saline aquifer west of Norway is used to demonstrate BEL's ability to predict CO2 mass and leakage and to improve decision support for CO2 storage projects.
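As a rough illustration of the direct-forecasting idea, the sketch below learns a statistical relation between simulated data variables and prediction variables from a prior ensemble, without model inversion. The PCA reduction and the linear-Gaussian regression are simplifying assumptions; the paper's ensemble-smoother implementation is not reproduced here.

```python
# A minimal sketch of direct forecasting in the BEL spirit: learn a statistical
# relation between data variables (d) and prediction variables (h) from a prior
# ensemble, then condition on the observed data without explicit model inversion.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def direct_forecast(d_prior, h_prior, d_obs, n_components=5, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    pca_d = PCA(n_components).fit(d_prior)          # reduce simulated data variables
    pca_h = PCA(n_components).fit(h_prior)          # reduce prediction variables
    Dc, Hc = pca_d.transform(d_prior), pca_h.transform(h_prior)

    reg = LinearRegression().fit(Dc, Hc)            # linear-Gaussian d -> h relation
    resid = Hc - reg.predict(Dc)
    cov = np.cov(resid.T)                           # residual covariance for sampling

    d_star = pca_d.transform(d_obs.reshape(1, -1))
    mean = reg.predict(d_star)[0]
    Hc_post = rng.multivariate_normal(mean, cov, size=n_samples)
    return pca_h.inverse_transform(Hc_post)         # posterior prediction samples
```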


2015 ◽  
Vol 138 (1) ◽  
Author(s):  
Jihoon Park ◽  
Jeongwoo Jin ◽  
Jonggeun Choe

For decision making, it is crucial to have proper reservoir characterization and uncertainty assessment of reservoir performance. Since initial models constructed with limited data carry high uncertainty, both static and dynamic data must be integrated for reliable future predictions. Uncertainty quantification is computationally demanding because a single history matching requires many iterative forward simulations and optimizations, and multiple realizations of the reservoir model must be computed. In this paper, a methodology is proposed to rapidly quantify uncertainty by combining streamline-based inversion with distance-based clustering. The distance between two reservoir models is defined as the norm of the difference of their generalized travel time (GTT) vectors. Reservoir models are then grouped according to these distances, and representative models are selected from each group. Inversions are performed on the representative models instead of all models. We use generalized travel time inversion (GTTI) to integrate the dynamic data, both to overcome high nonlinearity and to take advantage of its computational efficiency. It is verified that the proposed method groups models with similar dynamic responses and permeability distributions. It also assesses the uncertainty of reservoir performance reliably, while reducing the amount of computation significantly by using the representative models.
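A minimal sketch of the distance-based grouping step is given below, assuming `gtt` holds one generalized travel time vector per model. The MDS embedding and the medoid-style representative choice are illustrative assumptions rather than the paper's exact procedure.

```python
# A minimal sketch of distance-based clustering on GTT vectors: build the pairwise
# distance matrix, embed it with MDS, cluster with K-means, and keep one
# representative (medoid) per group.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def representative_models(gtt, n_groups=10, random_state=0):
    D = squareform(pdist(gtt))                      # ||GTT_i - GTT_j|| distance matrix
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=random_state).fit_transform(D)
    labels = KMeans(n_clusters=n_groups, n_init=10,
                    random_state=random_state).fit_predict(coords)
    reps = []
    for k in range(n_groups):
        idx = np.where(labels == k)[0]
        # representative = model with minimum total distance to its own group
        reps.append(idx[np.argmin(D[np.ix_(idx, idx)].sum(axis=1))])
    return reps
```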


Author(s):  
Kunal Parikh ◽  
Tanvi Makadia ◽  
Harshil Patel

Dengue is unquestionably one of the biggest health concerns in India and many other developing countries, and many people have lost their lives to it. Approximately 390 million dengue infections occur worldwide every year, of which about 500,000 are severe and roughly 25,000 result in death. Many factors influence dengue transmission, such as temperature, humidity, precipitation, and inadequate public health infrastructure. In this paper, we propose a method to perform predictive analytics on a dengue dataset using the k-nearest neighbors (KNN) machine-learning algorithm. This analysis can help predict future cases and thereby save many lives.
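A minimal sketch of such a KNN workflow with scikit-learn is shown below; the file name, feature columns, label, and k value are hypothetical placeholders, not the authors' dataset.

```python
# A minimal sketch of KNN-based predictive analytics on a dengue dataset, assuming
# weather features in X and an outbreak/case label in y; all names are illustrative.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("dengue.csv")                       # hypothetical file name
X = df[["temperature", "humidity", "precipitation"]]
y = df["outbreak"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
scaler = StandardScaler().fit(X_tr)                  # KNN is distance-based: scale features

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(scaler.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, knn.predict(scaler.transform(X_te))))
```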


Processes ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 919
Author(s):  
Wanlu Jiang ◽  
Chenyang Wang ◽  
Jiayun Zou ◽  
Shuqing Zhang

The field of mechanical fault diagnosis has entered the era of “big data”. However, existing diagnostic algorithms, which rely on manual feature extraction and expert knowledge, have poor feature-extraction ability and lack adaptability on massive data. In the fault diagnosis of rotating machinery, because equipment faults occur only occasionally, the proportion of fault samples is small, the classes are imbalanced, and available data are scarce, which leads to low accuracy of intelligent diagnosis models trained to identify the equipment state. To solve these problems, an end-to-end diagnosis model is first proposed: an intelligent fault diagnosis method based on a one-dimensional convolutional neural network (1D-CNN), in which the raw vibration signal is fed directly into the model for identification. Then, by combining the convolutional neural network with a generative adversarial network, a data-augmentation method based on a one-dimensional deep convolutional generative adversarial network (1D-DCGAN) is constructed to synthesize fault samples for the under-represented classes and build a balanced data set. To address the difficulty of optimizing the network, the Wasserstein distance and a gradient penalty are introduced. Tests on a bearing database and a hydraulic pump show that the one-dimensional convolution operation has strong feature-extraction ability for vibration signals. The proposed method is highly accurate for fault diagnosis of both kinds of equipment, and high-quality augmentation of the original data is achieved.
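The sketch below shows what an end-to-end 1D-CNN classifier for raw vibration segments might look like in PyTorch, in the spirit of the paper's first stage; the layer sizes, segment length, and class count are illustrative assumptions, and the 1D-DCGAN augmentation stage is not shown.

```python
# A minimal sketch of an end-to-end 1D-CNN for raw vibration segments; architecture
# details (wide first kernel, segment length 2048, 10 classes) are assumptions.
import torch
import torch.nn as nn

class Fault1DCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            # wide first kernel to capture low-frequency structure in the raw signal
            nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28), nn.BatchNorm1d(16), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.BatchNorm1d(32), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.BatchNorm1d(64), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4),
        )
        self.classifier = nn.Linear(64 * 4, n_classes)

    def forward(self, x):                 # x: (batch, 1, segment_len) raw vibration signal
        z = self.features(x)
        return self.classifier(z.flatten(1))

# quick shape check with dummy data
model = Fault1DCNN()
logits = model(torch.randn(8, 1, 2048))   # -> (8, 10)
```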


2018 ◽  
Vol 10 (7) ◽  
pp. 1123 ◽  
Author(s):  
Yuhang Zhang ◽  
Hao Sun ◽  
Jiawei Zuo ◽  
Hongqi Wang ◽  
Guangluan Xu ◽  
...  

Aircraft type recognition plays an important role in remote sensing image interpretation. Traditional methods suffer from poor generalization performance, while deep learning methods require large amounts of data with type labels, which are expensive and time-consuming to obtain. To overcome these problems, in this paper we propose an aircraft type recognition framework based on conditional generative adversarial networks (GANs). First, we design a new method to precisely detect aircraft keypoints, which are used to generate aircraft masks and locate the positions of the aircraft. Second, a conditional GAN with a region-of-interest (ROI)-weighted loss function is trained on unlabeled aircraft images and their corresponding masks. Third, an ROI feature extraction method is carefully designed to extract multi-scale features from the GAN in the aircraft regions. After that, a linear support vector machine (SVM) classifier is adopted to classify each sample using its features. Benefiting from the GAN, we can learn features expressive enough to represent aircraft from a large unlabeled dataset. Additionally, the ROI-weighted loss function and the ROI feature extraction method make the features relate to the aircraft rather than the background, which improves the quality of the features and increases the recognition accuracy significantly. Thorough experiments were conducted on a challenging dataset, and the results prove the effectiveness of the proposed aircraft type recognition framework.
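As an illustration of the ROI-weighting idea, the sketch below applies a larger weight to reconstruction error inside the aircraft mask than outside it; the L1 form and the weighting factor are assumptions, not the paper's exact loss.

```python
# A minimal sketch of an ROI-weighted reconstruction loss for a conditional GAN:
# pixels inside the aircraft mask are weighted more heavily than the background.
import torch

def roi_weighted_l1(fake, real, mask, roi_weight=10.0):
    """fake, real: (B, C, H, W) images; mask: (B, 1, H, W) binary aircraft masks."""
    weight = 1.0 + (roi_weight - 1.0) * mask          # 1 outside ROI, roi_weight inside
    return (weight * (fake - real).abs()).mean()

# usage inside the generator update (adversarial term omitted for brevity)
# g_loss = adv_loss + lambda_rec * roi_weighted_l1(fake_imgs, real_imgs, masks)
```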


2010 ◽  
Vol 22 (12) ◽  
pp. 2979-3035 ◽  
Author(s):  
Stefan Klampfl ◽  
Wolfgang Maass

Neurons in the brain are able to detect and discriminate salient spatiotemporal patterns in the firing activity of presynaptic neurons. It remains an open question how they can learn to achieve this, especially without the help of a supervisor. We show that a well-known unsupervised learning algorithm for linear neurons, slow feature analysis (SFA), is able to acquire the discrimination capability of one of the best algorithms for supervised linear discrimination learning, the Fisher linear discriminant (FLD), given suitable input statistics. We demonstrate the power of this principle by showing that it enables readout neurons from simulated cortical microcircuits to learn, without any supervision, to discriminate between spoken digits and to detect repeated firing patterns embedded in a stream of noise spike trains with the same firing statistics. Both these computer simulations and our theoretical analysis show that slow feature extraction enables neurons to extract and collect information that is spread out over a trajectory of firing states lasting several hundred milliseconds. In addition, it enables neurons to learn without supervision to keep track of time (relative to a stimulus onset or the initiation of a motor response). Hence, these results elucidate how the brain could compute with trajectories of firing states rather than only with fixed-point attractors. They also provide a theoretical basis for understanding recent experimental results on the emergence of view- and position-invariant classification of visual objects in inferior temporal cortex.
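For readers unfamiliar with SFA, the sketch below implements the linear case: the input is whitened, and the projections whose temporal derivatives have the smallest variance are kept as the slow features. The toy signal at the end is purely illustrative.

```python
# A minimal sketch of linear slow feature analysis (SFA): whiten the input, then
# find unit-variance projections whose temporal derivatives vary the least.
import numpy as np

def linear_sfa(x, n_slow=2):
    """x: (time, features) signal; returns the slow features and projection weights."""
    x = x - x.mean(axis=0)
    # whiten: decorrelate and normalize variance
    evals, evecs = np.linalg.eigh(np.cov(x.T))
    W_white = evecs / np.sqrt(evals)
    z = x @ W_white
    # minimize the variance of the time derivative of the whitened signal
    d_evals, d_evecs = np.linalg.eigh(np.cov(np.diff(z, axis=0).T))
    W_slow = d_evecs[:, :n_slow]                      # slowest = smallest eigenvalues
    return z @ W_slow, W_white @ W_slow

# example: recover a slowly varying sine embedded in a fast, mixed signal
t = np.linspace(0, 2 * np.pi, 1000)
slow, fast = np.sin(t), np.sin(25 * t)
mixed = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])
features, weights = linear_sfa(mixed, n_slow=1)
```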

