An Efficient Deep Learning-Based Workflow Incorporating a Reduced Physics Model for Subsurface Imaging in Unconventional Reservoirs

2021
Author(s): Tsubasa Onishi, Hongquan Chen, Akhil Datta-Gupta, Srikanta Mishra

Abstract: We present a novel deep learning-based workflow, incorporating a reduced physics model, that can efficiently visualize well drainage volume and pressure front propagation in unconventional reservoirs in near real-time. The visualizations can be readily used for qualitative and quantitative characterization and forecasting of unconventional reservoirs. Our aim is to develop an efficient workflow that allows us to ‘see’ within the subsurface given measured data, such as production data. The simplest way to achieve this goal would be to train a deep learning-based regression model whose input consists of measured data and whose output is a subsurface image, such as a pressure field. However, the high dimensionality of the output, which corresponds to the spatio-temporal steps, makes such training inefficient. To address this challenge, an autoencoder network is applied to discover lower-dimensional latent variables that represent the high-dimensional output images. In our approach, the regression model is trained to predict latent variables instead of directly constructing an image. In the prediction step, the trained regression model first predicts the latent variables given measured data; the latent variables are then used as inputs to the trained decoder to generate a subsurface image. In addition, a fast-marching-method (FMM)-based rapid simulation workflow, which transforms the original 2D or 3D problem into a 1D problem, is used in place of full-physics simulation to efficiently generate datasets for training. The capability of FMM-based rapid simulation allows us to generate sufficient datasets within realistic simulation times, even for field-scale applications. We first demonstrate the proposed approach using a simple illustrative example. Next, the approach is applied to a field-scale reservoir model built from the publicly available data on Hydraulic Fracturing Test Site-I (HFTS-I), which is sufficiently complex to demonstrate the power and efficacy of the approach. We further demonstrate the utility of the approach in accounting for subsurface uncertainty. Our approach, for the first time, allows data-driven visualization of unconventional well drainage volume in 3D. The novelty of our approach is the framework that combines the strengths of deep learning-based models and FMM-based rapid simulation. The workflow has the flexibility to incorporate various spatial and temporal data types.
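
As a concrete illustration of the workflow's central idea, the sketch below pairs an autoencoder with a regressor that predicts latent codes from measured data; at prediction time the decoder turns the regressed latents into an image. This is a minimal PyTorch sketch under assumed toy dimensions; all names, shapes, and layer sizes are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

N_MEAS, N_LATENT, IMG_PIX = 60, 16, 64 * 64   # measured-data, latent, image dims (toy)

encoder = nn.Sequential(nn.Linear(IMG_PIX, 256), nn.ReLU(), nn.Linear(256, N_LATENT))
decoder = nn.Sequential(nn.Linear(N_LATENT, 256), nn.ReLU(), nn.Linear(256, IMG_PIX))
regressor = nn.Sequential(nn.Linear(N_MEAS, 64), nn.ReLU(), nn.Linear(64, N_LATENT))

params = [*encoder.parameters(), *decoder.parameters(), *regressor.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(images, measurements):
    """Jointly learn reconstruction and the measured-data -> latent mapping."""
    z = encoder(images)                                  # latent codes of pressure fields
    recon_loss = nn.functional.mse_loss(decoder(z), images)
    latent_loss = nn.functional.mse_loss(regressor(measurements), z.detach())
    loss = recon_loss + latent_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def predict_image(measurements):
    """Prediction step: measured data -> latents -> decoded subsurface image."""
    with torch.no_grad():
        return decoder(regressor(measurements)).reshape(-1, 64, 64)
```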

SPE Journal, 2015, Vol 20 (04), pp. 831-841
Author(s): Jiang Xie, Changdong Yang, Neha Gupta, Michael J. King, Akhil Datta-Gupta

Summary: The concept of depth of investigation is fundamental to well-test analysis. Much of current well-test analysis relies on solutions based on homogeneous or layered reservoirs. Well-test analysis in spatially heterogeneous reservoirs is complicated by the fact that the Green's function for heterogeneous reservoirs is difficult to obtain analytically. In this paper, we introduce a novel approach for computing the depth of investigation and the pressure response in spatially heterogeneous and fractured unconventional reservoirs. In our approach, we first present an asymptotic solution of the diffusion equation in heterogeneous reservoirs. Retaining the highest-frequency terms in the solution, we obtain two equations: the Eikonal equation, which governs the propagation of a pressure “front,” and the transport equation, which describes the pressure amplitude as a function of space and time. The Eikonal equation generalizes the depth of investigation to heterogeneous reservoirs and provides a convenient way to calculate drainage volume. From drainage-volume calculations, we estimate a generalized pressure solution on the basis of a geometric approximation of the drainage volume. A major advantage of our approach is that the Eikonal equation can be solved very efficiently with a class of front-tracking methods called fast-marching methods. Thus, one can obtain the transient-pressure response in multimillion-cell geologic models in seconds without resorting to reservoir simulators. We first visualize the depth of investigation and the pressure solution for a homogeneous unconventional reservoir with multistage transverse fractures, and identify flow regimes from a pressure-diagnostic plot. We then apply the technique to a heterogeneous unconventional reservoir to predict the depth of investigation and pressure behavior. The computation is orders of magnitude faster than conventional numerical simulation and provides a foundation for future work in reservoir characterization and field-development optimization.
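
For reference, the leading-order asymptotic result can be stated compactly. This uses the standard notation of the diffusive-time-of-flight literature; the symbols are conventional and may differ from the paper's own typesetting.

```latex
% Leading-order (high-frequency) asymptotics of the diffusivity equation.
\[
  \sqrt{\alpha(\mathbf{x})}\;\bigl\lvert \nabla \tau(\mathbf{x}) \bigr\rvert = 1,
  \qquad
  \alpha(\mathbf{x}) = \frac{k(\mathbf{x})}{\phi(\mathbf{x})\,\mu\,c_t}
\]
% tau is the diffusive time of flight governing pressure-front propagation;
% alpha is the diffusivity. In a homogeneous medium tau = r / sqrt(alpha),
% recovering the familiar radius-of-investigation scaling r ~ sqrt(alpha * t).
```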


SPE Journal, 2016, Vol 21 (06), pp. 2276-2288
Author(s): Yusuke Fujita, Akhil Datta-Gupta, Michael J. King

Summary: Modeling of fluid flow in unconventional reservoirs requires accurate characterization of complex flow mechanisms because of the interactions between reservoir rock, microfractures, and hydraulic fractures. The pore-size distribution in shale and tight-sand reservoirs typically ranges from nanometers to micrometers, resulting in ultralow permeabilities. In such extremely low-permeability reservoirs, desorption and diffusive processes play important roles in addition to heterogeneity-driven convective flows. For modeling shale and tight oil and gas reservoirs, the well-drainage volume can be computed efficiently with a fast-marching method (FMM) by introducing the concept of “diffusive time of flight” (DTOF). Our proposed simulation approach consists of two decoupled steps: drainage-volume calculation and numerical simulation with the DTOF as a spatial coordinate. We first calculate the reservoir drainage volume and the DTOF with the FMM, and then conduct the numerical simulation along the 1D DTOF coordinate. The approach is analogous to streamline modeling, whereby a multidimensional simulation is decoupled into a series of 1D simulations, resulting in substantial savings in computation time for high-resolution simulation. However, instead of a “convective time of flight” (CTOF), a DTOF is introduced to model the pressure-front propagation. For modeling the physical processes, we propose a triple-continuum formulation whereby the reservoir is divided into three domains: microscale pores (hydraulic fractures and microfractures), nanoscale pores (nanoporous networks), and organic matter. The hydraulic fractures/microfractures primarily contribute to well production and are affected by rock compaction. The nanoporous networks contain adsorbed gas molecules, and gas flows into the fractures by convection and Knudsen diffusion. The organic matter acts as the source of gas. Our simulation approach enables high-resolution flow characterization of unconventional reservoirs because of its efficiency and versatility. We demonstrate the power and utility of our approach with synthetic and field examples.
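
A minimal numerical sketch of the FMM step is shown below, using the third-party scikit-fmm package (an assumed dependency, not the authors' code) to march a front whose speed is the square root of the diffusivity, per the Eikonal form above. Grid, units, and property values are toy placeholders.

```python
import numpy as np
import skfmm  # scikit-fmm: pip install scikit-fmm (assumed dependency)

nx, ny = 200, 100
perm = np.random.lognormal(mean=0.0, sigma=1.0, size=(nx, ny))  # toy permeability field
poro, mu, ct = 0.1, 1.0, 1e-5
alpha = perm / (poro * mu * ct)          # diffusivity (consistent units assumed)

# The zero contour of phi0 marks the source (well cell) for the march.
phi0 = np.ones((nx, ny))
phi0[nx // 2, ny // 2] = -1.0

# Front speed sqrt(alpha) yields a diffusive-time-of-flight-like field tau.
tau = skfmm.travel_time(phi0, np.sqrt(alpha), dx=10.0)

# Thresholding tau at increasing values traces the growth of the drainage volume.
print("cells inside an example front:", int((tau <= np.median(tau)).sum()))
```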


2020, Vol 15
Author(s): Deeksha Saxena, Mohammed Haris Siddiqui, Rajnish Kumar

Background: Deep learning (DL) is an artificial-neural-network-driven framework with multiple levels of representation, in which non-linear modules are combined so that the levels of representation become progressively more abstract. Though DL is used widely in almost every field, it has brought a particular breakthrough in the biological sciences, where it is used in disease diagnosis and clinical trials. DL can be combined with machine learning, though the two are also used independently. DL is often preferable to classical machine learning because it does not require an intermediate feature-extraction step and works well with larger datasets. DL is currently one of the most discussed approaches among scientists and researchers for diagnosing and solving various biological problems. However, DL models still require refinement and experimental validation to become more productive. Objective: To review the available DL models and datasets that are used in disease diagnosis. Methods: Available DL models and their applications in disease diagnosis were reviewed, discussed, and tabulated. Types of datasets and some of the popular disease-related data sources for DL were highlighted. Results: We analyzed the frequently used DL methods and data types, and discussed some of the recent DL models used for solving different biological problems. Conclusion: The review presents useful insights into DL methods, data types, and the selection of DL models for disease diagnosis.


Sensors, 2021, Vol 21 (3), p. 884
Author(s): Chia-Ming Tsai, Yi-Horng Lai, Yung-Da Sun, Yu-Jen Chung, Jau-Woei Perng

Numerous sensors can obtain images or point-cloud data on land; in water, however, the rapid attenuation of electromagnetic signals and the lack of light restrict sensing. This study expands the use of two- and three-dimensional detection technologies to the underwater detection of abandoned tires. A three-dimensional acoustic sensor, the BV5000, is used to collect underwater point-cloud data. Pre-processing steps are proposed to remove noise and the seabed from the raw data. The point clouds are then processed to obtain two data types: a 2D image and a 3D point cloud. Deep learning methods matched to each dimensionality are used to train the models. In the two-dimensional method, the point cloud is transformed into a bird's-eye-view image, and the Faster R-CNN and YOLOv3 network architectures are used to detect tires. In the three-dimensional method, the point cloud associated with a tire is cut out from the raw data and used as training data, and the PointNet and PointConv network architectures are then used for tire classification. The results show that both approaches provide good accuracy.
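
The bird's-eye-view step can be sketched as a simple rasterization of the point cloud onto the horizontal plane. The Python sketch below assumes an (N, 3) array of sonar points; grid bounds and resolution are illustrative, not the study's settings.

```python
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 20.0), y_range=(-10.0, 10.0),
                       resolution=0.05):
    """Project 3D points onto the XY plane as a single-channel height image."""
    w = int((x_range[1] - x_range[0]) / resolution)
    h = int((y_range[1] - y_range[0]) / resolution)
    img = np.zeros((h, w), dtype=np.float32)

    # Keep points inside the grid, then bin them to pixel indices.
    m = ((points[:, 0] >= x_range[0]) & (points[:, 0] < x_range[1]) &
         (points[:, 1] >= y_range[0]) & (points[:, 1] < y_range[1]))
    pts = points[m]
    ix = ((pts[:, 0] - x_range[0]) / resolution).astype(int)
    iy = ((pts[:, 1] - y_range[0]) / resolution).astype(int)

    # Encode max height per cell; detectors such as Faster R-CNN or YOLOv3
    # can then treat the result as an ordinary image.
    np.maximum.at(img, (iy, ix), pts[:, 2])
    return img

bev = point_cloud_to_bev(np.random.rand(1000, 3) * [20, 20, 2] - [0, 10, 0])
```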


1998, Vol 37 (12), pp. 363-370
Author(s): Jacob Carstensen, Marinus K. Nielsen, Helle Strandbæk

Three different methodologies for predicting the hydraulic load on the treatment plant one hour ahead are assessed. The three models represent three levels of complexity, ranging from a simple regression model, through an adaptive grey-box model, to a complex hydrological, full dynamical wave model. The simple regression model is estimated as a transfer-function model from rainfall intensity to influent flow; it also provides a model for the base flow. The grey-box model is a state-space model that adapts to the dry-weather flow as well as the rainfall runoff. The full dynamical flow model is a distributed deterministic model with many parameters, which has been calibrated on the basis of extensive measurement campaigns in the sewer system. The three models are compared by their ability to predict the hydraulic load one hour ahead. Five rain events in a test period are used for evaluating the three methods, and the predictions are compared to the flow actually measured at the plant one hour later. The results show that the simple regression model and the adaptive grey-box model, which are identified and estimated on measured data, perform significantly better than the hydrological, full dynamical flow model, which is not identifiable and needs calibration by hand. For frontal rains, no significant difference in prediction performance between the simple regression model and the adaptive grey-box model is observed, owing to the rather uniform distribution of frontal rains. A single convective rain event justified the adaptivity of the grey-box model for non-uniformly distributed rain: its predictions were significantly better than those of the simple regression model for this event. In general, models for model-based predictive control should be kept simple and identifiable from measured data.
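
A minimal version of the simple regression (transfer-function) model can be written as an ordinary-least-squares fit of influent flow on lagged rainfall intensity plus an intercept for the base flow. The sketch below is illustrative; the lag count and variable names are assumptions, not the paper's specification.

```python
import numpy as np

def fit_arx(rain, flow, n_lags=6):
    """flow[t+1] ~ b0 + sum_k a_k * rain[t-k]; returns OLS coefficients."""
    rows, target = [], []
    for t in range(n_lags, len(flow) - 1):
        rows.append(np.r_[1.0, rain[t - n_lags + 1:t + 1][::-1]])  # intercept + lags
        target.append(flow[t + 1])
    X, y = np.asarray(rows), np.asarray(target)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(coef, rain, t, n_lags=6):
    """One-hour-ahead prediction from the fitted coefficients."""
    return coef @ np.r_[1.0, rain[t - n_lags + 1:t + 1][::-1]]
```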


2019
Author(s): Kathleen Gates, Kenneth Bollen, Zachary F. Fisher

Researchers across many domains of psychology increasingly wish to arrive at personalized and generalizable dynamic models of individuals' processes. This is seen in psychophysiological, behavioral, and emotional research paradigms, across a range of data types. Errors of measurement are inherent in most data. For this reason, researchers typically gather multiple indicators of the same latent construct and use methods such as factor analysis to arrive at scores from these indices. In addition to accurately measuring individuals, researchers also need to find the model that best describes the relations among the latent constructs. Most currently available data-driven searches do not include latent variables. We present an approach that builds on the strong foundations of Group Iterative Multiple Model Estimation (GIMME), the idiographic filter, and model-implied instrumental variables with two-stage least squares estimation (MIIV-2SLS) to give researchers the option of including latent variables in their data-driven model searches. The resulting approach is called Latent Variable GIMME (LV-GIMME). GIMME is utilized for the data-driven search for relations that exist among latent variables. Unlike other approaches such as the idiographic filter, LV-GIMME does not require the latent variable model to be constant across individuals; this requirement is loosened by using MIIV-2SLS for estimation. Simulated data studies demonstrate that the method can reliably detect relations among latent constructs, and that latent constructs provide more power to detect effects than directly using observed variables. We present empirical examples drawn from functional MRI and daily self-report data.
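
The 2SLS building block that MIIV-2SLS relies on can be sketched in a few lines: regress the endogenous regressors on the instruments, then regress the outcome on the fitted values. This is a generic sketch, not the LV-GIMME implementation; variable names are ours.

```python
import numpy as np

def two_stage_least_squares(y, X_endog, Z):
    """y: outcome (n,), X_endog: endogenous regressors (n, p), Z: instruments (n, q)."""
    Z1 = np.column_stack([np.ones(len(y)), Z])
    # Stage 1: project the endogenous regressors onto the instrument space.
    gamma, *_ = np.linalg.lstsq(Z1, X_endog, rcond=None)
    X_hat = Z1 @ gamma
    # Stage 2: regress the outcome on the projected regressors.
    X1 = np.column_stack([np.ones(len(y)), X_hat])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta
```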


2021
Author(s): Amrit Kashyap, Sergey Plis, Michael Schirner, Petra Ritter, Shella Keilholz

Brain Network Models (BNMs) are a family of dynamical systems that simulate whole brain activity using neural mass models to represent local activity in different brain regions that influence each other via a global structural network. Researchers have been interested in using these network models to explain whole brain activity measured via resting-state functional magnetic resonance imaging (rs-fMRI). Properties computed over longer periods of simulated data, such as average functional connectivity (FC), have been shown to be comparable with similar properties estimated from measured rs-fMRI data. While this shows that the network models have similar properties over the dynamical landscape, it is unclear how well simulated trajectories compare with empirical trajectories on a timepoint-by-timepoint basis. Previous studies have shown that BNMs are able to produce relevant features at shorter timescales, but an analysis of short-term trajectories, or transient dynamics, defined by synchronized BNM predictions made at the same timescale as the collected data, has not yet been conducted. Relevant neural processes exist in the time frame of measurements and are often used in task fMRI studies to understand neural responses to behavioral cues, so it is important to investigate how much of these dynamics is captured by current brain simulations. To test the nature of BNMs' short-term trajectories against observed data, we utilize a deep learning technique known as a Neural ODE that, based on an observed sequence of fMRI measurements, estimates the initial conditions such that the BNM simulation produces the trajectory closest to the observed data. We test whether the parameterization of a specific, well-studied BNM, the Firing Rate Model, calculated by maximizing its accuracy in reproducing observed short-term trajectories, matches the parameterization that produces the best average long-term metrics. Our results show that such an agreement between the long- and short-timescale parameterizations exists, provided that other factors are also considered, such as the sensitivity of accuracy to changes in structural connectivity. We therefore conclude that there is evidence that, by solving for initial conditions, BNMs can be simulated in a meaningful way when compared against measured data trajectories, although future studies are necessary to establish how BNM activity relates to behavioral variables or to faster neural processes during this time period.
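
The initial-condition fitting described above can be sketched with a differentiable ODE solver. The sketch below uses the third-party torchdiffeq package and a generic firing-rate-style right-hand side; both the dynamics and all shapes are stand-ins, not the paper's parameterization.

```python
import torch
from torchdiffeq import odeint  # pip install torchdiffeq (assumed dependency)

n_regions = 10
W = torch.randn(n_regions, n_regions) * 0.1       # stand-in structural connectivity

def firing_rate_rhs(t, x):
    # dx/dt = -x + W @ tanh(x): a generic firing-rate-style model, not the
    # paper's exact Firing Rate Model parameterization.
    return -x + torch.tanh(x) @ W.T

observed = torch.randn(50, n_regions)             # stand-in fMRI sequence
t = torch.linspace(0.0, 5.0, 50)

# Optimize the initial condition so the simulated trajectory tracks the data.
x0 = torch.zeros(n_regions, requires_grad=True)
opt = torch.optim.Adam([x0], lr=0.05)
for step in range(200):
    sim = odeint(firing_rate_rhs, x0, t)          # differentiable ODE solve
    loss = torch.mean((sim - observed) ** 2)      # timepoint-by-timepoint error
    opt.zero_grad(); loss.backward(); opt.step()
```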


Author(s): Du Chunqi, Shinobu Hasegawa

In computer vision and computer graphics, 3D reconstruction is the process of capturing real objects' shapes and appearances. 3D models can be constructed by active methods, which use high-quality scanning equipment, or by passive methods, which learn from datasets. However, both approaches aim only at constructing the 3D models, without revealing which factors affect their generation. Therefore, the goal of this research is to apply deep learning to automatically generate 3D models and to find the latent variables that affect the reconstruction process. Existing research shows that GANs can be trained on limited data using two networks, a generator and a discriminator: the generator produces synthetic data, and the discriminator distinguishes the generator's output from real data. Existing research also shows that InfoGAN can maximize the mutual information between latent variables and the observation. In our approach, we generate 3D models based on InfoGAN and design two constraints: a shape constraint and a parameter constraint. The shape constraint uses data augmentation to restrict the synthetic data to the models' profiles. The parameter constraint is used to relate the 3D models to the corresponding latent variables. Furthermore, our approach takes on the challenge of building an architecture for generating 3D models on top of InfoGAN. Finally, during generation, we may discover how the latent variables influencing the 3D models contribute to the whole network.
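
The mutual-information term that InfoGAN adds to the standard GAN objective can be sketched with an auxiliary network Q that reconstructs the latent code from the generated sample. The sketch below uses toy vector shapes in place of voxelized 3D models; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

N_NOISE, N_CODE, N_OUT = 62, 2, 128   # code = the interpretable latent variables

G = nn.Sequential(nn.Linear(N_NOISE + N_CODE, 256), nn.ReLU(), nn.Linear(256, N_OUT))
Q = nn.Sequential(nn.Linear(N_OUT, 64), nn.ReLU(), nn.Linear(64, N_CODE))

def info_loss(batch_size=32):
    """Variational lower bound on I(c; G(z, c)) as code-reconstruction error."""
    z = torch.randn(batch_size, N_NOISE)
    c = torch.rand(batch_size, N_CODE) * 2 - 1    # continuous codes in [-1, 1]
    fake = G(torch.cat([z, c], dim=1))            # stand-in for a 3D sample
    # Minimizing this (Gaussian Q with fixed variance) maximizes the MI bound;
    # it is added to the usual generator/discriminator losses during training.
    return nn.functional.mse_loss(Q(fake), c)
```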


2021
Author(s): Aijing Feng

The world population is estimated to increase by 2 billion in the next 30 years, and global crop production needs to double by 2050 to meet the projected demands from a rising population, diet shifts, and increasing biofuel consumption. Improving the production of the major crops has become an increasing concern for the global research community. However, crop development and yield are complex and determined by many factors, such as crop genotype (variety), growing environment (e.g., weather, soil, microclimate, and location), and agronomic management strategy (e.g., seed treatment and placement, planting, fertilizer, and pest management). To develop next-generation, high-efficiency agricultural production systems, we will have to solve the complex equation consisting of the interactions of genotype, environment, and management (GxExM) using emerging technologies. Precision agriculture is a promising practice for increasing profitability and reducing environmental impact through site-specific, accurate measurement of crop, soil, and environment. The success of precision agriculture technology relies heavily on access to accurate, high-resolution spatiotemporal data and reliable prediction models of crop development and yield. Soil texture and weather conditions are important factors related to crop growth and yield. The percentages of sand, clay, and silt in the soil affect the movement of air and water, as well as the water-holding capacity. Weather conditions, including temperature, wind, humidity, and solar irradiance, are determining factors for crop evapotranspiration and water requirements. Compared to crop yield, which is easy to measure and quantify, the effects of soil texture and weather conditions on crop development within a season can be challenging to measure and quantify. Evaluating crop development by visual observation at field scale is time-consuming and subjective. In recent years, sensor-based methods have provided a promising way to measure and quantify crop development. Unmanned aerial vehicles (UAVs) equipped with visual, multispectral, and/or hyperspectral sensors have been used by many researchers as high-throughput data collection tools to monitor crop development efficiently at the desired time and at field scale. In this study, UAV-based remote sensing technologies, combined with soil texture and weather conditions, were used to study crop emergence, crop development, and yield under the effects of varying soil texture and weather conditions in a cotton research field. Soil texture, i.e., sand and clay content, calculated from apparent soil electrical conductivity (EC_a) using a model from a previous study, was used to estimate soil characteristics, including field capacity, wilting point, and total available water. Weather data were obtained from a weather station 400 m from the field. UAV imagery was collected monthly, from crop emergence until just before harvest, using a high-resolution RGB camera, a multispectral camera, and a thermal camera. An automatic method to count emerged crop seedlings, based on image technologies and a deep learning model, was developed for near real-time evaluation of cotton emergence. The effects of soil and elevation on stand count and seedling size were explored, and the effects of soil texture and weather conditions on cotton growth variation were examined using multispectral and thermal images during the crop development growth stages.
The cotton yield variations due to soil texture and weather conditions were estimated using multi-year UAV imagery, soil texture, weather conditions, and deep learning techniques. The results showed that field elevation had a strong impact on cotton emergence (stand count and seedling size) and that clay content had a negative impact on emergence in this study. Monthly growth variations of cotton under different soil textures during the crop development growth stages were significant in both 2018 and 2019. Soil clay content in shallow layers (0-40 cm) affected crop development in the early growth stages (June and July), while clay content in deep layers (40-70 cm) affected the mid-season growth stages (August and September). Thermal images were more efficient at identifying regions of water stress than the water-stress coefficient Ks calculated from soil texture and weather data. Cotton yield for each of the three years (2017-2019) could be predicted using a model trained with data from the other two years, with prediction errors of MAE = 247 kg ha^-1 (8.9%) to 384 kg ha^-1 (13.7%), showing that quantifying yield variability for a future year based on soil texture, weather conditions, and UAV imagery is feasible. Results from this research indicate that the integration of soil and weather information with UAV-based image data is a promising way to understand the effects of soil and weather on crop emergence, crop development, and yield.
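
The leave-one-year-out evaluation described above can be sketched as follows, assuming per-plot feature vectors assembled from soil texture, weather, and UAV-derived indices. scikit-learn's random forest stands in for the study's deep learning models; everything here is illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

def leave_one_year_out(features, yields, years):
    """Train on two years, predict the held-out year; report MAE per year."""
    for held_out in np.unique(years):
        train, test = years != held_out, years == held_out
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(features[train], yields[train])
        mae = mean_absolute_error(yields[test], model.predict(features[test]))
        print(f"{held_out}: MAE = {mae:.0f} kg/ha")
```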


2021
Author(s): James Rooney, Stephan Böse-O’Reilly, Stefan Rakete

Introduction: Unravelling the health effects of multiple pollutants presents scientific and computational challenges. CorEx is an unsupervised learning algorithm that can efficiently discover multiple latent factors in highly multivariate datasets. Here, we used the CorEx algorithm to perform a hypothesis-free analysis of demographic, biochemical, and toxic-metal biomarker data. Methods: Our data included 77 variables from 2,750 adult participants of the National Health and Nutrition Examination Survey (NHANES 2015-2016). We used an implementation of the CorEx algorithm designed to handle the features of bioinformatic datasets, including mixed data types. Models were fit for a range of numbers of latent variables, and the best-fitting model was selected as the one that resulted in the largest Total Correlation (TC) after adjustment for the number of parameters. Successive layers of CorEx were run to discover hierarchical data structure. Results: The CorEx algorithm identified 20 variable clusters at the first layer. For the majority of clusters, the associations between variables were consistent with known associations: for example, gender and the hormones estradiol and testosterone were included in the first cluster; blood organic mercury and blood total mercury were grouped in cluster 4; and cluster 6 included the liver-function enzymes ALT, AST, and GGT. At the second layer, 3 branches were identified, reflecting hierarchical structure. The first branch included numerous physiological biomarkers and several exogenous biomarkers. The second branch included a number of endogenous and exogenous variables previously associated with hypertension, while the third branch included mercury biomarkers and some related endogenous biomarkers. Discussion: We have demonstrated that the CorEx algorithm is a useful tool for hypothesis-free exploration of a biomedical dataset. This work extends previous implementations of CorEx by allowing mixed data types to be modelled, and the results showed that CorEx detected meaningful hierarchical structure. CorEx may facilitate exploration of novel datasets in the future.
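
The model-selection loop described in the Methods can be sketched as below. The API follows the open-source bio_corex implementation (github.com/gregversteeg/bio_corex); the exact signatures, and the BIC-style penalty standing in for "adjustment for the number of parameters," are assumptions rather than the authors' published code.

```python
import numpy as np
import corex as ce  # bio_corex's module (assumed import name)

X = np.random.randn(2750, 77)            # stand-in for the NHANES variable matrix

best_score, best_model = -np.inf, None
for n_hidden in range(5, 31, 5):
    model = ce.Corex(n_hidden=n_hidden, marginal_description='gaussian')
    model.fit(X)
    # Illustrative BIC-style penalty: largest TC adjusted for parameter count.
    score = model.tc - 0.5 * n_hidden * np.log(X.shape[0])
    if score > best_score:
        best_score, best_model = score, model

# A successive layer fit on the learned factor labels exposes hierarchy.
layer2 = ce.Corex(n_hidden=3, marginal_description='discrete')
layer2.fit(best_model.labels)
```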

