Machine learning to reduce cycle time for time-lapse seismic data assimilation into reservoir management

Interpretation ◽  
2019 ◽  
Vol 7 (3) ◽  
pp. SE123-SE130
Author(s):  
Yang Xue ◽  
Mariela Araujo ◽  
Jorge Lopez ◽  
Kanglin Wang ◽  
Gautam Kumar

Time-lapse (4D) seismic is widely deployed in offshore operations to monitor improved oil recovery methods including water flooding, yet its value for enhanced well and reservoir management is not fully realized due to the long cycle times required for quantitative 4D seismic data assimilation into dynamic reservoir models. To shorten the cycle, we have designed a simple inversion workflow to estimate reservoir property changes directly from 4D attribute maps using machine-learning (ML) methods. We generated tens of thousands of training samples by Monte Carlo sampling from the rock-physics model within reasonable ranges of the relevant parameters. Then, we applied ML methods to build the relationship between the reservoir property changes and the 4D attributes, and we used the learnings to estimate the reservoir property changes given the 4D attribute maps. The estimated reservoir property changes (e.g., water saturation changes) can be used to analyze injection efficiency, update dynamic reservoir models, and support reservoir management decisions. We can reduce the turnaround time from months to days, allowing early engagements with reservoir engineers to enhance integration. This accelerated data assimilation removes a deterrent for the acquisition of frequent 4D surveys.
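
To illustrate the general idea, here is a minimal sketch of the workflow described above: Monte Carlo sampling of reservoir property changes, a rock-physics forward model that maps them to a 4D attribute, and an ML regressor trained to invert that relationship. The rock-physics proxy, parameter ranges, and the choice of a random forest are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: Monte Carlo training-sample generation from a toy rock-physics
# proxy, followed by an ML regression from 4D attributes to saturation change.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Monte Carlo sampling of reservoir parameters within plausible (assumed) ranges.
porosity = rng.uniform(0.15, 0.30, n)
sw_base = rng.uniform(0.20, 0.60, n)        # baseline water saturation
dsw = rng.uniform(0.00, 0.40, n)            # water-saturation change (target)
dp = rng.uniform(-2.0, 2.0, n)              # pore-pressure change, MPa

# Toy rock-physics forward model: property changes -> 4D attribute (e.g., relative
# impedance change), with measurement noise added.
d_attr = 0.08 * dsw * (1.0 + porosity) - 0.01 * dp + rng.normal(0.0, 0.002, n)

X = np.column_stack([d_attr, porosity, sw_base])   # attribute plus prior properties
X_tr, X_te, y_tr, y_te = train_test_split(X, dsw, test_size=0.2, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))

# In application, the trained model maps observed 4D attribute maps (plus prior
# property maps) directly to estimated saturation-change maps.
```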


SPE Journal ◽  
2021 ◽  
Vol 26 (02) ◽  
pp. 1011-1031
Author(s):  
Gilson Moura Silva Neto ◽  
Ricardo Vasconcellos Soares ◽  
Geir Evensen ◽  
Alessandra Davolio ◽  
Denis José Schiozer

Time-lapse-seismic-data assimilation has been drawing the reservoir-engineering community's attention over the past few years. One of the advantages of including this kind of data to improve reservoir-flow models is that it provides information complementary to the wells' production data. Ensemble-based methods are some of the standard tools used to calibrate reservoir models using time-lapse seismic data. One of the drawbacks of assimilating time-lapse seismic data is the size of the data sets, especially for large reservoir models. This situation leads to high-dimensional problems that demand significant computational resources to process and store the matrices when conventional, straightforward methods are used. Another known issue associated with ensemble-based methods is the limited ensemble size, which causes spurious correlations between the data and the parameters and limits the degrees of freedom. In this work, we propose a data-assimilation scheme using an efficient implementation of the subspace ensemble randomized maximum likelihood (SEnRML) method with local analysis. This method reduces the computational requirements for assimilating large data sets because the number of operations scales linearly with the number of observed data points. Furthermore, by implementing it with local analysis, we reduce the memory requirements at each update step and mitigate the effects of the limited ensemble size. We test two local analysis approaches: one distance-based and one correlation-based. We apply these implementations to two synthetic time-lapse-seismic-data-assimilation cases: one 2D example and one field-scale application that mimics some real-field challenges. We compare the results with reference solutions and with the well-known ensemble smoother with multiple data assimilation (ES-MDA) using Kalman-gain distance-based localization. The results show that our method can efficiently assimilate time-lapse seismic data, leading to updated models comparable with those from other straightforward methods. The correlation-based local analysis approach provided results similar to the distance-based approach, with the advantage that the former can be applied to data and parameters that do not have specific spatial positions.
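
As context for the ensemble update being accelerated here, the sketch below shows a single ensemble-smoother update step with a distance-based (Gaspari-Cohn) Kalman-gain localization, i.e., the kind of ES-MDA-style scheme the authors compare against. It is not the SEnRML-with-local-analysis implementation itself, and the array shapes and taper choice are assumptions.

```python
# Sketch of one ensemble-smoother update with a distance-based (Gaspari-Cohn)
# Kalman-gain localization; an ES-MDA-style baseline, not SEnRML.
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn taper for normalized distance r = distance / localization radius."""
    r = np.abs(np.asarray(r, dtype=float))
    t = np.zeros_like(r)
    a = r <= 1.0
    b = (r > 1.0) & (r <= 2.0)
    t[a] = 1 - (5/3)*r[a]**2 + (5/8)*r[a]**3 + 0.5*r[a]**4 - 0.25*r[a]**5
    t[b] = (4 - 5*r[b] + (5/3)*r[b]**2 + (5/8)*r[b]**3
            - 0.5*r[b]**4 + (1/12)*r[b]**5 - 2.0/(3*r[b]))
    return t

def localized_update(M, D, d_obs, Cd, dist, radius, alpha=1.0, rng=None):
    """One update step.
    M: (n_param, n_ens) parameter ensemble; D: (n_data, n_ens) predicted data;
    d_obs: (n_data,) observations; Cd: (n_data, n_data) error covariance;
    dist: (n_param, n_data) parameter-to-data distances.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_ens = M.shape[1]
    dM = M - M.mean(axis=1, keepdims=True)
    dD = D - D.mean(axis=1, keepdims=True)
    Cmd = dM @ dD.T / (n_ens - 1)                    # parameter-data cross covariance
    Cdd = dD @ dD.T / (n_ens - 1)                    # data auto covariance
    K = Cmd @ np.linalg.inv(Cdd + alpha * Cd)        # Kalman gain
    K *= gaspari_cohn(dist / radius)                 # distance-based localization
    noise = rng.multivariate_normal(np.zeros(len(d_obs)), alpha * Cd, n_ens).T
    return M + K @ (d_obs[:, None] + noise - D)      # updated parameter ensemble
```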


2021 ◽  
pp. 1-59
Author(s):  
Marwa Hussein ◽  
Robert R. Stewart ◽  
Deborah Sacrey ◽  
David H. Johnston ◽  
Jonny Wu

Time-lapse (4D) seismic analysis plays a vital role in reservoir management and reservoir simulation model updates. However, 4D seismic data are subject to interference and tuning effects. Being able to resolve and monitor thin reservoirs of different quality can aid in optimizing infill drilling or locating bypassed hydrocarbons. Using 4D seismic data from the Maui field in the offshore Taranaki basin of New Zealand, we generate typical seismic attributes sensitive to reservoir thickness and rock properties. We find that spectral instantaneous attributes extracted from time-lapse seismic data illuminate more detailed reservoir features than the same attributes computed on broadband seismic data. We develop an unsupervised machine learning workflow that combines eight spectral instantaneous seismic attributes into single classification volumes for the baseline and monitor surveys using self-organizing maps (SOM). Changes in the SOM natural clusters between the baseline and monitor surveys suggest production-related changes that are caused primarily by water replacing gas as the reservoir is swept under a strong water drive. The classification volumes also facilitate monitoring water saturation changes within thin reservoirs (ranging from very good to poor quality) as well as illuminating thin baffles. Thus, these SOM classification volumes show internal reservoir heterogeneity that can be incorporated into reservoir simulation models. Using meaningful SOM clusters, geobodies are generated for the baseline and monitor SOM classifications. The recoverable gas reserves for those geobodies are then computed and compared to production data. The SOM classifications of the Maui 4D seismic data appear to be sensitive to water saturation changes and subtle pressure depletion due to gas production under a strong water drive.
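
A minimal sketch of the attribute-combination step is shown below, using the open-source MiniSom package as a stand-in SOM implementation; the grid size, normalization, and attribute handling are assumptions, not the workflow used in the study.

```python
# Minimal sketch: combine several attribute volumes into one SOM classification
# volume. MiniSom is an open-source stand-in; grid size and scaling are assumptions.
import numpy as np
from minisom import MiniSom

def som_classify(attribute_volumes, grid=(8, 8), n_iter=10_000, seed=0):
    """attribute_volumes: list of arrays with identical shape (nz, ny, nx)."""
    shape = attribute_volumes[0].shape
    # Flatten to a samples-by-attributes matrix and z-score each attribute.
    X = np.column_stack([v.ravel() for v in attribute_volumes]).astype(float)
    X = (X - X.mean(axis=0)) / X.std(axis=0)

    som = MiniSom(grid[0], grid[1], X.shape[1], sigma=1.0,
                  learning_rate=0.5, random_seed=seed)
    som.random_weights_init(X)
    som.train_random(X, n_iter)

    # Map each sample to its best-matching neuron -> one class label per voxel.
    winners = np.array([som.winner(x) for x in X])
    classes = winners[:, 0] * grid[1] + winners[:, 1]
    return classes.reshape(shape)

# Classifying the baseline and monitor attribute sets with the same trained SOM
# lets cluster changes between vintages be read as production-related 4D effects.
```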


Geophysics ◽  
2020 ◽  
Vol 85 (1) ◽  
pp. M15-M31 ◽  
Author(s):  
Mingliang Liu ◽  
Dario Grana

We have developed a time-lapse seismic history matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data assimilation method, the ensemble smoother with multiple data assimilation (ES-MDA), is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements, and to quantify the model uncertainty of the posterior reservoir models. To address the computational and numerical challenges of applying ensemble-based optimization methods to large seismic data volumes, we develop a deep representation learning method, namely a deep convolutional autoencoder. The method reduces the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic data set, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic data sets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient at sparsely representing the seismic data and that the reservoir models can be accurately updated according to the production data and the reparameterized time-lapse seismic data.
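
The sketch below shows what a convolutional autoencoder for compressing a 2D seismic attribute map into a low-dimensional latent vector might look like, written with PyTorch; the layer sizes, latent dimension, and input shape are illustrative assumptions rather than the authors' network.

```python
# Illustrative convolutional autoencoder for reparameterizing a seismic map into
# a low-dimensional latent vector; architecture details are assumptions.
import torch
import torch.nn as nn

class SeismicAutoencoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: (batch, 1, 64, 64) attribute map -> latent vector of size latent_dim.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim),
        )
        # Decoder: latent vector -> reconstructed (batch, 1, 64, 64) map.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)              # low-dimensional hidden features
        return self.decoder(z), z

# In the history-matching loop, the ensemble is conditioned on production data plus
# the encoder output z of the observed time-lapse maps rather than the full volumes.
```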


Author(s):  
Hyunggu Jun ◽  
Yongchae Cho

In an ideal case, the time-lapse differences in 4D seismic data should reflect only the changes in subsurface geology. In practice, however, undesirable discrepancies arise for various reasons. Proper time-lapse processing techniques are therefore required to improve the repeatability of time-lapse seismic data and to capture accurate seismic information for analyzing target changes. In this study, we propose a machine-learning-based time-lapse seismic data processing method that improves repeatability. A training data construction method, a training strategy, and a machine learning network architecture based on a convolutional autoencoder are proposed. Uniform manifold approximation and projection (UMAP) is applied to the training and target data to analyze the features corresponding to each data point. When the feature distribution of the training data differs from that of the target data, we apply data augmentation to enhance the diversity of the training data. The method is verified through numerical experiments using both synthetic and field time-lapse seismic data, and the results are analyzed with several methods, including a comparison of repeatability metrics. From the results of the numerical experiments, we conclude that the proposed convolutional autoencoder can enhance the repeatability of time-lapse seismic data and increase the accuracy of observed variations in seismic signals generated by target changes.
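
A minimal sketch of the feature-distribution check described above is given below, assuming the umap-learn package and an illustrative nearest-neighbor coverage criterion; the actual comparison and augmentation rule used by the authors may differ.

```python
# Sketch: embed training and target features with UMAP, then decide whether data
# augmentation is needed. The coverage rule and thresholds are assumptions.
import numpy as np
import umap
from scipy.spatial import cKDTree

def needs_augmentation(train_feats, target_feats, radius=1.0, coverage=0.8, seed=0):
    """train_feats, target_feats: (n_samples, n_features) arrays of patch features."""
    X = np.vstack([train_feats, target_feats])
    emb = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=seed).fit_transform(X)
    emb_train = emb[:len(train_feats)]
    emb_target = emb[len(train_feats):]

    # Fraction of target samples lying within `radius` of some training sample in
    # the embedded space; if too few are covered, augment the training data.
    dists, _ = cKDTree(emb_train).query(emb_target, k=1)
    return np.mean(dists <= radius) < coverage
```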


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 1052
Author(s):  
Baozhong Wang ◽  
Jyotsna Sharma ◽  
Jianhua Chen ◽  
Patricia Persaud

Estimation of fluid saturation is an important step in dynamic reservoir characterization. Machine learning techniques have been increasingly used in recent years in reservoir saturation prediction workflows. However, most of these studies require input parameters derived from cores, petrophysical logs, or seismic data, which may not always be readily available. Additionally, very few studies incorporate production data, which reflect the dynamic reservoir properties and are typically the most frequently and reliably measured quantities throughout the life of a field. In this research, a random forest ensemble machine learning algorithm is implemented that uses only the field-wide production and injection data (both measured at the surface) as input parameters to predict time-lapse oil saturation profiles at well locations. The algorithm is optimized using feature selection based on feature importance scores and Pearson correlation coefficients, in combination with geophysical domain knowledge. The workflow is demonstrated using actual field data from a structurally complex, heterogeneous, and heavily faulted offshore reservoir. The random forest model captures the trends from three and a half years of historical field production, injection, and simulated saturation data to predict future time-lapse oil saturation profiles at four deviated well locations with an R-squared above 90%, a root mean square error below 6%, and a mean absolute percentage error below 7% in each case.
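
A minimal sketch of this type of workflow with scikit-learn is shown below; the column names, thresholds, and data layout are hypothetical placeholders, not the field data or tuning used in the study.

```python
# Sketch: random-forest saturation prediction from production/injection rates, with
# feature selection by importance score and Pearson correlation. Names are assumed.
import numpy as np
from scipy.stats import pearsonr
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_percentage_error

def train_saturation_model(df, target_col="oil_saturation", importance_cut=0.01,
                           corr_cut=0.1, n_estimators=300, seed=0):
    """df: time-indexed table of surface production/injection rates plus the target."""
    features = [c for c in df.columns if c != target_col]
    X, y = df[features].to_numpy(), df[target_col].to_numpy()

    # First pass: rank candidate features by random-forest importance and by the
    # absolute Pearson correlation with the target; keep those passing both cuts.
    rf = RandomForestRegressor(n_estimators=n_estimators, random_state=seed).fit(X, y)
    keep = [f for f, imp in zip(features, rf.feature_importances_)
            if imp >= importance_cut and abs(pearsonr(df[f], y)[0]) >= corr_cut]

    # Second pass: refit on the selected features (domain knowledge would also be
    # applied here, e.g. retaining injectors known to support the target well).
    model = RandomForestRegressor(n_estimators=n_estimators, random_state=seed)
    model.fit(df[keep].to_numpy(), y)
    return model, keep

def report(model, keep, df_test, target_col="oil_saturation"):
    y_true = df_test[target_col].to_numpy()
    y_pred = model.predict(df_test[keep].to_numpy())
    print("R2  :", r2_score(y_true, y_pred))
    print("RMSE:", np.sqrt(mean_squared_error(y_true, y_pred)))
    print("MAPE:", mean_absolute_percentage_error(y_true, y_pred))
```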

