Subspace Ensemble Randomized Maximum Likelihood with Local Analysis for Time-Lapse-Seismic-Data Assimilation

SPE Journal, 2021, Vol. 26 (02), pp. 1011-1031
Author(s): Gilson Moura Silva Neto, Ricardo Vasconcellos Soares, Geir Evensen, Alessandra Davolio, Denis José Schiozer

Summary: Time-lapse-seismic-data assimilation has been drawing the reservoir-engineering community's attention over the past few years. One of the advantages of including this kind of data to improve the reservoir-flow models is that it provides complementary information compared with the wells' production data. Ensemble-based methods are some of the standard tools used to calibrate reservoir models using time-lapse seismic data. One of the drawbacks of assimilating time-lapse seismic data involves the large data sets, mainly for large reservoir models. This situation leads to high-dimensional problems that demand significant computational resources to process and store the matrices when using conventional and straightforward methods. Another known issue associated with the ensemble-based methods is the limited ensemble sizes, which cause spurious correlations between the data and the parameters and limit the degrees of freedom. In this work, we propose a data-assimilation scheme using an efficient implementation of the subspace ensemble randomized maximum likelihood (SEnRML) method with local analysis. This method reduces the computational requirements for assimilating large data sets because the number of operations scales linearly with the number of observed data points. Furthermore, by implementing it with local analysis, we reduce the memory requirements at each update step and mitigate the effects of the limited ensemble sizes. We test two local analysis approaches: one distance-based approach and one correlation-based approach. We apply these implementations to two synthetic time-lapse-seismic-data-assimilation cases: one 2D example and one field-scale application that mimics some of the real-field challenges. We compare the results with reference solutions and with the well-known ensemble smoother with multiple data assimilation (ES-MDA) using Kalman gain distance-based localization. The results show that our method can efficiently assimilate time-lapse seismic data, leading to updated models that are comparable with other straightforward methods. The correlation-based local analysis approach provided results similar to the distance-based approach, with the advantage that the former can be applied to data and parameters that do not have specific spatial positions.
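
The distance-based localization idea can be illustrated with a small sketch. The SEnRML iteration itself is more involved; the toy below implements the distance-based Kalman-gain localization used as the comparison baseline (an ES-MDA-style update with a Gaspari-Cohn taper), with a single update step. All array names, the 1D geometry, and the localization radius are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a distance-based localized ensemble update
# (illustrative only; not the SEnRML implementation from the paper).
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn 5th-order taper; r = distance / localization radius."""
    r = np.abs(np.asarray(r, dtype=float))
    t = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    r1, r2 = r[m1], r[m2]
    t[m1] = -0.25*r1**5 + 0.5*r1**4 + 0.625*r1**3 - (5/3)*r1**2 + 1.0
    t[m2] = ((1/12)*r2**5 - 0.5*r2**4 + 0.625*r2**3
             + (5/3)*r2**2 - 5.0*r2 + 4.0 - (2/3)/r2)
    return t

def local_es_update(M, D, d_obs, Cd, x_par, x_obs, radius):
    """One ensemble-smoother-style update with a distance-tapered gain.

    M: (Np, Ne) parameter ensemble; D: (Nd, Ne) predicted data ensemble;
    d_obs: (Nd,) observations; Cd: (Nd,) observation error variances;
    x_par/x_obs: 1D positions of parameters/observations; radius: taper length.
    """
    Ne = M.shape[1]
    A = M - M.mean(axis=1, keepdims=True)           # parameter anomalies
    Y = D - D.mean(axis=1, keepdims=True)           # predicted-data anomalies
    rng = np.random.default_rng(0)
    Dp = d_obs[:, None] + rng.normal(0.0, np.sqrt(Cd)[:, None], D.shape)
    Cyy = (Y @ Y.T) / (Ne - 1) + np.diag(Cd)        # data-space covariance
    K = (A @ Y.T) / (Ne - 1) @ np.linalg.inv(Cyy)   # global Kalman-like gain
    rho = gaspari_cohn(np.abs(x_par[:, None] - x_obs[None, :]) / radius)
    return M + (rho * K) @ (Dp - D)                 # Schur-tapered update

# Toy usage: 50 parameters on a 1D grid, 10 observations, ensemble of 100.
rng = np.random.default_rng(1)
Np, Nd, Ne = 50, 10, 100
M = rng.normal(size=(Np, Ne))
G = rng.normal(size=(Nd, Np)) / np.sqrt(Np)         # fake forward operator
M_new = local_es_update(M, G @ M, rng.normal(size=Nd), np.full(Nd, 0.1),
                        np.linspace(0, 1, Np), np.linspace(0, 1, Nd), 0.2)
```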

Geophysics, 2019, Vol. 85 (1), pp. M15-M31
Author(s): Mingliang Liu, Dario Grana

We have developed a time-lapse seismic history matching framework to assimilate production data and time-lapse seismic data for the prediction of static reservoir models. An iterative data assimilation method, the ensemble smoother with multiple data assimilation (ES-MDA), is adopted to iteratively update an ensemble of reservoir models until their predicted observations match the actual production and seismic measurements, and to quantify the model uncertainty of the posterior reservoir models. To address the computational and numerical challenges of applying ensemble-based optimization methods to large seismic data volumes, we develop a deep representation learning method, namely, the deep convolutional autoencoder. Such a method is used to reduce the data dimensionality by sparsely and approximately representing the seismic data with a set of hidden features that capture the nonlinear and spatial correlations in the data space. Instead of using the entire seismic data set, which would require an extremely large number of models, the ensemble of reservoir models is iteratively updated by conditioning the reservoir realizations on the production data and the low-dimensional hidden features extracted from the seismic measurements. We test our methodology on two synthetic data sets: a simplified 2D reservoir used for method validation and a 3D application with multiple channelized reservoirs. The results indicate that the deep convolutional autoencoder is extremely efficient in sparsely representing the seismic data and that the reservoir models can be accurately updated according to production data and the reparameterized time-lapse seismic data.
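
As a minimal sketch of the dimensionality-reduction step, the snippet below defines a small convolutional autoencoder that compresses a single-channel seismic attribute map into a short latent vector; the latent features of observed and simulated data would then replace the raw seismic volume in the ES-MDA update. The layer sizes, the 64x64 input, and the latent dimension are illustrative assumptions, not the network from the paper.

```python
# Minimal convolutional autoencoder sketch (PyTorch); layer sizes and the
# 64x64 single-channel input are illustrative, not the paper's network.
import torch
import torch.nn as nn

class SeismicAutoencoder(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(                   # 1x64x64 -> latent
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 16x32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x16x16
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(                   # latent -> 1x64x64
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)      # hidden features used in place of raw data
        return self.decoder(z), z

# Toy usage: reconstruct a batch of synthetic attribute maps.
model = SeismicAutoencoder()
maps = torch.randn(8, 1, 64, 64)
recon, z = model(maps)
loss = nn.functional.mse_loss(recon, maps)   # training objective
```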


2005, Vol. 11 (1), pp. 9-17
Author(s): H. Narfi Stefansson, Kevin W. Eliceiri, Charles F. Thomas, Amos Ron, Ron DeVore, ...

The use of multifocal-plane, time-lapse recordings of living specimens has allowed investigators to visualize dynamic events both within ensembles of cells and within individual cells. Recordings of such four-dimensional (4D) data from digital optical-sectioning microscopy produce very large data sets. We describe a wavelet-based data-compression algorithm that capitalizes on the inherent redundancies within multidimensional data to achieve higher compression levels than can be obtained from single images. The algorithm will permit remote users to roam through large 4D data sets at high speed over communication channels of modest bandwidth. This will allow animation to be used as a powerful aid in visualizing dynamic changes in three-dimensional structures.
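
A minimal sketch of the idea, using PyWavelets rather than the authors' own algorithm: transform the full 4D array, keep only the largest coefficients, and reconstruct. The wavelet, decomposition level, and keep fraction below are illustrative assumptions.

```python
# Sketch of multidimensional wavelet compression (PyWavelets); the wavelet,
# level, and keep fraction are illustrative, not the paper's algorithm.
import numpy as np
import pywt

def compress_4d(volume, wavelet="db2", level=2, keep=0.05):
    """Zero out all but the largest `keep` fraction of wavelet coefficients."""
    coeffs = pywt.wavedecn(volume, wavelet, level=level)
    arr, slices = pywt.coeffs_to_array(coeffs)       # flatten coefficient tree
    thresh = np.quantile(np.abs(arr), 1.0 - keep)    # magnitude cutoff
    arr[np.abs(arr) < thresh] = 0.0                  # discard small details
    coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
    return pywt.waverecn(coeffs, wavelet)

# Toy 4D stack: x, y, z, time. Redundancy across all four axes is exploited.
data = np.random.rand(32, 32, 16, 12)
approx = compress_4d(data)
approx = approx[tuple(slice(n) for n in data.shape)]  # trim wavelet padding
print("relative error:", np.linalg.norm(approx - data) / np.linalg.norm(data))
```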


2019, Vol. 7 (3), pp. SE123-SE130
Author(s): Yang Xue, Mariela Araujo, Jorge Lopez, Kanglin Wang, Gautam Kumar

Time-lapse (4D) seismic is widely deployed in offshore operations to monitor improved-oil-recovery methods, including water flooding, yet its value for enhanced well and reservoir management is not fully realized because of the long cycle times required for quantitative 4D seismic data assimilation into dynamic reservoir models. To shorten the cycle, we have designed a simple inversion workflow that estimates reservoir property changes directly from 4D attribute maps using machine-learning (ML) methods. We generated tens of thousands of training samples by Monte Carlo sampling from the rock-physics model within reasonable ranges of the relevant parameters. We then applied ML methods to learn the relationship between the reservoir property changes and the 4D attributes, and used the trained models to estimate the reservoir property changes given the 4D attribute maps. The estimated reservoir property changes (e.g., water-saturation changes) can be used to analyze injection efficiency, update dynamic reservoir models, and support reservoir-management decisions. This workflow reduces the turnaround time from months to days, allowing early engagement with reservoir engineers to enhance integration. The accelerated data assimilation removes a deterrent to the acquisition of frequent 4D surveys.
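
A minimal sketch of the workflow under stated assumptions: a toy linear "rock-physics" forward function stands in for the actual petroelastic model, Monte Carlo sampling generates the training pairs, and a random forest learns the inverse mapping from 4D attributes to property changes. All coefficients and parameter ranges are placeholders.

```python
# Sketch of the train-on-simulated-samples inversion loop; the toy forward
# model and parameter ranges are placeholders, not the authors' rock physics.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

def rock_physics_forward(dsw, dp):
    """Hypothetical forward model: 4D attributes from property changes."""
    amp_change = 0.8 * dsw - 0.3 * dp + rng.normal(0, 0.01, dsw.shape)
    time_shift = 2.0 * dsw + 1.2 * dp + rng.normal(0, 0.05, dsw.shape)
    return np.column_stack([amp_change, time_shift])

# Monte Carlo training set: tens of thousands of (change, attribute) pairs.
n = 50_000
dsw = rng.uniform(-0.1, 0.6, n)     # water-saturation change
dp = rng.uniform(-5.0, 5.0, n)      # pore-pressure change (MPa)
X = rock_physics_forward(dsw, dp)   # simulated 4D attributes
y = np.column_stack([dsw, dp])

model = RandomForestRegressor(n_estimators=100, n_jobs=-1).fit(X, y)

# Invert observed 4D attribute values (flattened map pixels as rows).
obs_attrs = rock_physics_forward(np.array([0.3]), np.array([-1.0]))
print(model.predict(obs_attrs))     # approximately [0.3, -1.0]
```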


Author(s): John A. Hunt

Spectrum-imaging is a useful technique for comparing different processing methods on very large data sets that are identical for each method. This paper compares methods of electron energy-loss spectroscopy (EELS) quantitative analysis on the Al-Li system. The spectrum-image analyzed here was obtained from an Al-10 at% Li foil aged to produce δ' precipitates that can span the foil thickness. Two 1024-channel EELS spectra, offset in energy by 1 eV, were recorded and stored at each pixel of the 80x80 spectrum-image (25 Mbytes). An energy range of 39-89 eV (20 channels/eV) is represented. During processing, the spectra are either subtracted to create an artifact-corrected difference spectrum, or the energy offset is numerically removed and the spectra are added to create a normal spectrum. The spectrum-images are processed into 2D floating-point images using the methods and software described in [1].
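
The per-pixel processing described here is simple array arithmetic, sketched below with the 20 channels/eV dispersion and 1 eV offset from the text; array names and shapes are illustrative.

```python
# Sketch of per-pixel processing of the two energy-offset EELS spectra;
# names are illustrative. Dispersion is 20 channels/eV, offset 1 eV.
import numpy as np

CHANNELS_PER_EV = 20
OFFSET = 1 * CHANNELS_PER_EV       # 1 eV shift between the two acquisitions

# Toy spectrum-image: 80x80 pixels, two 1024-channel spectra per pixel.
spec_im = np.random.rand(80, 80, 2, 1024)
s1 = spec_im[..., 0, :]
s2 = spec_im[..., 1, :]

# Artifact-corrected difference spectrum: fixed-pattern detector gain
# cancels because both spectra see it at the same channel.
diff = s2 - s1

# Normal spectrum: numerically remove the 1 eV offset, then add.
summed = s1[..., OFFSET:] + s2[..., :-OFFSET]
```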


Author(s): Thomas W. Shattuck, James R. Anderson, Neil W. Tindale, Peter R. Buseck

Individual particle analysis involves the study of tens of thousands of particles using automated scanning electron microscopy and elemental analysis by energy-dispersive X-ray emission spectroscopy (EDS). EDS produces large data sets that must be analyzed using multivariate statistical techniques. A complete study uses cluster analysis, discriminant analysis, and factor or principal components analysis (PCA). The three techniques are used in the study of particles sampled during the FeLine cruise to the mid-Pacific Ocean in the summer of 1990. The mid-Pacific aerosol provides information on long-range particle transport, iron deposition, sea-salt ageing, and halogen chemistry.

Aerosol particle data sets present a number of difficulties for pattern recognition using cluster analysis. There is a great disparity in the number of observations per cluster and in the range of the variables in each cluster. The variables are not normally distributed, they are subject to considerable experimental error, and many values are zero because of finite detection limits. Many of the clusters show considerable overlap because of natural variability, agglomeration, and chemical reactivity.
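
A minimal sketch of this kind of pipeline, under assumed settings: a log transform and standardization tame the zero-inflated, heavy-tailed intensities before PCA and cluster analysis. The log1p transform and the choice of five clusters are illustrative, not the authors' settings.

```python
# Illustrative PCA + clustering pipeline for particle EDS data; the log1p
# transform and cluster count are assumptions, not the authors' settings.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Toy data: 10,000 particles x 12 elemental intensities, many zeros
# (finite detection limits) and heavy-tailed values.
rng = np.random.default_rng(0)
X = rng.lognormal(0.0, 1.0, (10_000, 12))
X[rng.random(X.shape) < 0.4] = 0.0

X_scaled = StandardScaler().fit_transform(np.log1p(X))  # tame range disparity
scores = PCA(n_components=3).fit_transform(X_scaled)    # principal components
labels = KMeans(n_clusters=5, n_init=10).fit_predict(scores)
print(np.bincount(labels))    # cluster sizes are typically very unbalanced
```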


Author(s): Mykhajlo Klymash, Olena Hordiichuk-Bublivska, Ihor Tchaikovskyi, Oksana Urikova

This article investigates the processing of large arrays of information in distributed systems. A singular-value-decomposition method is used to reduce the amount of data processed by eliminating redundancy. Dependences of computational efficiency were obtained for distributed systems using the MPI message-passing protocol and the MapReduce node-interaction software model. The efficiency of each technology was analyzed for different data sizes: non-distributed systems are inefficient for large volumes of information because of their low computing performance, so it is proposed to use distributed systems together with singular value decomposition, which reduces the amount of information processed. The study of systems using the MPI protocol and the MapReduce model yielded the dependence of calculation time on the number of processes, which confirms the expediency of distributed computing for processing large data sets. It was also found that distributed systems using the MapReduce model work much more efficiently than MPI, especially with large amounts of data, whereas MPI performs calculations more efficiently for small amounts of information. As the data sets grow, it is advisable to use the MapReduce model.
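
The reduction step the article relies on is a truncated singular value decomposition: keep only the top-k singular triplets and work with the much smaller factors. A minimal single-node sketch follows; in the distributed setting, the matrix rows would be partitioned across MPI or MapReduce workers before factorization.

```python
# Minimal truncated-SVD data reduction sketch (single node).
import numpy as np

def truncated_svd(A, k):
    """Rank-k approximation: store U_k, s_k, Vt_k instead of the full matrix."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], s[:k], Vt[:k, :]

A = np.random.rand(2000, 500)
U, s, Vt = truncated_svd(A, k=20)
A_hat = (U * s) @ Vt                    # reconstruct from reduced factors
ratio = (U.size + s.size + Vt.size) / A.size
print(f"storage ratio: {ratio:.3f}")    # ~5% of the original entries
print("rel. error:", np.linalg.norm(A - A_hat) / np.linalg.norm(A))
```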


2018, Vol. 2018 (6), pp. 38-39
Author(s): Austa Parker, Yan Qu, David Hokanson, Jeff Soller, Eric Dickenson, ...
