Robust deep learning seismic inversion with a priori initial model constraint

Author(s):  
Jian Zhang ◽  
Jingye Li ◽  
Xiaohong Chen ◽  
Yuanqiang Li ◽  
Guangtan Huang ◽  
...  

Summary: Seismic inversion is one of the most commonly used methods in the oil and gas industry for characterizing reservoirs from observed seismic data. Deep learning (DL) is emerging as a data-driven approach that can effectively solve the inverse problem. However, existing DL-based methods for seismic inversion use only seismic data as input, which often leads to poor stability of the inversion results. In addition, training a robust network has always been challenging because a real survey provides only limited labeled data pairs. To partially overcome these issues, we develop a neural network framework with an a priori initial model constraint to perform seismic inversion. Our network takes a two-part input for training: one part is the seismic data, and the other is the subsurface background model. The label for each input is the actual model. The proposed method follows a log-to-log strategy. The training dataset is first generated by forward modeling. The network is then pre-trained on this synthetic dataset and validated on synthetic data withheld from the training step. After obtaining the pre-trained network, we apply a transfer learning strategy, fine-tuning the pre-trained network with labeled data pairs from a real survey to obtain better inversion results on the field data. The validity of the proposed framework is demonstrated on synthetic 2D data, including both post-stack and pre-stack examples, as well as on a real 3D post-stack seismic data set from the Western Canadian Sedimentary Basin.
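As a rough illustration of the data-generation step described in this abstract, the numpy sketch below (not the authors' code; the wavelet frequency, smoothing width, and pseudo-well statistics are all illustrative assumptions) builds one log-to-log training pair: a convolutional forward model turns an impedance log into a seismic trace, and a smoothed version of the same log plays the role of the a priori background model forming the second part of the input.

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Ricker wavelet with peak frequency f (Hz) and sample interval dt (s)."""
    t = np.arange(-length / 2, length / 2, dt)
    return (1.0 - 2.0 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

def forward_model(impedance, wavelet):
    """Convolutional forward model: impedance -> reflectivity -> seismic trace."""
    r = (impedance[1:] - impedance[:-1]) / (impedance[1:] + impedance[:-1])
    return np.convolve(r, wavelet, mode="same")

def smooth_background(impedance, width=25):
    """Low-frequency background model (moving average) used as the a priori input."""
    return np.convolve(impedance, np.ones(width) / width, mode="same")

rng = np.random.default_rng(0)
impedance = 5000.0 + np.cumsum(rng.normal(0.0, 50.0, 400))  # pseudo-well impedance log
seismic = forward_model(impedance, ricker(30.0, 0.002))
background = smooth_background(impedance)
# two-part network input (seismic trace + background model); label: the true log
x_train = np.stack([seismic, background[:-1]], axis=0)
y_train = impedance[:-1]
```

In this sketch the background model is derived from the true log for convenience; in practice it would come from well interpolation or a migration velocity field.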

Geophysics ◽  
2021 ◽  
pp. 1-64
Author(s):  
Jian Sun ◽  
Kristopher A. Innanen ◽  
Chao Huang

The determination of subsurface elastic property models is crucial in quantitative seismic data processing and interpretation. This problem is commonly solved by deterministic physical methods, such as tomography or full-waveform inversion. However, these methods are entirely local and require accurate initial models. Deep learning represents a plausible class of methods for seismic inversion, which may avoid some of the issues of purely descent-based approaches. However, any generic deep learning network capable of relating each elastic property cell value to each sample in a seismic data set would require a very large number of degrees of freedom. Two approaches might be taken to train such a network: first, invoking a massive and exhaustive training data set and, second, reducing the degrees of freedom by enforcing physical constraints on the model-data relationship. The second approach is referred to as "physics-guiding." Building on recent progress in wave theory-designed (i.e., physics-based) networks, we have developed a hybrid network design involving deterministic, physics-based modeling and data-driven deep learning components. From an optimization standpoint, a data-driven model misfit (i.e., standard deep learning) and a physics-guided data residual (i.e., a wave propagation network) are minimized simultaneously during training of the network. An experiment is carried out to analyze the trade-off between the two types of losses. A synthetic velocity-model-building example is used to examine the potential of hybrid training. Comparisons demonstrate that, given the same training data set, the hybrid-trained network outperforms the traditional fully data-driven network. In addition, we perform a comprehensive error analysis to quantitatively compare the fully data-driven and hybrid physics-guided approaches. The network is applied to the SEG salt model data, and the uncertainty is analyzed, to further examine the benefits of hybrid training.
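The joint objective described in this abstract can be caricatured with a toy linear forward operator standing in for the wave-propagation network (an assumption for illustration only; the paper's physics component is a wave-propagation network, not a random matrix). The hybrid loss sums a data-driven model misfit and a physics-guided data residual, and gradient descent reduces both together:

```python
import numpy as np

rng = np.random.default_rng(1)
n_model, n_data = 50, 80
G = rng.normal(size=(n_data, n_model))   # toy linear forward operator (stand-in)
m_true = rng.normal(size=n_model)        # true model, i.e. the training label
d_obs = G @ m_true                       # "observed" data

def hybrid_loss(m_pred, alpha=1.0, beta=0.1):
    """Data-driven model misfit plus physics-guided data residual."""
    model_misfit = np.sum((m_pred - m_true) ** 2)
    data_residual = np.sum((G @ m_pred - d_obs) ** 2)
    return alpha * model_misfit + beta * data_residual

def hybrid_grad(m_pred, alpha=1.0, beta=0.1):
    """Gradient of the hybrid loss with respect to the predicted model."""
    return 2 * alpha * (m_pred - m_true) + 2 * beta * (G.T @ (G @ m_pred - d_obs))

# plain gradient descent, mimicking one network parameter update path
m = np.zeros(n_model)
for _ in range(200):
    m -= 5e-3 * hybrid_grad(m)
```

The weights `alpha` and `beta` control the trade-off between the two losses that the paper's experiment analyzes.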


Geophysics ◽  
2021 ◽  
pp. 1-63
Author(s):  
Wenqian Fang ◽  
Lihua Fu ◽  
Shaoyong Liu ◽  
Hongwei Li

Deep learning (DL) technology has emerged as a new approach for seismic data interpolation. DL-based methods can automatically learn the mapping between regularly subsampled and complete data from a large training dataset. Subsequently, the trained network can be used to directly interpolate new data. Therefore, compared with traditional methods, DL-based methods reduce the manual workload and render the interpolation process efficient and automatic by avoiding the selection of hyperparameters. However, DL-based approaches have two limitations. First, the generalization performance of the neural network is inadequate when processing new data with a structure different from that of the training data. Second, the trained networks are very difficult to interpret. To overcome these limitations, we combine deep neural networks with classic prediction-error filter methods, proposing a novel de-aliased seismic data interpolation framework termed PEFNet (Prediction-Error Filters Network). PEFNet uses convolutional neural networks to learn the relationship between the subsampled data and the prediction-error filters. The filters estimated by the trained network are then used to recover the missing traces. Learning filters rather than data enables the network to better extract the local dip of seismic data and gives it good generalization ability. In addition, PEFNet has the same interpretability as traditional prediction-error-filter-based methods. The applicability and effectiveness of the proposed method are demonstrated here by synthetic and field data examples.
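PEFNet itself is a neural network, but the classical prediction-error filter it estimates can be sketched in a few lines: a least-squares filter fit on known samples predicts, and hence can fill in, missing ones. The single-channel example below is only the textbook analogue of the multichannel filters used for trace interpolation, with an illustrative 5 Hz sinusoid as "data":

```python
import numpy as np

def estimate_pef(x, order):
    """Least-squares prediction filter: predict x[n] from x[n-1] ... x[n-order]."""
    A = np.array([x[i - order:i][::-1] for i in range(order, len(x))])
    b = x[order:]
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

def predict_next(x, f):
    """Apply the filter to the trailing samples to predict the next one."""
    order = len(f)
    return float(np.dot(f, x[-1:-order - 1:-1]))

t = np.arange(200) * 0.01                # dt = 10 ms
x = np.sin(2 * np.pi * 5 * t)            # a 5 Hz sinusoid
f = estimate_pef(x[:150], order=2)       # fit on the "known" part of the trace
pred = predict_next(x[:199], f)          # predict a "missing" sample
```

A pure sinusoid is annihilated exactly by a two-point prediction-error filter, so `pred` matches the withheld sample; real data need longer, spatially varying (or, in PEFNet, network-predicted) filters.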


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. MR187-MR198 ◽  
Author(s):  
Yi Shen ◽  
Jack Dvorkin ◽  
Yunyue Li

Our goal is to accurately estimate attenuation from seismic data using model regularization in the seismic inversion workflow. One way to achieve this goal is by finding an analytical relation linking [Formula: see text] to [Formula: see text]. We derive an approximate closed-form solution relating [Formula: see text] to [Formula: see text] using rock-physics modeling. This relation is tested on well data from a clean clastic gas reservoir, for which the [Formula: see text] values are computed from the log data. Next, we create a 2D synthetic gas-reservoir section populated with [Formula: see text] and [Formula: see text] and generate the respective synthetic seismograms. The goal is then to invert this synthetic seismic section for [Formula: see text]. If we use standard seismic inversion based solely on seismic data, the inverted attenuation model has low resolution and incorrect positioning, and it is distorted. However, adding our relation between velocity and attenuation, we obtain an attenuation model very close to the original section. This method is tested on a 2D field seismic data set from the Gulf of Mexico. The resulting [Formula: see text] model matches the geologic shape of an absorption body interpreted from the seismic section. Using this [Formula: see text] model in seismic migration makes the seismic events below the high-absorption layer clearly visible, with improved frequency content and coherency of the events.


2019 ◽  
Vol 38 (11) ◽  
pp. 872a1-872a9 ◽  
Author(s):  
Mauricio Araya-Polo ◽  
Stuart Farris ◽  
Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used by DL-driven seismic tomography as input. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. M1-M10 ◽  
Author(s):  
Leonardo Azevedo ◽  
Ruben Nunes ◽  
Pedro Correia ◽  
Amílcar Soares ◽  
Luis Guerreiro ◽  
...  

Due to the nature of seismic inversion problems, there are multiple possible solutions that can equally fit the observed seismic data while diverging from the real subsurface model. Consequently, it is important to assess how inverse-impedance models are converging toward the real subsurface model. For this purpose, we evaluated a new methodology that combines the multidimensional scaling (MDS) technique with an iterative geostatistical elastic seismic inversion algorithm. The geostatistical inversion algorithm inverted partial angle stacks directly for acoustic and elastic impedance (AI and EI) models. It was based on a genetic algorithm in which the model perturbation at each iteration was performed by means of stochastic sequential simulation. To assess the reliability and convergence of the inverted models at each step, the simulated models can be projected into a metric space computed by MDS. This projection allows similar models to be distinguished from variable ones and the convergence of the inverted models toward the real impedance models to be assessed. The geostatistical inversion results for a synthetic data set, in which the real AI and EI models are known, were plotted in this metric space along with the known impedance models. We applied the same principle to a real data set using a cross-validation technique. These examples revealed that MDS is a valuable tool to evaluate the convergence of the inverse methodology and the impedance model variability at each iteration of the inversion process. In particular, the geostatistical inversion algorithm we evaluated retrieves reliable impedance models while still producing a set of simulated models with considerable variability.
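Classical (metric) MDS, the projection used here to embed simulated models in a low-dimensional space, can be sketched as follows. The toy "models" and noise levels are illustrative assumptions: realizations closer to the true impedance profile land closer to it in the projected metric space.

```python
import numpy as np

def classical_mds(D, k=2):
    """Embed items with pairwise distance matrix D into a k-D metric space."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]                # keep the k largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# toy "models": rows are simulated impedance realizations; the last row is the truth
rng = np.random.default_rng(2)
truth = np.linspace(4000.0, 6000.0, 100)
models = np.array([truth + rng.normal(0.0, s, 100) for s in (400, 200, 100, 50)])
items = np.vstack([models, truth])
D = np.linalg.norm(items[:, None, :] - items[None, :, :], axis=-1)
coords = classical_mds(D, k=2)
# distance to the true model in the projected space shrinks with realization accuracy
dist_to_truth = np.linalg.norm(coords - coords[-1], axis=1)
```

Plotting `coords` for successive iterations gives exactly the kind of convergence picture the abstract describes.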


2016 ◽  
Vol 4 (4) ◽  
pp. T577-T589 ◽  
Author(s):  
Haitham Hamid ◽  
Adam Pidlisecky

In complex geology, the presence of steeply dipping structures can complicate impedance inversion. We have developed a structurally constrained inversion in which a computationally well-behaved objective function is minimized subject to structural constraints. This approach allows the objective function to incorporate structural orientation, in the form of dips, into our inversion algorithm. Our method involves a multitrace impedance inversion and a rotation of an orthogonal system of derivative operators. The local dips used to constrain the derivative operators were estimated from migrated seismic data. In addition to imposing structural constraints on the inversion model, this algorithm allows for the inclusion of a priori knowledge from boreholes. We investigated this algorithm on a complex synthetic 2D model as well as on a seismic field data set. We compared the results obtained with this approach with those from single-trace-based inversion and laterally constrained inversion. The inversion carried out using dip information produces a model that has higher resolution and is more geologically realistic than those obtained with the other methods.
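The core idea of rotating an orthogonal pair of derivative operators by the local dip can be illustrated with a small numpy sketch (illustrative only; the paper applies this inside a multitrace impedance inversion). For a layered model dipping at angle θ, the rotated derivative along the structure is near zero while the derivative across it stays large, which is what lets the regularization follow the geology:

```python
import numpy as np

def directional_derivatives(m, theta):
    """Rotate first-difference operators by local dip angle theta (radians).

    Returns derivatives taken along and across the dipping structure."""
    dz, dx = np.gradient(m)                        # axis 0 = depth z, axis 1 = x
    d_along = np.cos(theta) * dx + np.sin(theta) * dz
    d_across = -np.sin(theta) * dx + np.cos(theta) * dz
    return d_along, d_across

# dipping-layer model: constant along lines z = tan(theta) * x + const
theta = np.deg2rad(30.0)
z, x = np.mgrid[0:64, 0:64]
m = np.sin(0.3 * (z - np.tan(theta) * x))          # layering dips at 30 degrees

d_along, d_across = directional_derivatives(m, theta)
```

In the inversion, a roughness penalty built from `d_along` (small for geologically consistent models) steers the solution without smearing across the dipping layers.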


2021 ◽  
pp. 1-97
Author(s):  
Lingxiao Jia ◽  
Subhashis Mallick ◽  
Cheng Wang

The choice of an initial model for seismic waveform inversion is important. In mature exploration areas with adequate well control, we can generate a suitable initial model using well information. However, in new areas where well control is sparse or unavailable, such an initial model is compromised and/or biased toward the regions with more well control. Even in mature exploration areas, if we use time-lapse seismic data to predict dynamic reservoir properties, an initial model obtained from the existing preproduction wells could be incorrect. In this work, we outline a new methodology and workflow for nonlinear prestack isotropic elastic waveform inversion. We call this method a data-driven inversion, meaning that we derive the initial model entirely from the seismic data without using any well information. By assuming locally horizontal stratification for every common midpoint and starting from the interval P-wave velocity, estimated entirely from seismic data, our method generates pseudo wells by running a two-pass, one-dimensional, isotropic elastic prestack waveform inversion that uses the reflectivity method for forward modeling and a genetic algorithm for optimization. We then use the estimated pseudo wells to build the initial model for seismic inversion. By applying this methodology to real seismic data from two different geological settings, we demonstrate the usefulness of our method. We believe that our new method is potentially applicable for subsurface characterization in areas where well information is sparse or unavailable. Additional research is, however, necessary to improve the computational efficiency of the methodology.
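The genetic-algorithm optimization step can be caricatured on a toy problem (this is not the authors' reflectivity-method workflow; the forward model, wavelet, and GA settings below are illustrative assumptions): a population of candidate impedance profiles is evolved by selection, one-point crossover, and mutation so as to reduce the data misfit.

```python
import numpy as np

rng = np.random.default_rng(3)
true_model = np.array([5000., 5000., 6000., 6000., 5500., 5500., 7000., 7000.])

def synthetic(model):
    """Toy convolutional forward model: reflectivity smoothed by a short wavelet."""
    r = np.diff(model) / (model[1:] + model[:-1])
    return np.convolve(r, [0.5, 1.0, 0.5], mode="same")

d_obs = synthetic(true_model)

def misfit(model):
    return np.sum((synthetic(model) - d_obs) ** 2)

pop = rng.uniform(4000.0, 8000.0, size=(60, len(true_model)))
initial_best = min(misfit(m) for m in pop)
for _ in range(150):
    order = np.argsort([misfit(m) for m in pop])
    parents = pop[order[:20]]                       # elitism: keep the fittest third
    children = []
    for _ in range(40):
        a, b = parents[rng.integers(0, 20, 2)]
        cut = rng.integers(1, len(true_model))      # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0.0, 50.0, child.shape) # mutation
        children.append(child)
    pop = np.vstack([parents, children])
best = pop[np.argmin([misfit(m) for m in pop])]
final_best = misfit(best)
```

Note the non-uniqueness the abstracts keep stressing: reflectivity is invariant to a constant impedance scale, so `best` fits the data without necessarily matching `true_model`, which is why a good initial model matters.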


Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCC91-WCC103 ◽  
Author(s):  
Christophe Barnes ◽  
Marwan Charara

Marine reflection seismic data inversion is a compute-intensive process, especially in three dimensions. Approximations often are made to limit the number of physical parameters we invert for, or to speed up the forward modeling. Because the data often are dominated by unconverted P-waves, one popular approximation is to consider the earth as purely acoustic, i.e., no shear modulus. The material density sometimes is taken as a constant. Nonlinear waveform seismic inversion consists of iteratively minimizing the misfit between the amplitudes of the measured and the modeled data. Approximations, such as assuming an acoustic medium, lead to incorrect modeling of the amplitudes of the seismic waves, especially with respect to amplitude variation with offset (AVO), and therefore have a direct impact on the inversion results. For evaluation purposes, we have performed a series of inversions with different approximations and different constraints whereby the synthetic data set to recover is computed for a 1D elastic medium. A series of numerical experiments, although simple, help to define the applicability domain of the acoustic assumption. Acoustic full-wave inversion is applicable only when the S-wave velocity and the density fields are smooth enough to reduce the AVO effect, or when the near-offset seismograms are inverted with a good starting model. However, in many realistic cases, acoustic approximation penalizes the full-wave inversion of marine reflection seismic data in retrieving the acoustic parameters.
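The AVO effect that the acoustic approximation misses can be quantified with the linearized Aki-Richards reflection coefficient; the two-term "acoustic" variant below simply drops the shear-velocity terms, and the interface properties are illustrative values, not taken from the paper:

```python
import numpy as np

def aki_richards(vp1, vs1, rho1, vp2, vs2, rho2, theta):
    """Linearized elastic P-P reflection coefficient (Aki-Richards approximation)."""
    vp, vs, rho = (vp1 + vp2) / 2, (vs1 + vs2) / 2, (rho1 + rho2) / 2
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    k = (vs / vp) ** 2
    return (0.5 * (1 - 4 * k * np.sin(theta) ** 2) * drho / rho
            + dvp / vp / (2 * np.cos(theta) ** 2)
            - 4 * k * np.sin(theta) ** 2 * dvs / vs)

def acoustic_two_term(vp1, rho1, vp2, rho2, theta):
    """The same expression with every shear (Vs) term dropped."""
    vp, rho = (vp1 + vp2) / 2, (rho1 + rho2) / 2
    return 0.5 * (rho2 - rho1) / rho + (vp2 - vp1) / vp / (2 * np.cos(theta) ** 2)

theta = np.deg2rad(np.arange(0.0, 41.0, 5.0))
# illustrative shale-over-sand contrast (made-up values)
elastic = aki_richards(2800, 1400, 2.3, 3000, 1800, 2.1, theta)
acoustic = acoustic_two_term(2800, 2.3, 3000, 2.1, theta)
# the two agree at normal incidence and diverge with angle: the AVO mismatch
```

The divergence at far offsets is exactly why the abstract finds the acoustic assumption acceptable only for smooth shear/density fields or near-offset data.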


2002 ◽  
Vol 14 (1) ◽  
pp. 21-41 ◽  
Author(s):  
Marco Saerens ◽  
Patrice Latinne ◽  
Christine Decaestecker

It sometimes happens (for instance, in case-control studies) that a classifier is trained on a data set that does not reflect the true a priori probabilities of the target classes on real-world data. This may have a negative effect on the classification accuracy obtained on the real-world data set, especially when the classifier's decisions are based on the a posteriori probabilities of class membership. Indeed, in this case, the trained classifier provides estimates of the a posteriori probabilities that are not valid for the real-world data set (they rely on the a priori probabilities of the training set). Applying the classifier as is (without correcting its outputs with respect to these new conditions) on the new data set may thus be suboptimal. In this note, we present a simple iterative procedure for adjusting the outputs of the trained classifier with respect to the new a priori probabilities without having to refit the model, even when these probabilities are not known in advance. As a by-product, estimates of the new a priori probabilities are also obtained. This iterative algorithm is a straightforward instance of the expectation-maximization (EM) algorithm and is shown to maximize the likelihood of the new data. Thereafter, we discuss a statistical test that can be applied to decide whether the a priori class probabilities have changed from the training set to the real-world data. The procedure is illustrated on different classification problems involving a multilayer neural network, and comparisons with a standard procedure for a priori probability estimation are provided. Our original method, based on the EM algorithm, is shown to be superior to the standard one for a priori probability estimation. Experimental results also indicate that the classifier with adjusted outputs always performs better than the original one in terms of classification accuracy when the a priori probability conditions differ from the training set to the real-world data. The gain in classification accuracy can be significant.
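The iterative procedure is simple enough to sketch directly. Below is a minimal numpy version of the EM prior-adjustment idea; the two-class Gaussian toy data are an illustrative assumption, not from the paper:

```python
import numpy as np

def adjust_priors(posteriors, train_priors, n_iter=100):
    """EM re-estimation of the new a priori probabilities, together with
    classifier outputs adjusted to those priors (no model refitting)."""
    new_priors = train_priors.copy()
    for _ in range(n_iter):
        # E-step: reweight each posterior by the prior ratio, renormalize per sample
        w = posteriors * (new_priors / train_priors)
        adjusted = w / w.sum(axis=1, keepdims=True)
        # M-step: new priors are the mean adjusted posterior over the data set
        new_priors = adjusted.mean(axis=0)
    return new_priors, adjusted

# toy two-class problem: classifier "trained" with priors (0.5, 0.5),
# deployed where the true class-1 prior is 0.9
rng = np.random.default_rng(4)
n = 5000
labels = rng.random(n) < 0.9                       # True = class 1
x = rng.normal(np.where(labels, 1.0, -1.0), 1.0)   # unit-variance Gaussians at +/-1
p1 = 1.0 / (1.0 + np.exp(-2.0 * x))                # Bayes posterior under 0.5/0.5 priors
posteriors = np.column_stack([p1, 1.0 - p1])
new_priors, adjusted = adjust_priors(posteriors, np.array([0.5, 0.5]))
acc_before = np.mean((posteriors[:, 0] > 0.5) == labels)
acc_after = np.mean((adjusted[:, 0] > 0.5) == labels)
```

On this toy data the EM iteration recovers a class-1 prior near 0.9, and deciding with the adjusted outputs improves accuracy over the unadjusted classifier, as the abstract reports.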


This research proposal addresses dimension-reduction algorithms in deep learning (DL) for hyperspectral imaging (HSI) classification. Independent component analysis (ICA) is adopted to reduce the size of the training dataset and to extract features. The proposed algorithm is evaluated on a real HSI data set. ICA gives the most promising results: it discards features occupying only a small portion of the pixels, distinguishing them from noisy bands based on the non-Gaussian assumption of independent sources, and in turn finds the independent components that address the challenge. A DL-based method, an approach that has attracted growing attention in HSI research, is then adopted. It is evaluated with a sequence-prediction architecture comprising a recurrent neural network (an LSTM) together with CNN layers for feature extraction from the input datasets, achieving better accuracy at minimal computational cost.

