SaltNet: A production-scale deep learning pipeline for automated salt model building

2020, Vol. 39 (3), pp. 195-203
Author(s): Satyakee Sen, Sribharath Kainkaryam, Cen Ong, Arvind Sharma

One of the most important steps in velocity model building for seismic imaging in salt basins such as the Gulf of Mexico is the iterative refinement of the salt geometry. Traditionally, this step is difficult to automate, and production workflows require extensive domain-expert intervention to accurately interpret the salt bodies on images migrated with an incorrect intermediate velocity model. To alleviate this problem, we propose an end-to-end semisupervised deep learning pipeline, SaltNet, capable of fully automated salt interpretation during the initial model building iterations. We show that the method can be used to build the initial salt model (top of salt-1 and base of salt-1 or salt body-1 iterations) without domain-expert intervention while achieving accuracy close to that of a human expert. Unlike existing convolutional neural network (CNN)-based salt interpretation applications, this method is designed to work on the noisy, low-resolution real-data seismic images that are typically encountered during the initial model building stage. It also generalizes to migrated images from previously unseen surveys. This is achieved by training a suite of deep high-capacity CNN models with a multiview semisupervised learning scheme that leverages data and model distillation concepts to make these models robust to the potentially large domain differences that images from a new target survey may exhibit. Consequently, the CNN models achieve human-level interpretation accuracy on such new surveys without the need to manually interpret any portion of the target survey. Results from a field test on a Gulf of Mexico survey show excellent agreement between the migrated images generated with the conventional interpreter-picked and the SaltNet-picked initial salt models.
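As a rough illustration of the data-distillation idea described above, the PyTorch sketch below averages the predictions of an ensemble of trained CNNs over multiple views of unlabeled target-survey images to produce pseudo-labels for retraining. The function name, tensor shapes, and two-view setup are illustrative assumptions, not the paper's implementation:

```python
import torch

def distill_pseudo_labels(teachers, images):
    """Build binary salt/sediment pseudo-labels for unlabeled target-survey
    images by averaging ensemble predictions over multiple views (here just
    identity and horizontal flip). Illustrative sketch only."""
    views = [lambda x: x, lambda x: torch.flip(x, dims=[-1])]
    preds = []
    with torch.no_grad():
        for model in teachers:
            model.eval()
            for view in views:
                p = torch.sigmoid(model(view(images)))  # per-pixel salt probability
                preds.append(view(p))  # map prediction back (the flip is its own inverse)
    prob = torch.stack(preds).mean(dim=0)  # multiview + ensemble average
    return (prob > 0.5).float()            # hard pseudo-label for retraining
```

A student network retrained on such pseudo-labels can then absorb the ensemble's knowledge of the new survey without any manual interpretation, which is the essence of the data and model distillation scheme the abstract outlines.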

2019, Vol. 7 (4), pp. T911-T922
Author(s): Satyakee Sen, Sribharath Kainkaryam, Cen Ong, Arvind Sharma

Salt model building has long been considered a severe bottleneck for large-scale 3D seismic imaging projects. It is one of the most time-consuming, labor-intensive, and difficult-to-automate processes in the entire depth imaging workflow, requiring significant intervention by domain experts to manually interpret the salt bodies on noisy, low-frequency, and low-resolution seismic images at each iteration of the salt model building process. The difficulty of this task, and the need to automate it, is well recognized by the imaging community and has propelled the use of deep-learning-based convolutional neural network (CNN) architectures to carry it out. However, significant challenges remain for reliable production-scale deployment of CNN-based methods for salt model building, mainly due to the poor generalization capabilities of these networks: when used on new surveys, never seen by the CNN models during the training stage, their interpretation accuracy drops significantly. To remediate this key problem, we have introduced a U-shaped encoder-decoder CNN architecture trained using a specialized regularization strategy aimed at reducing the generalization error of the network. Our regularization scheme perturbs the ground-truth labels in the training set. Two different perturbations are discussed: one that randomly changes the labels of the training set, flipping salt labels to sediment and vice versa, and a second that smooths the labels. We have determined that such perturbations act as a strong regularizer, preventing the network from making highly confident predictions on the training set and thus reducing overfitting. An ensemble strategy is also used for test-time augmentation, which is shown to further improve the accuracy. The robustness of our CNN models, in terms of reduced generalization error and improved interpretation accuracy, is demonstrated with real-data examples from the Gulf of Mexico.
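The two label perturbations lend themselves to a compact sketch. In the NumPy/SciPy snippet below, the flip probability and smoothing width are illustrative placeholders, not values taken from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def flip_labels(mask, p=0.05, rng=None):
    """Randomly flip a fraction p of binary salt (1) / sediment (0) labels."""
    rng = rng or np.random.default_rng()
    flip = rng.random(mask.shape) < p
    return np.where(flip, 1 - mask, mask)

def smooth_labels(mask, sigma=2.0):
    """Blur the binary mask into soft targets in [0, 1], softening the
    salt/sediment boundary so the network cannot fit it with full confidence."""
    return gaussian_filter(mask.astype(np.float32), sigma=sigma)
```

Both perturbations keep the training targets slightly "wrong", which discourages the overconfident predictions that drive overfitting, much like label smoothing in image classification.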


2019, Vol. 38 (11), pp. 872a1-872a9
Author(s): Mauricio Araya-Polo, Stuart Farris, Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used by the DL-driven seismic tomography as input. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.
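For readers unfamiliar with adversarial training, a minimal DCGAN-style update on velocity-model patches might look like the PyTorch sketch below. The architectures, 16x16 patch size, and hyperparameters are placeholders for exposition, not the paper's network:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a latent vector to a 16x16 velocity-model patch scaled to [0, 1]."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 128, 4, 1, 0), nn.ReLU(),  # 1x1 -> 4x4
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),     # 4x4 -> 8x8
            nn.ConvTranspose2d(64, 1, 4, 2, 1), nn.Sigmoid(),    # 8x8 -> 16x16
        )
    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    """Scores a patch as real (training model) or generated."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2),    # 16x16 -> 8x8
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),  # 8x8 -> 4x4
            nn.Conv2d(128, 1, 4, 1, 0),                      # 4x4 -> 1x1 logit
        )
    def forward(self, x):
        return self.net(x).view(-1)

def gan_step(G, D, opt_g, opt_d, real, z_dim=100):
    """One adversarial update on a batch of real velocity-model patches."""
    bce = nn.BCEWithLogitsLoss()
    z = torch.randn(real.size(0), z_dim, 1, 1)
    fake = G(z)
    # Discriminator: push real toward 1, generated toward 0.
    loss_d = bce(D(real), torch.ones(real.size(0))) + \
             bce(D(fake.detach()), torch.zeros(real.size(0)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: try to make the discriminator score fakes as real.
    loss_g = bce(D(fake), torch.ones(real.size(0)))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

Once trained, samples from the generator can be added to the tomography training set; since each generated velocity model is itself the label, and waveforms can be simulated from it, the augmented examples come automatically labeled.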


Geophysics, 1998, Vol. 63 (2), pp. 546-556
Author(s): Herman Chang, John P. VanDyke, Marcelo Solano, George A. McMechan, Duryodhan Epili

Portable, production-scale 3-D prestack Kirchhoff depth migration software capable of full-volume imaging has been successfully implemented and applied to a six-million-trace (46.9 Gbyte) marine data set from a salt/subsalt play in the Gulf of Mexico. Velocity model building and updates used an image-driven strategy and were performed in a Sun Sparc environment. Images obtained by 3-D prestack migration after three velocity iterations are substantially better focused and reveal drilling targets that were not visible in images obtained from conventional 3-D poststack time migration. Amplitudes are well preserved, so anomalies associated with known reservoirs conform to the petrophysical predictions. Prototype development was on an 8-node Intel iPSC860 computer; the production version was run on an 1824-node Intel Paragon computer. The code has been successfully ported to CRAY (T3D) and Unix workstation (PVM) environments.
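The kernel of Kirchhoff prestack depth migration is a diffraction-stack sum. The toy NumPy version below uses constant-velocity straight-ray traveltimes purely for clarity; production codes like the one described compute traveltime tables from the evolving velocity model and distribute the summation across many nodes:

```python
import numpy as np

def kirchhoff_depth_migration(data, src_x, rec_x, dt, xs, zs, v=2000.0):
    """Toy 2D prestack Kirchhoff migration: for each image point, sum each
    input trace at the source-to-image plus image-to-receiver traveltime.
    data: (n_time, n_traces); src_x, rec_x: surface positions per trace;
    xs, zs: image grid coordinates; v: constant velocity (m/s), a stand-in
    for the traveltime tables a production code would use."""
    nt, ntraces = data.shape
    image = np.zeros((len(zs), len(xs)))
    for iz, z in enumerate(zs):
        for ix, x in enumerate(xs):
            for itr in range(ntraces):
                t = (np.hypot(x - src_x[itr], z) + np.hypot(x - rec_x[itr], z)) / v
                it = int(round(t / dt))
                if it < nt:
                    image[iz, ix] += data[it, itr]
    return image
```

The triple loop makes the parallelization story in the abstract concrete: image points (or traces) are independent, so the sum shards naturally across the hundreds or thousands of nodes mentioned above.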


2016
Author(s): Nathaniel Cockrell, Khaled Abdelaziz, Kun Jiao, Adrian Montgomery, David Dangle, ...

Geophysics, 2020, Vol. 85 (5), pp. U109-U119
Author(s): Pengyu Yuan, Shirui Wang, Wenyi Hu, Xuqing Wu, Jiefu Chen, ...

A deep-learning-based workflow is proposed in this paper to solve the first-arrival picking problem for near-surface velocity model building. Traditional methods, such as the short-term average/long-term average method, perform poorly when the signal-to-noise ratio is low or near-surface geologic structures are complex. This challenging task is formulated as a segmentation problem, accompanied by a novel postprocessing approach that identifies picks along the segmentation boundary. The workflow includes three parts: a deep U-net for segmentation, a recurrent neural network (RNN) for picking, and a weight adaptation approach for generalizing to new data sets. In particular, we have evaluated the importance of selecting a proper loss function for training the network. Instead of taking an end-to-end approach to solve the picking problem, we emphasize the performance gain obtained by using an RNN to optimize the picking. Finally, we adopt a simple transfer learning scheme and test its robustness via the weight adaptation approach to maintain picking performance on new data sets. Our tests on synthetic data sets reveal the advantage of our workflow compared with existing deep-learning methods that focus only on segmentation performance. Our tests on field data sets illustrate that a good postprocessing picking step is essential for correcting segmentation errors and that the overall workflow is efficient in minimizing human intervention in the first-arrival picking task.
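The postprocessing idea, extracting one pick per trace from the U-net's segmentation map, can be illustrated with the simple baseline below (NumPy). The probability convention is an assumption, and the paper replaces this per-trace rule with an RNN:

```python
import numpy as np

def picks_from_segmentation(prob):
    """Baseline boundary picker. prob has shape (n_time, n_traces) and gives
    the probability that a sample lies AFTER the first arrival. For each
    trace, choose the time index k minimizing the misclassification cost of
    labeling samples < k as 'before' and samples >= k as 'after'.
    A simple stand-in for the RNN picking stage described in the paper."""
    n_time, n_traces = prob.shape
    picks = np.empty(n_traces, dtype=int)
    for j in range(n_traces):
        # cost(k) differs only by a constant from the prefix sum of (2p - 1),
        # so the arrival is where that running sum is smallest.
        cost = np.concatenate(([0.0], np.cumsum(2.0 * prob[:, j] - 1.0)))
        picks[j] = int(np.argmin(cost))
    return picks
```

A rule like this treats each trace independently; the appeal of the RNN stage is that it can exploit the lateral continuity of first arrivals across neighboring traces, which is where simple per-trace baselines break down on noisy field data.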


2022, Vol. 41 (1), pp. 9-18
Author(s): Andrew Brenders, Joe Dellinger, Imtiaz Ahmed, Esteban Díaz, Mariana Gherasim, ...

The promise of fully automatic full-waveform inversion (FWI) — a (seismic) data-driven velocity model building process — has proven elusive in complex geologic settings, with impactful examples using field data unavailable until recently. In 2015, success with FWI at the Atlantis Field in the U.S. Gulf of Mexico demonstrated that semiautomatic velocity model building is possible, but it also raised the question of what more might be possible if seismic data tailor-made for FWI were available (e.g., with increased source-receiver offsets and bespoke low-frequency seismic sources). Motivated by the initial value case for FWI in settings such as the Gulf of Mexico, beginning in 2007 and continuing into 2021, BP designed, built, and field tested Wolfspar, an ultralow-frequency seismic source designed to produce seismic data tailor-made for FWI. A 3D field trial of Wolfspar was conducted over the Mad Dog Field in the Gulf of Mexico in 2017–2018. Low-frequency source (LFS) data were shot on a sparse grid (280 m inline, 2 to 4 km crossline) and recorded into ocean-bottom nodes simultaneously with air gun sources shooting on a conventional dense grid (50 m inline, 50 m crossline). Using the LFS data with FWI to improve the velocity model for imaging produced only incremental uplift in the subsalt image of the reservoir, albeit with image improvements at depths greater than 25,000 ft (approximately 7620 m). To better understand this, reprocessing and further analyses were conducted. We found that (1) the LFS achieved its design signal-to-noise ratio (S/N) goals over its frequency range; (2) the wave-extrapolation and imaging operators built into FWI and migration are very effective at suppressing low-frequency noise, so that densely sampled air gun data with a low S/N can still produce usable model updates with low frequencies; and (3) data density becomes less important at wider offsets. These results may have significant implications for future acquisition designs with low-frequency seismic sources.
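The reason ultralow frequencies matter is the standard multiscale FWI strategy: invert the lowest frequencies first, where the misfit is least prone to cycle skipping, then progressively widen the band. The schematic NumPy sketch below illustrates that loop; `forward` and `adjoint_gradient` are stand-ins for a wave-equation modeling engine and its adjoint-state gradient, and the band edges are invented for illustration:

```python
import numpy as np

def bandpass(d, f_min, f_max, dt=0.004):
    """Zero-phase FFT band-pass along the time axis (axis 0)."""
    spec = np.fft.rfft(d, axis=0)
    freqs = np.fft.rfftfreq(d.shape[0], dt)
    spec[(freqs < f_min) | (freqs > f_max)] = 0.0
    return np.fft.irfft(spec, n=d.shape[0], axis=0)

def multiscale_fwi(model, d_field, bands, forward, adjoint_gradient,
                   n_iter=10, step=1e-3):
    """Schematic frequency-continuation FWI loop.
    bands: list of (f_min, f_max) passbands ordered low to high,
           e.g. [(1.5, 3.0), (1.5, 5.0), (1.5, 8.0)] (illustrative values).
    forward(model) -> synthetic data; adjoint_gradient(model, residual)
    -> gradient of the data misfit. Both are placeholders, not implemented."""
    for f_min, f_max in bands:
        d_obs = bandpass(d_field, f_min, f_max)
        for _ in range(n_iter):
            residual = bandpass(forward(model), f_min, f_max) - d_obs
            model = model - step * adjoint_gradient(model, residual)
    return model
```

A source like Wolfspar extends the usable band downward, so a loop like this can start from lower frequencies, relaxing the accuracy demanded of the starting velocity model.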

