Geophysical basin modeling: Methodology and application in deepwater Gulf of Mexico

2015
Vol 3 (3)
pp. SZ49-SZ58
Author(s):
Teresa Szydlik
Hans Kristian Helgesen
Ivar Brevik
Giuseppe De Prisco
Stephen Anthony Clark
...  

A truly integrated velocity model building method has been developed and applied for seismic imaging. Geophysical basin modeling is designed to mitigate seismic data limitations and constrain velocity model building by taking advantage of information provided by geologic and geophysical input. Information from geologic concepts and understanding is quantified using basin-model simulations, which model the primary control fields for rock properties, namely temperature and effective stress. The basin-model fields are transformed to velocity using universally calibrated rock models. Applications show that high-quality seismic images are produced in areas of geologic complexity, where it is challenging to define these properties from seismic data alone. This multidisciplinary workflow is of high value in exploration because it significantly reduces the time and effort required to build a velocity model while also improving the resulting image quality.
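As a rough illustration of the transform from basin-model control fields to velocity, the sketch below maps a simple 1D effective-stress profile to P-wave velocity with a Bowers-style velocity-effective-stress relation. The constant-density overburden, hydrostatic pore pressure, and the calibration constants v0, a, and b are placeholder assumptions, not the universally calibrated rock models used in the paper.

```python
import numpy as np

def effective_stress(depth_m, rho_bulk=2300.0, rho_water=1030.0, g=9.81):
    """Very simplified 1D vertical effective stress (Pa): constant-density
    overburden minus hydrostatic pore pressure. Placeholder for the
    basin-model stress field."""
    overburden = rho_bulk * g * depth_m
    pore = rho_water * g * depth_m
    return np.maximum(overburden - pore, 0.0)

def velocity_from_stress(sigma_eff_pa, v0=1500.0, a=15.0, b=0.6):
    """Bowers-style transform V = v0 + a * sigma^b with sigma in MPa.
    v0, a, and b are illustrative calibration constants only."""
    return v0 + a * (sigma_eff_pa / 1e6) ** b

depth = np.linspace(0.0, 5000.0, 51)           # m below mudline
vp = velocity_from_stress(effective_stress(depth))
print(vp[::10])                                # coarse 1D velocity trend (m/s)
```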

Geophysics
2007
Vol 72 (4)
pp. P47-P56
Author(s):
Jesse Lomask
Robert G. Clapp
Biondo Biondi

Delineating salt boundaries is a necessary step in the velocity-model building process. The salt-delineation problem can be viewed as an image-segmentation problem. Normalized cuts image segmentation (NCIS) finds the cut (or cuts) that partition an image into regions whose characteristics are, by some measure, dissimilar. We apply a modified version of the NCIS method to partition seismic images along salt boundaries. NCIS can track boundaries that are not continuous, where conventional horizon-tracking algorithms may fail, by calculating a weight connecting each pixel in the image to every other pixel within a local neighborhood. The weights are determined using problem-dependent combinations of attributes, the most important being instantaneous amplitude and dip. The weights for the entire image are then used to segment the image via an eigenvector calculation. The weight matrices for 3D seismic data cubes can be quite large and expensive to compute. By imposing bounds and by distributing the algorithm on a parallel cluster, we significantly increase efficiency. The method is demonstrated to be effective on a 3D field seismic data cube.
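The eigenvector step can be sketched compactly. The code below builds a dense affinity matrix from a single attribute over a local neighborhood and solves the generalized eigenproblem (D - W)y = lambda D y, splitting the image on the second-smallest eigenvector. The single-attribute Gaussian affinity, the sigma scale, and the median threshold are illustrative assumptions; the paper combines several attributes (notably instantaneous amplitude and dip) and uses bounded, parallelized weight matrices for 3D cubes.

```python
import numpy as np
from scipy.linalg import eigh

def ncut_labels(attr, radius=2, sigma=0.1):
    """Minimal normalized-cuts sketch on a small 2D attribute image.
    attr: pixel attribute (e.g. instantaneous amplitude); radius: local
    neighborhood half-width; sigma: assumed affinity scale."""
    ny, nx = attr.shape
    n = ny * nx
    W = np.zeros((n, n))
    for iy in range(ny):
        for ix in range(nx):
            i = iy * nx + ix
            for jy in range(max(0, iy - radius), min(ny, iy + radius + 1)):
                for jx in range(max(0, ix - radius), min(nx, ix + radius + 1)):
                    j = jy * nx + jx
                    # affinity decays with attribute dissimilarity; a real
                    # implementation combines several attributes
                    W[i, j] = np.exp(-((attr[iy, ix] - attr[jy, jx]) / sigma) ** 2)
    D = np.diag(W.sum(axis=1))
    # second-smallest generalized eigenvector of (D - W) y = lambda * D y
    vals, vecs = eigh(D - W, D)
    fiedler = vecs[:, 1].reshape(ny, nx)
    return fiedler > np.median(fiedler)        # two-way partition of the image

# toy "image": two regions with different background amplitude plus noise
img = np.zeros((20, 20))
img[:, 10:] = 1.0
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)
print(ncut_labels(img).astype(int))
```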


2021
Author(s):
Farah Syazana Dzulkefli
Kefeng Xin
Ahmad Riza Ghazali
Guo Qiang
Tariq Alkhalifah

Salt generally has low density and higher velocity compared with the surrounding rock layers, which causes the seismic wavefield to scatter once it hits the salt body, so that relatively little energy is transmitted through the salt to the deeper subsurface. As a result, most imaging approaches are unable to image the base of the salt and the reservoir below it. Even velocity model building methods such as FWI often fail to illuminate the deeper parts of the salt area. In this paper, we show that Full Wavefield Redatuming (FWR) can be used to retrieve and enhance the seismic data below the salt, leading to better seismic image quality and allowing us to focus on updating the velocity in the target area below the salt. However, this redatuming approach requires a good overburden velocity model to retrieve good redatumed data. Thus, using the synthetic SEAM model, our objective is to study the accuracy of the overburden velocity model required for imaging beneath a complex overburden. The results show that the kinematic components of wave propagation are preserved through redatuming even with a heavily smoothed overburden velocity model.
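To convey what redatuming through an overburden model involves, the sketch below downward-continues a 2D receiver gather with simple one-way phase-shift extrapolation through a layered stand-in for a smoothed overburden. This is not FWR itself, which redatums the full two-way wavefield; the grid spacings, layer velocities, and random gather are placeholders.

```python
import numpy as np

def phase_shift_redatum(data, dt, dx, v_layers, dz):
    """One-way phase-shift downward continuation of a 2D receiver gather.
    data: (nt, nx) gather at the surface; v_layers: one overburden velocity
    per dz step, a crude stand-in for a smooth overburden model."""
    nt, nx = data.shape
    D = np.fft.fft2(data)                              # to (f, kx)
    f = np.fft.fftfreq(nt, dt)[:, None]                # Hz
    kx = np.fft.fftfreq(nx, dx)[None, :]               # cycles/m
    for v in v_layers:                                 # continue layer by layer
        kz2 = (f / v) ** 2 - kx ** 2
        kz = np.sqrt(np.maximum(kz2, 0.0))             # evanescent part zeroed
        D *= np.exp(-2j * np.pi * kz * dz) * (kz2 > 0)
    return np.real(np.fft.ifft2(D))

# usage: continue a gather 1 km down through a smoothed water/sediment column
rng = np.random.default_rng(1)
gather = rng.standard_normal((512, 128))               # placeholder shot record
redatumed = phase_shift_redatum(gather, dt=0.004, dx=25.0,
                                v_layers=np.linspace(1500.0, 2200.0, 20), dz=50.0)
```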


Geophysics
2020
Vol 85 (6)
pp. U139-U149
Author(s):
Hongwei Liu
Mustafa Naser Al-Ali
Yi Luo

Seismic images can be viewed as photographs of underground rocks. These images can be generated from different types of elastic-wave reflections, each responding to different rock properties. Although most seismic data processing is still based on the acoustic-wave assumption, elastic-wave processing and imaging have become increasingly popular in recent years. A major challenge in elastic-wave processing is shear-wave (S-wave) velocity model building. For this reason, we have developed a sequence of procedures for estimating seismic S-wave velocities and subsequently generating seismic images using converted waves. The workflow rests on two essential new supporting techniques. The first is to decouple the S-wave information by generating common-focus-point gathers through application of the compressional-wave (P-wave) velocity to the converted-wave seismic data. The second is to assume one common VP/VS ratio to approximate two types of ratios, namely the ratio of the average earth-layer velocities and the ratio of the stacking velocities. The benefit is that two unknown ratios are reduced to one, which can easily be scanned and picked in practice. The PS-wave images produced by this technology can be aligned with the PP-wave images so that both are produced in the same coordinate system. The registration between the PP and PS images provides cross-validation of the migrated structures and a better estimation of underground rock and fluid properties. The S-wave velocity computed from the picked optimal ratio can be used not only to generate the PS-wave images but also to ensure good registration between the converted-wave and P-wave images.
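A minimal sketch of how a single picked VP/VS ratio could be used follows: it converts P-wave velocity to S-wave velocity and ties PP time to PS time via the flat-layer, zero-offset relation t_PS = t_PP * (1 + gamma) / 2. The ratio scan, the registration_misfit measure, and the numbers are hypothetical stand-ins, not the authors' common-focus-point-based procedure.

```python
import numpy as np

def s_velocity(vp, gamma):
    """S-wave velocity from P-wave velocity and a picked VP/VS ratio gamma."""
    return np.asarray(vp) / gamma

def pp_to_ps_time(t_pp, gamma):
    """Zero-offset PP-to-PS time mapping for one effective VP/VS ratio,
    t_ps = t_pp * (1 + gamma) / 2 (flat-layer approximation), used here only
    to show how one ratio ties the PP and PS images to a common axis."""
    return np.asarray(t_pp) * (1.0 + gamma) / 2.0

def pick_gamma(gammas, registration_misfit):
    """Keep the scanned ratio with the smallest (hypothetical) PP/PS
    registration misfit."""
    return gammas[int(np.argmin(registration_misfit))]

vp = np.array([2000.0, 2500.0, 3200.0])        # m/s
gammas = np.linspace(1.6, 2.4, 17)             # candidate VP/VS ratios to scan
misfit = (gammas - 2.0) ** 2                   # placeholder misfit curve
gamma_opt = pick_gamma(gammas, misfit)
print(gamma_opt, s_velocity(vp, gamma_opt), pp_to_ps_time([1.0, 2.0], gamma_opt))
```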


Geophysics
2021
pp. 1-73
Author(s):
Hani Alzahrani
Jeffrey Shragge

Data-driven artificial neural networks (ANNs) offer a number of advantages over conventional deterministic methods in a wide range of geophysical problems. For seismic velocity model building, judiciously trained ANNs offer the possibility of estimating high-resolution subsurface velocity models. However, a significant challenge for ANNs is training generalization: the ability of an ANN to apply the learning from the training process to test data not previously encountered. In the context of velocity model building, this means learning the relationship between velocity models and the corresponding seismic data from a set of training data, and then using acquired seismic data to accurately estimate unknown velocity models. We ask the following question: what types of velocity model structures need to be included in the training process so that the trained ANN can invert seismic data from a different (hypothetical) geologic setting? To address this question, we create four sets of training models: geologically inspired and purely geometric, each with and without background velocity gradients. We find that using geologically inspired training data produces models with well-delineated layer interfaces and fewer intra-layer velocity variations. The absence of a certain geologic structure in the training models, though, hinders the ANN's ability to recover it in the testing data. We use purely geometric training models consisting of square blocks of varying size to demonstrate the ability of ANNs to recover reasonable approximations of flat, dipping, and curved interfaces; however, the predicted models suffer from intra-layer velocity variations and nonphysical artifacts. Overall, the results demonstrate the use of ANNs in recovering accurate velocity model estimates and highlight the possibility of using such an approach for the generalized seismic velocity inversion problem.
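A sketch of how the purely geometric training models might be generated is shown below: random square blocks of varying size and velocity, optionally superposed on a vertical background gradient. The grid size, block-size range, velocity ranges, and gradient are illustrative choices, not the parameters used in the paper.

```python
import numpy as np

def geometric_training_model(nz=128, nx=128, nblocks=8, gradient=True, seed=None):
    """Build one purely geometric training velocity model: random square blocks
    of varying size and velocity, optionally on a vertical background gradient.
    All numerical choices here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    model = np.full((nz, nx), 2000.0)                   # m/s background
    if gradient:
        model += 15.0 * np.arange(nz)[:, None]          # simple depth gradient
    for _ in range(nblocks):
        size = rng.integers(8, 48)                      # block edge length (samples)
        z0 = rng.integers(0, nz - size)
        x0 = rng.integers(0, nx - size)
        model[z0:z0 + size, x0:x0 + size] = rng.uniform(1800.0, 4500.0)
    return model

# a training set is just many such random realizations
train_models = np.stack([geometric_training_model(seed=i) for i in range(16)])
print(train_models.shape)                               # (16, 128, 128)
```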


2022
Vol 41 (1)
pp. 9-18
Author(s):
Andrew Brenders
Joe Dellinger
Imtiaz Ahmed
Esteban Díaz
Mariana Gherasim
...  

The promise of fully automatic full-waveform inversion (FWI), a (seismic) data-driven velocity model building process, has proven elusive in complex geologic settings, with impactful examples using field data unavailable until recently. In 2015, success with FWI at the Atlantis Field in the U.S. Gulf of Mexico demonstrated that semiautomatic velocity model building is possible, but it also raised the question of what more might be possible if seismic data tailor-made for FWI were available (e.g., with increased source-receiver offsets and bespoke low-frequency seismic sources). Motivated by the initial value case for FWI in settings such as the Gulf of Mexico, beginning in 2007 and continuing into 2021, BP designed, built, and field-tested Wolfspar, an ultralow-frequency seismic source intended to produce seismic data tailor-made for FWI. A 3D field trial of Wolfspar was conducted over the Mad Dog Field in the Gulf of Mexico in 2017-2018. Low-frequency source (LFS) data were shot on a sparse grid (280 m inline, 2 to 4 km crossline) and recorded into ocean-bottom nodes simultaneously with air-gun sources shooting on a conventional dense grid (50 m inline, 50 m crossline). Using the LFS data with FWI to improve the velocity model for imaging produced only incremental uplift in the subsalt image of the reservoir, albeit with image improvements at depths greater than 25,000 ft (approximately 7620 m). To better understand this, reprocessing and further analyses were conducted. We found that (1) the LFS achieved its design signal-to-noise ratio (S/N) goals over its frequency range; (2) the wave-extrapolation and imaging operators built into FWI and migration are very effective at suppressing low-frequency noise, so that densely sampled air-gun data with a low S/N can still produce usable model updates with low frequencies; and (3) data density becomes less important at wider offsets. These results may have significant implications for future acquisition designs with low-frequency seismic sources.


Geophysics
2013
Vol 78 (5)
pp. U65-U76
Author(s):
Tongning Yang
Jeffrey Shragge
Paul Sava

Image-domain wavefield tomography is a velocity model building technique that uses seismic images as the input and seismic wavefields as the information carrier. However, the method suffers from an uneven-illumination problem when it applies a penalty operator to highlight image inaccuracies caused by velocity model errors. Uneven illumination caused by complex geology, such as salt, or by incomplete data creates defocusing in common-image gathers even when the migration velocity model is correct. This additional defocusing violates the assumption of wavefield tomography that migrated images are perfectly focused when the model is correct. Defocusing arising from illumination therefore mixes with defocusing arising from model errors and degrades the model reconstruction. We addressed this problem by incorporating illumination effects into the penalty operator such that only the defocusing caused by model errors was used for model construction. This was done by first characterizing the illumination defocusing in the gathers through illumination analysis and then constructing an illumination-based penalty that does not penalize the illumination defocusing. The approach improved the robustness and effectiveness of image-domain wavefield tomography applied in areas characterized by poor illumination. Our tests on synthetic examples demonstrated that velocity models were reconstructed more accurately with illumination compensation, leading to better subsurface images than those obtained with the conventional approach without illumination compensation.
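The idea of a penalty that ignores illumination defocusing can be sketched as follows: starting from the conventional |h| penalty on a space-lag common-image gather, lags where an illumination-analysis reference gather already shows defocusing are down-weighted. The toy Gaussian gathers and the specific down-weighting formula are assumptions for illustration only, not the operator constructed in the paper.

```python
import numpy as np

def dso_objective(cig, penalty):
    """Differential-semblance-style objective: squared norm of the penalized
    space-lag common-image gather cig(z, h)."""
    return np.sum((penalty[None, :] * cig) ** 2)

def illumination_penalty(h, illum_cig, eps=1e-3):
    """Sketch of an illumination-based penalty: start from the conventional |h|
    penalty and down-weight lags where a reference gather computed with the
    correct model (the illumination analysis) already shows defocusing, so that
    illumination defocusing is not mistaken for model error."""
    base = np.abs(h)
    illum = np.sum(illum_cig ** 2, axis=0)              # defocusing energy vs. lag
    illum = illum / (illum.max() + eps)
    return base * (1.0 - illum)

z = np.linspace(0.0, 3000.0, 151)                        # depth axis (m)
h = np.linspace(-500.0, 500.0, 41)                       # subsurface space lag (m)
cig = np.exp(-(h[None, :] / 120.0) ** 2) * np.ones((z.size, 1))       # toy gather
illum_ref = np.exp(-(h[None, :] / 80.0) ** 2) * np.ones((z.size, 1))  # illumination reference
print(dso_objective(cig, np.abs(h)),                     # conventional penalty
      dso_objective(cig, illumination_penalty(h, illum_ref)))
```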


Geophysics
2019
Vol 84 (4)
pp. R583-R599
Author(s):
Fangshu Yang
Jianwei Ma

Seismic velocity is one of the most important parameters used in seismic exploration. Accurate velocity models are a key prerequisite for reverse time migration and other high-resolution seismic imaging techniques. Such velocity information has traditionally been derived by tomography or full-waveform inversion (FWI), which are time consuming and computationally expensive and which rely heavily on human interaction and quality control. We have investigated a novel method based on a supervised deep fully convolutional neural network for velocity model building directly from raw seismograms. Unlike conventional inversion methods based on physical models, supervised deep-learning methods rely on big-data training rather than prior-knowledge assumptions. During the training stage, the network establishes a nonlinear projection from the multishot seismic data to the corresponding velocity models. During the prediction stage, the trained network is used to estimate velocity models from new input seismic data. One key characteristic of the deep-learning method is that it automatically extracts multilayer useful features without the need for human-curated inputs or an initial velocity model. The data-driven method usually requires more time during the training stage, but the actual prediction takes only seconds. Therefore, the computational time of geophysical inversions, including real-time inversions, can be dramatically reduced once a good generalized network is built. Numerical experiments on synthetic models show the promising performance of our proposed method in comparison with conventional FWI, even when the input data represent more realistic scenarios. We also evaluate the choice of deep-learning method, the training data set, the effect of missing low frequencies, and the advantages and disadvantages of our approach.
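A minimal sketch of such a supervised network is given below: an encoder-decoder of 2D convolutions mapping multishot seismograms (shots as channels) to a velocity model, trained with an L2 loss. The layer widths, input shape, and random placeholder tensors are assumptions; the actual architecture and training setup in the paper differ.

```python
import torch
import torch.nn as nn

class SeismoToVelocityNet(nn.Module):
    """Encoder-decoder sketch mapping multishot seismograms (channels = shots)
    to a 2D velocity model. Layer sizes are illustrative assumptions."""
    def __init__(self, n_shots=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(n_shots, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, seismograms):            # (batch, n_shots, nt, n_receivers)
        return self.decoder(self.encoder(seismograms))

# one supervised training step: seismic data in, velocity model out, L2 loss
net = SeismoToVelocityNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
data = torch.randn(2, 8, 256, 128)             # placeholder multishot gathers
target = torch.randn(2, 1, 256, 128)           # placeholder velocity models
opt.zero_grad()
loss = nn.MSELoss()(net(data), target)
loss.backward()
opt.step()
```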

