Q-model building using one-way wave-equation migration Q analysis — Part 2: 3D field-data test

Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. S111-S126 ◽  
Author(s):  
Yi Shen ◽  
Biondo Biondi ◽  
Robert Clapp

We previously evaluated an inversion-based method, wave-equation migration Q analysis (WEMQA), to estimate the quality factor Q model for seismic attenuation. To demonstrate the feasibility of this method, we applied it to a 3D seismic data set acquired in the North Sea. Attenuation problems caused by shallow gas and a shallow channel are observed in this field, and we aim to characterize these attenuation anomalies. The anomalies are correlated with low interval velocities, which the provided velocity model does not accurately reflect. Therefore, we first applied wave-equation migration velocity analysis to update the provided velocity model. The updated velocity model shows low-velocity regions around the gas and channel features, the subsurface angle gathers migrated with it are flatter, and the events in the migrated images are more coherent. Then, we applied WEMQA to invert for the Q model. The inverted Q model detects the shape and location of the gas and channel. Consequently, migration with the estimated Q anomalies enhances the damped amplitudes and the frequency content of the migrated events, corrects their distorted phase, and makes them more coherent.

Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. S93-S109 ◽  
Author(s):  
Yi Shen ◽  
Biondo Biondi ◽  
Robert Clapp

The goal of this study is to understand and quantify the attenuation effects in the subsurface and to create an accurate laterally and vertically varying quality factor Q model for gas clouds/pockets. Such Q models are used in seismic migration to improve image quality. We evaluate an inversion-based method, wave-equation migration Q analysis, with two major features. First, this method is performed in the image space to reduce noise, focus events, and provide a direct link between Q model perturbations and image perturbations. Second, this method uses wave-equation-based Q tomography to handle the complex wave propagation. We apply this method to three 2D synthetic examples. The numerical synthetic tests of this Q estimation method demonstrate its feasibility in characterizing Q anomalies and, as a consequence, improving the seismic image.
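The abstracts above use the quality factor Q without stating the underlying decay law. As background only (not the paper's image-domain WEMQA inversion), a constant-Q medium damps a wave of frequency f after traveltime t by A(f, t) = A0 exp(−π f t / Q); the sketch below evaluates this, with all parameter values chosen purely for illustration:

```python
import numpy as np

def attenuated_amplitude(f, t, q, a0=1.0):
    """Amplitude after traveltime t (s) at frequency f (Hz) in a
    constant-Q medium: a0 * exp(-pi * f * t / Q)."""
    return a0 * np.exp(-np.pi * f * t / q)

# Higher frequencies are damped more strongly, which is why attenuation
# lowers both the amplitude and the frequency content of migrated events.
freqs = np.array([10.0, 30.0, 60.0])  # Hz (illustrative)
print(attenuated_amplitude(freqs, t=1.0, q=50.0))
```

The monotonic loss with frequency is what a Q-compensated migration, such as the one described above, attempts to undo.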


Geophysics ◽  
2011 ◽  
Vol 76 (5) ◽  
pp. WB27-WB39 ◽  
Author(s):  
Zheng-Zheng Zhou ◽  
Michael Howard ◽  
Cheryl Mifflin

Various reverse time migration (RTM) angle-gather generation techniques have been developed to address poor subsalt data quality and multiarrival-induced problems in gathers from Kirchhoff migration. But these techniques introduce new problems, such as inaccuracies in 2D subsurface angle gathers and edge-diffraction artifacts in 3D subsurface angle gathers. The unique rich-azimuth data set acquired over the Shenzi field in the Gulf of Mexico enabled the generally artifact-free generation of 3D subsurface angle gathers. Using this data set, we carried out suprasalt tomography and salt model building steps and then produced 3D angle gathers to update the subsalt velocity. We used tilted transversely isotropic (TTI) RTM with an extended imaging condition to generate full 3D subsurface-offset-domain common-image gathers, which were subsequently converted to 3D angle gathers. The angle gathers were substacked along the subsurface azimuth axis into azimuth sectors. Residual-moveout analysis was carried out, and ray-based tomography was used to update velocities. The updated velocity model resulted in improved imaging of the subsalt section. We also applied residual moveout and selective stacking to the 3D angle gathers from the final migration to produce an optimized stack image.


1992 ◽  
Vol 63 (3) ◽  
pp. 375-393 ◽  
Author(s):  
J.M. Chiu ◽  
A.C. Johnston ◽  
Y.T. Yang

More than 700 earthquakes have been located in the central New Madrid seismic zone during a two-year deployment of the PANDA array. Magnitudes range from less than 0.0 to the mbLg 4.6 Risco, Missouri, earthquake of 4 May 1991. The entire data set is digital, three-component, and on-scale. These data were inverted to obtain a new shallow crustal velocity model of the upper Mississippi embayment for both P- and S-waves. Initially, inversion convergence was hindered by extreme velocity contrasts between the soft, low-velocity surficial alluvial sediments and the underlying high-velocity Paleozoic carbonate and clastic rock. However, constraints from extensive well-log data for the embayment, secondary phases (Sp and Ps), and abundant, high-quality shear-wave data have yielded a relatively robust inversion. This in turn has led to a hypocentral data set of unprecedented quality for the central New Madrid seismic zone. Contrary to previous studies that utilized more restricted data, the PANDA data clearly delineate planar concentrations of hypocenters that compel an interpretation as active faults. Our results corroborate the vertical (strike-slip) faulting of the southwest (axial), north-northeast, and western arms and define two new dipping planes in the central segment. The seismicity of the left-step zone between the NE-trending vertical segments is concentrated about a plane that dips at ∼31°SW; a separate zone to the SE of the axial zone defines a plane that dips at ∼48°SW. The reason for this difference in dip, possibly defining segmentation of an active fault, is not clear. When these planes are projected up dip, they intersect the surface along the eastern boundary of the Lake County uplift (LCU) and the western portion of Reelfoot Lake. If these SW-dipping planes are thrust faults, then the LCU would be on the upthrown hanging wall and Reelfoot Lake on the downthrown footwall. If, in turn, these inferred thrust faults were involved in the 1811–12 and/or pre-1811 large earthquakes, they provide an internally consistent explanation for (1) the existence and location of the LCU, (2) the wide-to-the-north, narrow-to-the-south shape of the LCU, and (3) the subsidence and/or impoundment of Reelfoot Lake.


Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a workflow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. Code in this chapter is more complicated, and it can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and the Caribbean Sea. The data set is called HURDAT, for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees); the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s−1); and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided at 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive. Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
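The chapter's own code is written in R; as a language-neutral illustration of the two operations it describes, the unit conversion (1 kt = 0.5144 m s−1) and the interpolation of 6-hourly track records to a finer time step, here is a short numpy sketch. The track values are made up for the example and are not from HURDAT:

```python
import numpy as np

KT_TO_MS = 0.5144  # 1 kt = 0.5144 m/s, the conversion stated in the chapter

# Hypothetical 6-hourly best-track records: hour, lon (deg E), lat (deg N), wind (kt)
hours = np.array([0.0, 6.0, 12.0, 18.0])
lon   = np.array([-80.0, -81.2, -82.5, -83.9])
lat   = np.array([ 24.0,  24.8,  25.7,  26.7])
wind  = np.array([ 65.0,  70.0,  80.0,  75.0])

# Linearly interpolate the 6-hourly positions to hourly values, the kind of
# densification needed for regional hurricane analyses
h1   = np.arange(0.0, 18.0 + 1e-9, 1.0)
lon1 = np.interp(h1, hours, lon)
lat1 = np.interp(h1, hours, lat)

# Convert intensities from knots to m/s
wind_ms = wind * KT_TO_MS
print(round(wind_ms[0], 3))  # 65 kt is about 33.436 m/s
```

Real best-track work would also smooth the interpolated positions (e.g., with splines, as the book does) rather than rely on piecewise-linear segments.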


2019 ◽  
Vol 38 (7) ◽  
pp. 556-557
Author(s):  
Yi Shen ◽  
Kui Bao ◽  
Doug Foster ◽  
Dhananjay Kumar ◽  
Kris Innanen ◽  
...  

A one-day postconvention workshop held during the 2018 SEG Annual Meeting in Anaheim, California, focused on seismic attenuation model building and compensation through imaging in the morning and on frequency-dependent seismic interpretation and rock physics in the afternoon. The workshop was organized by Dhananjay Kumar (BP), Yi Shen (Shell), Kui Bao (Shell), Mark Chapman (University of Edinburgh), Doug Foster (The University of Texas at Austin), Wenyi Hu (Advanced Geophysical Tech Inc.), and Tieyuan Zhu (Pennsylvania State University). The main topics discussed were attenuation and Q model building using seismic, vertical seismic profiling (VSP), well-log, and core data; seismic attenuation compensation; rock-physics modeling; seismic modeling; and frequency-dependent seismic interpretation.


2019 ◽  
Vol 38 (11) ◽  
pp. 872a1-872a9 ◽  
Author(s):  
Mauricio Araya-Polo ◽  
Stuart Farris ◽  
Manuel Florez

Exploration seismic data are heavily manipulated before human interpreters are able to extract meaningful information regarding subsurface structures. This manipulation adds modeling and human biases and is limited by methodological shortcomings. Alternatively, using seismic data directly is becoming possible thanks to deep learning (DL) techniques. A DL-based workflow is introduced that uses analog velocity models and realistic raw seismic waveforms as input and produces subsurface velocity models as output. When insufficient data are used for training, DL algorithms tend to overfit or fail. Gathering large amounts of labeled and standardized seismic data sets is not straightforward. This shortage of quality data is addressed by building a generative adversarial network (GAN) to augment the original training data set, which is then used by DL-driven seismic tomography as input. The DL tomographic operator predicts velocity models with high statistical and structural accuracy after being trained with GAN-generated velocity models. Beyond the field of exploration geophysics, the use of machine learning in earth science is challenged by the lack of labeled data or properly interpreted ground truth, since we seldom know what truly exists beneath the earth's surface. The unsupervised approach (using GANs to generate labeled data) illustrates a way to mitigate this problem and opens geology, geophysics, and planetary sciences to more DL applications.
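The augmentation step described above hinges on a trained generator mapping latent vectors to plausible velocity models. The sketch below illustrates only that data-flow, using an untrained stand-in generator (a fixed random linear decoder plus smoothing); the adversarial training that makes GAN outputs realistic is the paper's contribution and is not reproduced here. All sizes and the velocity range are assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_generator(z, nz=32, nx=32):
    """Stand-in for a trained GAN generator: maps a latent vector z to a
    smooth 2D 'velocity model'. A real GAN learns this mapping adversarially;
    here a fixed random linear decoder plus a box filter mimics the shape of
    the interface only."""
    dec = np.random.default_rng(1).normal(size=(nz * nx, z.size))  # fixed decoder
    v = (dec @ z).reshape(nz, nx)
    # crude 3x3 box smoothing so the field varies smoothly, as velocity models do
    k = np.ones((3, 3)) / 9.0
    vp = np.pad(v, 1, mode="edge")
    v = sum(vp[i:i + nz, j:j + nx] * k[i, j] for i in range(3) for j in range(3))
    # rescale into an assumed sediment-to-salt velocity range (m/s)
    return 1500.0 + 2500.0 * (v - v.min()) / (v.max() - v.min())

# Augmentation loop: each latent draw yields a new labeled velocity model
# that can be paired with synthetic waveforms for tomography training.
models = [toy_generator(rng.normal(size=16)) for _ in range(4)]
```

The point is the interface: once a generator exists, labeled training pairs can be produced in unlimited quantity, which is what relieves the labeled-data shortage the abstract describes.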


2019 ◽  
Vol 7 (3) ◽  
pp. SE113-SE122 ◽  
Author(s):  
Yunzhi Shi ◽  
Xinming Wu ◽  
Sergey Fomel

Salt boundary interpretation is important for the understanding of salt tectonics and for velocity model building for seismic migration. Conventional methods consist of computing salt attributes and extracting salt boundaries. We have formulated the problem as 3D image segmentation and evaluated an efficient approach based on deep convolutional neural networks (CNNs) with an encoder-decoder architecture. To train the model, we design a data generator that extracts randomly positioned subvolumes from a large-scale 3D training data set, applies data augmentation, and then feeds a large number of subvolumes into the network, using salt/nonsalt binary labels generated by thresholding the velocity model as ground truth. We test the model on validation data sets and compare the blind-test predictions with the ground truth. Our results indicate that our method is capable of automatically capturing subtle salt features from the 3D seismic image with little or no manual input. We further test the model on a field example to demonstrate the generalization of this deep CNN method across different data sets.
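The training-data generator described above combines three simple ingredients: random subvolume extraction, label creation by thresholding the velocity model, and augmentation. A minimal numpy sketch of those three steps follows; the patch size, velocity threshold, and toy volume are assumptions for illustration, and the CNN itself is omitted:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_patch(velocity, patch=(16, 16, 16), v_salt=4400.0):
    """Extract a randomly positioned subvolume from a 3D velocity volume and
    build a salt/nonsalt binary label by thresholding, as in the paper's
    data generator. v_salt is an assumed salt-velocity threshold (m/s)."""
    nz, ny, nx = velocity.shape
    i = rng.integers(0, nz - patch[0] + 1)
    j = rng.integers(0, ny - patch[1] + 1)
    k = rng.integers(0, nx - patch[2] + 1)
    sub = velocity[i:i + patch[0], j:j + patch[1], k:k + patch[2]]
    label = (sub >= v_salt).astype(np.float32)  # 1 = salt, 0 = sediment
    # simple augmentation: random flip along a horizontal axis
    if rng.random() < 0.5:
        sub, label = sub[:, :, ::-1], label[:, :, ::-1]
    return sub, label

vel = 2000.0 + 3000.0 * rng.random((64, 64, 64))  # toy stand-in velocity volume
x, y = sample_patch(vel)
```

In practice the extracted subvolume would come from the seismic image, with the co-located velocity providing the label; pairing image and velocity volumes is a straightforward extension of the same indexing.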


Geophysics ◽  
2011 ◽  
Vol 76 (3) ◽  
pp. WA13-WA21 ◽  
Author(s):  
Mamoru Takanashi ◽  
Ilya Tsvankin

Nonhyperbolic moveout analysis plays an increasingly important role in velocity model building because it provides valuable information for anisotropic parameter estimation. However, lateral heterogeneity associated with stratigraphic lenses such as channels and reefs can significantly distort the moveout parameters, even when the structure is relatively simple. We analyze the influence of a low-velocity isotropic lens on nonhyperbolic moveout inversion for horizontally layered VTI (transversely isotropic with a vertical symmetry axis) models. Synthetic tests demonstrate that a lens can cause substantial, laterally varying errors in the normal-moveout velocity Vnmo and the anellipticity parameter η. The area influenced by the lens can be identified using the residual moveout after the nonhyperbolic moveout correction, as well as the dependence of the errors in Vnmo and η on spread length. To remove such errors, we propose a correction algorithm designed for a lens embedded in a horizontally layered overburden. This algorithm involves estimation of the incidence angle of the ray passing through the lens for each recorded trace. With the assumption that lens-related perturbation of the raypath is negligible, the lens-induced traveltime shifts are computed from the corresponding zero-offset time distortion (i.e., from “pull-up” or “push-down” anomalies). Synthetic tests demonstrate that this algorithm substantially reduces the errors in the effective and interval values of Vnmo and η. The corrected traces and reconstructed “background” values of Vnmo and η are suitable for anisotropic time imaging and producing a high-quality stack.
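The moveout being inverted here is the standard Alkhalifah-Tsvankin nonhyperbolic equation for layered VTI media, t²(x) = t0² + x²/Vnmo² − 2η x⁴ / [Vnmo² (t0² Vnmo² + (1 + 2η) x²)], where the η term deviates the curve from a hyperbola at large offsets. A small sketch evaluating it, with illustrative parameter values:

```python
import numpy as np

def nonhyperbolic_t2(x, t0, vnmo, eta):
    """Alkhalifah-Tsvankin nonhyperbolic moveout (squared traveltime) for a
    horizontally layered VTI medium. x: offset (m), t0: zero-offset time (s),
    vnmo: normal-moveout velocity (m/s), eta: anellipticity parameter."""
    x2 = np.asarray(x, dtype=float) ** 2
    hyperbolic = t0**2 + x2 / vnmo**2
    quartic = 2.0 * eta * x2 * x2 / (vnmo**2 * (t0**2 * vnmo**2 + (1.0 + 2.0 * eta) * x2))
    return hyperbolic - quartic

offsets = np.linspace(0.0, 4000.0, 5)  # m (illustrative spread)
t2 = nonhyperbolic_t2(offsets, t0=2.0, vnmo=2500.0, eta=0.1)
print(np.sqrt(t2))  # traveltimes in seconds
```

Setting eta = 0 recovers the hyperbolic moveout t² = t0² + x²/Vnmo², which is why the long-offset (quartic) term is what carries the anisotropy information, and also why spread length controls the sensitivity of the estimated η to lens-induced errors.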


Geophysics ◽  
2010 ◽  
Vol 75 (2) ◽  
pp. S81-S93 ◽  
Author(s):  
Mikhail M. Popov ◽  
Nikolay M. Semtchenok ◽  
Peter M. Popov ◽  
Arie R. Verdel

Seismic depth migration aims to produce an image of seismic reflection interfaces. Ray methods are suitable for subsurface target-oriented imaging and are less costly than two-way wave-equation-based migration, but they break down when a complex velocity structure gives rise to caustics. Ray methods also have difficulties in correctly handling the different branches of the wavefront that result from wave propagation through a caustic. On the other hand, migration methods based on the two-way wave equation, referred to as reverse-time migration, are known to be capable of dealing with these problems. However, they are very expensive, especially in the 3D case, which can be prohibitive if many iterations are needed, such as for velocity-model building. Our method relies on the calculation of the Green functions for the classical wave equation by performing a summation of Gaussian beams for the direct and backpropagated wavefields. The subsurface image is obtained by calculating the coherence between the direct and backpropagated wavefields. To a large extent, our method combines the advantages of the high computational speed of ray-based migration with the high accuracy of reverse-time wave-equation migration because it can overcome problems with caustics, handle all arrivals, yield good images of steep flanks, and is readily extendible to target-oriented implementation. We have demonstrated the quality of our method with several state-of-the-art benchmark subsurface models, which have velocity variations up to a high degree of complexity. Our algorithm is especially suited for efficient imaging of selected subsurface subdomains, which is a large advantage particularly for 3D imaging and velocity-model refinement applications such as subsalt velocity-model improvement. Because our method is capable of providing highly accurate migration results in structurally complex subsurface settings, we have also included the concept of true-amplitude imaging in our migration technique.
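The "coherence between the direct and backpropagated wavefields" mentioned above is commonly computed as a zero-lag cross-correlation summed over time, I(x) = Σ_t S(x, t) R(x, t). The sketch below shows only this imaging condition on toy arrays; computing the wavefields themselves (the paper's Gaussian-beam summation of Green functions) is outside its scope:

```python
import numpy as np

def crosscorrelation_image(src_wf, rec_wf):
    """Zero-lag cross-correlation imaging condition: multiply the direct
    (source) and backpropagated (receiver) wavefields sample by sample and
    sum over the time axis. Both arrays have shape (nt, nz, nx)."""
    return np.einsum("tzx,tzx->zx", src_wf, rec_wf)

# Toy stand-in wavefields; in a real migration these come from forward and
# reverse propagation (here, via Gaussian-beam Green functions).
nt, nz, nx = 100, 20, 30
rng = np.random.default_rng(0)
src = rng.normal(size=(nt, nz, nx))
rec = rng.normal(size=(nt, nz, nx))
img = crosscorrelation_image(src, rec)
print(img.shape)  # (20, 30)
```

True-amplitude variants, as referenced above, replace this plain correlation with one normalized by source illumination (e.g., dividing by Σ_t S², a common deconvolution-type imaging condition) so that image amplitudes relate to reflectivity.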

