Using petrophysics and cross‐section balancing to interpret complex structure in a limited‐quality 3-D seismic image

Geophysics ◽  
1999 ◽  
Vol 64 (6) ◽  
pp. 1760-1773 ◽  
Author(s):  
Bob A. Hardage ◽  
Virginia M. Pendleton ◽  
R. P. Major ◽  
George B. Asquith ◽  
Dan Schultz‐Ela ◽  
...  

A study was done to characterize deep, prolific Ellenburger gas reservoirs at Lockridge, Waha, West Waha, and Worsham‐Bayer fields in Pecos, Ward, and Reeves counties in West Texas. A major effort of the study was to interpret a 176-mi² 3-D seismic data volume that spanned these fields. Well control defined the depth of the Ellenburger, the principal interpretation target, to be 17 000–21 000 ft (5200–6400 m) over the image area. Ellenburger reflection signals were weak because of these great target depths. Additionally, the top of the Ellenburger had a gentle, ramp‐like increase in acoustic impedance that did not produce a robust reflection event. A further negative influence on seismic data quality was the fact that a large portion of the 3-D seismic area was covered by a variable surface layer of low‐velocity Tertiary fill that was, in turn, underlain by a varying thickness of high‐velocity salt/anhydrite. These complicated near‐surface conditions attenuated seismic reflection signals and made static corrections of the data difficult. The combination of all these factors has caused many explorationists to consider this region of west Texas a no‐record seismic area for deep drilling targets. Although the 3-D seismic data acquired in this study produced good‐quality images throughout the post‐Mississippian section (down to ∼12 000 ft, or 3700 m), the images of the deep Ellenburger targets (∼20 000 ft, or 6100 m) were of limited quality. The challenge was to use this limited‐quality 3-D image to interpret the structural configuration of the deep Ellenburger and the fault systems that traverse the area so that genetic relationships could be established between fault attributes and productive Ellenburger facies. Two techniques were used to produce a reliable structural interpretation of the 3-D seismic data. 
First, log data recorded in 60-plus wells within the 3-D image space were analyzed to determine where there was evidence of overturned and repeated units caused by thrusting and evidence of missing sections caused by normal faulting. These petrophysical analyses allowed reliable fault patterns and structural configurations to be built across 3-D seismic image zones that were difficult to interpret by conventional methods. Second, cross‐section balancing was done across the more complex structural regimes to determine if each interpreted surface that was used to define the postdeformation structure had a length consistent with the length of that same surface before deformation. The petrophysical analyses thus guided the structural interpretation of the 3-D seismic data by inferring the fault patterns that should be imposed on the limited‐quality image zones; the cross‐section balancing verified where this structural interpretation was reliable and where it needed to be adjusted. This interpretation methodology is offered here to benefit others who are confronted with the problem of interpreting complex structure from limited‐quality 3-D seismic images.
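The line-length balancing test described above can be sketched numerically: a deformed horizon digitized from the interpretation should have roughly the same arc length as its restored, pre-deformation counterpart. The following sketch uses hypothetical function names, and the units and the 5% tolerance are illustrative assumptions, not values from the study.

```python
import numpy as np

def polyline_length(x, z):
    """Arc length of a digitized horizon (x and z in the same units)."""
    return float(np.sum(np.hypot(np.diff(x), np.diff(z))))

def balance_check(x, z, restored_length, tol=0.05):
    """Compare the deformed bed length against its restored (pre-deformation)
    length; a fractional misfit above `tol` flags an unbalanced horizon."""
    deformed = polyline_length(x, z)
    misfit = abs(deformed - restored_length) / restored_length
    return deformed, misfit, misfit <= tol

# Hypothetical folded horizon: 500 ft of sinusoidal relief over 10 000 ft.
x = np.linspace(0.0, 10000.0, 2001)            # ft
z = 500.0 * np.sin(2.0 * np.pi * x / 5000.0)   # ft
length, misfit, balanced = balance_check(x, z, restored_length=10000.0)
```

The folded horizon here is roughly 9% longer than its flat restoration, so the check fails, signaling that either the interpreted geometry or the assumed restored length would need adjustment.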

2018 ◽  
Vol 6 (4) ◽  
pp. T861-T872
Author(s):  
Mehrdad Soleimani ◽  
Hamid Aghajani ◽  
Saeed Heydari-Nejad

Defining the root zone of mud volcanoes (MVs), interpreting their structure, and modeling their bodies are problematic tasks when only seismic data are available. We have developed a strategy for integrating gravity and seismic data for better structural interpretation. Our strategy uses the concept of the normalized full gradient (NFG) to combine gravity and seismic data and define the geometry and root zone of MVs in the southeast onshore of the Caspian Sea. It increases the resolution of the seismic envelope compared with the conventional Hilbert transform. Prior to interpretation, we applied the NFG method to field gravity data. First, we perform a forward-modeling step for accurate NFG parameter definition. Second, we estimate the depth of the target, here the root zone of the MV. Interpretation of the field gravity data with optimized NFG parameters indicates an accurate depth for the root zone. Subsequently, we apply the NFG method with optimized parameters to a 2D seismic line. Applying our strategy to the seismic data enhances the resolution of the seismic image. The depth of the root zone and the geometry of the MV and its mud flows were better interpreted on the enhanced image. It also illustrates the complex structure of a giant buried MV, which was not well imaged on the conventional seismic section. Interpretation of the processed data reveals that the giant MV has lost its connection to its reservoir, whereas the other MV is still connected to the mud reservoir. The giant MV is composed of complex bodies formed by pulses in the mud flows. Another MV in the section shows a narrow neck with an anticline and listric normal faults at its top. Thus, applying the NFG concept to a seismic image can be considered an alternative way to obtain an enhanced seismic image for geologic interpretation.


2015 ◽  
Vol 3 (1) ◽  
pp. SB29-SB37 ◽  
Author(s):  
Bob A. Hardage

Structural interpretation of seismic data presents numerous opportunities for encountering interpretational pitfalls, particularly when a seismic image does not have an appropriate signal-to-noise ratio (S/N), or when a subsurface structure is unexpectedly complex. When both conditions exist — low S/N data and severe structural deformation — interpretation pitfalls are almost guaranteed. We analyzed an interpretation done 20 years ago that had to deal with poor seismic data quality and extreme distortion of strata. The lessons learned still apply today. Two things helped the interpretation team develop a viable structural model of the prospect. First, existing industry-accepted formation tops assigned to regional wells were rejected and new log interpretations were done to detect evidence of repeated sections and overturned strata. Second, the frequency content of the 3D seismic data volume was restricted to only the first octave of its seismic spectrum to create better evidence of fault geometries. A logical and workable structural interpretation resulted when these two action steps were taken. To the knowledge of our interpretation team, neither of these approaches had been attempted in the area at the time of this work (early 1990s). We found two pitfalls that may be encountered by other interpreters. The first pitfall was the hazard of accepting long-standing, industry-accepted definitions of the positions of formation tops on well logs. This nonquestioning acceptance of certain log signatures as indications of targeted formation tops led to a serious misinterpretation in our study. The second pitfall was the prevailing passion by geophysicists to create seismic data volumes that have the widest possible frequency spectrum. 
This interpretation effort showed that the opposite strategy was better at this site and for our data conditions; i.e., it was better to filter seismic images so that they contained only the lowest octave of frequencies in the seismic spectrum.
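The low-octave strategy described here amounts to band-limiting the data to one octave [f_low, 2·f_low]. A minimal zero-phase sketch follows; the FFT-domain boxcar is chosen for brevity (a tapered bandpass would be used in practice to avoid ringing), and the 6 Hz corner is an assumed value, not one taken from the paper.

```python
import numpy as np

def first_octave(trace, dt, f_low=6.0):
    """Zero-phase band-limit of a trace to one octave [f_low, 2*f_low] Hz.
    FFT-domain boxcar for brevity; real processing would taper the edges."""
    f = np.fft.rfftfreq(len(trace), d=dt)
    spec = np.fft.rfft(trace)
    spec[(f < f_low) | (f > 2.0 * f_low)] = 0.0
    return np.fft.irfft(spec, n=len(trace))

# Hypothetical trace: an 8 Hz component (kept) plus 40 Hz energy (removed).
dt = 0.002
t = np.arange(0.0, 2.0, dt)
trace = np.sin(2 * np.pi * 8 * t) + np.sin(2 * np.pi * 40 * t)
low = first_octave(trace, dt, f_low=6.0)
```

Because the filter is applied in the frequency domain it is exactly zero phase, so event timing is preserved while the higher octaves that obscured the fault geometries are removed.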


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. N29-N40
Author(s):  
Modeste Irakarama ◽  
Paul Cupillard ◽  
Guillaume Caumon ◽  
Paul Sava ◽  
Jonathan Edwards

Structural interpretation of seismic images can be highly subjective, especially in complex geologic settings. A single seismic image will often support multiple geologically valid interpretations. However, it is usually difficult to determine which of those interpretations are more likely than others. We have referred to this problem as structural model appraisal. We have developed the use of misfit functions to rank and appraise multiple interpretations of a given seismic image. Given a set of possible interpretations, we compute synthetic data for each structural interpretation, and then we compare these synthetic data against observed seismic data; this allows us to assign a data-misfit value to each structural interpretation. Our aim is to find data-misfit functions that enable a ranking of interpretations. To do so, we formalize the problem of appraising structural interpretations using seismic data and we derive a set of conditions to be satisfied by the data-misfit function for a successful appraisal. We investigate vertical seismic profiling (VSP) and surface seismic configurations. An application of the proposed method to a realistic synthetic model shows promising results for appraising structural interpretations using VSP data, provided that the target region is well-illuminated. However, we find appraising structural interpretations using surface seismic data to be more challenging, mainly due to the difficulty of computing phase-shift data misfits.
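The appraisal loop described above (compute synthetics per interpretation, assign a data misfit, rank) can be sketched as follows. The simple L2 misfit here is a stand-in for the phase-shift misfits the authors derive conditions for, and all names and data are hypothetical.

```python
import numpy as np

def rank_interpretations(observed, synthetics):
    """Rank interpretations by data misfit (smallest first). The L2 norm
    is a placeholder for a misfit satisfying the appraisal conditions."""
    misfits = {name: float(np.sum((observed - syn) ** 2))
               for name, syn in synthetics.items()}
    return sorted(misfits.items(), key=lambda kv: kv[1])

# Hypothetical observed data and synthetics from two candidate models.
rng = np.random.default_rng(0)
obs = np.sin(np.linspace(0.0, 10.0, 200))
candidates = {
    "interp_A": obs + 0.05 * rng.standard_normal(200),  # close fit
    "interp_B": obs + 0.50 * rng.standard_normal(200),  # poor fit
}
ranking = rank_interpretations(obs, candidates)
```

The ranking is only meaningful where the misfit actually discriminates between interpretations, which is the condition the paper formalizes and which fails for poorly illuminated targets.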


2020 ◽  
Vol 39 (5) ◽  
pp. 324-331
Author(s):  
Gary Murphy ◽  
Vanessa Brown ◽  
Denes Vigh

As part of a wide-reaching full-waveform inversion (FWI) research program, FWI is applied to an onshore seismic data set collected in the Delaware Basin, west Texas. FWI is routinely applied on typical marine data sets with high signal-to-noise ratio (S/N), relatively good low-frequency content, and reasonably long offsets. Land seismic data sets, in comparison, present significant challenges for FWI due to low S/N, a dearth of low frequencies, and limited offsets. Recent advancements in FWI overcome limitations due to poor S/N and low frequencies making land FWI feasible to use to update the shallow velocities. The chosen area has contrasting and variable near-surface conditions providing an excellent test data set on which to demonstrate the workflow and its challenges. An acoustic FWI workflow is used to update the near-surface velocity model in order to improve the deeper image and simultaneously help highlight potential shallow drilling hazards.


Geophysics ◽  
2020 ◽  
Vol 85 (2) ◽  
pp. V169-V181 ◽  
Author(s):  
Daniele Colombo ◽  
Diego Rovetta ◽  
Ernesto Sandoval-Curiel ◽  
Apostolos Kontakis

We have developed a new framework for performing surface-consistent amplitude balancing and deconvolution of the near-surface attenuation response. Both approaches rely on the early arrival waveform of a seismic recording, which corresponds to the refracted or, more generally speaking, to the transmitted energy from a seismic source. The method adapts standard surface-consistent amplitude compensation and deconvolution to the domain of refracted/transmitted waves. A sorting domain specific for refracted energy is extended to the analysis of amplitude ratios of each trace versus a reference average trace to identify amplitude residuals that are inverted for surface consistency. The residual values are either calculated as a single scalar value for each trace or as a function of frequency to build a surface-consistent deconvolution operator. The derived operators are then applied to the data to obtain scalar amplitude balancing or amplitude balancing with spectral shaping. The derivation of the operators around the transmitted early arrival waveforms allows for deterministically decoupling the near-surface attenuation response from the remaining seismic data. The developed method is fully automatic and does not require preprocessing of the data. As such, it qualifies as a standard preprocessing tool to be applied at the early stages of seismic processing. Applications of the developed method are provided for a case in a complex, structure-controlled wadi, for a seismic time-lapse land monitoring case, and for an exploration area with high dunes and sabkhas producing large frequency-dependent anomalous amplitude responses. The new development provides an effective tool to enable better reservoir characterization and monitoring with land seismic data.
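Surface-consistent decomposition, the operation adapted here to refracted/transmitted waves, can be sketched in its simplest log-linear form: each trace amplitude is modeled as the product of a source term and a receiver term, and the terms are solved for by least squares. This two-term sketch uses hypothetical names; production implementations add offset and midpoint terms and solve per frequency to build the deconvolution operator.

```python
import numpy as np

def surface_consistent_decomp(amps, src, rec, n_src, n_rec):
    """Decompose trace amplitudes A ~ S[i] * R[j] into surface-consistent
    source and receiver scalars via log-linear least squares."""
    n_tr = len(amps)
    G = np.zeros((n_tr, n_src + n_rec))
    G[np.arange(n_tr), src] = 1.0                  # source-term column
    G[np.arange(n_tr), n_src + np.asarray(rec)] = 1.0  # receiver-term column
    m, *_ = np.linalg.lstsq(G, np.log(amps), rcond=None)
    return np.exp(m[:n_src]), np.exp(m[n_src:])

# Hypothetical survey: 3 sources into 4 receivers, exact product model.
true_s = np.array([1.0, 2.0, 0.5])
true_r = np.array([1.0, 1.5, 0.8, 1.2])
src = np.repeat(np.arange(3), 4)
rec = np.tile(np.arange(4), 3)
amps = true_s[src] * true_r[rec]
s_hat, r_hat = surface_consistent_decomp(amps, src, rec, 3, 4)
```

The split between source and receiver terms is only determined up to a constant factor (a null space of the normal equations), so what is verifiable is the reconstructed product per trace, not the individual scalars.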


Geophysics ◽  
2014 ◽  
Vol 79 (6) ◽  
pp. B243-B252 ◽  
Author(s):  
Peter Bergmann ◽  
Artem Kashubin ◽  
Monika Ivandic ◽  
Stefan Lüth ◽  
Christopher Juhlin

A method for static correction of time-lapse differences in reflection arrival times of time-lapse prestack seismic data is presented. These arrival-time differences are typically caused by changes in the near-surface velocities between the acquisitions and have a detrimental impact on time-lapse seismic imaging. Trace-to-trace time shifts of the data sets from different vintages are determined by crosscorrelations. The time shifts are decomposed in a surface-consistent manner, which yields static corrections that tie the repeat data to the baseline data. Hence, this approach implies that new refraction static corrections for the repeat data sets are unnecessary. The approach is demonstrated on a 4D seismic data set from the Ketzin CO₂ pilot storage site, Germany, and is compared with the result of an initial processing that was based on separate refraction static corrections. It is shown that the time-lapse difference static correction approach reduces 4D noise more effectively than separate refraction static corrections and is significantly less labor intensive.
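The trace-to-trace shifts underlying this method come from crosscorrelation peaks; a minimal sketch is below (hypothetical names; the subsequent surface-consistent decomposition of the shifts into source and receiver statics is a separate least-squares step not shown).

```python
import numpy as np

def trace_time_shift(base, monitor, dt):
    """Time shift (s) of `monitor` relative to `base`, from the peak of
    their crosscorrelation; positive means the monitor event is later."""
    cc = np.correlate(monitor, base, mode="full")
    lag = int(np.argmax(cc)) - (len(base) - 1)
    return lag * dt

# Hypothetical baseline pulse and a repeat trace delayed by 5 samples.
dt = 0.002
t = np.arange(1000) * dt
base = np.exp(-(((t - 1.0) / 0.05) ** 2))
monitor = np.roll(base, 5)
shift = trace_time_shift(base, monitor, dt)
```

In practice the shifts would be estimated in windows around the target reflections and subpixel-refined; the integer-lag version here shows only the core idea.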


2021 ◽  
Author(s):  
Ramy Elasrag ◽  
Thuraya Al Ghafri ◽  
Faaeza Al Katheer ◽  
Yousuf Al-Aufi ◽  
Ivica Mihaljevic ◽  
...  

Abstract Acquiring surface seismic data can be challenging in areas of intense human activity, due to the presence of infrastructure (roads, houses, rigs), often leaving large gaps in the fold of coverage that can span several kilometers. Modern interpolation algorithms can interpolate up to a certain extent, but the quality of reconstructed seismic data diminishes as the acquisition gap increases. This is where vintage seismic acquisition can aid processing and imaging, especially if the previous acquisition did not face the same surface obstacles. In this paper we present how a legacy seismic survey helped fill in the data gaps of the new acquisition and produced an improved seismic image. The new acquisition survey is part of the Mega 3D onshore effort undertaken by ADNOC, characterized by dense shot and receiver spacing with a focus on full azimuth and broadband. Due to surface infrastructure, data could not be completely acquired, leaving a sizable gap in the target area. However, a legacy seismic acquisition undertaken in 2014 had access to these gap zones, as the infrastructure was not present at the time. The legacy seismic data had been previously processed and imaged; however, a simple post-imaging merge would not be adequate, as the two datasets were processed using different workflows and imaged using different velocity models. To synchronize the two datasets, we processed them in parallel. Data matching and merging were done before regularization. The data were regularized to a radial geometry using 5D Matching Pursuit with Fourier Interpolation (MPFI). This provided 12 well-sampled azimuth sectors that went through surface-consistent processing, multiple attenuation, and residual noise attenuation. The near-surface model was built using data-driven image-based static (DIBS), while reflection tomography was used to build the anisotropic velocity model. Imaging was done using prestack Kirchhoff depth migration. 
Processing the legacy survey from the beginning helped improve its signal-to-noise ratio, which ensured that merging the data did not degrade the quality of the final image. Building one near-surface model allowed both datasets to match well in the time domain. Bringing the datasets to the same level was an important precondition for matching and merging. Amplitude and phase analysis showed that the two surveys align well, with minimal differences. Only the portion of the legacy survey that covers the gap was used in the regularization, allowing MPFI to reconstruct the missing data. The regularized data went through surface multiple attenuation and further noise attenuation as preconditioning for migration. The final image, created from both datasets, images the target better.


Author(s):  
S. H. Chen

Sn has been used extensively as an n-type dopant in GaAs grown by molecular-beam epitaxy (MBE). The surface accumulation of Sn during the growth of Sn-doped GaAs has been observed by several investigators. It is still not clear whether the accumulation of Sn is a kinetically hindered process, as proposed first by Wood and Joyce, or surface segregation due to thermodynamic factors. The proposed donor-incorporation mechanisms were based on experimental results from such techniques as secondary ion mass spectrometry, Auger electron spectroscopy, and C-V measurements. In the present study, electron microscopy was used in combination with cross-section specimen preparation. The information on the morphology and microstructure of the surface accumulation can be obtained at a fine scale and may confirm several suggestions from indirect experimental evidence in the previous studies.


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
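The damped least-squares solution at the heart of this approach has the familiar regularized normal-equations form m = (GᵀG + εI)⁻¹Gᵀd. A minimal unweighted sketch is below (hypothetical names; in the paper G is a wavefield-extrapolation operator and the Hessian GᵀG is approximated by a few diagonals, which this small dense example does not reproduce).

```python
import numpy as np

def damped_lstsq(G, d, eps):
    """Damped least squares: m = (G^T G + eps*I)^{-1} G^T d. The damping
    eps stabilizes the inverse where the Hessian G^T G is poorly
    conditioned, at the price of a small bias in the solution."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + eps * np.eye(n), G.T @ d)

# Hypothetical small operator: recover a model from consistent data.
rng = np.random.default_rng(1)
G = rng.standard_normal((20, 5))
m_true = np.arange(1.0, 6.0)
m_hat = damped_lstsq(G, G @ m_true, eps=1e-8)
```

For well-conditioned G the damping is negligible and the exact model is recovered; for an irregular recording array, where columns of G become nearly dependent, the ε term is what keeps the regularized/datumed wavefield stable.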

