Automated stacking of seismic reflection data based on nonrigid image matching

Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. V171-V183 ◽  
Author(s):  
Sönke Reiche ◽  
Benjamin Berkels

Stacking of multichannel seismic reflection data is a crucial step in seismic data processing, usually leading to the first interpretable seismic image. Stacking is preceded by traveltime correction, in which all events contained in a common-midpoint (CMP) gather are corrected for their offset-dependent traveltime increase. Such corrections are often based on the assumption of hyperbolic traveltime curves, and a best-fit hyperbola is usually sought for each reflection by careful determination of stacking velocities. However, the assumption of hyperbolic traveltime curves is inaccurate in many situations, e.g., for strongly curved reflectors, large offset-to-target ratios, or strong anisotropy. Here, we find that an underlying model parameterizing the shape of the traveltime curve is not strictly necessary for producing high-quality stacks. Based on nonrigid image-matching techniques, we develop an alternative way of stacking that is independent of both a reference velocity model and any prior assumptions regarding the shape of the traveltime curve. Mathematically, our stacking operator is based on a variational approach that transforms a series of seismic traces contained within a CMP gather into a common reference frame. Based on the normalized crosscorrelation and regularized by penalizing irregular displacements, time shifts are sought for each sample to minimize the discrepancy between a zero-offset trace and traces with larger offsets. Time shifts are subsequently exported as a data attribute and can easily be converted to stacking velocities. To demonstrate the feasibility of this approach, we apply it to simple and complex synthetic data and finally to a real seismic line. For the complex synthetic and real data cases, our new method produces stacks of equal quality, and velocity models of slightly better quality, compared with an automated hyperbolic traveltime correction and stacking approach.
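The scoring ingredient of the matching, normalized crosscorrelation against a zero-offset reference trace, can be illustrated with a much simpler sketch than the authors' variational scheme. The helper below is hypothetical and estimates only a single bulk shift per trace, whereas the paper solves for regularized per-sample shifts:

```python
import numpy as np

def best_shift(ref, trace, max_lag):
    """Bulk time shift (in samples) that best aligns `trace` to `ref`,
    scored by normalized crosscorrelation (NCC). The paper instead
    solves a regularized variational problem for per-sample shifts;
    this sketches only the NCC scoring ingredient."""
    best_lag, best_cc = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        shifted = np.roll(trace, lag)
        cc = ref @ shifted / (np.linalg.norm(ref) * np.linalg.norm(shifted) + 1e-12)
        if cc > best_cc:
            best_lag, best_cc = lag, cc
    return best_lag

# a smooth wavelet and a copy delayed by 7 samples
t = np.arange(200)
ref = np.exp(-((t - 80.0) ** 2) / 50.0)
moved = np.roll(ref, 7)
shift = best_shift(ref, moved, 20)   # a lag of -7 undoes the delay
```

Exporting one such shift per sample, rather than per trace, and penalizing roughness between neighboring shifts is what turns this into the paper's nonrigid matching.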

Geophysics ◽  
2004 ◽  
Vol 69 (6) ◽  
pp. 1521-1529 ◽  
Author(s):  
Chris L. Hackert ◽  
Jorge O. Parra

Most methods for deriving Q from surface‐seismic data depend on the spectral content of the reflection. The spectrum of the reflected wave may be affected by the presence of thin beds in the formation, which makes Q estimates less reliable. We incorporate a method for correcting the reflected spectrum to remove local thin‐bed effects into the Q‐versus‐offset (QVO) method for determining attenuation from seismic‐reflection data. By dividing the observed spectrum by the local spectrum of the known reflectivity sequence from a nearby well log, we obtain a spectrum more closely resembling that which would be produced by a single primary reflector. This operation, equivalent to deconvolution in the time domain, is demonstrated to be successful using synthetic data. As a test case, we also apply the correction method to QVO with a real seismic line over a south Florida site containing many thin sandstone and carbonate beds. When corrected spectra are used, there is significantly less variance in the estimated Q values, and fewer unphysical negative Q values are obtained. Based on this method, it appears that sediments at the Florida site have a Q near 33 that is roughly constant from 170‐ to 600‐m depth over the length of the line.
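The correction amounts to a stabilized spectral division, i.e., deconvolving the well-log reflectivity out of the observed reflection. A minimal sketch, assuming a water-level regularization (the stabilization choice here is ours, not necessarily the authors'):

```python
import numpy as np

def correct_spectrum(observed, reflectivity, water_level=1e-3):
    """Divide the observed amplitude spectrum by the local reflectivity
    spectrum (e.g., from a nearby well log), with a water level to
    stabilize near-zero divisors. Equivalent to deconvolution in time."""
    n = len(observed)
    S = np.abs(np.fft.rfft(observed, n))
    R = np.abs(np.fft.rfft(reflectivity, n))
    return S / np.maximum(R, water_level * R.max())

# synthetic check: convolve reflectivity with a Ricker wavelet,
# then divide the reflectivity back out of the spectrum
tw = np.arange(-0.064, 0.064, 0.004)
wavelet = (1 - 2 * (np.pi * 25 * tw) ** 2) * np.exp(-((np.pi * 25 * tw) ** 2))
refl = np.zeros(64)
refl[10], refl[30], refl[50] = 1.0, -0.5, 0.8
observed = np.convolve(refl, wavelet)          # thin-bed-affected trace
corrected = correct_spectrum(observed, refl)   # ~ wavelet amplitude spectrum
```

Away from notches in the reflectivity spectrum, `corrected` matches the single-reflector (wavelet) spectrum, which is the input a QVO-style spectral-ratio fit expects.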


Geophysics ◽  
2018 ◽  
Vol 83 (4) ◽  
pp. U35-U41 ◽  
Author(s):  
Changkun Jin ◽  
Jianzhong Zhang

Stereotomography is a robust method for building velocity models from seismic reflection data and has been applied to offshore seismic data, but there are almost no stereotomographic studies under rugged topographic conditions. We study the effects of topography on the slopes of locally coherent events in seismic data and develop an approach to calculate the slopes on an undulating observation surface using the horizontal and vertical components of the estimated slowness vectors. We then extend conventional stereotomography to an undulating observation surface. Tests on synthetic data validate the extended stereotomography. Application to field seismic data from a foothill belt in Xinjiang, western China, indicates that the extended stereotomography is an effective tool for building velocity models for prestack depth migration of seismic data acquired on rugged topography.
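One plausible reading of the slope calculation is geometric: the slope measured along a dipping recording surface is the tangential component of the slowness vector, which combines its horizontal and vertical parts. A minimal sketch under that assumption (the function and its sign convention are ours, not the authors'):

```python
import numpy as np

def surface_slope(px, pz, dip):
    """Project a slowness vector (px horizontal, pz vertical) onto the
    local tangent of an undulating recording surface with the given
    dip (radians). On a flat surface (dip = 0) this reduces to the
    horizontal slowness px, the conventional event slope.
    Assumed geometry, not taken from the paper."""
    return px * np.cos(dip) + pz * np.sin(dip)

flat = surface_slope(0.2, 0.3, 0.0)            # reduces to px = 0.2
vertical = surface_slope(0.0, 0.3, np.pi / 2)  # reduces to pz = 0.3
```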


Geophysics ◽  
1995 ◽  
Vol 60 (2) ◽  
pp. 341-353 ◽  
Author(s):  
Xiao‐Gui Miao ◽  
Wooil M. Moon ◽  
B. Milkereit

A multioffset, three‐component vertical seismic profiling (VSP) experiment was carried out in the Sudbury Basin, Ontario, as a part of the LITHOPROBE Sudbury Transect. The main objectives were determination of the shallow velocity structure in the middle of the Sudbury Basin, development of an effective VSP data processing flow, correlation of the VSP survey results with the surface seismic reflection data, and demonstration of the usefulness of the VSP method in a crystalline rock environment. The VSP data processing steps included rotation of the horizontal component data, traveltime inversion for velocity analysis, Radon transform for wavefield separation, and preliminary analysis of shear‐wave data. After wavefield separation, the flattened upgoing wavefields for both P‐waves and S‐waves display consistent reflection events from three depth levels. The VSP-CDP transformed section and corridor stacked section correlate well with the high‐resolution surface reflection data. In addition to obtaining realistic velocity models for both P‐ and S‐waves through least‐squares inversion and synthetic seismic modeling for the Chelmsford area, the VSP experiment provided an independent estimate of the reflector dip using three‐component hodogram analysis, which indicates that the dip of the contact between the Chelmsford and Onwatin formations, at an approximate depth of 380 m in the Chelmsford borehole, is approximately 10.5° southeast. This study demonstrates that multioffset, three‐component VSP experiments can provide important constraints and auxiliary information for shallow crustal seismic studies in crystalline terrain. Thus, the VSP technique bridges the gap between the surface seismic‐reflection technique and well‐log surveys.
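Hodogram analysis of the kind used here rests on finding the dominant particle-motion direction of a multicomponent record. A minimal two-component sketch via the principal eigenvector of the sample covariance matrix (a standard construction; the actual dip estimate also folds in survey geometry not shown here):

```python
import numpy as np

def polarization_angle(h, v):
    """Dominant particle-motion direction (degrees, modulo 180) of a
    two-component hodogram, from the principal eigenvector of the
    sample covariance matrix. A standard ingredient of hodogram
    analysis, not the paper's full dip workflow."""
    X = np.vstack([h - h.mean(), v - v.mean()])
    w, vec = np.linalg.eigh(X @ X.T)          # eigenvalues ascending
    principal = vec[:, np.argmax(w)]          # direction of max variance
    return np.degrees(np.arctan2(principal[1], principal[0])) % 180.0

# linear particle motion polarized at 30 degrees
theta = np.radians(30.0)
s = np.sin(np.linspace(0.0, 10.0, 200))
angle = polarization_angle(np.cos(theta) * s, np.sin(theta) * s)
```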


2016 ◽  
Author(s):  
David K. Smythe

Abstract. North American shale basins differ from their European counterparts in that the latter are one to two orders of magnitude smaller in area, but correspondingly thicker, and are cut or bounded by normal faults penetrating from the shale to the surface. There is thus an inherent risk of groundwater resource contamination via these faults during or after unconventional resource appraisal and development. US shale exploration experience cannot simply be transferred to the UK. The Bowland Basin, with 1900 m of Lower Carboniferous shale, is in the vanguard of UK shale gas development. A vertical appraisal well to test the shale by hydraulic fracturing (fracking), the first such in the UK, triggered earthquakes. Re-interpretation of the 3D seismic reflection data and, independently, the well casing deformation data both show that the well was drilled through the earthquake fault, and did not avoid it, as concluded by the exploration operator. Faulting in this thick shale is evidently difficult to recognise. The Weald Basin is a shallower Upper Jurassic unconventional oil play with stratigraphic similarities to the Bakken play of the Williston Basin, USA. Two Weald licensees have drilled, or have applied to drill, horizontal appraisal wells based on inadequate 2D seismic reflection data coverage. I show, using the data from the one horizontal well drilled to date, that one operator failed to identify two small but significant through-going normal faults. The other operator portrayed a seismic line as an example of fault-free structure, but faulting had been smeared out by reprocessing. The case histories presented show that: (1) UK shale exploration to date is characterised by a low degree of technical competence, and (2) regulation, which is divided between four separate authorities, is not up to the task.
If UK shale is to be exploited safely: (1) more sophisticated seismic imaging methods need to be developed and applied to both basins, to identify faults in shale with throws as small as 4–5 m, and (2) the current lax and inadequate regulatory regime must be overhauled, unified, and tightened up.


Geophysics ◽  
1985 ◽  
Vol 50 (6) ◽  
pp. 903-923 ◽  
Author(s):  
T. N. Bishop ◽  
K. P. Bube ◽  
R. T. Cutler ◽  
R. T. Langan ◽  
P. L. Love ◽  
...  

Estimation of reflector depth and seismic velocity from seismic reflection data can be formulated as a general inverse problem. The method used to solve this problem is similar to tomographic techniques in medical diagnosis and we refer to it as seismic reflection tomography. Seismic tomography is formulated as an iterative Gauss‐Newton algorithm that produces a velocity‐depth model which minimizes the difference between traveltimes generated by tracing rays through the model and traveltimes measured from the data. The input to the process consists of traveltimes measured from selected events on unstacked seismic data and a first‐guess velocity‐depth model. This first‐guess model usually has laterally constant velocities and is based on nearby well information and/or an analysis of the stacked section. The final model generated by the tomographic method yields traveltimes from ray tracing which differ from the measured values in recorded data by approximately 5 ms root‐mean‐square. The indeterminacy of the inversion and the associated nonuniqueness of the output model are both analyzed theoretically and tested numerically. It is found that certain aspects of the velocity field are poorly determined or undetermined. This technique is applied to an example using real data where the presence of permafrost causes a near‐surface lateral change in velocity. The permafrost is successfully imaged in the model output from tomography. In addition, depth estimates at the intersection of two lines differ by a significantly smaller amount than the corresponding estimates derived from conventional processing.
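The Gauss-Newton iteration at the heart of the method can be sketched on a toy linear problem: straight rays through a few constant-slowness cells, so traveltimes are linear in the model and a single damped update essentially recovers it. The ray-path matrix and damping below are illustrative choices, not the paper's:

```python
import numpy as np

def gauss_newton(t_obs, forward, jacobian, m0, n_iter=5, damping=1e-6):
    """Damped Gauss-Newton iteration minimizing || t_obs - forward(m) ||^2
    over model m, the structure of a traveltime tomography update."""
    m = m0.copy()
    for _ in range(n_iter):
        r = t_obs - forward(m)                 # traveltime residuals
        J = jacobian(m)                        # sensitivity of times to model
        H = J.T @ J + damping * np.eye(len(m))
        m = m + np.linalg.solve(H, J.T @ r)    # damped normal-equations step
    return m

# toy: straight rays through 3 cells; traveltime is linear in slowness
L = np.array([[1.0, 1.0, 0.0],    # ray path length (km) in each cell
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])
s_true = np.array([0.50, 0.40, 0.25])          # slowness, s/km
t_obs = L @ s_true
s_est = gauss_newton(t_obs, lambda m: L @ m, lambda m: L,
                     np.full(3, 0.4))          # laterally constant first guess
```

In the real problem `forward` is ray tracing through a velocity-depth model and `J` is assembled from ray segments, but the update has the same shape.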


Geophysics ◽  
1967 ◽  
Vol 32 (2) ◽  
pp. 207-224 ◽  
Author(s):  
John D. Marr ◽  
Edward F. Zagst

The more recent developments in common‐depth‐point techniques to attenuate multiple reflections have resulted in an exploration capability comparable to the development of the seismic reflection method. The combination of new concepts in digital seismic data processing with CDP techniques is creating unforeseen exploration horizons with vastly improved seismic data. Major improvements in multiple reflection and reverberation attenuation are now attainable with appropriate CDP geometry and special CDP stacking procedures. Further major improvements are clearly evident in the very near future with the use of multichannel digital filtering‐stacking techniques and the application of deconvolution as the first step in seismic data processing. CDP techniques are briefly reviewed and evaluated with real and experimental data. Synthetic data are used to illustrate that all seismic reflection data should be deconvolved as the first processing step.


Geophysics ◽  
2011 ◽  
Vol 76 (2) ◽  
pp. B55-B70 ◽  
Author(s):  
E. M. Takam Takougang ◽  
A. J. Calvert

To obtain a higher resolution quantitative P-wave velocity model, 2D waveform tomography was applied to seismic reflection data from the Queen Charlotte sedimentary basin off the west coast of Canada. The forward modeling and inversion were implemented in the frequency domain using the visco-acoustic wave equation. Field data preconditioning consisted of f-k filtering, 2D amplitude scaling, shot-to-shot amplitude balancing, and time windowing. The field data were inverted between 7 and 13.66 Hz, with attenuation introduced for frequencies ≥ 10.5 Hz to improve the final velocity model; two different approaches to sampling the frequencies were evaluated. The limited maximum offset of the marine data (3770 m) and the relatively high starting frequency (7 Hz) were the main challenges encountered during the inversion. An inversion strategy that successively recovered shallow-to-deep structures was designed to mitigate these issues. The inclusion of later arrivals in the waveform tomography resulted in a velocity model that extends to a depth of approximately 1200 m, twice the maximum depth of ray coverage in the ray-based tomography. Overall, there is a good agreement between the velocity model and a sonic log from a well on the seismic line, as well as between modeled shot gathers and field data. Anomalous zones of low velocity in the model correspond to previously identified faults or their upward continuation into the shallow Pliocene section where they are not readily identifiable in the conventional migration.
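Of the preconditioning steps listed, f-k filtering is the most self-contained to sketch. A minimal dip filter that keeps only wavenumbers whose apparent velocity exceeds a cutoff (the cutoff and masking style are illustrative; the paper does not specify its filter design):

```python
import numpy as np

def fk_filter(gather, dx, dt, v_min):
    """Simple f-k (dip) filter: zero f-k components whose apparent
    velocity |f / k| falls below v_min, suppressing steeply dipping
    linear noise. Illustrative design, not the paper's exact filter."""
    FK = np.fft.fft2(gather)                  # 2D spectrum over (x, t)
    k = np.fft.fftfreq(gather.shape[0], dx)   # wavenumber, cycles/m
    f = np.fft.fftfreq(gather.shape[1], dt)   # frequency, Hz
    K, F = np.meshgrid(k, f, indexing="ij")
    mask = np.abs(F) >= v_min * np.abs(K)     # keep |f/k| >= v_min
    return np.real(np.fft.ifft2(FK * mask))

# a flat event (k = 0) passes through unchanged
g = np.tile(np.sin(2 * np.pi * 20 * np.arange(256) * 0.004), (8, 1))
out = fk_filter(g, 12.5, 0.004, 1500.0)
```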


Geophysics ◽  
1998 ◽  
Vol 63 (4) ◽  
pp. 1339-1347 ◽  
Author(s):  
Kate C. Miller ◽  
Steven H. Harder ◽  
Donald C. Adams ◽  
Terry O’Donnell

Shallow seismic reflection surveys commonly suffer from poor data quality in the upper 100 to 150 ms of the stacked seismic record because of shot‐associated noise, surface waves, and direct arrivals that obscure the reflected energy. Nevertheless, insight into lateral changes in shallow structure and stratigraphy can still be obtained from these data by using first‐arrival picks in a refraction analysis to derive a near‐surface velocity model. We have used turning‐ray tomography to model near‐surface velocities from seismic reflection profiles recorded in the Hueco Bolson of West Texas and southern New Mexico. The results of this analysis are interval‐velocity models for the upper 150 to 300 m of the seismic profiles which delineate geologic features that were not interpretable from the stacked records alone. In addition, the interval‐velocity models lead to improved time‐to‐depth conversion; when converted to stacking velocities, they may provide a better estimate of stacking velocities at early traveltimes than other methods.
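The conversion from interval velocities to stacking velocities that the authors mention is, under the usual small-offset, flat-layer assumptions, the Dix relation in reverse: stacking velocity is approximated by the RMS velocity, with v_rms²(t_n) = Σ v_i² Δt_i / Σ Δt_i. A minimal sketch of that standard relation (not the authors' specific implementation):

```python
import numpy as np

def interval_to_rms(v_int, dt_int):
    """RMS (~ stacking) velocity from interval velocities and two-way
    interval times, via the Dix relation:
    v_rms^2(t_n) = sum_i(v_i^2 * dt_i) / sum_i(dt_i)."""
    v = np.asarray(v_int, dtype=float)
    dt = np.asarray(dt_int, dtype=float)
    return np.sqrt(np.cumsum(v ** 2 * dt) / np.cumsum(dt))

# two layers of equal two-way time at 2000 and 3000 m/s
v_rms = interval_to_rms([2000.0, 3000.0], [1.0, 1.0])
```

The stacking-velocity approximation degrades with offset, dip, and lateral velocity change, which is why the authors hedge it as an estimate for early traveltimes.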


Geophysics ◽  
1990 ◽  
Vol 55 (5) ◽  
pp. 619-625 ◽  
Author(s):  
Alan R. Mitchell ◽  
Panos G. Kelamis

Time and offset varying velocity filtering can be achieved by limiting the data input to forward tau‐p transforms. This limiting procedure, called hyperbolic velocity filtering (HVF), suppresses transform‐related artifacts as well as coherent and noncoherent noise while retaining elliptical (reflection) events. We show that HVF can be viewed as a muting process in the slant‐stack domain. Based on this simple but physical interpretation of HVF, a more efficient computer implementation is proposed. We further examine possible applications of HVF for processing seismic reflection data and illustrate the results using both synthetic and real data examples.
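The slant-stack transform that HVF limits can be sketched directly; restricting the p axis to the slownesses of plausible reflections is the mute the authors describe. A minimal brute-force version (practical implementations work in the frequency domain for speed):

```python
import numpy as np

def slant_stack(data, offsets, p_values, dt):
    """Forward tau-p (slant-stack) transform: for each slowness p, sum
    every trace along the line t = tau + p * x. Limiting p_values to
    plausible reflection slownesses is HVF's mute in this domain."""
    n_traces, n_samples = data.shape
    t = np.arange(n_samples) * dt
    out = np.zeros((len(p_values), n_samples))
    for i, p in enumerate(p_values):
        for trace, x in zip(data, offsets):
            out[i] += np.interp(t + p * x, t, trace, left=0.0, right=0.0)
    return out

# a linear event with slowness 0.001 s/m focuses at p = 0.001 in tau-p
dt = 0.004
offsets = np.array([0.0, 100.0, 200.0, 300.0])
p_values = np.linspace(0.0, 0.002, 21)
data = np.zeros((4, 150))
for j, x in enumerate(offsets):
    data[j, int(round((0.1 + 0.001 * x) / dt))] = 1.0
taup = slant_stack(data, offsets, p_values, dt)
```

Muting `taup` outside a chosen p range and inverse transforming reconstructs only the retained events, which is the HVF operation in a nutshell.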

