Broadband receiver response from dual‐streamer data and applications in deep reflection seismology

Geophysics ◽  
1996 ◽  
Vol 61 (1) ◽  
pp. 232-243 ◽  
Author(s):  
Satish C. Singh ◽  
R. W. Hobbs ◽  
D. B. Snyder

A method for processing dual‐streamer data acquired in an over/under configuration is presented. The method combines the results of the dephase‐sum and dephase‐subtraction methods. In the dephase methods, the response of one streamer is time shifted so that the primary arrivals on both streamers are aligned, and the responses are then summed or subtracted. The method provides a broad spectral response from dual‐streamer data and increases the signal‐to‐noise ratio by a factor of 1.5. It was tested on synthetic data and then applied to a real data set collected by the British Institutions Reflection Profiling Syndicate (BIRPS). Its application to a deep seismic reflection data set from the British Isles shows that the reflections from the lower crust contain frequencies up to 80 Hz, suggesting that some of the lower crustal reflectors may have sharp boundaries and could be 20–30 m thick.
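The dephase step reduces, in essence, to a time shift that aligns the primaries on the two cables, followed by a sum (which reinforces the primaries) and a difference (which isolates the ghost-dominated part). A minimal numerical sketch of that idea follows; the alignment shift, sample interval, and 0.5 scaling are illustrative assumptions rather than the published implementation.

```python
# Minimal sketch of the dephase-sum / dephase-subtraction idea for an
# over/under streamer pair. The alignment shift `dt_align` (seconds), the
# sample interval, and the 0.5 scaling are illustrative assumptions.
import numpy as np

def time_shift(trace, shift_s, dt):
    """Apply a (possibly fractional) time shift via an FFT phase ramp."""
    n = trace.size
    freqs = np.fft.rfftfreq(n, d=dt)
    spectrum = np.fft.rfft(trace) * np.exp(-2j * np.pi * freqs * shift_s)
    return np.fft.irfft(spectrum, n)

def dephase_combine(over, under, dt_align, dt):
    """Align the under-streamer primaries with the over streamer, then
    form the dephase-sum and dephase-difference traces."""
    under_aligned = time_shift(under, -dt_align, dt)   # advance the deeper cable
    dsum = 0.5 * (over + under_aligned)                # primaries add coherently
    ddiff = 0.5 * (over - under_aligned)               # ghost-dominated part
    return dsum, ddiff

# toy usage: 1 s traces at 2 ms sampling, primary delayed by 8 ms on the under cable
dt = 0.002
t = np.arange(0, 1.0, dt)
wavelet = np.exp(-((t - 0.3) / 0.01) ** 2)
over = wavelet
under = time_shift(wavelet, 0.008, dt)
dsum, ddiff = dephase_combine(over, under, 0.008, dt)
```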

Geophysics ◽  
2008 ◽  
Vol 73 (1) ◽  
pp. V1-V9 ◽  
Author(s):  
Chun-Feng Li ◽  
Christopher Liner

Although the passage of singularity information from acoustic impedance to seismic traces is now well understood, it remains unanswered how routine seismic processing, mode conversions, and multiple reflections affect the singularity analysis of surface seismic data. We investigate theoretically the transition of singularity behavior from acoustic impedance to surface seismic data. We also perform numerical, wavelet-based singularity analysis on an elastic synthetic data set that is processed through routine seismic processing steps (such as stacking and migration) and that contains mode conversions, multiple reflections, and other wave-equation effects. Theoretically, seismic traces can be approximated as proportional to a smoothed version of the [Formula: see text] derivative of acoustic impedance, where [Formula: see text] is the vanishing moment of the seismic wavelet. This theoretical result links singularity exponents (Hölder exponents) in acoustic impedance with those computable from seismic data. By using wavelet-based multiscale analysis with complex Morlet wavelets, we can estimate singularity strengths and locations in subsurface impedance directly from surface seismic data. Our results indicate that rich singularity information in acoustic impedance variations is preserved by surface seismic data despite acquisition and processing. We also show that high-resolution detection of singularities from real surface seismic data can be achieved with a proper choice of the scale of the mother wavelet in the wavelet transform. Singularity detection from surface seismic data thus can play a key role in stratigraphic analysis and acoustic impedance inversion.
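As a rough illustration of the multiscale analysis described above, the sketch below estimates a local Hölder exponent from the scale behaviour of a complex Morlet wavelet transform. The Morlet centre frequency, the scale range, and the use of a plain log-log regression at a single sample (instead of full modulus-maxima tracking) are simplifying assumptions.

```python
# Hedged sketch: local Hölder-exponent estimation from a complex Morlet CWT.
# With the 1/sqrt(scale) normalisation used here, |W(s, t0)| scales roughly
# as s^(alpha + 1/2) near an isolated singularity, so the log-log slope
# minus 0.5 is taken as the exponent estimate (a simplification).
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt_morlet(trace, dt, scales, w0=6.0):
    """Continuous wavelet transform by direct convolution (small data only)."""
    half = int(4 * scales.max() / dt)
    t = np.arange(-half, half + 1) * dt
    out = np.empty((scales.size, trace.size), dtype=complex)
    for i, s in enumerate(scales):
        out[i] = np.convolve(trace, np.conj(morlet(t, s, w0))[::-1], mode="same") * dt
    return out

def holder_exponent(trace, dt, sample, scales):
    """Slope of log|W(s, t0)| versus log s, corrected for the normalisation."""
    W = cwt_morlet(trace, dt, scales)
    amp = np.abs(W[:, sample])
    slope, _ = np.polyfit(np.log(scales), np.log(amp + 1e-12), 1)
    return slope - 0.5

# toy usage: a step-like singularity in an otherwise smooth trace
dt = 0.002
t = np.arange(0, 1.0, dt)
trace = np.where(t > 0.5, 1.0, 0.0) + 0.05 * np.sin(2 * np.pi * 5 * t)
scales = np.linspace(0.01, 0.08, 16)
alpha = holder_exponent(trace, dt, sample=int(0.5 / dt), scales=scales)
```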


Geophysics ◽  
1996 ◽  
Vol 61 (1) ◽  
pp. 202-210 ◽  
Author(s):  
R. G. van Borselen ◽  
J. T. Fokkema ◽  
P. M. van den Berg

Removal of the effects of the free surface from seismic reflection data is an essential preprocessing step before prestack migration. The problem can be formulated by means of Rayleigh's reciprocity theorem, which leads to an integral equation of the second kind for the desired pressure field free of these free‐surface effects. This integral equation can be solved numerically, both in the spatial domain and in the double Radon domain. Solving the integral equation in the double Radon domain reduces the computation time significantly because the kernel of the integral equation becomes diagonally dominant. Two methods are proposed to solve the integral equation: direct matrix inversion and recursive subtraction of the free‐surface multiples using a Neumann series. Both methods were developed and tested on a synthetic data set computed with an independent forward‐modeling scheme.
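The two solution strategies can be contrasted on a toy discretisation of an integral equation of the second kind, (I − A)p = p0: direct inversion of the full matrix versus a Neumann-series expansion that subtracts the free-surface multiples order by order. The operator A below is an arbitrary contraction standing in for the discretised free-surface kernel and is purely illustrative.

```python
# Hedged sketch contrasting direct inversion with a Neumann-series solution
# of (I - A) p = p0. A is a small random contraction, not a physical kernel.
import numpy as np

def solve_direct(A, p0):
    """Direct solve of (I - A) p = p0."""
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - A, p0)

def solve_neumann(A, p0, n_terms=30):
    """Neumann series p = sum_k A^k p0, i.e. recursive multiple subtraction.
    Converges when the spectral radius of A is below one."""
    p = p0.copy()
    term = p0.copy()
    for _ in range(n_terms):
        term = A @ term
        p = p + term
    return p

# toy usage with a contraction operator
rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((50, 50))
p0 = rng.standard_normal(50)
p_direct = solve_direct(A, p0)
p_series = solve_neumann(A, p0)
print(np.allclose(p_direct, p_series))   # the two solutions agree
```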


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. V171-V183 ◽  
Author(s):  
Sönke Reiche ◽  
Benjamin Berkels

Stacking of multichannel seismic reflection data is a crucial step in seismic data processing, usually leading to the first interpretable seismic image. Stacking is preceded by traveltime correction, in which all events contained in a common-midpoint (CMP) gather are corrected for their offset-dependent traveltime increase. Such corrections are often based on the assumption of hyperbolic traveltime curves, and a best-fit hyperbola is usually sought for each reflection by careful determination of stacking velocities. However, the assumption of hyperbolic traveltime curves is not accurate in many situations, e.g., for strongly curved reflectors, large offset-to-target ratios, or strong anisotropy. Here, we find that an underlying model parameterizing the shape of the traveltime curve is not strictly necessary for producing high-quality stacks. Based on nonrigid image-matching techniques, we develop an alternative way of stacking that is independent of both a reference velocity model and any prior assumptions regarding the shape of the traveltime curve. Mathematically, our stacking operator is based on a variational approach that transforms the seismic traces contained within a CMP gather into a common reference frame. Based on the normalized crosscorrelation and regularized by penalizing irregular displacements, time shifts are sought for each sample to minimize the discrepancy between a zero-offset trace and traces with larger offsets. Time shifts are subsequently exported as a data attribute and can easily be converted to stacking velocities. To demonstrate the feasibility of this approach, we apply it to simple and complex synthetic data and finally to a real seismic line. We find that our new method produces stacks of equal quality and velocity models of slightly better quality compared with an automated, hyperbolic traveltime correction and stacking approach for the complex synthetic and real data cases.
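A much-simplified stand-in for this model-free flattening idea is sketched below: windowed normalized crosscorrelations against the zero-offset trace give a time-shift curve per offset trace, a moving-average smoothing acts as a crude proxy for the displacement regularization, and the warped traces are stacked. Window length, shift range, and smoothing are illustrative choices, not the variational formulation used by the authors.

```python
# Hedged sketch: crosscorrelation-based, model-free flattening and stacking
# of a CMP gather. All window/shift/smoothing parameters are illustrative.
import numpy as np

def ncc(a, b):
    """Normalized crosscorrelation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float(a @ b) / denom

def estimate_shifts(ref, trc, win=25, max_shift=20, step=10):
    """Integer-sample shifts at control points that best align trc to ref."""
    centers = np.arange(win, ref.size - win, step)
    shifts = np.zeros(centers.size)
    for i, c in enumerate(centers):
        best, best_s = -2.0, 0
        for s in range(-max_shift, max_shift + 1):
            lo, hi = c - win, c + win
            if lo + s < 0 or hi + s > trc.size:
                continue
            cc = ncc(ref[lo:hi], trc[lo + s:hi + s])
            if cc > best:
                best, best_s = cc, s
        shifts[i] = best_s
    # crude regularization: moving-average smoothing of the shift curve
    smooth = np.convolve(shifts, np.ones(5) / 5.0, mode="same")
    return np.interp(np.arange(ref.size), centers, smooth)

def flatten_and_stack(gather):
    """gather: (n_offsets, n_samples) array; row 0 is the zero-offset trace."""
    ref = gather[0]
    t = np.arange(ref.size, dtype=float)
    flattened = [ref]
    for trc in gather[1:]:
        shift = estimate_shifts(ref, trc)
        flattened.append(np.interp(t + shift, t, trc))   # warp trace onto ref times
    return np.mean(flattened, axis=0)
```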


2019 ◽  
Vol 7 (2) ◽  
pp. T255-T263 ◽  
Author(s):  
Yanli Liu ◽  
Zhenchun Li ◽  
Guoquan Yang ◽  
Qiang Liu

The quality factor (Q) is an important parameter for measuring the attenuation of seismic waves. Reliable Q estimation and stable inverse Q filtering are expected to improve the resolution of seismic data and enhance deep-layer energy. Many methods of estimating Q are based on an individual wavelet; however, it is difficult to extract an individual wavelet precisely from seismic reflection data. To avoid this problem, we have developed a method of estimating Q directly from reflection data. The core of the method is to select the peak-frequency points and linearly fit their logarithmic spectrum against the time-frequency product; Q is then calculated from the relationship between Q and the optimized slope. First, to obtain the peak-frequency points at different times, we use the generalized S transform to produce a high-precision 2D time-frequency spectrum. According to the seismic-wave attenuation mechanism, the logarithmic spectrum decays linearly with the product of frequency and time. Thus, the second step of the method transforms the 2D spectrum into 1D by variable substitution; in this transformation, only the peak-frequency points participate in the fitting, which reduces the impact of interference on the spectrum. Third, we obtain the optimized slope by least-squares fitting. To demonstrate the reliability of our method, we applied it to a constant-Q model and to real data from a work area. For the real data, we calculated the Q curve of a seismic trace near a well and obtained a high-resolution section by stable inverse Q filtering. The model and real-data results indicate that our method is effective and reliable for estimating Q.
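The fitting step can be illustrated with a short sketch: pick the spectral peak at each time, fit the logarithmic peak amplitude against the frequency-time product, and convert the slope to Q via Q = −π/slope (from ln A ≈ const − πft/Q). A plain short-time Fourier transform is used here in place of the generalized S transform, and the windowing choices are illustrative assumptions.

```python
# Hedged sketch of peak-frequency Q estimation. An STFT stands in for the
# generalized S transform; window length and amplitude threshold are
# illustrative assumptions.
import numpy as np
from scipy.signal import stft

def estimate_q(trace, fs, nperseg=128):
    freqs, times, S = stft(trace, fs=fs, nperseg=nperseg)
    amp = np.abs(S)
    ipk = np.argmax(amp, axis=0)                 # peak frequency index per time slice
    f_pk = freqs[ipk]
    a_pk = amp[ipk, np.arange(amp.shape[1])]     # peak amplitude per time slice
    keep = a_pk > 1e-3 * a_pk.max()              # keep slices with usable energy
    x = f_pk[keep] * times[keep]                 # the frequency-time product
    y = np.log(a_pk[keep])                       # logarithmic spectrum
    slope, _ = np.polyfit(x, y, 1)               # least-squares linear fit
    return -np.pi / slope

# toy usage: attenuated sinusoid with a known constant Q
fs, q_true, f0 = 1000.0, 60.0, 40.0
t = np.arange(0, 2.0, 1.0 / fs)
trace = np.exp(-np.pi * f0 * t / q_true) * np.sin(2 * np.pi * f0 * t)
q_est = estimate_q(trace, fs)                    # roughly recovers q_true
```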


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the expense of computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved is computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude less cost, although it is dip limited, in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
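In least-squares form, the regularized/datumed wavefield solves the weighted, damped normal equations, and the cost saving comes from keeping only a few diagonals of the Hessian. The sketch below illustrates both the full and the banded-Hessian solutions on a small random operator standing in for the wavefield extrapolator; it is a toy illustration, not the published operator construction.

```python
# Hedged sketch: weighted, damped least squares with a banded Hessian
# approximation. G is a random stand-in for the wavefield extrapolator.
import numpy as np

def damped_ls(G, W, d, eps):
    """Full solution m = (G^H W G + eps I)^-1 G^H W d."""
    H = G.conj().T @ W @ G + eps * np.eye(G.shape[1])
    return np.linalg.solve(H, G.conj().T @ W @ d)

def banded(H, n_diags):
    """Keep only the main diagonal and n_diags off-diagonals of H."""
    n = H.shape[0]
    mask = np.abs(np.subtract.outer(np.arange(n), np.arange(n))) <= n_diags
    return np.where(mask, H, 0.0)

def damped_ls_approx(G, W, d, eps, n_diags=2):
    """Same normal equations, but with a banded Hessian approximation."""
    H = G.conj().T @ W @ G + eps * np.eye(G.shape[1])
    return np.linalg.solve(banded(H, n_diags), G.conj().T @ W @ d)

# toy usage
rng = np.random.default_rng(1)
G = rng.standard_normal((80, 60)) + 1j * rng.standard_normal((80, 60))
W = np.eye(80)                       # data weights (identity for simplicity)
d = G @ rng.standard_normal(60)      # noise-free synthetic data
m_full = damped_ls(G, W, d, eps=1e-2)
m_fast = damped_ls_approx(G, W, d, eps=1e-2, n_diags=5)
```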


2021 ◽  
Author(s):  
Eric Roots ◽  
Graham Hill ◽  
Ben M. Frieman ◽  
James A. Craven ◽  
Richard S. Smith ◽  
...  

The role of melts and magmatic/metamorphic fluids in mineralization processes is well established. However, the role of crustal architecture in defining source and sink zones in the middle to lower crust remains enigmatic. Integration of three-dimensional magnetotelluric (MT) modelling and seismic reflection data across the Archean Abitibi greenstone belt of the Superior Province, Canada, reveals a ‘whole-of-crust’ mineralizing system and highlights the controls exerted by crustal architecture on metallogenetic processes. Electrically conductive conduits in an otherwise resistive upper crust coincide with truncations and offsets of seismic reflections that are mostly interpreted as major brittle-ductile fault zones. The spatial association between these features and low-resistivity zones imaged in the 3D models suggests that these zones acted as pathways through which fluids and melts ascended toward the surface. At mid-crustal levels, these ‘conduit’ zones connect to ~50 km long, north-south-striking conductors inferred to represent graphite and/or sulphide deposited from cooling fluids. At upper-mantle to lower-crustal depths, east-west-trending conductive zones dominate and display shallow dips. The upper-mantle features are broadly coincident with the surface traces of the major deformation zones with which a large proportion of the gold endowment is associated. We suggest that these deep conductors represent interconnected graphitic zones, perhaps augmented by sulphides, that are relicts of metamorphic fluid and melt emplacement associated primarily with the later stages of regional deformation. Thus, from the combined MT and seismic data, we develop a crustal-scale architectural model that is consistent with existing geological and deformational models, providing constraints on the sources for and signatures of fluid and magma emplacement that resulted in widespread metallogenesis in the Abitibi Subprovince.


Geophysics ◽  
1998 ◽  
Vol 63 (4) ◽  
pp. 1395-1407 ◽  
Author(s):  
Frank Büker ◽  
Alan G. Green ◽  
Heinrich Horstmeyer

Shallow seismic reflection data were recorded along two long (>1.6 km) intersecting profiles in the glaciated Suhre Valley of northern Switzerland. Appropriate choice of source and receiver parameters resulted in a high‐fold (36–48) data set with common midpoints every 1.25 m. As for many shallow seismic reflection data sets, the upper portions of the shot gathers were contaminated with high‐amplitude, source‐generated noise (e.g., direct, refracted, guided, surface, and airwaves). Spectral balancing was effective in significantly increasing the strength of the reflected signals relative to the source‐generated noise, and application of carefully selected top mutes ensured that guided phases were not misprocessed and misinterpreted as reflections. The resultant processed sections were characterized by distributions of distinct seismic reflection patterns or facies bounded by quasi‐continuous reflection zones. The uppermost reflection zone at 20 to 50 ms (∼15 to ∼40 m depth) originated from a boundary between glaciolacustrine clays/silts and underlying glacial sand/gravel (till) deposits. Of particular importance was the discovery that the deepest part of the valley floor appears on the seismic section at traveltimes >180 ms (∼200 m), approximately twice as deep as expected. Constrained by information from boreholes adjacent to the profiles, the various seismic units were interpreted in terms of unconsolidated glacial, glaciofluvial, and glaciolacustrine sediments deposited during two principal phases of glaciation (Riss at >100 000 and Würm at ∼18 000 years before present).
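Spectral balancing of the kind mentioned above can be illustrated by whitening each trace with a smoothed version of its own amplitude spectrum; the smoothing length and stabilization constant below are illustrative choices, not the parameters used in the study.

```python
# Hedged sketch of trace-by-trace spectral balancing (whitening): divide the
# spectrum by a smoothed copy of its own amplitude spectrum so reflected
# signal is boosted relative to narrow-band source-generated noise.
import numpy as np

def spectral_balance(trace, smooth_len=11, eps=1e-3):
    spec = np.fft.rfft(trace)
    amp = np.abs(spec)
    kernel = np.ones(smooth_len) / smooth_len
    amp_smooth = np.convolve(amp, kernel, mode="same")   # smoothed amplitude spectrum
    balanced = spec / (amp_smooth + eps * amp_smooth.max())
    return np.fft.irfft(balanced, n=trace.size)

# toy usage: broadband reflection pulse plus a strong narrow-band guided wave
dt = 0.001
t = np.arange(0, 0.5, dt)
reflection = np.exp(-((t - 0.2) / 0.004) ** 2)
guided = 0.8 * np.sin(2 * np.pi * 30 * t)
balanced = spectral_balance(reflection + guided)
```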


2021 ◽  

The most widely used technique for exploring the Earth's subsurface for petroleum is reflection seismology. However, a sole focus on reflection seismology often misses opportunities to integrate other geophysical techniques, such as gravity, magnetic, resistivity, and passive seismic methods, which have tended to be used in isolation and by specialist teams. There is now growing appreciation that these technologies, used in combination with reflection seismology, can produce more accurate images of the subsurface. This book describes how these different field techniques can be used individually and in combination with each other and with seismic reflection data. World-leading experts present chapters covering the different techniques and describe when, where, and how to apply them to improve petroleum exploration and production. The book also explores the use of such techniques in monitoring CO2 storage reservoirs. Including case studies throughout, it will be an invaluable resource for petroleum industry professionals, advanced students, and researchers.


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. Q27-Q37
Author(s):  
Yang Shen ◽  
Jie Zhang

Refraction methods are often applied to model and image near-surface velocity structures. However, near-surface imaging is very challenging, and no single method can resolve all of the land seismic problems across the world. In addition, deep interfaces are difficult to image from land reflection data because of the associated low signal-to-noise ratio. Following previous research, we have developed a refraction wavefield migration method for imaging shallow and deep interfaces via interferometry. Our method includes two steps: converting refractions into virtual reflection gathers and then applying a prestack depth migration method to produce interface images from the virtual reflection gathers. With a regular recording offset of approximately 3 km, this approach produces an image of a shallow interface within the top 1 km. If the recording offset is very long, the refractions may follow a deep path, and the result may reveal a deep interface. Using synthetics, we examine several factors that affect the imaging results. We also apply the method to one data set with regular recording offsets and another with far offsets; both cases produce sharp images, which are further verified by conventional reflection imaging. The method can thus serve as a promising imaging tool for practical cases in which reflections are excessively weak or missing but refractions are available.
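The interferometric first step can be caricatured as crosscorrelating the refraction arrivals recorded at two receivers for a common shot and stacking over shots, so that the common raypath cancels and a virtual trace between the two receivers remains. The sketch below shows only this schematic step; windowing, normalization, and the subsequent prestack depth migration are omitted, and none of it should be read as the authors' implementation.

```python
# Very schematic sketch of building a virtual trace for a receiver pair by
# crosscorrelating shot gathers and stacking over shots. Purely illustrative.
import numpy as np

def virtual_trace(rec_a, rec_b):
    """Crosscorrelate two receiver recordings; keep non-negative lags."""
    full = np.correlate(rec_b, rec_a, mode="full")
    return full[rec_a.size - 1:]

def virtual_gather(shot_gathers, ia, ib):
    """Stack crosscorrelations over all shots for the receiver pair (ia, ib)."""
    acc = None
    for gather in shot_gathers:          # each gather: (n_receivers, n_samples)
        vt = virtual_trace(gather[ia], gather[ib])
        acc = vt if acc is None else acc + vt
    return acc

# toy usage: two shots, three receivers, random data standing in for refractions
rng = np.random.default_rng(2)
shots = [rng.standard_normal((3, 400)) for _ in range(2)]
vt = virtual_gather(shots, ia=0, ib=2)
```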


2020 ◽  
Vol 223 (3) ◽  
pp. 1565-1583
Author(s):  
Hoël Seillé ◽  
Gerhard Visser

Bayesian inversion of magnetotelluric (MT) data is a powerful but computationally expensive approach to estimating the subsurface electrical conductivity distribution and its associated uncertainty. Approximating the Earth's subsurface with 1-D physics considerably speeds up calculation of the forward problem, making the Bayesian approach tractable, but it can lead to biased results when the assumption is violated. We propose a methodology to quantitatively compensate for the bias caused by the 1-D Earth assumption within a 1-D trans-dimensional Markov chain Monte Carlo sampler. Our approach determines site-specific likelihood functions that are calculated using a dimensionality discrepancy error model derived by a machine-learning algorithm trained on a set of synthetic 3-D conductivity training images. This is achieved by exploiting known geometrical dimensional properties of the MT phase tensor. A complex synthetic model that mimics a sedimentary basin environment is used to illustrate the ability of our workflow to reliably estimate uncertainty in the inversion results, even in the presence of strong 2-D and 3-D effects. Using this dimensionality discrepancy error model, we demonstrate that on this synthetic data set our workflow performs better in 80 per cent of the cases than the existing practice of using constant errors. Finally, our workflow is benchmarked against real data acquired in Queensland, Australia, and shows its ability to detect the depth to basement accurately.
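A minimal sketch of two ingredients such a workflow rests on is given below: a 1-D MT forward solver (Wait's impedance recursion) and a Gaussian likelihood in which the measurement variance is inflated by a site-specific dimensionality-discrepancy term. The discrepancy values here are placeholders; in the paper they come from an error model trained on synthetic 3-D conductivity images.

```python
# Hedged sketch: 1-D MT forward modelling (layered-Earth impedance recursion)
# and a likelihood with measurement and 1-D-assumption errors added in
# quadrature. The discrepancy term is a placeholder value.
import numpy as np

MU0 = 4e-7 * np.pi

def mt1d_impedance(freqs, resistivities, thicknesses):
    """Surface impedance of a layered half-space (e^{+iwt} convention);
    resistivities has one more entry (the basal half-space) than thicknesses."""
    omega = 2 * np.pi * np.asarray(freqs, dtype=float)
    Z = np.empty(omega.size, dtype=complex)
    for i, w in enumerate(omega):
        k = np.sqrt(1j * w * MU0 / np.asarray(resistivities, dtype=float))
        zeta = 1j * w * MU0 / k                  # intrinsic layer impedances
        z = zeta[-1]                             # basal half-space
        for j in range(len(thicknesses) - 1, -1, -1):
            th = np.tanh(k[j] * thicknesses[j])
            z = zeta[j] * (z + zeta[j] * th) / (zeta[j] + z * th)
        Z[i] = z
    return Z

def log_likelihood(d_obs, d_pred, sigma_meas, sigma_discrepancy):
    """Gaussian log-likelihood with the dimensionality-discrepancy variance
    added to the measurement variance (the site-specific error model)."""
    var = sigma_meas**2 + sigma_discrepancy**2
    res = np.abs(d_obs - d_pred) ** 2
    return float(-0.5 * np.sum(res / var + np.log(2 * np.pi * var)))

# toy usage: three-layer model, apparent resistivity from the impedance
freqs = np.logspace(-3, 2, 30)
Z = mt1d_impedance(freqs, resistivities=[100.0, 10.0, 1000.0],
                   thicknesses=[500.0, 2000.0])
rho_app = np.abs(Z) ** 2 / (2 * np.pi * freqs * MU0)
```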

