Imaging near-surface heterogeneities by natural migration of backscattered surface waves: Field data test

Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. S197-S205 ◽  
Author(s):  
Zhaolun Liu ◽  
Abdullah AlTheyab ◽  
Sherif M. Hanafy ◽  
Gerard Schuster

We have developed a methodology for detecting the presence of near-surface heterogeneities by naturally migrating backscattered surface waves in controlled-source data. The near-surface heterogeneities must be located within a depth of approximately one-third of the dominant wavelength [Formula: see text] of the strong surface-wave arrivals. This natural migration method does not require knowledge of the near-surface phase-velocity distribution because it uses the recorded data to approximate the Green’s functions for migration. Prior to migration, the backscattered data are separated from the original records, and the band-pass-filtered data are migrated to give an estimate of the migration image at a depth of approximately one-third [Formula: see text]. Each band-passed data set gives a migration image at a different depth. Results with synthetic data and field data recorded over known faults validate the effectiveness of this method. Migrating the surface waves in recorded 2D and 3D data sets accurately reveals the locations of known faults. The limitation of this method is that it requires a dense array of receivers with a geophone interval of less than approximately one-half [Formula: see text].
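The depth rule quoted above (each band-pass filtered data set images at roughly one-third of the dominant surface-wave wavelength) can be sketched numerically. The phase velocity and band-center frequencies below are hypothetical illustration values, not taken from the paper:

```python
# Sketch: estimated imaging depth per frequency band, assuming the
# ~lambda/3 depth sensitivity of backscattered surface waves described
# in the abstract. Velocity and band centers are hypothetical.

def imaging_depth(phase_velocity_m_s, dominant_freq_hz):
    """Depth sensitivity ~ one-third of the dominant wavelength."""
    wavelength = phase_velocity_m_s / dominant_freq_hz
    return wavelength / 3.0

# Hypothetical near-surface Rayleigh-wave phase velocity of 400 m/s:
for f in (5.0, 10.0, 20.0):  # band-pass center frequencies in Hz
    print(f"{f:5.1f} Hz band -> image depth ~ {imaging_depth(400.0, f):.1f} m")
```

Lower-frequency bands thus probe deeper, which is why migrating several band-passed copies of the data yields images at several depths.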

Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. V213-V225 ◽  
Author(s):  
Shaohuan Zu ◽  
Hui Zhou ◽  
Yangkang Chen ◽  
Shan Qu ◽  
Xiaofeng Zou ◽  
...  

We have designed a periodically varying code that can avoid the problem of local coherency and make the interference distribute uniformly in a given range; hence, it was better at suppressing incoherent interference (blending noise) and preserving coherent useful signals than a random dithering code. We have also devised a new form of the iterative method to remove interference generated from the simultaneous source acquisition. In each iteration, we have estimated the interference using the blending operator following the proposed formula and then subtracted the interference from the pseudodeblended data. To further eliminate the incoherent interference and constrain the inversion, the data were then transformed to an auxiliary sparse domain for applying a thresholding operator. During the iterations, the threshold was decreased from the largest value to zero following an exponential function. The exponentially decreasing threshold aimed to gradually guide the deblended data toward a more acceptable model subspace. Two numerically blended synthetic data sets and one numerically blended practical field data set from an ocean bottom cable were used to demonstrate the usefulness of our proposed method and the better performance of the periodically varying code over the traditional random dithering code.
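The iterative loop described above (estimate interference, subtract it from the pseudodeblended data, threshold in a sparse domain with an exponentially decaying threshold) can be sketched as a toy. The FFT stands in for the auxiliary sparse domain, and `blend` is a generic linear operator representing blending followed by pseudodeblending; neither is the paper's actual operator:

```python
import numpy as np

def exp_threshold(i, n_iter, t_max):
    """Threshold decays exponentially from t_max toward zero."""
    return t_max * np.exp(-5.0 * i / max(n_iter - 1, 1))

def soft_threshold(coeffs, t):
    """Shrink coefficients toward zero; kill those below t."""
    mag = np.abs(coeffs)
    scale = np.where(mag > t, 1.0 - t / np.maximum(mag, 1e-12), 0.0)
    return scale * coeffs

def deblend(pseudodeblended, blend, n_iter=20):
    d = np.zeros_like(pseudodeblended)
    for i in range(n_iter):
        # Interference predicted from the current model estimate:
        interference = blend(d) - d
        residual = pseudodeblended - interference
        # Threshold in the auxiliary sparse domain (FFT here):
        coeffs = np.fft.fft(residual)
        t = exp_threshold(i, n_iter, np.abs(coeffs).max())
        d = np.real(np.fft.ifft(soft_threshold(coeffs, t)))
    return d
```

Early iterations keep only the strongest (coherent) components; as the threshold decays, progressively weaker signal is admitted into the model subspace.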


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. S87-S100 ◽  
Author(s):  
Hao Hu ◽  
Yike Liu ◽  
Yingcai Zheng ◽  
Xuejian Liu ◽  
Huiyi Lu

Least-squares migration (LSM) can be effective in mitigating the limitations of finite seismic acquisition, balancing the subsurface illumination, and improving the spatial resolution of the image, but it requires iterations of migration and demigration to obtain the desired subsurface reflectivity model. The computational efficiency and accuracy of the migration and demigration operators are crucial for applying the algorithm. We have developed a test of the feasibility of using the Gaussian beam as the wavefield-extrapolation operator for LSM, denoted as least-squares Gaussian beam migration. Our method combines the advantages of the LSM and the efficiency of the Gaussian beam propagator. Our numerical evaluations, including two synthetic data sets and one marine field data set, illustrate that the proposed approach could be used to obtain amplitude-balanced images and to broaden the bandwidth of the migrated images, in particular for the low-wavenumber components.
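The migration/demigration iteration underlying any LSM scheme can be sketched with a generic linear operator: demigration is the forward operator L, migration is its adjoint, and the reflectivity m minimizes ||Lm - d||². Here plain steepest descent and a dense matrix stand in for the Gaussian beam propagator and its adjoint; this is a toy, not the paper's implementation:

```python
import numpy as np

def lsm(L, d, n_iter=50, step=None):
    """Steepest-descent least-squares migration sketch.

    L : dense matrix standing in for the demigration operator.
    d : recorded data vector.
    """
    m = np.zeros(L.shape[1])
    if step is None:
        # Safe step length: 1 / (largest singular value)^2.
        step = 1.0 / np.linalg.norm(L, 2) ** 2
    for _ in range(n_iter):
        r = L @ m - d             # demigrate current model, form residual
        m = m - step * (L.T @ r)  # migrate the residual (adjoint) and update
    return m
```

In practice conjugate gradients replaces steepest descent, but the alternation of demigration (forward) and migration (adjoint) per iteration is the same.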


Geophysics ◽  
2014 ◽  
Vol 79 (4) ◽  
pp. B173-B185 ◽  
Author(s):  
Michael S. McMillan ◽  
Douglas W. Oldenburg

We evaluated a method for cooperatively inverting multiple electromagnetic (EM) data sets with bound constraints to produce a consistent 3D resistivity model with improved resolution. Field data from the Antonio gold deposit in Peru and synthetic data were used to demonstrate this technique. We first separately inverted field airborne time-domain EM (AEM), controlled-source audio-frequency magnetotellurics (CSAMT), and direct current resistivity measurements. Each individual inversion recovered a resistor related to gold-hosted silica alteration within a relatively conductive background. The outline of the resistor in each inversion was in reasonable agreement with the mapped extent of known near-surface silica alteration. Variations between resistor recoveries in each 3D inversion model motivated a subsequent cooperative method, in which AEM data were inverted sequentially with a combined CSAMT and DC data set. This cooperative approach was first applied to a synthetic inversion over an Antonio-like simulated resistivity model, and the inversion result was both qualitatively and quantitatively closer to the true synthetic model compared to individual inversions. Using the same cooperative method, field data were inverted to produce a model that defined the target resistor while agreeing with all data sets. To test the benefit of borehole constraints, synthetic boreholes were added to the inversion as upper and lower bounds at locations of existing boreholes. The ensuing cooperative constrained synthetic inversion model had the closest match to the true simulated resistivity distribution. Bound constraints from field boreholes were then calculated by a regression relationship among the total sulfur content, alteration type, and resistivity measurements from rock samples and incorporated into the inversion. 
The resulting cooperative constrained field inversion model clearly imaged the resistive silica zone, extended the area of interpreted alteration, and also highlighted conductive zones within the resistive region potentially linked to sulfide and gold mineralization.


Geophysics ◽  
1985 ◽  
Vol 50 (11) ◽  
pp. 1701-1720 ◽  
Author(s):  
Glyn M. Jones ◽  
D. B. Jovanovich

A new technique is presented for the inversion of head‐wave traveltimes to infer near‐surface structure. Traveltimes computed along intersecting pairs of refracted rays are used to reconstruct the shape of the first refracting horizon beneath the surface and variations in refractor velocity along this boundary. The information derived can be used as the basis for further processing, such as the calculation of near‐surface static delays. One advantage of the method is that the shape of the refractor is determined independently of the refractor velocity. With multifold coverage, rapid lateral changes in refractor geometry or velocity can be mapped. Two examples of the inversion technique are presented: one uses a synthetic data set; the other is drawn from field data shot over a deep graben filled with sediment. The results obtained using the synthetic data validate the method and support the conclusions of an error analysis, in which errors in the refractor velocity determined using receivers to the left and right of the shots are of opposite sign. The true refractor velocity therefore falls between the two sets of estimates. The refraction image obtained by inversion of the set of field data is in good agreement with a constant‐velocity reflection stack and illustrates that the ray inversion method can handle large lateral changes in refractor velocity or relief.
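The error-cancellation observation above (velocity estimates from receivers to the left and right of the shots err in opposite signs, so the true refractor velocity falls between them) mirrors the classical dipping-refractor result, in which averaging the two apparent slownesses recovers the refractor velocity. A one-line sketch, with hypothetical velocities:

```python
# Sketch: combine leftward- and rightward-shooting apparent refractor
# velocities. Averaging in slowness (the quantity that is approximately
# linear in dip) brackets the true refractor velocity. The input values
# are hypothetical.

def true_velocity_estimate(v_left, v_right):
    """Harmonic mean of the two apparent velocities."""
    return 2.0 / (1.0 / v_left + 1.0 / v_right)

# e.g. apparent velocities of 2000 m/s and 3000 m/s -> ~2400 m/s
```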


Geophysics ◽  
2013 ◽  
Vol 78 (5) ◽  
pp. M29-M41 ◽  
Author(s):  
Mahdi H. Almutlaq ◽  
Gary F. Margrave

We evaluated the concept of surface-consistent matching filters for processing time-lapse seismic data, in which matching filters are convolutional filters that minimize the sum-squared error between two signals. Because in the Fourier domain a matching filter is the spectral ratio of the two signals, we extended the well-known surface-consistent hypothesis such that the data term is a trace-by-trace spectral ratio of two data sets instead of only one (i.e., surface-consistent deconvolution). To avoid unstable division of spectra, we computed the spectral ratios in the time domain by first designing trace-sequential, least-squares matching filters, then Fourier transforming them. A subsequent least-squares solution then factored the trace-sequential matching filters into four operators: two surface-consistent (source and receiver) and two subsurface-consistent (offset and midpoint). We evaluated a time-lapse synthetic data set with nonrepeatable acquisition parameters, complex near-surface geology, and a variable subsurface reservoir layer. We computed the four-operator surface-consistent matching filters from two surveys, baseline and monitor, then applied these matching filters to the monitor survey to match it to the baseline survey over a temporal window where changes were not expected. This algorithm significantly reduced the effect of most of the nonrepeatable parameters, such as differences in source strength, receiver coupling, wavelet bandwidth and phase, and static shifts. We computed the normalized root-mean-square (NRMS) difference on the raw stacked data (baseline and monitor) and obtained a mean value of 70%. After applying the four-operator surface-consistent matching filters, this value was reduced to about 13.6%, computed from the final stacks.
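The NRMS difference quoted above (70% before matching, about 13.6% after) is conventionally computed as in this sketch, which follows the standard time-lapse repeatability definition: 200 times the RMS of the trace difference divided by the sum of the two trace RMS values, in percent:

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a trace."""
    return np.sqrt(np.mean(np.square(x)))

def nrms_percent(base, monitor):
    """Normalized RMS difference between baseline and monitor traces.

    0% for identical traces; 200% for traces of opposite polarity.
    """
    return 200.0 * rms(monitor - base) / (rms(monitor) + rms(base))
```

Identical traces give 0%, uncorrelated noise of equal power gives about 141%, and opposite-polarity traces give the maximum of 200%, which is why values in the low tens of percent indicate good repeatability.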


Geophysics ◽  
2021 ◽  
pp. 1-47
Author(s):  
N. A. Vinard ◽  
G. G. Drijkoningen ◽  
D. J. Verschuur

Hydraulic fracturing plays an important role when it comes to the extraction of resources in unconventional reservoirs. The microseismic activity arising during hydraulic fracturing operations needs to be monitored both to improve productivity and to make decisions about mitigation measures. Recently, deep learning methods have been investigated to localize earthquakes given field-data waveforms as input. For optimal results, these methods require large field data sets that cover the entire region of interest. In practice, such data sets are often scarce. To overcome this shortcoming, we propose initially to use a (large) synthetic data set with full waveforms to train a U-Net that reconstructs the source location as a 3D Gaussian distribution. As the field data set for our study, we use data recorded during hydraulic fracturing operations in Texas. Synthetic waveforms were modelled using a velocity model from the site that was also used for a conventional diffraction-stacking (DS) approach. To increase the U-Net's ability to localize seismic events, we augmented the synthetic data with different techniques, including the addition of field noise. We select the best performing U-Net using 22 events that have previously been identified to be confidently localized by DS and apply that U-Net to all 1245 events. We compare our predicted locations to DS and the DS locations refined by a relative location (DSRL) method. The U-Net-based locations are better constrained in depth compared to DS, and the mean hypocenter difference with respect to DSRL locations is 163 meters. This shows potential for the use of synthetic data to complement or replace field data for training. Furthermore, after training, the method returns the source locations in near real-time given the full waveforms, alleviating the need to pick arrival times.
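The training target described above, the source location encoded as a 3D Gaussian distribution on the model grid, can be generated as in this sketch; the grid shape and the width `sigma` are hypothetical choices, not values from the paper:

```python
import numpy as np

def gaussian_label(shape, center, sigma=2.0):
    """3D Gaussian heatmap peaking at the hypocenter grid index.

    shape  : (nz, ny, nx) grid dimensions.
    center : (iz, iy, ix) hypocenter index.
    sigma  : width in grid cells (hypothetical choice).
    """
    zz, yy, xx = np.meshgrid(*(np.arange(n) for n in shape), indexing="ij")
    d2 = (zz - center[0]) ** 2 + (yy - center[1]) ** 2 + (xx - center[2]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))
```

At inference time the predicted hypocenter is simply the argmax of the network's output volume, which is what makes the localization near real-time once training is done.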


Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB153-WB164 ◽  
Author(s):  
William Curry ◽  
Guojian Shan

Reflection seismic data typically are undersampled. Missing near offsets can be interpolated in reflection seismic data with pseudoprimaries, generated by crosscorrelating multiples and primaries in incomplete recorded data. These pseudoprimary data can be generated at the missing near offsets but contain many artifacts, so it is undesirable simply to replace the missing data with the pseudoprimaries. A nonstationary prediction-error filter (PEF) can instead be estimated from the pseudoprimaries and used to interpolate missing data to produce an interpolated output that is superior to direct substitution of the pseudoprimaries into the missing offsets. This approach is applied successfully to 2D synthetic and field data. Limitations in conventional acquisition geometry limit this approach in 3D, which can be illustrated using a synthetic data set.
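The pseudoprimary construction above rests on crosscorrelation, which in the frequency domain is multiplication by a complex conjugate. A minimal single-trace sketch follows; the actual method correlates multiples and primaries across whole gathers, which this toy does not attempt:

```python
import numpy as np

def crosscorrelate(multiple, primary):
    """Circular crosscorrelation of two traces via the FFT.

    A multiple correlated with the primary that generated it collapses
    to an event at the traveltime difference -- the pseudoprimary.
    """
    n = len(multiple)
    M = np.fft.rfft(multiple)
    P = np.fft.rfft(primary)
    return np.fft.irfft(M * np.conj(P), n)
```

For a primary spike at time sample 3 and a multiple spike at sample 8, the correlation peaks at lag 5, i.e., the extra leg of the multiple's raypath; artifacts arise because real data contain many crossterms, which is why the paper estimates a PEF from the pseudoprimaries instead of substituting them directly.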


Geophysics ◽  
2016 ◽  
Vol 81 (5) ◽  
pp. T265-T284 ◽  
Author(s):  
Joost van der Neut ◽  
Kees Wapenaar

Iterative substitution of the multidimensional Marchenko equation has been introduced recently to integrate internal multiple reflections in the seismic imaging process. In so-called Marchenko imaging, a macro velocity model of the subsurface is required to meet this objective. The model is used to back-propagate the data during the first iteration and to truncate integrals in time during all successive iterations. In case of an erroneous model, the image will be blurred (akin to conventional imaging) and artifacts may arise from inaccurate integral truncations. However, the scheme is still successful in removing artifacts from internal multiple reflections. Inspired by these observations, we rewrote the Marchenko equation, such that it can be applied early in a processing flow, without the need of a macro velocity model. Instead, we have required an estimate of the two-way traveltime surface of a selected horizon in the subsurface. We have introduced an approximation, such that adaptive subtraction can be applied. As a solution, we obtained a new data set, in which all interactions (primaries and multiples) with the part of the medium above the picked horizon had been eliminated. Unlike various other internal multiple elimination algorithms, the method can be applied at any specified target horizon, without having to resolve for internal multiples from shallower horizons. We successfully applied the method on synthetic data, where limitations were reported due to thin layers, diffraction-like discontinuities, and a finite acquisition aperture. A field data test was also performed, in which the kinematics of the predicted updates were demonstrated to match with internal multiples in the recorded data, but it appeared difficult to subtract them.
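The adaptive subtraction step mentioned above is commonly implemented as a short least-squares matching filter that shapes the predicted internal multiples to the data before subtracting. A single-trace sketch with a hypothetical filter length; the paper's multichannel implementation is more involved:

```python
import numpy as np

def adaptive_subtract(data, prediction, flen=11):
    """Subtract a least-squares-matched prediction from the data.

    Builds a convolution matrix from delayed copies of the prediction,
    solves for the filter that best fits the data, and subtracts.
    """
    n = len(data)
    G = np.column_stack([
        np.concatenate([np.zeros(k), prediction[: n - k]])
        for k in range(flen)
    ])
    f, *_ = np.linalg.lstsq(G, data, rcond=None)
    return data - G @ f
```

When the predicted multiples match the data kinematically (as reported for the field test) but differ in amplitude, phase, or delay, the filter absorbs that difference; when the mismatch is too large, subtraction fails, which is the difficulty the abstract reports.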


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance compared to conventional datuming.
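The weighted, damped least-squares formulation can be sketched with explicit normal equations. Here `A`, `W`, and `eps` are placeholders for the extrapolation operator, the data weights, and the damping; the paper's efficiency gain comes from keeping only a few diagonals of the Hessian, which this dense sketch does not do:

```python
import numpy as np

def damped_lsq(A, d, W=None, eps=0.1):
    """Solve min ||W (A m - d)||^2 + eps^2 ||m||^2 for m.

    A   : extrapolation operator (dense stand-in).
    d   : recorded (irregularly sampled) wavefield.
    W   : data-weighting matrix; identity if omitted.
    eps : damping, stabilizes the inversion.
    """
    if W is None:
        W = np.eye(A.shape[0])
    AW = W @ A
    H = AW.T @ AW + eps ** 2 * np.eye(A.shape[1])  # damped Hessian
    return np.linalg.solve(H, AW.T @ (W @ d))
```

Replacing the full `H` with a band of its diagonals is what trades the two orders of magnitude in cost for the controllable dip limitation described in the abstract.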


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
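Of the three methods, the classical bootstrap (BS) is the simplest to sketch: resample the rows (samples) of the data matrix with replacement and re-fit the factor model on each resample, then summarize the spread of the resulting factor profiles. The PMF fit itself is omitted here; this only illustrates the resampling step:

```python
import numpy as np

def bootstrap_resamples(X, n_boot, rng=None):
    """Yield n_boot row-resampled copies of the data matrix X.

    Each resample draws len(X) rows from X with replacement; a factor
    model would be re-fit on each one to estimate uncertainty.
    """
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    for _ in range(n_boot):
        yield X[rng.integers(0, n, size=n)]
```

DISP and BS-DISP go further by also displacing fitted factor elements to probe rotational ambiguity, which pure row-resampling cannot capture.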

