Methods of correlation noise minimization and suppression for multisource acquisition in vibroseis survey

Author(s):
A. N. Oshkin, A. I. Kon’kov, A. V. Tarasov, A. A. Shuvalov, V. I. Ignat’ev

The use of several simultaneously operating sources in seismic operations makes it possible to acquire more data per unit of time than classical single-source work, and also to improve the seismic data recording system. Depending on the type of seismic source used (vibratory or impulsive), different methods of signal separation are applied. With the vibroseis method, signal separation becomes possible at the correlation-processing stage of the vibrograms. In this paper, we demonstrate methods for constructing non-correlating signals for use in vibroseis surveys (with an example of applying such signals to synthetic data) and hyperbolic median filtering to minimize correlation noise and incoherent noise.
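As a minimal sketch of the underlying idea (not the authors' specific sweep-construction method), the snippet below builds two linear sweeps in disjoint frequency bands, compares the peak of their cross-correlation with their autocorrelation peaks, and correlates a simulated two-source vibrogram with one of the sweeps. All parameters are illustrative assumptions.

```python
# Minimal sketch: two sweeps in disjoint frequency bands as an example of
# weakly correlating vibroseis signals (illustrative parameters only).
import numpy as np
from scipy.signal import chirp

fs = 1000.0                      # sampling rate, Hz
t = np.arange(0, 8.0, 1.0 / fs)  # 8 s sweep length

# Two linear sweeps that do not share a frequency band.
sweep_a = chirp(t, f0=8.0,  t1=t[-1], f1=40.0, method="linear")
sweep_b = chirp(t, f0=45.0, t1=t[-1], f1=90.0, method="linear")

def xcorr_peak(x, y):
    """Peak of the normalized cross-correlation of x and y."""
    c = np.correlate(x, y, mode="full")
    return np.max(np.abs(c)) / np.sqrt(np.dot(x, x) * np.dot(y, y))

print("auto  A :", xcorr_peak(sweep_a, sweep_a))   # = 1.0 by construction
print("auto  B :", xcorr_peak(sweep_b, sweep_b))   # = 1.0 by construction
print("cross AB:", xcorr_peak(sweep_a, sweep_b))   # expected to be much smaller

# Correlation processing of a simulated two-source vibrogram: the record is
# the sum of each sweep convolved with its own sparse reflectivity series.
rng = np.random.default_rng(0)
refl_a = rng.normal(size=2000) * (rng.random(2000) < 0.01)
refl_b = rng.normal(size=2000) * (rng.random(2000) < 0.01)
vibrogram = np.convolve(refl_a, sweep_a) + np.convolve(refl_b, sweep_b)

# Correlating with sweep A mainly recovers source A's correlogram.
correlogram_a = np.correlate(vibrogram, sweep_a, mode="valid")
```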

Geophysics, 2019, Vol. 84 (2), pp. N15-N27
Author(s):
Carlos A. M. Assis, Henrique B. Santos, Jörg Schleicher

Acoustic impedance (AI) is a widely used seismic attribute in stratigraphic interpretation. Because of the frequency-band-limited nature of seismic data, seismic amplitude inversion cannot determine AI itself, but it can only provide an estimate of its variations, the relative AI (RAI). We have revisited and compared two alternative methods to transform stacked seismic data into RAI. One is colored inversion (CI), which requires well-log information, and the other is linear inversion (LI), which requires knowledge of the seismic source wavelet. We start by formulating the two approaches in a theoretically comparable manner. This allows us to conclude that both procedures are theoretically equivalent. We proceed to check whether the use of the CI results as the initial solution for LI can improve the RAI estimation. In our experiments, combining CI and LI cannot provide superior RAI results to those produced by each approach applied individually. Then, we analyze the LI performance with two distinct solvers for the associated linear system. Moreover, we investigate the sensitivity of both methods regarding the frequency content present in synthetic data. The numerical tests using the Marmousi2 model demonstrate that the CI and LI techniques can provide an RAI estimate of similar accuracy. A field-data example confirms the analysis using synthetic-data experiments. Our investigations confirm the theoretical and practical similarities of CI and LI regardless of the numerical strategy used in LI. An important result of our tests is that an increase in the low-frequency gap in the data leads to slightly deteriorated CI quality. In this case, LI required more iterations for the conjugate-gradient least-squares solver, but the final results were not much affected. Both methodologies provided interesting RAI profiles compared with well-log data, at low computational cost and with a simple parameterization.
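A rough sketch of the colored-inversion idea mentioned above (assumed spectral exponent, band limits, and operator design; not the authors' implementation): a frequency-domain operator with amplitude proportional to |f|^(-alpha) and a -90 degree phase rotation is applied to a stacked trace to produce a band-limited relative-AI estimate. In practice alpha and the band are fitted to well-log impedance spectra.

```python
# Sketch of a colored-inversion (CI) style operator: shape the trace spectrum
# to |f|^(-alpha) and rotate the phase by -90 degrees.  alpha and the band
# limits here are illustrative; in CI they are derived from well-log AI spectra.
import numpy as np

def colored_inversion(trace, dt, alpha=0.5, f_low=5.0, f_high=80.0):
    n = len(trace)
    spec = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(n, d=dt)

    # Amplitude shaping: |f|^(-alpha), zero outside the assumed seismic band.
    amp = np.zeros_like(freqs)
    band = (freqs >= f_low) & (freqs <= f_high)
    amp[band] = freqs[band] ** (-alpha)

    # -90 degree phase rotation (multiplication by -1j), akin to trace integration.
    operator = amp * (-1j)
    return np.fft.irfft(spec * operator, n=n)

# Usage on a synthetic trace (illustrative only).
rng = np.random.default_rng(1)
trace = rng.normal(size=1000)
rai = colored_inversion(trace, dt=0.004)
```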


Geophysics, 1983, Vol. 48 (12), pp. 1598-1610
Author(s):
J. Bee Bednar

Seismic exploration problems frequently require analysis of noisy data. Traditional processing removes or reduces noise effects by linear statistical filtering. This filtering process can be viewed as a weighted averaging with coefficients chosen to enhance the data information content. When the signal and noise components occupy separate spectral windows, or when the statistical properties of the noise are sufficiently understood, linear statistical filtering is an effective tool for data enhancement. When the noise properties are not well understood, or when the noise and signal occupy the same spectral window, linear or weighted averaging performs poorly as a signal enhancement process. One must look for alternative procedures to extract the desired information. As a nonlinear operation which is statistically similar to averaging, median filtering represents one potential alternative. This paper investigates the application of median filtering to several seismic data enhancement problems. A methodology for using median filtering as one step in cepstral deconvolution or seismic signature estimation is presented. The median filtering process is applied to statistical editing of acoustic impedance data and the removal of noise bursts from reflection data. The most surprising conclusion obtained from the empirical studies on synthetic data is that, in high‐noise situations, cepstral‐based median filtering appears to perform exceptionally well as a deconvolver but poorly as a signature estimator. For real data, the process is stable and, to the extent that the data follow the convolutional model, does a reasonable job at both pulse estimation and deconvolution.
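A minimal sketch of the basic nonlinear operation discussed above (a generic 1D running median for removing noise bursts, not the paper's cepstral-domain workflow): the median preserves edges and isolated events while rejecting spikes that a moving average would smear across its window.

```python
# Generic 1D median filter for despiking a trace (the basic operation the
# paper builds on; not its cepstral deconvolution workflow).
import numpy as np
from scipy.signal import medfilt

rng = np.random.default_rng(2)
n = 500
trace = np.sin(2 * np.pi * 0.01 * np.arange(n))        # smooth "signal"
noisy = trace + rng.normal(scale=0.05, size=n)          # background noise
spikes = rng.random(n) < 0.02                           # ~2% noise bursts
noisy[spikes] += rng.normal(scale=5.0, size=spikes.sum())

despiked = medfilt(noisy, kernel_size=5)                # 5-point running median

# A 5-point moving average, by contrast, spreads each burst over its window.
averaged = np.convolve(noisy, np.ones(5) / 5.0, mode="same")
```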


2020, Vol. 224 (1), pp. 100-120
Author(s):
Christian Poppeliers, Leiph Preston

SUMMARY We use Monte Carlo simulations to explore the effects of earth model uncertainty on the estimation of the seismic source time functions that correspond to the six independent components of the point source seismic moment tensor. Specifically, we invert synthetic data using Green’s functions estimated from a suite of earth models that contain stochastic density and seismic wave-speed heterogeneities. We find that the primary effect of earth model uncertainty on the data is that the amplitude of the first-arriving seismic energy is reduced, and that this amplitude reduction is proportional to the magnitude of the stochastic heterogeneities. Also, we find that the amplitude of the estimated seismic source functions can be under- or overestimated, depending on the stochastic earth model used to create the data. This effect is totally unpredictable, meaning that uncertainty in the earth model can lead to unpredictable biases in the amplitude of the estimated seismic source functions.
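Stripped to its essentials, the experiment can be caricatured as a linear inverse problem: data generated with one Green's function are deconvolved with Green's functions from randomly perturbed earth models, and the recovered source amplitude is biased differently in each realization. The sketch below uses an entirely hypothetical toy Green's function only to illustrate that Monte Carlo logic, not the paper's moment-tensor inversion.

```python
# Monte Carlo caricature of model-uncertainty bias: invert data made with the
# "true" Green's function using Green's functions from perturbed earth models.
# The forward operator here is a toy convolution; it is not the paper's setup.
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
nt = 200
source_true = np.exp(-0.5 * ((np.arange(nt) - 50) / 5.0) ** 2)  # toy source time function

def greens(perturbation):
    """Toy Green's function: a delayed spike plus a weak coda; the first-arrival
    amplitude varies with the (hypothetical) earth-model perturbation."""
    g = np.zeros(nt)
    g[60] = 1.0 * (1.0 + perturbation)
    g[90] = 0.3
    return g

data = np.convolve(greens(0.0), source_true)[:nt]   # data from the "true" model

recovered_peaks = []
for _ in range(100):                                  # Monte Carlo over earth models
    g_mc = greens(rng.normal(scale=0.2))              # stochastic model perturbation
    G = toeplitz(g_mc, np.zeros(nt))                  # convolution matrix of g_mc
    s_hat, *_ = np.linalg.lstsq(G, data, rcond=None)  # least-squares deconvolution
    recovered_peaks.append(s_hat.max())

print("true peak:", source_true.max())
print("recovered peak: mean %.3f, std %.3f"
      % (np.mean(recovered_peaks), np.std(recovered_peaks)))
```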


Geophysics, 1983, Vol. 48 (7), pp. 854-886
Author(s):
Ken Larner, Ron Chambers, Mai Yang, Walt Lynn, Willon Wai

Despite significant advances in marine streamer design, seismic data are often plagued by coherent noise having approximately linear moveout across stacked sections. With an understanding of the characteristics that distinguish such noise from signal, we can decide which noise‐suppression techniques to use and at what stages to apply them in acquisition and processing. Three general mechanisms that might produce such noise patterns on stacked sections are examined: direct and trapped waves that propagate outward from the seismic source, cable motion caused by the tugging action of the boat and tail buoy, and scattered energy from irregularities in the water bottom and sub‐bottom. Depending upon the mechanism, entirely different noise patterns can be observed on shot profiles and common‐midpoint (CMP) gathers; these patterns can be diagnostic of the dominant mechanism in a given set of data. Field data from Canada and Alaska suggest that the dominant noise is from waves scattered within the shallow sub‐bottom. This type of noise, while not obvious on the shot records, is actually enhanced by CMP stacking. Moreover, this noise is not confined to marine data; it can be as strong as surface wave noise on stacked land seismic data as well. Of the many processing tools available, moveout filtering is best for suppressing the noise while preserving signal. Since the scattered noise does not exhibit a linear moveout pattern on CMP‐sorted gathers, moveout filtering must be applied either to traces within shot records and common‐receiver gathers or to stacked traces. Our data example demonstrates that although it is more costly, moveout filtering of the unstacked data is particularly effective because it conditions the data for the critical data‐dependent processing steps of predictive deconvolution and velocity analysis.
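One simple realization of moveout filtering of the kind discussed (a sketch under assumed geometry and an assumed noise velocity, not the authors' production algorithm): flatten the linearly moving-out noise with a per-trace static shift, estimate it with a median across traces, subtract the estimate, and undo the shift.

```python
# Sketch of moveout filtering: flatten linearly moving-out noise by a static
# shift per trace, estimate it with a trace-to-trace median, and subtract it.
# Geometry and the noise velocity are assumed for illustration.
import numpy as np

def moveout_filter(gather, offsets, dt, noise_velocity):
    """gather: (ntraces, nt) array; returns the gather with linear noise attenuated."""
    ntraces, nt = gather.shape
    shifts = np.round(np.abs(offsets) / noise_velocity / dt).astype(int)

    # Flatten the linear event by shifting each trace back by its moveout.
    flattened = np.zeros_like(gather)
    for i in range(ntraces):
        flattened[i, : nt - shifts[i]] = gather[i, shifts[i]:]

    # The flattened noise is coherent across traces; a median estimates it
    # while mostly ignoring the (now non-flat) reflections.
    noise_model = np.median(flattened, axis=0)

    # Subtract the noise estimate after shifting it back to each trace's moveout.
    filtered = gather.copy()
    for i in range(ntraces):
        filtered[i, shifts[i]:] -= noise_model[: nt - shifts[i]]
    return filtered
```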


Geophysics, 2006, Vol. 71 (5), pp. U67-U76
Author(s):
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of computing the Hessian, so an efficient approximation is introduced. Approximation is achieved by computing a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield for approximately two orders of magnitude less in cost; but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common receiver data that are superior in appearance compared to conventional datuming.
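The weighted, damped least-squares solution referred to above has the generic normal-equations form m = (A^H W A + eps I)^(-1) A^H W d. The sketch below solves that small dense system directly, purely to show the algebra; the paper's extrapolation operator A, its weights, and the diagonal Hessian approximation are replaced by toy placeholders.

```python
# Generic weighted, damped least-squares solve (normal equations), shown only
# to illustrate the algebra; the wavefield-extrapolation operator, weights and
# damping of the paper are replaced by toy values here.
import numpy as np

def damped_least_squares(A, d, w=None, eps=1e-2):
    """Minimize ||W^(1/2) (A m - d)||^2 + eps ||m||^2 for complex A, d."""
    n = A.shape[1]
    W = np.eye(A.shape[0]) if w is None else np.diag(w)
    lhs = A.conj().T @ W @ A + eps * np.eye(n)
    rhs = A.conj().T @ W @ d
    return np.linalg.solve(lhs, rhs)

# Toy usage: an overdetermined complex system standing in for extrapolation.
rng = np.random.default_rng(4)
A = rng.normal(size=(50, 20)) + 1j * rng.normal(size=(50, 20))
m_true = rng.normal(size=20)
d = A @ m_true + 0.1 * rng.normal(size=50)
m_est = damped_least_squares(A, d, eps=1e-1)
```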


Geophysics, 2016, Vol. 81 (6), pp. A17-A21
Author(s):
Juan I. Sabbione, Mauricio D. Sacchi

The coefficients that synthesize seismic data via the hyperbolic Radon transform (HRT) are estimated by solving a linear-inverse problem. In the classical HRT, the computational cost of the inverse problem is proportional to the size of the data and the number of Radon coefficients. We have developed a strategy that significantly speeds up the implementation of time-domain HRTs. For this purpose, we have defined a restricted model space of coefficients applying hard thresholding to an initial low-resolution Radon gather. Then, an iterative solver that operated on the restricted model space was used to estimate the group of coefficients that synthesized the data. The method is illustrated with synthetic data and tested with a marine data example.
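A small, self-contained sketch of the overall strategy (assumed discretization, grid sizes, and threshold; not the authors' code): build a time-domain HRT operator on a tiny grid, form a low-resolution Radon gather with the adjoint, hard-threshold it to restrict the model space, and solve the least-squares problem only over the surviving coefficients.

```python
# Sketch of a restricted-model-space hyperbolic Radon inversion on a tiny grid.
# Operator discretization, grid sizes, and the threshold are all illustrative.
import numpy as np

nt, dt = 100, 0.004
offsets = np.linspace(0.0, 1000.0, 12)       # m
taus = np.arange(nt) * dt                     # intercept times, s
vels = np.linspace(1500.0, 3500.0, 20)        # velocities, m/s

# Dense forward operator L: d(x, t) = sum over (tau, v) of m(tau, v) placed at
# t = sqrt(tau^2 + (x / v)^2), nearest-sample spraying.
nmodel = len(taus) * len(vels)
L = np.zeros((len(offsets) * nt, nmodel))
for j, tau in enumerate(taus):
    for k, v in enumerate(vels):
        col = j * len(vels) + k
        t_hyp = np.sqrt(tau**2 + (offsets / v) ** 2)
        it = np.round(t_hyp / dt).astype(int)
        valid = it < nt
        L[np.flatnonzero(valid) * nt + it[valid], col] = 1.0

# Synthetic data: two hyperbolic events.
m_true = np.zeros(nmodel)
m_true[50 * len(vels) + 10] = 1.0
m_true[80 * len(vels) + 15] = -0.7
d = L @ m_true

# Step 1: low-resolution Radon gather from the adjoint, then hard thresholding.
m_adj = L.T @ d
keep = np.abs(m_adj) > 0.5 * np.abs(m_adj).max()    # restricted model space

# Step 2: least squares restricted to the surviving coefficients.
m_rest, *_ = np.linalg.lstsq(L[:, keep], d, rcond=None)
m_est = np.zeros(nmodel)
m_est[keep] = m_rest
```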


Geophysics, 2006, Vol. 71 (5), pp. C81-C92
Author(s):
Helene Hafslund Veire, Hilde Grude Borgos, Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. We quantify the uncertainty in estimations to utilize information about pressure- and saturation-related changes in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework. Here, the solution of the problem will be represented by a probability density function (PDF), providing estimations of uncertainties as well as direct estimations of the properties. A stochastic model for estimation of pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock physical relationships are used to set up a prior stochastic model. PP reflection coefficient differences are used to establish a likelihood model for linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between different variables of the model as well as spatial dependencies for each of the variables. In addition, information about possible bottlenecks causing large uncertainties in the estimations can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
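Stripped of the rock-physics detail, the Bayesian machinery described above reduces, in the linear-Gaussian case, to a standard posterior update: a prior on the pressure and saturation changes, a linearized forward operator mapping them to PP reflection-coefficient differences, and a data covariance. The sketch below shows only that update with a hypothetical 2x2 forward operator; the paper's actual likelihood model is nonlinear and includes spatial coupling.

```python
# Linear-Gaussian caricature of the Bayesian update: posterior mean and
# covariance of m = [dP, dS] given d = G m + noise.  G, the prior, and the
# noise level are hypothetical placeholders, not the paper's rock-physics model.
import numpy as np

G = np.array([[0.8, 0.3],        # hypothetical sensitivities of two AVO
              [0.2, 0.9]])       # difference attributes to [dP, dS]
m_prior = np.zeros(2)            # prior mean of the changes
C_m = np.diag([1.0, 1.0])        # prior covariance
C_d = np.diag([0.05, 0.05])      # data (noise) covariance

d_obs = np.array([0.25, 0.70])   # observed PP reflection-coefficient differences

# Posterior of a linear-Gaussian model:
#   mean = m_prior + K (d_obs - G m_prior),  K = C_m G^T (G C_m G^T + C_d)^(-1)
K = C_m @ G.T @ np.linalg.inv(G @ C_m @ G.T + C_d)
m_post = m_prior + K @ (d_obs - G @ m_prior)
C_post = (np.eye(2) - K @ G) @ C_m

print("posterior mean [dP, dS]:", m_post)
print("posterior std:", np.sqrt(np.diag(C_post)))
```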

