Variable-depth streamer acquisition: Broadband data for imaging and inversion

Geophysics ◽  
2013 ◽  
Vol 78 (2) ◽  
pp. WA27-WA39 ◽  
Author(s):  
Robert Soubaras ◽  
Yves Lafet

Conventional marine acquisition uses a streamer towed at a constant depth. The resulting receiver ghost notch sets the maximum recoverable frequency. To push this limit, the streamer must be towed at a very shallow depth, which compromises the low frequencies. Variable-depth streamer (VDS) acquisition aims at the best possible signal-to-noise ratio at low frequencies by towing the streamer deep, while varying the depth profile with offset so that the high-frequency bandwidth is not limited by notches as in conventional constant-depth streamer acquisition. The idea is to exploit notch diversity: each receiver has a different notch, so the final result, which combines different receivers, has none. The key step in processing VDS data is receiver deghosting. We found that the optimal receiver deghosting should be performed after imaging rather than as a preprocessing step, using two inputs, the migration and the mirror migration, and a new joint deconvolution algorithm that produces a 3D true-amplitude deghosted output. The method can be applied poststack, with the migration and mirror migration images as inputs and the deghosted image as output. With a multichannel joint deconvolution, the inputs are the migrated and mirror-migrated image gathers and the outputs are prestack deghosted image gathers. The method preserves amplitude-versus-offset behavior: on synthetic examples, the deghosted output matches a reference obtained by migrating data modeled without a reflecting water surface. One real data set is used to illustrate the method, and another to confirm that prestack elastic inversion can be performed on the deghosted gathers.
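
The notch-diversity argument can be made concrete with a small numerical sketch. The snippet below (a minimal illustration, not the authors' deghosting algorithm; the depth values are hypothetical and the geometry is 1D vertical incidence) computes the receiver-ghost amplitude response |1 - exp(-i4πfd/c)| for several streamer depths and shows that, although each depth has its notches at multiples of c/(2d), a simple RMS combination of the responses has no deep common notch.

```python
# A minimal sketch (not the authors' algorithm) of receiver-ghost notch diversity:
# each receiver depth d places spectral notches at f = n*c/(2d), so combining
# receivers towed at different depths leaves no common notch in the band.
import numpy as np

c = 1500.0                                 # water velocity (m/s)
freqs = np.linspace(5.0, 125.0, 500)       # frequency axis (Hz)
depths = [15.0, 25.0, 35.0, 50.0]          # hypothetical variable-depth profile samples (m)

def ghost_amplitude(f, d, c=c):
    """|1 - exp(-i*4*pi*f*d/c)| for a receiver at depth d (vertical incidence)."""
    return np.abs(1.0 - np.exp(-4j * np.pi * f * d / c))

spectra = np.array([ghost_amplitude(freqs, d) for d in depths])
combined = np.sqrt(np.mean(spectra**2, axis=0))   # simple RMS combination of receivers

for d, s in zip(depths, spectra):
    print(f"depth {d:4.1f} m: first notch at {c / (2 * d):5.1f} Hz, min amplitude {s.min():.2f}")
print(f"combined response minimum in 5-125 Hz: {combined.min():.2f} (no deep common notch)")
```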

Geophysics ◽  
1995 ◽  
Vol 60 (3) ◽  
pp. 796-809 ◽  
Author(s):  
Zhong‐Min Song ◽  
Paul R. Williamson ◽  
R. Gerhard Pratt

In full-wave inversion of seismic data in complex media, it is desirable to use finite differences or finite elements for the forward modeling, but such methods are still prohibitively expensive when implemented in 3-D. Full-wave 2-D inversion schemes are of limited utility even in 2-D media because they do not model 3-D dynamics correctly. Many seismic experiments effectively assume that the geology varies in two dimensions only but generate 3-D (point-source) wavefields; that is, they are "two-and-one-half-dimensional" (2.5-D), and this configuration can be exploited to model 3-D propagation efficiently in such media. We propose a frequency-domain full-wave inversion algorithm that uses a 2.5-D finite-difference forward-modeling method. The calculated seismograms can be compared directly with real data, which allows the inversion to be iterated. We use a descent-type method to minimize a least-squares measure of the wavefield mismatch at the receivers. The acute nonlinearity caused by phase wrapping, which corresponds to time-domain cycle skipping, is avoided either by starting the inversion from a low-frequency component of the data or by constructing a starting model with traveltime tomography. The inversion then proceeds by stages at successively higher frequencies across the observed bandwidth. The frequency-domain formulation is particularly efficient for crosshole configurations and also allows easy incorporation of attenuation, via complex velocities, in both forward modeling and inversion; this requires introducing complex source amplitudes as additional unknowns in the inversion. Synthetic studies show that the iterative scheme achieves the theoretical maximum resolution for the velocity reconstruction and that strongly attenuative zones can be recovered with reasonable accuracy. Preliminary results from the application of the method to a real data set are also encouraging.
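
As an illustration of the frequency-continuation strategy, the toy sketch below (a one-parameter stand-in, not the authors' 2.5-D finite-difference inversion) recovers a single slowness by descent on a monochromatic least-squares misfit, moving from low to high frequencies so the phase error never wraps; starting directly at the highest frequency would cycle-skip for this starting model.

```python
# A toy, fully runnable sketch (not the authors' 2.5-D scheme) of frequency-domain
# inversion with frequency continuation: a single unknown slowness is recovered by
# Gauss-Newton-scaled descent, starting at a low frequency so the phase error stays
# below pi; starting directly at 20 Hz would wrap the phase at the far offsets.
import numpy as np

offsets = np.linspace(100.0, 500.0, 9)          # receiver offsets (m)
s_true, s_start = 1.0 / 2000.0, 1.0 / 2600.0    # true and starting slowness (s/m)

def predict(s, f):
    """Monochromatic unit-amplitude wavefield at the receivers for slowness s."""
    return np.exp(-2j * np.pi * f * s * offsets)

def invert(s0, freq_schedule, n_iter=25):
    s = s0
    for f in freq_schedule:                      # stage-by-stage, low frequencies first
        d_obs = predict(s_true, f)               # synthetic "observed" data at this frequency
        scale = np.sum((2.0 * np.pi * f * offsets) ** 2)       # Gauss-Newton step scaling
        for _ in range(n_iter):
            r = predict(s, f) - d_obs            # data residual at the receivers
            # gradient of E = 0.5*sum|r|^2, with d(predict)/ds = -2j*pi*f*x*predict
            grad = np.sum(np.real(np.conj(r) * (-2j * np.pi * f * offsets) * predict(s, f)))
            s -= grad / scale
    return s

print("recovered velocity: %.1f m/s" % (1.0 / invert(s_start, [2.0, 5.0, 10.0, 20.0])))
```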


Geophysics ◽  
1990 ◽  
Vol 55 (5) ◽  
pp. 527-538 ◽  
Author(s):  
E. Crase ◽  
A. Pica ◽  
M. Noble ◽  
J. McDonald ◽  
A. Tarantola

Nonlinear elastic waveform inversion has advanced to the point where it is now possible to invert real multiple‐shot seismic data. The iterative gradient algorithm that we employ can readily accommodate robust minimization criteria which tend to handle many types of seismic noise (noise bursts, missing traces, etc.) better than the commonly used least‐squares minimization criteria. Although there are many robust criteria from which to choose, we have tested only a few. In particular, the Cauchy criterion and the hyperbolic secant criterion perform very well in both noise‐free and noise‐added inversions of numerical data. Although the real data set, which we invert using the sech criterion, is marine (pressure sources and receivers) and is very much dominated by unconverted P waves, we can, for the most part, resolve the short wavelengths of both P impedance and S impedance. The long wavelengths of velocity (the background) are assumed known. Because we are deriving nearly all impedance information from unconverted P waves in this inversion, data acquisition geometry must have sufficient multiplicity in subsurface coverage and a sufficient range of offsets, just as in amplitude‐versus‐offset (AVO) inversion. However, AVO analysis is implicitly contained in elastic waveform inversion algorithms as part of the elastic wave equation upon which the algorithms are based. Because the real‐data inversion is so large—over 230,000 unknowns (340,000 when density is included) and over 600,000 data values—most statistical analyses of parameter resolution are not feasible. We qualitatively verify the resolution of our results by inverting a numerical data set which has the same acquisition geometry and corresponding long wavelengths of velocity as the real data, but has semirandom perturbations in the short wavelengths of P and S impedance.
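
The practical difference between the least-squares, Cauchy, and hyperbolic secant criteria lies in their influence functions (the derivative of the misfit with respect to a residual). The sketch below (normalizations are illustrative and may differ from the paper's exact definitions) shows how the robust criteria bound the contribution of large residuals such as noise bursts, whereas least squares lets them dominate the gradient.

```python
# A minimal sketch (illustrative normalizations) comparing least squares with the
# robust Cauchy and hyperbolic secant (sech) criteria: their influence functions
# saturate, so noise bursts and bad traces pull the gradient far less than under
# least squares.
import numpy as np

def misfit_and_influence(r, criterion, sigma=1.0):
    x = r / sigma
    if criterion == "l2":
        return 0.5 * x**2, x / sigma
    if criterion == "cauchy":
        return np.log(1.0 + 0.5 * x**2), x / (sigma * (1.0 + 0.5 * x**2))
    if criterion == "sech":
        return np.log(np.cosh(x)), np.tanh(x) / sigma
    raise ValueError(criterion)

residuals = np.array([0.1, 0.5, 1.0, 5.0, 50.0])   # the last two mimic noise bursts
for name in ("l2", "cauchy", "sech"):
    rho, drho = misfit_and_influence(residuals, name)
    print(f"{name:6s} misfit {np.round(rho, 2)}  influence {np.round(drho, 3)}")
```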


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. Q37-Q44 ◽  
Author(s):  
Margherita Corciulo ◽  
Philippe Roux ◽  
Michel Campillo ◽  
Dominique Dubucq

Recent studies in geophysics have investigated the use of seismic-noise correlations to measure weak velocity variations from seismic-noise recordings. However, the algorithms classically used to monitor medium velocities are computationally demanding, which makes them impractical at the smaller scales of an exploration context, where continuous data sets recorded on dense arrays of sensors have to be analyzed. We applied a faster technique that monitors small velocity changes from the instantaneous phase of the seismic-noise crosscorrelation functions. We performed comparisons with existing algorithms using synthetic signals. The results obtained for a real data set show that the statistical distribution of the velocity-change estimates provides reliable measurements, despite the low signal-to-noise ratio obtained from the noise-correlation process.
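
A minimal sketch of the underlying measurement, under assumed parameters rather than the authors' exact workflow: the instantaneous phase difference between a current and a reference correlation (obtained from the analytic signal) is converted to a lag-dependent time shift, whose linear trend with lag time gives the relative velocity change.

```python
# A minimal sketch (assumed parameters, not the authors' workflow) of dv/v estimation
# from the instantaneous phase of noise-correlation functions.
import numpy as np
from scipy.signal import hilbert

dt, f0 = 0.002, 20.0                      # sample interval (s), dominant frequency (Hz)
t = np.arange(0.05, 2.0, dt)              # lag times of the correlation coda (s)
dv_over_v = 0.005                         # imposed relative velocity change (0.5%)

reference = np.cos(2 * np.pi * f0 * t) * np.exp(-0.5 * t)
current = np.cos(2 * np.pi * f0 * t * (1 + dv_over_v)) * np.exp(-0.5 * t)   # stretched coda

# instantaneous phase advance of the current correlation relative to the reference
dphi = np.unwrap(np.angle(hilbert(current) * np.conj(hilbert(reference))))
time_advance = dphi / (2 * np.pi * f0)    # positive if arrivals come earlier

# a homogeneous velocity increase gives time_advance(t) = (dv/v) * t
est = np.sum(time_advance * t) / np.sum(t * t)
print(f"estimated dv/v = {est:.4f} (imposed {dv_over_v})")
```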


2018 ◽  
Vol 6 (1) ◽  
pp. T145-T161 ◽  
Author(s):  
Ekaterina Kneller ◽  
Manuel Peiro

Towed-streamer marine broadband data have been key contributors to recent petroleum exploration history, in new frontiers and in mature basins around the world. They have improved the characterization of reservoirs by reducing the uncertainty in structural and stratigraphic interpretation and by providing more quantitative estimates of reservoir properties. Dedicated acquisition, processing, and quality control (QC) methods have been developed to capitalize on the broad bandwidth of the data and allow their rapid integration into reservoir models. Using a variable-depth streamer data set acquired in the Campos Basin, Brazil, we show the particular care that should be taken when processing and inverting broadband data to realize their full potential for reservoir interpretation and uncertainty management in the reservoir model. In particular, we describe the QC and the interpretative processing approach used to monitor data improvements during processing and preconditioning for elastic inversion. In addition, we evaluate the importance of properly modeling the low frequencies during wavelet estimation. We demonstrate the benefits of carefully processed broadband data for structural interpretation and describe the application of acoustic and elastic inversions, cascaded with Bayesian lithofacies classification, to provide clear interpretative products with which we were able to demonstrate a reduction in the uncertainty of the prediction and characterization of Santonian oil sandstones in the Campos Basin.
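
The Bayesian lithofacies classification step can be sketched as follows, with entirely hypothetical facies statistics (the case study's calibration, units, and facies definitions are not reproduced here): each inverted sample of acoustic impedance and Vp/Vs is assigned posterior facies probabilities by combining prior proportions with Gaussian likelihoods.

```python
# A minimal, illustrative sketch (hypothetical facies statistics, not the case
# study's calibration) of Bayesian lithofacies classification from inverted
# acoustic impedance (Ip) and Vp/Vs: posterior = prior * Gaussian likelihood,
# normalized over the facies.
import numpy as np
from scipy.stats import multivariate_normal

facies = {
    # name         prior   mean [Ip (g/cm3*m/s), Vp/Vs]   covariance
    "oil sand":   (0.2, [6200.0, 1.55], [[250.0**2, 0.0], [0.0, 0.05**2]]),
    "brine sand": (0.3, [6800.0, 1.75], [[300.0**2, 0.0], [0.0, 0.06**2]]),
    "shale":      (0.5, [7400.0, 2.05], [[350.0**2, 0.0], [0.0, 0.08**2]]),
}

def classify(ip, vpvs):
    """Return posterior probability of each facies at one inverted sample."""
    joint = {name: p * multivariate_normal(mu, cov).pdf([ip, vpvs])
             for name, (p, mu, cov) in facies.items()}
    total = sum(joint.values())
    return {name: v / total for name, v in joint.items()}

print(classify(ip=6300.0, vpvs=1.60))   # a sample that should favor "oil sand"
```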


Geophysics ◽  
2021 ◽  
pp. 1-67
Author(s):  
Hossein Jodeiri Akbari Fam ◽  
Mostafa Naghizadeh ◽  
Oz Yilmaz

Two-dimensional seismic surveys often are conducted along crooked-line traverses due to the inaccessibility of rugged terrain, logistical and environmental restrictions, and budget limitations. The crookedness of the line traverse, irregular topography, and complex subsurface geology with steeply dipping and curved interfaces can adversely affect the signal-to-noise ratio of the data. The crooked-line geometry violates the straight-line assumption that underlies the 2D multifocusing (MF) method and leads to a crossline spread of midpoints. It can also give rise to pitfalls and artifacts, thus leading to difficulties in imaging and velocity-depth model estimation. We develop a novel multifocusing algorithm for crooked-line seismic data and revise the traveltime equation accordingly to achieve better signal alignment before stacking. Specifically, we present a 2.5D multifocusing reflection traveltime equation that explicitly accounts for midpoint dispersion and cross-dip effects. The new formulation corrects for normal, inline, and crossline dip moveouts simultaneously, which is significantly more accurate than removing these effects sequentially; applying NMO, DMO, and cross-dip moveout (CDMO) corrections separately tends to produce large errors, especially at far offsets. The 2.5D multifocusing method can be applied automatically with a coherence-based global optimization search on the data. We investigated the accuracy of the new formulation by testing it on different synthetic models and a real seismic data set. Applying the proposed approach to the real data led to a high-resolution seismic image with a significant quality improvement over the conventional method. Numerical tests show that the new formula can focus the primary reflections at their correct locations, remove anomalous dip-dependent velocities, and extract true dips from seismic data for structural interpretation. The proposed method efficiently projects and extracts valuable 3D structural information when applied to crooked-line seismic surveys.
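
To see why crossline midpoint scatter matters, a textbook-style cross-dip correction (not the authors' 2.5D multifocusing operator; the values are hypothetical) is sketched below: a reflector with cross-dip angle phi shifts the traveltime of a midpoint at crossline distance y by roughly 2*y*sin(phi)/v, which quickly reaches tens of milliseconds for realistic midpoint scatter.

```python
# A schematic sketch (a textbook-style cross-dip correction, not the authors'
# 2.5D multifocusing traveltime operator) of why midpoints scattered off a
# crooked processing line need a crossline correction before stacking.
import numpy as np

v = 3000.0                 # stacking velocity (m/s)
phi = np.radians(10.0)     # cross-dip angle of the reflector
y = np.array([-400.0, -150.0, 0.0, 200.0, 450.0])   # crossline midpoint offsets (m)

t_shift = 2.0 * y * np.sin(phi) / v       # approximate time shift per midpoint (s)
print("cross-dip time shifts (ms):", np.round(1000 * t_shift, 1))
# Ignoring these shifts smears a 10-degree cross-dipping event by roughly 100 ms
# across an 850 m midpoint scatter, which is why NMO alone misaligns such data.
```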


Geophysics ◽  
1996 ◽  
Vol 61 (1) ◽  
pp. 232-243 ◽  
Author(s):  
Satish C. Singh ◽  
R. W. Hobbs ◽  
D. B. Snyder

A method to process dual-streamer data acquired in an under-and-over configuration is presented. The method combines the results of dephase-sum and dephase-subtraction methods. In the dephase methods, the response of one streamer is time shifted so that the primary arrivals on both streamers are aligned, and the two responses are then summed or subtracted. The method provides a broad spectral response from dual-streamer data and increases the signal-to-noise ratio by a factor of 1.5. It was tested on synthetic data and then applied to a real data set collected by the British Institutions Reflection Profiling Syndicate (BIRPS). Its application to a deep seismic reflection data set from the British Isles shows that the reflections from the lower crust contain frequencies up to 80 Hz, suggesting that some of the lower crustal reflectors may have sharp boundaries and could be 20–30 m thick.
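
A minimal spectral sketch of the over/under combination, assuming a 1-D vertical-incidence geometry and hypothetical streamer depths rather than the BIRPS survey parameters: after the primaries are aligned, the dephase-sum and dephase-subtraction responses have complementary notches, so combining them preserves a broad bandwidth.

```python
# A minimal sketch (assumed 1-D vertical-incidence geometry and hypothetical
# depths, not the full BIRPS processing flow) of combining over/under streamer
# responses: once the primaries are aligned, the dephase-sum and dephase-
# subtraction spectra have complementary notches.
import numpy as np

c = 1500.0                     # water velocity (m/s)
d_over, d_under = 12.0, 18.0   # hypothetical over/under streamer depths (m)
f = np.linspace(4.0, 120.0, 600)

def aligned_ghost(d):
    """Ghosted response at depth d after aligning the primary: 1 - exp(-i*4*pi*f*d/c)."""
    return 1.0 - np.exp(-4j * np.pi * f * d / c)

over, under = aligned_ghost(d_over), aligned_ghost(d_under)
dephase_sum, dephase_sub = over + under, over - under
combined = np.sqrt(np.abs(dephase_sum)**2 + np.abs(dephase_sub)**2)

for name, spec in [("over", over), ("under", under), ("combined", combined)]:
    print(f"{name:8s} minimum amplitude in 4-120 Hz: {np.min(np.abs(spec)):.2f}")
# The single-depth responses notch near 62.5 Hz (12 m) and 41.7/83.3 Hz (18 m),
# while the combined response keeps useful amplitude across the band.
```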


Geophysics ◽  
2016 ◽  
Vol 81 (2) ◽  
pp. E69-E87 ◽  
Author(s):  
Dikun Yang ◽  
Douglas W. Oldenburg

Numerical modeling and inversion of electromagnetic (EM) data is a computationally intensive task. To achieve efficiency, we have developed algorithms constructed from the smallest practical computational unit. This “atomic” building block, which yields the solution of Maxwell’s equations for a single time or frequency datum due to an infinitesimal current or magnetic dipole, is a self-contained EM problem that can be solved independently and inexpensively on a single CPU core. Any EM data set can be composed from these units through assembly or superposition. This approach takes advantage of the rapidly expanding capability of multiprocessor computation. Our decomposition has allowed us to handle the computational complexity that arises from the physical size of the survey, the large number of transmitters, and the large range of times or frequencies in a data set; we did this by modeling every datum separately on customized local meshes and with local time-stepping schemes. The counterpart to the efficiency of the atomic decomposition is that the number of independent subproblems can become very large. We have realized that not all of the data need to be considered at all stages of the inversion. Rather, the data can be significantly downsampled at late times or low frequencies and at the early stages of inversion, when only long-wavelength signals are sought. We have therefore developed a random data-subsampling approach, in conjunction with cross-validation, that selects data in accordance with the spatial scales of the EM induction and the degree of regularization. Alternatively, for many EM surveys, the atomic units can be combined into larger subproblems, thus reducing the number of subproblems needed. These trade-offs were explored for airborne and ground large-loop systems with specific survey configurations. Our synthetic and field examples show that the proposed framework can produce 3D inversion results of uncompromised quality in a more scalable manner.
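
The orchestration behind the atomic decomposition can be sketched schematically as follows; the electromagnetic solver itself is only a stub here (no Maxwell solution is implemented), and the subsampling fraction is a hypothetical choice. Each (source, frequency) datum is treated as an independent subproblem, solved in parallel, and early iterations can work on a random subset of the atoms.

```python
# A schematic sketch (the EM solver is a stub) of the "atomic" decomposition idea:
# every (source, time-or-frequency) datum is an independent subproblem that can be
# solved on its own local mesh and CPU core, and early inversion iterations can
# work on a random subsample of the data.
import random
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def solve_atom(args):
    """Stub for one EM subproblem: one dipole source at one time/frequency,
    solved on a customized local mesh. Returns a predicted datum."""
    source_id, frequency = args
    return {"source": source_id, "freq": frequency, "pred": 0.0}   # placeholder value

def predict(sources, frequencies, fraction=1.0, seed=0):
    """Assemble survey predictions by superposition of atomic solves.
    fraction < 1 randomly subsamples the data (useful in early iterations)."""
    atoms = list(product(sources, frequencies))
    if fraction < 1.0:
        random.Random(seed).shuffle(atoms)
        atoms = atoms[: max(1, int(fraction * len(atoms)))]
    with ProcessPoolExecutor() as pool:            # one atom per core
        return list(pool.map(solve_atom, atoms))

if __name__ == "__main__":
    preds = predict(sources=range(200), frequencies=[1.0, 10.0, 100.0], fraction=0.2)
    print(len(preds), "atomic subproblems solved this iteration")
```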


2012 ◽  
Vol 30 (4) ◽  
pp. 545 ◽  
Author(s):  
Quézia C. dos Santos ◽  
Milton José Porsani

Onshore seismic data often have a low signal-to-noise ratio due to, among other factors, the presence of ground roll: coherent, linear, and usually dispersive noise with high amplitudes, low temporal frequencies, and low velocities that overlaps the reflections and hinders data processing and interpretation. When attempts to attenuate the ground roll during acquisition (using source and receiver arrays) fail, several methods can be applied during processing. Here we discuss a filtering method based on the Wiener shaping filter, its implementation, and its main parameters. We also present a variant of the method based on a direct deconvolution algorithm. The results of applying the direct filtering to real seismic data are quite satisfactory when compared with those obtained with conventional FK and low-cut filters.
Keywords: ground roll, shaping filters, seismic data processing.
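
A minimal sketch of a Wiener shaping filter applied against ground roll, with illustrative parameters rather than the authors' exact design: a two-sided filter is obtained from Toeplitz normal equations so that, convolved with the trace, it approximates a low-pass (ground-roll-band) version of the trace; that estimate is then subtracted.

```python
# A minimal sketch (illustrative parameters, not the authors' exact design) of a
# Wiener shaping filter used against ground roll: a two-sided filter f is found by
# solving Toeplitz normal equations so that f * trace approximates a low-pass
# estimate of the ground roll, which is then subtracted from the trace.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
dt, n = 0.004, 1000
t = np.arange(n) * dt
reflections = rng.standard_normal(n) * 0.3                           # stand-in for reflected signal
ground_roll = 2.0 * np.sin(2 * np.pi * 8.0 * t) * np.exp(-((t - 1.2) / 0.6) ** 2)
trace = reflections + ground_roll

def wiener_shaping_filter(x, d, half_len=40):
    """Two-sided filter (lags -half_len..half_len) minimizing ||d - f * x||^2."""
    nlags = 2 * half_len + 1
    r = np.correlate(x, x, "full")[len(x) - 1 : len(x) - 1 + nlags]             # autocorrelation lags 0..2L
    g = np.correlate(d, x, "full")[len(x) - 1 - half_len : len(x) + half_len]   # crosscorrelation lags -L..L
    return solve_toeplitz(r, g)

b, a = butter(4, 12.0 / (0.5 / dt))               # 12 Hz low-pass isolates the ground-roll band
desired = filtfilt(b, a, trace)                   # zero-phase estimate of the noise
half = 40
f = wiener_shaping_filter(trace, desired, half)
noise_estimate = np.convolve(trace, f, "full")[half : half + n]
filtered = trace - noise_estimate

print("rms ground-roll before/after subtraction:",
      round(float(np.std(trace - reflections)), 2),
      round(float(np.std(filtered - reflections)), 2))
```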


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, the Lorenz, Bonferroni, and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by maximum likelihood, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
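
As a hedged illustration of the maximum-likelihood step, the sketch below assumes the exponentiated half-logistic-G construction F(x) = [(1 - Gbar(x)) / (1 + Gbar(x))]^theta with a Lomax baseline Gbar(x) = (1 + x/lam)^(-beta); the paper's exact parameterization may differ. It simulates data by inverse transform and fits the three parameters numerically.

```python
# A hedged sketch: assumes F(x) = [(1 - Gbar(x)) / (1 + Gbar(x))]^theta with a
# Lomax baseline Gbar(x) = (1 + x/lam)^(-beta); the paper's exact parameterization
# may differ. Data are simulated by inverse transform and the three parameters are
# estimated by maximum likelihood, as in the paper's simulation study.
import numpy as np
from scipy.optimize import minimize

def sample(theta, beta, lam, size, rng):
    w = rng.uniform(size=size) ** (1.0 / theta)
    gbar = (1.0 - w) / (1.0 + w)
    return lam * (gbar ** (-1.0 / beta) - 1.0)       # invert the Lomax survival function

def neg_log_lik(params, x):
    theta, beta, lam = np.exp(params)                # log-parameterization keeps params > 0
    gbar = (1.0 + x / lam) ** (-beta)
    log_g = np.log(beta / lam) - (beta + 1.0) * np.log1p(x / lam)   # Lomax log-pdf
    log_f = (np.log(2.0 * theta) + log_g
             + (theta - 1.0) * np.log1p(-gbar) - (theta + 1.0) * np.log1p(gbar))
    return -np.sum(log_f)

rng = np.random.default_rng(1)
data = sample(theta=1.5, beta=2.0, lam=1.0, size=2000, rng=rng)   # true values (1.5, 2.0, 1.0)
fit = minimize(neg_log_lik, x0=np.zeros(3), args=(data,),
               method="Nelder-Mead", options={"maxiter": 5000, "maxfev": 5000})
print("MLE (theta, beta, lam):", np.round(np.exp(fit.x), 2))
```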

