Multidimensional signature deconvolution and free‐surface multiple elimination of marine multicomponent ocean‐bottom seismic data

Geophysics ◽  
2001 ◽  
Vol 66 (5) ◽  
pp. 1594-1604 ◽  
Author(s):  
Lasse Amundsen ◽  
Luc T. Ikelle ◽  
Lars E. Berg

This paper presents a wave-equation method for multidimensional signature deconvolution (designature) and elimination of free-surface-related multiples (demultiple) in four-component (4C) ocean-bottom seismic data. The designature/demultiple method has the following characteristics: it preserves primary amplitudes while attenuating free-surface-related multiples; it requires no knowledge of the sea-floor parameters or the subsurface; it requires information only about the local density and acoustic wave-propagation velocity just above the sea floor; it accommodates source arrays; and no information (except location) about the physical source array, its volume, or its radiation characteristics (wavelet) is required. Designature is an implicit part of the demultiple process; hence, the method is capable of transforming recorded reflection data excited by any source array below the sea surface into free-surface demultipled data that would be recorded from a point source with any desired signature. In addition, the incident wavefield is not subtracted from the data prior to free-surface demultiple; hence, separation of incident and scattered fields is not an issue as it is for most other free-surface demultiple schemes. The designature/demultiple algorithm can be divided into two major computational steps. First, a multidimensional deconvolution operator, inversely proportional to the time derivative of the downgoing part of the normal component of the particle velocity just above the sea floor, is computed. Second, an integral equation is solved to find any component of the designatured, free-surface demultipled multicomponent field. When the geology is horizontally layered, the designature and free-surface demultiple scheme simplifies greatly and lends itself to implementation in the τ-p or frequency-wavenumber domain as deterministic deconvolution of common-shot gathers (or common-receiver gathers when source-array variations are negligible).
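
As a rough illustration of the horizontally layered special case mentioned above, the sketch below (NumPy; the function name, array shapes, sign conventions, and scale factors are assumptions, not the paper's expressions) separates the recorded pressure and vertical particle velocity into up- and downgoing parts just above the sea floor, builds a stabilized deconvolution operator from the time derivative of the downgoing vertical velocity, and applies it per frequency-wavenumber pair.

```python
import numpy as np

def designature_demultiple_fk(P, Vz, dt, dx, rho=1000.0, c=1500.0, eps=1e-3):
    """Schematic f-k sketch of the two computational steps for a horizontally
    layered medium: (1) build a deconvolution operator from the downgoing part
    of the vertical particle velocity just above the sea floor, (2) apply it
    to the upgoing pressure.  P, Vz: common-shot gathers of pressure and
    vertical particle velocity, shape (nt, nx)."""
    nt, nx = P.shape
    w = 2 * np.pi * np.fft.rfftfreq(nt, dt)[:, None]     # angular frequency
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)[None, :]     # horizontal wavenumber
    Pf = np.fft.fft(np.fft.rfft(P, axis=0), axis=1)      # (nw, nkx)
    Vf = np.fft.fft(np.fft.rfft(Vz, axis=0), axis=1)

    kz = np.sqrt(np.maximum((w / c) ** 2 - kx ** 2, 0.0))   # vertical wavenumber
    prop = kz > 0                                            # propagating region
    imp = rho * np.where(w > 0, w, 1.0) / np.where(prop, kz, 1.0)  # rho*c/cos(theta)

    # up/down separation of the recordings just above the sea floor
    # (sign convention assumed)
    Vz_dn = 0.5 * (Vf + Pf / imp)        # downgoing vertical particle velocity
    P_up = 0.5 * (Pf - imp * Vf)         # upgoing pressure

    # step 1: operator inversely proportional to the time derivative of Vz_dn,
    # stabilized; step 2: in f-k the integral equation collapses to a product
    denom = 1j * w * Vz_dn
    oper = np.conj(denom) / (np.abs(denom) ** 2
                             + eps * np.max(np.abs(denom)) ** 2 + 1e-30)
    out = np.where(prop, oper * P_up, 0.0)
    return np.fft.irfft(np.fft.ifft(out, axis=1), nt, axis=0)
```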

Geophysics ◽  
1999 ◽  
Vol 64 (2) ◽  
pp. 579-592 ◽  
Author(s):  
Luc T. Ikelle

Inverse scattering multiple attenuation (ISMA) is a method of removing free-surface multiple energy while preserving primary energy. The other key feature of ISMA is that no knowledge of the subsurface is required in its application. I have adapted this method to multicomponent ocean-bottom cable (OBC) data (i.e., to arrays of sea-floor geophones and hydrophones) by selecting a subseries made of even terms of the scattering series currently used in the free-surface multiple attenuation of conventional marine surface seismic data (streamer data). This subseries approach allows me to remove receiver ghosts (receiver-side reverberations) and free-surface multiples (source-side reverberations) in multicomponent OBC data. I process each component separately. As in the streamer case, my OBC version of ISMA preserves primary energy and does not require any knowledge of the subsurface. Moreover, the preprocessing steps of muting the direct wave and interpolating missing near offsets are no longer needed. Knowledge of the source signature is still required; the existing ways of satisfying this requirement for streamer data can be used for OBC data without modification. This method differs from the present dual-field deghosting method used in OBC data processing in that it does not assume a horizontally flat sea floor, nor does it require knowledge of the acoustic impedance below the sea floor. Furthermore, it attenuates all free-surface multiples, including receiver ghosts and source-side reverberations.
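
The subseries idea can be caricatured in a few lines: each higher term is a multidimensional convolution (a surface integral, implemented here as a matrix product per frequency) of the previous term with the data, scaled by the inverse source signature, and only the even terms of the series are kept. The sketch below is schematic; the sign convention, the co-located source/receiver grid, and all names are assumptions rather than the paper's implementation.

```python
import numpy as np

def isma_obc_subseries(D, a, n_terms=5):
    """Hedged sketch of building scattering-series terms by multidimensional
    convolution and keeping only an even-term subseries.

    D : one OBC component in the frequency domain, shape (nw, n, n), i.e. a
        receiver-by-source matrix per frequency on a common surface grid.
    a : inverse source signature, shape (nw,)."""
    out = D.copy()
    term = D.copy()
    for n in range(1, n_terms):
        # next term: multidimensional convolution (surface integral) of the
        # previous term with the data, done as a matrix product per frequency
        term = a[:, None, None] * np.einsum('wrk,wks->wrs', term, D)
        if n % 2 == 1:
            continue              # keep only the even terms of the subseries
        out = out - term          # subtract the predicted multiple order
                                  # (sign convention is an assumption)
    return out
```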


Geophysics ◽  
2001 ◽  
Vol 66 (3) ◽  
pp. 953-963 ◽  
Author(s):  
Luc T. Ikelle

Marine vertical-cable (VC) data contain primaries, receiver ghosts, free-surface multiples, and internal multiples, just like towed-streamer data. However, the imaging of towed-streamer data is based on primary reflections, while the emerging imaging algorithms for VC data tend to use the receiver ghosts of primary reflections instead of the primaries themselves. I present an algorithm for attenuating primaries, free-surface multiples, and the receiver ghosts of free-surface multiples while preserving the receiver ghosts of primaries. My multiple-attenuation algorithm for VC data is based on an inverse scattering approach, which is a predict-then-subtract method. It assumes that surface seismic data are available or that they can be computed from VC data after an up/down wavefield separation at the receiver locations (streamer data add to VC data some of the wave paths needed for multiple attenuation). The combination of surface seismic data with VC data allows one to predict free-surface multiples and receiver ghosts as well as the receiver ghosts of primary reflections. However, if the direct wave arrivals are removed from the VC data, this combination will not predict the receiver ghosts of primary reflections. I use this property to attenuate primaries, free-surface multiples, and receiver ghosts from VC data, preserving only the receiver ghosts of primaries. This method can be used for multicomponent ocean-bottom cable data (i.e., arrays of sea-floor geophones and hydrophones) without any modification, to attenuate primaries, free-surface multiples, and the receiver ghosts of free-surface multiples while preserving the receiver ghosts of primaries.
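
A minimal frequency-domain sketch of the property exploited here is given below: the same data combination is formed twice, once with and once without the direct arrivals in the VC data, and the difference retains only the receiver ghosts of primaries. The array shapes, the shared surface grid, and the inverse source signature a are assumptions for illustration, not the paper's discretization.

```python
import numpy as np

def vc_receiver_ghosts_of_primaries(V_full, V_nodirect, S, a):
    """Hedged sketch of the predict-then-subtract property: combining VC data
    with surface seismic data predicts free-surface multiples, receiver ghosts
    and the receiver ghosts of primaries; repeating the combination with the
    direct arrivals removed from the VC data predicts everything except the
    receiver ghosts of primaries, so the difference retains only those events.

    V_full, V_nodirect : VC data with / without direct arrivals, (nw, nr, nk)
    S : surface seismic data, (nw, nk, ns)
    a : inverse source signature, (nw,)."""
    pred_full = a[:, None, None] * np.einsum('wrk,wks->wrs', V_full, S)
    pred_nodir = a[:, None, None] * np.einsum('wrk,wks->wrs', V_nodirect, S)
    # difference of the two predictions: receiver ghosts of primaries only
    return pred_full - pred_nodir
```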


Geophysics ◽  
2002 ◽  
Vol 67 (4) ◽  
pp. 1293-1303 ◽  
Author(s):  
Luc T. Ikelle ◽  
Lasse Amundsen ◽  
Seung Yoo

The inverse scattering multiple attenuation (ISMA) algorithm for ocean-bottom seismic (OBS) data can be formulated as a series expansion for each of the four components of OBS data. Besides the actual data, which constitute the first term of the series, each of the other terms is computed as a multidimensional convolution of OBS data with streamer data and aims to remove one specific order of multiples. We found that if the streamer data do not contain free-surface multiples, only the second term of the series needs to be computed to predict and remove all orders of multiples, whatever the water depth. Because the computation of the various terms of the series is the most expensive part of ISMA, this result can produce significant savings in computation time and even in data storage, as we no longer need to store the various terms of the series. For example, if the streamer data contain free-surface multiples, OBS data of 6-s duration, corresponding to a geological model of the subsurface with 250-m water depth, require the computation of five terms of the series for each of the four components of OBS data. With the new implementation, in which the streamer data do not contain free-surface multiples, we need to compute only one term of the series for each component of the OBS data. The saving in CPU time for this particular case is at least fourfold. The estimation of the inverse source signature, which is an essential part of ISMA, also benefits from reducing the number of terms needed for the demultiple to two, because the estimation becomes a linear inverse problem instead of a nonlinear one. Assuming that the removal of multiple events produces a significant reduction in the energy of the data, the optimization of this problem leads to a stable, noniterative analytic solution. We have also adapted these results to the implementation of ISMA for vertical-cable (VC) data. This implementation is similar to that for OBS data; the key difference is that the basic model in VC imaging assumes that the data consist of receiver ghosts of primaries instead of the primaries themselves. We have used the following property to achieve this goal. The combination of VC data with surface seismic data that do not contain free-surface multiples allows us to predict free-surface multiples and receiver ghosts as well as the receiver ghosts of primary reflections. However, if the direct wave arrivals are removed from the VC data, this combination will not predict the receiver ghosts of primary reflections. The difference between these two predictions produces data containing only receiver ghosts of primaries.
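
The linearity claim can be made concrete with a short sketch: once the demultipled data are modeled as the data term D1 plus a single convolution term D2 scaled by the inverse source signature a(ω), minimizing the output energy per frequency is ordinary least squares with a closed-form solution. The names, shapes, and regularization below are assumptions for illustration, not the authors' code.

```python
import numpy as np

def estimate_inverse_source(D1, D2, eps=1e-8):
    """Hedged sketch of the noniterative estimate that the two-term form makes
    possible: model the demultipled data as D1 + a(w)*D2 and pick a(w) per
    frequency by minimizing the output energy, which has an analytic
    least-squares solution.

    D1 : the OBS data (first series term), shape (nw, nr, ns)
    D2 : the second term, the multidimensional convolution of the OBS data
         with free-surface-demultipled streamer data, same shape."""
    num = -np.sum(np.conj(D2) * D1, axis=(1, 2))
    den = np.sum(np.abs(D2) ** 2, axis=(1, 2)) + eps
    a = num / den                              # analytic minimizer per frequency
    demultipled = D1 + a[:, None, None] * D2   # energy-minimizing output
    return a, demultipled
```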


Geophysics ◽  
2009 ◽  
Vol 74 (6) ◽  
pp. WCA35-WCA45 ◽  
Author(s):  
Chaoshun Hu ◽  
Paul L. Stoffa

Subsurface images based on low-fold seismic reflection data, or on data with acquisition-geometry limitations such as those obtained from ocean-bottom seismograph (OBS) surveys, are often corrupted by migration swing artifacts. Incorporating prestack instantaneous slowness information into the imaging condition can significantly reduce these migration swing artifacts and improve image quality, especially in areas with poor illumination. We combine the horizontal surface-slowness information of the observed seismic data with Gaussian-beam depth migration to implement a new slowness-driven Gaussian-beam prestack depth migration in which Fresnel weighting is combined naturally with beam summation. The prestack instantaneous slowness information is extracted from the original OBS or shot gathers using local slant stacks combined with a local semblance analysis. During migration, we propagate the seismic energy downward, knowing its instantaneous slowness. At each image location, the beam summation is localized in a resolution-dependent Fresnel zone, and the instantaneous slowness information controls the beam summation. Synthetic and real data examples confirm that slowness-driven Gaussian-beam migration can suppress most of the noise from inadequate stacking and give a clearer migration result.
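
As a rough sketch of the slowness-extraction step, the snippet below scans trial slownesses over a local aperture around one trace and time sample, applies the corresponding linear moveout, and keeps the slowness with the highest classic semblance. The function name, aperture and window parameters, and the particular semblance definition are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def local_slant_stack_slowness(d, dt, dx, x0_idx, t0_idx, p_trial,
                               half_aperture=10, half_window=5):
    """Hedged sketch: local slant stack + local semblance around one trace
    (x0_idx) and time sample (t0_idx).  d: shot gather (nt, nx);
    p_trial: trial horizontal slownesses in s/m.  Returns the trial slowness
    with the maximum local semblance and that semblance value."""
    nt, nx = d.shape
    ix = np.arange(max(0, x0_idx - half_aperture),
                   min(nx, x0_idx + half_aperture + 1))
    offsets = (ix - x0_idx) * dx
    best_p, best_semb = 0.0, -1.0
    for p in p_trial:
        # moveout predicted by a locally plane wave: t = t0 + p * (x - x0)
        shifts = np.rint((p * offsets) / dt).astype(int)
        windows = []
        for j, s in zip(ix, shifts):
            k0 = t0_idx + s - half_window
            k1 = t0_idx + s + half_window + 1
            if k0 < 0 or k1 > nt:
                continue
            windows.append(d[k0:k1, j])
        if len(windows) < 2:
            continue
        w = np.array(windows)                    # (ntraces, nwin)
        num = np.sum(np.sum(w, axis=0) ** 2)     # energy of the stack
        den = w.shape[0] * np.sum(w ** 2) + 1e-12
        semb = num / den                         # classic semblance in [0, 1]
        if semb > best_semb:
            best_semb, best_p = semb, p
    return best_p, best_semb
```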


2009 ◽  
Vol 57 (5) ◽  
pp. 785-802 ◽  
Author(s):  
Bärbel Traub ◽  
Anh Kiet Nguyen ◽  
Matthias Riede

Geophysics ◽  
1986 ◽  
Vol 51 (9) ◽  
pp. 1736-1742 ◽  
Author(s):  
Steven W. Belcher ◽  
Thomas L. Pratt ◽  
John K. Costain ◽  
Cahit Çoruh

The conventional procedure used to acquire Vibroseis® seismic reflection data is to sum, in the field, the contributions from several vibrator sources distributed over the source array. An alternative recording method, which provides more flexibility in processing, is to record the output from each pad position in the source array rather than summing in the field. Prewhitening these data before summing can improve the signal-to-noise (S/N) ratio. If cancellation of surface waves by a source array is not a requirement, then processing each sweep as a separate source point can result in increased lateral resolution. These procedures were applied to seismic data over a buried rift basin in the southeastern United States. The results demonstrate improvements in the S/N ratio and spatial resolution that enable better interpretation of the complex internal geometry of the basin.
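
A toy version of the prewhitening-before-summing idea is sketched below: each individually recorded sweep is spectrally balanced trace by trace (dividing by a smoothed amplitude spectrum with a small stabilization) before the sweeps are summed. The function name, smoothing length, and stabilization are assumptions; the authors' actual prewhitening operator may differ.

```python
import numpy as np

def prewhiten_then_sum(sweeps, eps=0.01, smooth_len=11):
    """Hedged sketch of prewhitening each recorded sweep before summation.

    sweeps : array of shape (nsweeps, nt, nx), one record per vibrator pad
             position.  Each trace is spectrally balanced, then the sweeps
             are summed to form the composite source record."""
    nswp, nt, nx = sweeps.shape
    out = np.zeros((nt, nx))
    box = np.ones(smooth_len) / smooth_len          # simple spectral smoother
    for s in range(nswp):
        F = np.fft.rfft(sweeps[s], axis=0)          # (nw, nx)
        amp = np.abs(F)
        for j in range(nx):
            sm = np.convolve(amp[:, j], box, mode='same')
            # whiten: flatten the amplitude spectrum, eps guards spectral notches
            F[:, j] /= (sm + eps * sm.max() + 1e-12)
        out += np.fft.irfft(F, nt, axis=0)
    return out
```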


Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 327-341 ◽  
Author(s):  
Lasse Amundsen

This paper presents a new, wave-equation-based method for eliminating the effect of the free surface from marine seismic data without destroying primary amplitudes and without any knowledge of the subsurface. Compared with previously published methods, which require an estimate of the source wavelet, the present method has the following characteristics: it does not require any information about the marine source array and its signature, it does not rely on removal of the direct wave from the data, and it does not require any explicit deghosting. Moreover, the effect of the source signature is removed from the data in the multiple elimination process by deterministic signature deconvolution, replacing the original source signature radiated from the marine source array with any desired wavelet (within the data frequency band) radiated from a monopole point source. The fundamental constraint of the new method is that the vertical derivative of the pressure or the vertical component of the particle velocity is input to the free-surface demultiple process along with the pressure recordings. These additional data are routinely recorded in ocean-bottom seismic surveys. The method can be applied to conventional towed-streamer pressure data recorded in the water column at a depth greater than that of the source array only when the pressure derivative can be estimated or, even better, measured. Since the direct wave and its source ghost are part of the free-surface demultiple, designature process, the direct arrival must be properly measured for the method to work successfully. When the geology is close to horizontally layered, the free-surface multiple elimination method greatly simplifies, reducing to a well-known deterministic deconvolution process that can be applied to common-shot gathers (or common-receiver gathers or common-midpoint gathers when source-array variations are negligible) in the τ-p or frequency-wavenumber domain.
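
For the horizontally layered limit described at the end of the abstract, a minimal τ-p-domain sketch is given below: the pressure and its vertical derivative are split into up- and downgoing parts per ray parameter, and the upgoing field is deconvolved by the downgoing field, optionally reshaped with a desired wavelet. Sign conventions, scale factors, and all names are assumptions, not the paper's exact operator.

```python
import numpy as np

def taup_deterministic_decon(P, dPdz, dt, p, c=1500.0, desired=None, eps=1e-3):
    """Hedged sketch of the deterministic deconvolution the method reduces to
    for horizontally layered geology, written per plane-wave (tau-p) trace.

    P, dPdz : pressure and its vertical derivative after tau-p transform,
              shape (nt, np_); p: ray parameters in s/m.
    desired : optional desired source wavelet, length nt."""
    nt, np_ = P.shape
    w = 2 * np.pi * np.fft.rfftfreq(nt, dt)[:, None]                # angular frequency
    qz = np.sqrt(np.maximum(1.0 / c ** 2 - p[None, :] ** 2, 0.0))   # vertical slowness
    kz = w * qz
    kz_s = np.where(kz > 0, kz, 1.0)

    Pf = np.fft.rfft(P, axis=0)
    dPf = np.fft.rfft(dPdz, axis=0)

    D = 0.5 * (Pf + dPf / (1j * kz_s))   # downgoing pressure (assumed convention)
    U = 0.5 * (Pf - dPf / (1j * kz_s))   # upgoing pressure

    # deterministic deconvolution: divide upgoing by downgoing field, stabilized,
    # and reshape to the desired wavelet if one is supplied
    A = np.fft.rfft(desired, nt)[:, None] if desired is not None else 1.0
    out = A * U * np.conj(D) / (np.abs(D) ** 2 + eps * np.max(np.abs(D)) ** 2 + 1e-30)
    out = np.where(kz > 0, out, 0.0)
    return np.fft.irfft(out, nt, axis=0)
```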


2010 ◽  
Vol 7 (2) ◽  
pp. 149-157 ◽  
Author(s):  
Xiang-Chun Wang ◽  
Chang-Liang Xia ◽  
Xue-Wei Liu

2021 ◽  
Author(s):  
Rick Schrynemeeckers

Current offshore hydrocarbon detection methods employ vessels to collect cores along transects over structures defined by seismic imaging; the cores are then analyzed by standard geochemical methods. Due to the cost of core collection, the sample density over these structures is often insufficient to map hydrocarbon accumulation boundaries. Traditional offshore geochemical methods cannot define reservoir sweet spots (i.e., areas of enhanced porosity, pressure, or net pay thickness) or measure light oil or gas condensate in the C7–C15 carbon range. Thus, conventional geochemical methods are limited in their ability to help optimize offshore field development and production. The capability to attach ultrasensitive geochemical modules to Ocean Bottom Seismic (OBS) nodes provides a new capability to the industry, allowing these modules to be deployed in very dense grid patterns that provide extensive coverage both on structure and off structure. Thus, both high-resolution seismic data and high-resolution hydrocarbon data can be captured simultaneously. Field trials were performed in offshore Ghana. The trial was not intended to duplicate normal field operations, but rather to provide a pilot study to assess the viability of passive hydrocarbon modules to function properly in real-world conditions in deep waters at elevated pressures. Water depth for the pilot survey ranged from 1500 to 1700 meters. Positive thermogenic signatures were detected in the Gabon samples. A baseline (i.e., non-thermogenic) signature was also detected. The results indicated that the positive signatures were thermogenic and could easily be differentiated from baseline or non-thermogenic signatures. The ability to deploy geochemical modules with OBS nodes for recurring surveys at repeated locations provides the ability to map the movement of hydrocarbons over time as well as discern depletion effects (i.e., time-lapse geochemistry). The combined technologies will also be able to identify compartmentalization; maximize production and profitability by mapping reservoir sweet spots (i.e., areas of higher porosity, pressure, and hydrocarbon richness); rank prospects; reduce risk by identifying areas of poor prospectivity; accurately map hydrocarbon charge in pre-salt sequences; and augment seismic data in highly thrusted and faulted areas.

