Interferometric identification of surface-related multiples

Geophysics, 2016, Vol. 81(6), pp. Q41-Q52
Authors: Boris Boullenger, Deyan Draganov

The theory of seismic interferometry predicts that crosscorrelations of seismic responses recorded at two receivers yield an estimate of the interreceiver seismic response. Applied to surface-reflection data, the interferometric process involves the summation, over sources, of crosscorrelated traces, and it allows retrieval of an estimate of the interreceiver reflection response. In particular, crosscorrelations of the data with the surface-related multiples in the data yield pseudophysical reflections (virtual events with the same kinematics as physical reflections in the original data). Retrieved pseudophysical reflections can thus provide feedback information about the surface multiples. From this perspective, we have developed a data-driven interferometric method to detect and predict the arrival times of surface-related multiples in recorded reflection data, using the retrieval of virtual data as a diagnostic. The identification of surface multiples is based on estimating source positions in the stationary-phase regions of the retrieved pseudophysical reflections, and therefore does not require sources and receivers to lie on the same grid. We have evaluated the method of interferometric identification with a two-layer acoustic example and tested it on a more complex synthetic data set. The results show that we are able to identify the prominent surface multiples in a large range of the reflection data. Although missing near offsets cause major problems in multiple-prediction schemes based on convolutions and inversions, they do not impede our method from identifying surface multiples. Such interferometric diagnosis could be used to control the effectiveness of conventional multiple-removal schemes, such as adaptive subtraction of multiples predicted by convolution of the data.
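The core operation the abstract describes, crosscorrelating traces recorded at two receivers and summing over sources, can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation; the impulsive traces and arrival times are invented:

```python
import numpy as np

def interferometric_trace(data_a, data_b):
    """Sum crosscorrelations over sources.
    data_a, data_b: (n_sources, n_samples) responses at receivers A and B."""
    n_src, n_t = data_a.shape
    out = np.zeros(2 * n_t - 1)
    for s in range(n_src):
        # full crosscorrelation of the two traces recorded for source s
        out += np.correlate(data_b[s], data_a[s], mode="full")
    return out   # lag axis runs from -(n_t - 1) to +(n_t - 1) samples

# toy example: impulsive arrivals at samples 50 (receiver A) and 80 (receiver B)
n_t = 200
a = np.zeros((1, n_t)); a[0, 50] = 1.0
b = np.zeros((1, n_t)); b[0, 80] = 1.0
virtual = interferometric_trace(a, b)
lag = np.argmax(virtual) - (n_t - 1)   # peaks at the interreceiver delay
```

The correlation peak sits at the traveltime difference between the two receivers, which is exactly the kinematic property that gives retrieved pseudophysical events the same arrival times as physical reflections.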

Geophysics, 2005, Vol. 70(6), pp. S111-S120
Authors: Fabio Rocca, Massimiliano Vassallo, Giancarlo Bernasconi

Seismic depth migration back-propagates seismic data to the correct depth positions using information about the velocity of the medium. Kirchhoff summation is usually the preferred migration procedure for seismic-while-drilling (SWD) data because it can handle virtually any configuration of sources and receivers and can compensate for irregular spatial sampling of the array elements (receivers and sources). Under the assumption of a depth-varying velocity model, with receivers arranged along a horizontal circumference and sources placed along the central vertical axis, we reformulate the Kirchhoff summation in the angular frequency domain. In this way, the migration procedure becomes very efficient because the migrated volume is obtained by an inverse Fourier transform of the weighted data. The algorithm is suitable for 3D SWD acquisitions when the aforementioned hypothesis holds. We show migration tests on SWD synthetic data, and we derive solutions to reduce migration artifacts and to control aliasing. The procedure is also applied to a real 3D SWD data set. The result compares satisfactorily with the seismic stack section obtained from surface reflection data and with the results of traditional Kirchhoff migration.
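For context, the plain Kirchhoff summation that the paper reformulates can be sketched as a diffraction stack in a constant-velocity medium. This minimal numpy illustration does not reproduce the angular-frequency-domain reformulation or the SWD acquisition geometry; the velocity, grid, and diffractor are invented:

```python
import numpy as np

def kirchhoff_migrate(data, t, src_x, rec_x, img_x, img_z, v):
    """Diffraction-stack migration: each image point accumulates the data
    sampled at the source-to-point-to-receiver traveltime (constant v)."""
    dt = t[1] - t[0]
    image = np.zeros((len(img_z), len(img_x)))
    for itr in range(data.shape[0]):
        for iz, z in enumerate(img_z):
            for ix, x in enumerate(img_x):
                # two-way traveltime: source -> image point -> receiver
                tt = (np.hypot(x - src_x[itr], z) + np.hypot(x - rec_x[itr], z)) / v
                it = int(round(tt / dt))
                if it < data.shape[1]:
                    image[iz, ix] += data[itr, it]
    return image

# toy test: a single point diffractor at (x, z) = (0, 100) m in v = 2000 m/s
v = 2000.0
t = np.arange(0.0, 0.5, 0.001)
src = np.linspace(-200.0, 200.0, 21)
rec = src.copy()                      # coincident sources and receivers
data = np.zeros((21, t.size))
for i in range(21):
    tt = (np.hypot(src[i], 100.0) + np.hypot(rec[i], 100.0)) / v
    data[i, int(round(tt / (t[1] - t[0])))] = 1.0
img = kirchhoff_migrate(data, t, src, rec,
                        np.linspace(-100.0, 100.0, 21),
                        np.linspace(50.0, 150.0, 21), v)
# the image should focus at the grid node closest to the diffractor
```

The summation loop makes clear why the method tolerates arbitrary source/receiver layouts, and also why a frequency-domain reformulation pays off: the triple loop over traces and image points is the expensive part.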


Authors: Danlei Xu, Lan Du, Hongwei Liu, Penghui Wang

A Bayesian classifier for sparsity-promoting feature selection is developed in this paper, in which a set of nonlinear mappings of the original data is performed as a pre-processing step. The linear classification model with such mappings from the original input space to a nonlinear transformation space can not only construct a nonlinear classification boundary but also realize feature selection for the original data. A zero-mean Gaussian prior with Gamma precision and a finite approximation of a Beta process prior are used to promote sparsity in the utilization of features and nonlinear mappings, respectively. We derive the variational Bayesian (VB) inference algorithm for the proposed linear classifier. Experimental results on a synthetic data set, a measured radar data set, a high-dimensional gene-expression data set, and several benchmark data sets demonstrate the aggressive and robust feature-selection capability of our method and its classification accuracy, which is comparable with that of several existing classifiers.
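As a rough frequentist analogue of the model structure described above (nonlinear mappings followed by a sparse linear classifier), one can combine RBF feature mappings with L1-penalised logistic regression fitted by proximal gradient descent. This sketch merely stands in for, and is much simpler than, the paper's variational Bayesian inference with Gaussian-Gamma and Beta-process priors; the data set and all parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_features(X, centers, gamma=0.5):
    """Nonlinear mapping of the original inputs (the pre-processing step)."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def sparse_logistic(Phi, y, lam=0.01, lr=0.1, iters=2000):
    """L1-penalised logistic regression via proximal gradient (ISTA):
    soft-thresholding zeroes out unused mapped features."""
    w = np.zeros(Phi.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Phi @ w))
        w -= lr * (Phi.T @ (p - y)) / len(y)                   # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

# two-class toy data; a handful of training points serve as RBF centres
X = np.vstack([rng.normal(-2.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
y = np.repeat([0.0, 1.0], 50)
centers = X[::10]                      # 10 centres drawn from the data
Phi = rbf_features(X, centers)
w = sparse_logistic(Phi, y)
acc = np.mean((Phi @ w > 0) == (y == 1))
```

The soft-threshold step plays the same practical role as the sparsity-promoting priors: mapped features that do not help the classification are driven to exactly zero weight.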


Geophysics, 2016, Vol. 81(3), pp. Q27-Q40
Authors: Katrin Löer, Andrew Curtis, Giovanni Angelo Meles

We have evaluated an explicit relationship between the representations of internal multiples by source-receiver interferometry and an inverse-scattering series. This provides a new insight into the interaction of different terms in each of these internal multiple prediction equations and explains why amplitudes of estimated multiples are typically incorrect. A downside of the existing representations is that their computational cost is extremely high, which can be a precluding factor especially in 3D applications. Using our insight from source-receiver interferometry, we have developed an alternative, computationally more efficient way to predict internal multiples. The new formula is based on crosscorrelation and convolution: two operations that are computationally cheap and routinely used in interferometric methods. We have compared the results of the standard and the alternative formulas qualitatively in terms of the constructed wavefields and quantitatively in terms of the computational cost using examples from a synthetic data set.
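The kinematics behind the correlation-and-convolution construction can be seen with impulsive 1D traces: convolution adds two reflection traveltimes, and crosscorrelation subtracts a third, giving the familiar internal-multiple time t1 + t2 - t3. The traveltimes below are invented for illustration:

```python
import numpy as np

def spike(idx, n):
    s = np.zeros(n)
    s[idx] = 1.0
    return s

n = 400
t1, t2, t3 = 120, 150, 80   # traveltimes of three primaries, in samples

conv = np.convolve(spike(t1, n), spike(t2, n))        # peak at t1 + t2
xcor = np.correlate(conv, spike(t3, n), mode="full")  # subtracts t3
t_mult = np.argmax(xcor) - (n - 1)                    # predicted multiple time
# t_mult equals t1 + t2 - t3 = 190 samples
```

Both operations reduce to cheap FFT-based products in practice, which is the source of the computational advantage the abstract claims over the standard inverse-scattering-series formula.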


2021, Vol. 21(1)
Authors: Stefan Lenz, Moritz Hess, Harald Binder

Background: The best way to calculate statistics from medical data is to use the data of individual patients. In some settings, these data are difficult to obtain due to privacy restrictions. In Germany, for example, it is not possible to pool routine data from different hospitals for research purposes without the consent of the patients.
Methods: The DataSHIELD software provides an infrastructure and a set of statistical methods for joint, privacy-preserving analyses of distributed data. The contained algorithms are reformulated to work with aggregated data from the participating sites instead of the individual data. If a desired algorithm is not implemented in DataSHIELD or cannot be reformulated in such a way, using artificial data is an alternative. Generating artificial data is possible with so-called generative models, which are able to capture the distribution of given data. Here, we employ deep Boltzmann machines (DBMs) as generative models. For the implementation, we use the package "BoltzmannMachines" from the Julia programming language and wrap it for use with DataSHIELD, which is based on R.
Results: We present a methodology, together with a software implementation building on DataSHIELD, to create artificial data that preserve complex patterns from distributed individual patient data. Such data sets of artificial patients, which are not linked to real patients, can then be used for joint analyses. As an exemplary application, we conduct a distributed analysis with DBMs on a synthetic data set that simulates genetic variant data. Patterns from the original data can be recovered in the artificial data using hierarchical clustering of the virtual patients, demonstrating the feasibility of the approach. Additionally, we compare DBMs, variational autoencoders, generative adversarial networks, and multivariate imputation as generative approaches by assessing the utility and disclosure risk of synthetic data generated from real genetic variant data in a distributed setting with a small sample size.
Conclusions: Our implementation adds to DataSHIELD the ability to generate artificial data that can be used for various analyses, e.g., pattern recognition with deep learning. This also demonstrates, more generally, how DataSHIELD can be flexibly extended with advanced algorithms from languages other than R.
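To illustrate the generative-model idea (though not the paper's Julia/DataSHIELD implementation), here is a minimal Bernoulli restricted Boltzmann machine, the building block of a DBM, trained with one-step contrastive divergence and then Gibbs-sampled to produce artificial binary "patients". The prototypes and all hyperparameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden=8, lr=0.1, epochs=200):
    """Fit a Bernoulli RBM to binary data V (n_samples, n_visible) with CD-1."""
    n, d = V.shape
    W = 0.01 * rng.standard_normal((d, n_hidden))
    a = np.zeros(d)          # visible biases
    b = np.zeros(n_hidden)   # hidden biases
    for _ in range(epochs):
        ph = sigmoid(V @ W + b)                                  # positive phase
        h = (rng.random(ph.shape) < ph).astype(float)
        v1 = (rng.random((n, d)) < sigmoid(h @ W.T + a)).astype(float)
        ph1 = sigmoid(v1 @ W + b)                                # negative phase
        W += lr * (V.T @ ph - v1.T @ ph1) / n
        a += lr * (V - v1).mean(axis=0)
        b += lr * (ph - ph1).mean(axis=0)
    return W, a, b

def sample(W, a, b, n_samples=100, n_gibbs=50):
    """Draw artificial binary records by Gibbs sampling from the trained RBM."""
    v = (rng.random((n_samples, W.shape[0])) < 0.5).astype(float)
    for _ in range(n_gibbs):
        h = (rng.random((n_samples, W.shape[1])) < sigmoid(v @ W + b)).astype(float)
        v = (rng.random((n_samples, W.shape[0])) < sigmoid(h @ W.T + a)).astype(float)
    return v

# toy "patients": noisy copies of two binary prototypes
proto = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
V = proto[rng.integers(0, 2, 200)]
V = np.abs(V - (rng.random(V.shape) < 0.05))   # 5% flip noise
W, a, b = train_rbm(V)
artificial = sample(W, a, b)
```

The sampled records are new rows not tied to any training record, which is exactly the property that makes such synthetic data usable in a privacy-restricted, distributed analysis.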


Geophysics, 2017, Vol. 82(4), pp. V257-V274
Author: Necati Gülünay

The diminishing residual matrices (DRM) method can be used to surface-consistently decompose individual trace statics into source and receiver components. The statics to be decomposed may be either first-arrival times after the application of linear moveout associated with a consistent refractor, as used in refraction statics, or residual statics obtained by crosscorrelating individual traces with corresponding model traces (known as pilot traces) at the same common-midpoint (CMP) location. The DRM method is an iterative process like the well-known Gauss-Seidel (GS) method, but it uses only source and receiver terms. It differs from the GS method in that half of the average common-shot and common-receiver terms are subtracted simultaneously from the observations at each iteration. DRM makes the underconstrained statics problem a constrained one by implicitly adding a new constraint: the equality of the contributions of shots and receivers to the solution. The average of the shot statics and the average of the receiver statics are equal in the DRM solution. The solution has the smallest difference between shot and receiver statics profiles when the number of shots and the number of receivers in the data are equal; in this case, it is also the smallest-norm solution. The DRM method can be derived from the well-known simultaneous iterative reconstruction technique. Simple numerical tests, as well as results obtained with a synthetic data set containing only the field statics, verify that the DRM solution is the same as the linear inverse theory solution. Both algorithms can solve for the long-wavelength component of the statics if the individual picks contain it, yet the DRM method is much faster. Application of the method to normal-moveout-corrected CMP gathers from a 3D land survey for residual-statics calculation showed that the pick-decompose-apply-stack stages of the DRM method need to be iterated. These iterations are needed because of time and waveform distortions of the pilot traces caused by the individual trace statics; the distortions lessen at every external DRM iteration.
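The DRM iteration described above is easy to verify numerically: subtracting half of the average residual per common shot and per common receiver at each pass drives the residual matrix toward zero whenever the statics are truly a sum of shot and receiver terms, and the recovered profiles satisfy the equal-means constraint. A toy numpy check, with invented statics:

```python
import numpy as np

def drm_decompose(T, n_iter=200):
    """Decompose a statics matrix T[s, r] into shot and receiver terms by
    simultaneously subtracting half of the average common-shot and
    common-receiver residuals at each iteration."""
    R = T.astype(float).copy()
    shot = np.zeros(T.shape[0])
    rec = np.zeros(T.shape[1])
    for _ in range(n_iter):
        ds = R.mean(axis=1) / 2.0   # half of each shot's average residual
        dr = R.mean(axis=0) / 2.0   # half of each receiver's average residual
        shot += ds
        rec += dr
        R -= ds[:, None] + dr[None, :]
    return shot, rec, R

# toy check: statics that are exactly shot term + receiver term
rng = np.random.default_rng(0)
a_true = rng.normal(0.0, 10.0, 15)   # shot statics (ms)
b_true = rng.normal(0.0, 10.0, 15)   # receiver statics (ms)
T = a_true[:, None] + b_true[None, :]
shot, rec, R = drm_decompose(T)
# residual vanishes, shot + rec rebuilds T, and mean(shot) equals mean(rec)
```

Because the per-iteration shot and receiver corrections have identical averages, the equality of mean shot and mean receiver statics holds by construction, which is the implicit constraint the abstract refers to.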


Geophysics, 2016, Vol. 81(2), pp. Q15-Q26
Authors: Giovanni Angelo Meles, Kees Wapenaar, Andrew Curtis

State-of-the-art methods to image the earth’s subsurface using active-source seismic reflection data involve reverse time migration. This and other standard seismic processing methods, such as velocity analysis, give best results only when all waves in the data set are primaries (waves reflected only once). A variety of methods are therefore deployed to predict and remove multiples (waves reflected several times); however, accurately removing the predicted multiples from the recorded data using adaptive subtraction techniques proves challenging, even when they can be predicted with reasonable accuracy. We present a new, alternative strategy that constructs a parallel data set consisting only of primaries, calculated directly from recorded data. This obviates the need for multiple prediction and removal methods. Primaries are constructed by using convolutional interferometry to combine the first-arriving events of upgoing and direct-wave downgoing Green’s functions to virtual receivers in the subsurface. The required upgoing wavefields to virtual receivers are constructed by Marchenko redatuming. Crucially, this is possible without detailed models of the earth’s subsurface reflectivity structure: similar to most migration techniques, the method requires only surface reflection data and estimates of direct (nonreflected) arrivals between the virtual subsurface sources and the acquisition surface. We evaluate the method on a stratified synclinal model. It is shown to be particularly robust against errors in the reference velocity model used and to improve the migrated images substantially.


Geophysics, 2017, Vol. 82(2), pp. Q1-Q12
Authors: Carlos Alberto da Costa Filho, Giovanni Angelo Meles, Andrew Curtis

Conventional seismic processing aims to create data that contain only primary reflections, whereas real seismic recordings also contain multiples. As such, it is desirable to predict, identify, and attenuate multiples in seismic data. This task is more difficult in elastic (solid) media because mode conversions create families of internal multiples not present in the acoustic case. We have developed a method to predict prestack internal multiples in general elastic media based on the Marchenko method and convolutional interferometry. It can be used to identify multiples directly in prestack data or migrated sections, as well as to attenuate internal multiples by adaptively subtracting them from the original data set. We demonstrate the method on two synthetic data sets, the first composed of horizontal density layers and constant velocities, and the second containing horizontal and vertical density and velocity variations. The full elastic method is computationally expensive and ideally uses data components that are not usually recorded. We therefore test an acoustic approximation to the method on the synthetic elastic data from the second model and find that, although the approximation reduces the spatial resolution of the resulting image, it provides images with fewer artifacts. We conclude that, in most cases where cost is a factor and some loss of resolution is acceptable, it may be sufficient to apply the acoustic version of this demultiple method.


Geophysics, 2007, Vol. 72(4), pp. J31-J41
Authors: James D. Irving, Michael D. Knoll, Rosemary J. Knight

To obtain the highest-resolution ray-based tomographic images from crosshole ground-penetrating radar (GPR) data, wide angular ray coverage of the region between the two boreholes is required. Unfortunately, at borehole spacings on the order of a few meters, high-angle traveltime data (i.e., traveltime data corresponding to transmitter-receiver angles greater than approximately 50° from the horizontal) are notoriously difficult to incorporate into crosshole GPR inversions. This is because (1) low signal-to-noise ratios make the accurate picking of first-arrival times at high angles extremely difficult, and (2) significant tomographic artifacts commonly appear when high- and low-angle ray data are inverted together. We address and overcome these two issues for a crosshole GPR data example collected at the Boise Hydrogeophysical Research Site (BHRS). To estimate first-arrival times on noisy, high-angle gathers, we develop a robust and automatic picking strategy based on crosscorrelations, where reference waveforms are determined from the data through the stacking of common-ray-angle gathers. To overcome incompatibility issues between high- and low-angle data, we modify the standard tomographic inversion strategy to estimate, in addition to subsurface velocities, parameters that describe a traveltime “correction curve” as a function of angle. Application of our modified inversion strategy to both synthetic data and the BHRS data set shows that it allows the successful incorporation of all available traveltime data to obtain significantly improved subsurface velocity images.
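The crosscorrelation-based picker can be sketched as follows: a reference waveform is formed by stacking traces that share an arrival time (mimicking a common-ray-angle stack), and the pick for a noisy trace is the reference arrival time shifted by the lag that maximises the crosscorrelation. The wavelet, noise level, and arrival times below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.2e-9
t = np.arange(0.0, 200e-9, dt)

def ricker(ts, f0=100e6):
    """Ricker wavelet; ~100 MHz is a typical crosshole GPR centre frequency."""
    arg = (np.pi * f0 * ts) ** 2
    return (1.0 - 2.0 * arg) * np.exp(-arg)

# reference waveform: stack of noisy traces sharing one arrival time,
# mimicking the stacking of a common-ray-angle gather
t_ref = 40e-9
ref = np.mean([ricker(t - t_ref) + 0.3 * rng.standard_normal(t.size)
               for _ in range(10)], axis=0)

# noisy high-angle trace with an unknown, later arrival
t_true = 60e-9
trace = ricker(t - t_true) + 0.3 * rng.standard_normal(t.size)

# pick: reference arrival time shifted by the best crosscorrelation lag
xc = np.correlate(trace, ref, mode="full")
lag = (np.argmax(xc) - (t.size - 1)) * dt
pick = t_ref + lag
```

Stacking suppresses the incoherent noise in the reference, which is why the crosscorrelation peak remains reliable even at signal-to-noise ratios where direct first-break picking fails.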


Geophysics, 2005, Vol. 70(1), pp. S1-S17
Authors: Alison E. Malcolm, Maarten V. de Hoop, Jérôme H. Le Rousseau

Reflection seismic data continuation is the computation of data at source and receiver locations that differ from those in the original data, using whatever data are available. We develop a general theory of data continuation in the presence of caustics and illustrate it with three examples: dip moveout (DMO), azimuth moveout (AMO), and offset continuation. This theory does not require knowledge of the reflector positions. We construct the output data set from the input through the composition of three operators: an imaging operator, a modeling operator, and a restriction operator. This results in a single operator that maps directly from the input data to the desired output data. We use the calculus of Fourier integral operators to develop this theory in the presence of caustics. For both DMO and AMO, we compute impulse responses in a constant-velocity model and in a more complicated model in which caustics arise. This analysis reveals errors that can be introduced by assuming, for example, a model with a constant vertical velocity gradient when the true model is laterally heterogeneous. Data continuation uses as input a subset (common offset, common angle) of the available data, which may introduce artifacts in the continued data. One could suppress these artifacts by stacking over a neighborhood of input data (using a small range of offsets or angles, for example). We test data continuation on synthetic data from a model known to generate imaging artifacts. We show that stacking over input scattering angles suppresses artifacts in the continued data.


Geophysics, 2019, Vol. 84(1), pp. A7-A11
Authors: Lele Zhang, Evert Slob

We have derived a scheme for retrieving the primary reflections from the acoustic surface-reflection response by eliminating the free-surface and internal multiple reflections in one step. This scheme does not require model information and adaptive subtraction. It consists only of the reflection response as a correlation and convolution operator that acts on an intermediate wavefield from which we compute and capture the primary reflections. For each time instant, we keep one value for each source-receiver pair and store it in the new data set. The resulting data set contains only primary reflections, and from this data set, a better velocity model can be built than from the original data set. A conventional migration scheme can then be used to compute an artifact-free image of the medium. We evaluated the success of the method with a 2D numerical example. The method can have a wide range of applications in 3D strongly scattering media that are accessible from one side only.

