Deblending by direct inversion

Geophysics ◽  
2012 ◽  
Vol 77 (3) ◽  
pp. A9-A12 ◽  
Author(s):  
Kees Wapenaar ◽  
Joost van der Neut ◽  
Jan Thorbecke

Deblending of simultaneous-source data is usually considered to be an underdetermined inverse problem, which can be solved by an iterative procedure under additional constraints such as sparsity and coherency. By exploiting the fact that seismic data are spatially band-limited, deblending of densely sampled sources can instead be carried out as a direct inversion process without imposing these constraints. We applied the method to numerically modeled data; it suppressed the crosstalk well when the blended data consisted of responses to adjacent, densely sampled sources.
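The key idea, that energy falling outside the spatial band of a densely sampled survey can be removed by direct inversion rather than constrained iteration, can be illustrated with a toy wavenumber-domain projection (a sketch under strong simplifying assumptions, not the authors' algorithm; the band limit and signal model here are hypothetical):

```python
import numpy as np

# Illustrative sketch: when the signal is spatially band-limited and the
# blending crosstalk falls outside that band, deblending reduces to a
# direct projection onto the signal band, with no sparsity or coherency
# constraints needed.
rng = np.random.default_rng(0)
nx = 128
k = np.fft.fftfreq(nx)                      # normalized wavenumbers
band = np.abs(k) <= 0.1                     # assumed spatial band limit

# Band-limited "signal" and out-of-band "crosstalk" (both real-valued)
signal = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(nx)) * band))
crosstalk = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(nx)) * ~band))
blended = signal + crosstalk

# Direct inversion: project the blended record onto the signal band
deblended = np.real(np.fft.ifft(np.fft.fft(blended) * band))
print(np.allclose(deblended, signal, atol=1e-10))  # True
```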

Geophysics ◽  
2021 ◽  
pp. 1-56
Author(s):  
Breno Bahia ◽  
Rongzhi Lin ◽  
Mauricio Sacchi

Denoisers can help solve inverse problems via a recently proposed framework known as regularization by denoising (RED). The RED approach defines the regularization term of the inverse problem via explicit denoising engines. Simultaneous-source separation techniques, being themselves a combination of inversion and denoising methods, provide a formidable field in which to explore RED. We investigate the applicability of RED to simultaneous-source data processing and introduce a deblending algorithm named REDeblending (RDB). The formulation permits deblending algorithms in which the user can select any denoising engine that satisfies the RED conditions. We test two popular denoisers, frequency-wavenumber thresholding and singular spectrum analysis, although the method is not limited to them. Numerical experiments on blended data showcase the performance of RDB.
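The RED construction can be sketched in a few lines: the regularization gradient is simply x - D(x) for a denoiser D. The following toy example (not the paper's RDB algorithm; the moving-average denoiser, step size, and weights are illustrative assumptions standing in for f-k thresholding or singular spectrum analysis) runs the resulting gradient iteration on a 1D signal:

```python
import numpy as np

# Minimal RED-style gradient iteration: minimize
#   0.5*||x - y||^2 + 0.5*lam * x'(x - D(x)),
# whose RED gradient is (x - y) + lam*(x - D(x)).
rng = np.random.default_rng(1)
n = 200
t = np.linspace(0, 1, n)
x_true = np.sin(2 * np.pi * 3 * t)           # smooth "signal"
y = x_true + 0.3 * rng.standard_normal(n)    # noisy observation

def denoise(x, w=9):
    # Simple moving-average smoother acting as the denoising engine D
    kernel = np.ones(w) / w
    return np.convolve(x, kernel, mode="same")

lam, mu = 1.0, 0.4
x = y.copy()
for _ in range(100):
    grad = (x - y) + lam * (x - denoise(x))
    x = x - mu * grad

err0 = np.linalg.norm(y - x_true)
err = np.linalg.norm(x - x_true)
print(err < err0)  # the RED iterate is closer to the clean signal
```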


Geophysics ◽  
2004 ◽  
Vol 69 (6) ◽  
pp. 1560-1568 ◽  
Author(s):  
Bin Liu ◽  
Mauricio D. Sacchi

In seismic data processing, we often need to interpolate and extrapolate data at missing spatial locations. The reconstruction problem can be posed as an inverse problem where, from inadequate and incomplete data, we attempt to reconstruct the seismic wavefield at locations where measurements were not acquired. We propose a wavefield reconstruction scheme for spatially band‐limited signals. The method entails solving an inverse problem where a wavenumber‐domain regularization term is included. The regularization term constrains the solution to be spatially band‐limited and imposes a prior spectral shape. The numerical algorithm is quite efficient since the method of conjugate gradients in conjunction with fast matrix–vector multiplications, implemented via the fast Fourier transform (FFT), is adopted. The algorithm can be used to perform multidimensional reconstruction in any spatial domain.
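The scheme's main ingredients, a wavenumber-domain band constraint and conjugate gradients with FFT-based matrix-vector products, can be sketched as follows (a minimal 1D toy assuming a known band and random sampling; it omits the prior spectral shaping and the multidimensional machinery of the paper):

```python
import numpy as np

# Solve for the wavenumber coefficients inside an assumed band using
# conjugate gradients on the normal equations, with every operator
# application done via the FFT.
rng = np.random.default_rng(2)
nx = 128
k = np.fft.fftfreq(nx)
band = np.abs(k) <= 0.08                      # assumed spatial band limit

# Band-limited test wavefield and a random ~50% sampling mask
x_true = np.real(np.fft.ifft(np.fft.fft(rng.standard_normal(nx)) * band))
sample = rng.random(nx) < 0.5
d = x_true * sample                           # observed (decimated) data

def A(m):    # synthesis: band-limited spectrum -> sampled traces
    return np.real(np.fft.ifft(m * band)) * sample

def AH(r):   # adjoint (up to a harmless constant factor)
    return np.fft.fft(r * sample) * band

# Conjugate gradients on AH(A(m)) = AH(d)
m = np.zeros(nx, dtype=complex)
r = AH(d)
p = r.copy()
rs = np.vdot(r, r).real
for _ in range(200):
    Ap = AH(A(p))
    alpha = rs / np.vdot(p, Ap).real
    m += alpha * p
    r -= alpha * Ap
    rs_new = np.vdot(r, r).real
    if rs_new < 1e-16:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

x_rec = np.real(np.fft.ifft(m * band))
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true) < 1e-3)
```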


2018 ◽  
Vol 37 (6) ◽  
pp. 471a1-471a11 ◽  
Author(s):  
David F. Halliday ◽  
Ian Moore

Separation algorithms for marine simultaneous-source data generally require encoded sources. Proposed encoding schemes include random time delays (time dithers), periodic time sequences (such as those referred to as seismic apparition), and periodic phase sequences (for sources with fully controlled phase like a marine vibrator). At a given frequency, time dithers spread energy at a given wavenumber over all wavenumbers, phase sequences shift the energy by a fixed wavenumber (independent of frequency), and time sequences split energy over multiple wavenumbers in a frequency-dependent way. The way the encoding scheme distributes energy in the wavenumber domain is important because separation algorithms generally assume that, in the absence of encoding, all energy falls into the signal cone. Time dithering allows separation by inversion. At low frequencies, the inverse problem is overdetermined and easily solved. At higher frequencies, sparse inversion works well, provided the data exhibit a sufficiently sparse representation (consistent with compressive sensing theory). Phase sequencing naturally separates the sources in the wavenumber domain at low frequencies. At higher frequencies, ambiguities must be resolved using assumptions such as limited dispersion and limited complexity. Time sequencing allows a simple separation at low frequencies based on a scaling and subtraction process in the wavenumber domain. However, the scaling becomes unstable near notch frequencies, including DC. At higher frequencies, a similar problem to that for phase sequencing must be solved. The encoding schemes, therefore, have similar overall properties and require similar assumptions, but differ in some potentially important details. 
Phase sequencing is clearly only applicable to phase-controllable sources, and the different encoding schemes have other implications for data acquisition, for example, with respect to operational complexity, efficiency, spatial sampling, and tolerance to errors.
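The frequency-independent wavenumber shift attributed to phase sequencing above follows directly from the Fourier shift theorem; a one-line check on a toy single-frequency slice (not a full separation):

```python
import numpy as np

# A +/- phase sequence multiplies a common-receiver gather by (-1)^n
# along the source axis, which shifts its spatial spectrum by the
# Nyquist wavenumber regardless of frequency.
rng = np.random.default_rng(3)
ns = 64
x = rng.standard_normal(ns)              # one frequency slice, ns shots
encoded = x * (-1.0) ** np.arange(ns)    # phase sequence 0, pi, 0, pi, ...

X = np.fft.fft(x)
E = np.fft.fft(encoded)

# Shift theorem: modulation by exp(i*pi*n) shifts the spectrum by ns/2
print(np.allclose(E, np.roll(X, ns // 2)))  # True
```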


2020 ◽  
Vol 48 (4) ◽  
pp. 45-111
Author(s):  
A. F. Shepetkin

A new algorithm for constructing orthogonal curvilinear grids on a sphere for a fairly general geometric shape of the modeling region is implemented as a “compile-once, use-forever” software package. It is based on the numerical solution of an inverse problem by an iterative procedure: finding a distribution of grid points along the perimeter such that the conformal transformation of the perimeter into a rectangle turns this distribution into a uniform one. The iterative procedure itself turns out to be multilevel, i.e., an iterative loop built around another, internal iterative procedure. Once this distribution is known, the grid nodes inside the region are obtained by solving an elliptic problem. It is shown that the algorithm achieves exact orthogonality of the perimeter at the corners of the grid and a very small, previously unattainable level of orthogonality error, and that the grid can be made isotropic, with local distances between grid nodes equal in both directions.
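The elliptic interior step mentioned above can be sketched with plain Jacobi sweeps of the discrete Laplace equation (a toy stand-in; the package's conformal perimeter iteration and orthogonality control are not reproduced, and the boundary shape here is hypothetical):

```python
import numpy as np

# Given fixed boundary nodes of a gently curved quadrilateral, fill the
# interior grid nodes by relaxing the discrete Laplace equation.
n = 33
s = np.linspace(0.0, 1.0, n)
x = np.zeros((n, n))
y = np.zeros((n, n))
x[0, :], y[0, :] = s, 0.1 * np.sin(np.pi * s)          # bottom edge
x[-1, :], y[-1, :] = s, 1.0 + 0.1 * np.sin(np.pi * s)  # top edge
x[:, 0], y[:, 0] = 0.0, s                              # left edge
x[:, -1], y[:, -1] = 1.0, s                            # right edge

for _ in range(3000):  # Jacobi sweeps for the interior nodes
    x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                            + x[1:-1, 2:] + x[1:-1, :-2])
    y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                            + y[1:-1, 2:] + y[1:-1, :-2])

# Residual of the discrete Laplacian after relaxation
res = np.max(np.abs(x[1:-1, 1:-1] - 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                            + x[1:-1, 2:] + x[1:-1, :-2])))
print(res < 1e-4)
```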


Geophysics ◽  
2016 ◽  
Vol 81 (6) ◽  
pp. A17-A21 ◽  
Author(s):  
Juan I. Sabbione ◽  
Mauricio D. Sacchi

The coefficients that synthesize seismic data via the hyperbolic Radon transform (HRT) are estimated by solving a linear-inverse problem. In the classical HRT, the computational cost of the inverse problem is proportional to the size of the data and the number of Radon coefficients. We have developed a strategy that significantly speeds up the implementation of time-domain HRTs. For this purpose, we have defined a restricted model space of coefficients applying hard thresholding to an initial low-resolution Radon gather. Then, an iterative solver that operated on the restricted model space was used to estimate the group of coefficients that synthesized the data. The method is illustrated with synthetic data and tested with a marine data example.
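The restricted-model-space strategy can be sketched generically (a random matrix stands in for the HRT operator; the selection rule and problem sizes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

# Cheap low-resolution estimate via the adjoint, hard selection of the
# largest coefficients, then least squares restricted to that subset.
rng = np.random.default_rng(4)
nd, nm, nnz = 150, 200, 5
A = rng.standard_normal((nd, nm)) / np.sqrt(nd)   # stand-in operator
m_true = np.zeros(nm)
m_true[rng.choice(nm, nnz, replace=False)] = 2.0 + rng.random(nnz)
d = A @ m_true

low_res = A.T @ d                        # low-resolution gather (adjoint)
idx = np.argsort(-np.abs(low_res))[:40]  # keep only the largest coefficients
m = np.zeros(nm)
m[idx], *_ = np.linalg.lstsq(A[:, idx], d, rcond=None)

# The restricted solve still synthesizes the data, at a fraction of the cost
print(np.linalg.norm(A @ m - d) < 1e-6 * np.linalg.norm(d))
```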


2018 ◽  
Vol 15 (1) ◽  
pp. 58-62 ◽  
Author(s):  
Weilin Huang ◽  
Runqiu Wang ◽  
Xiangbo Gong ◽  
Yangkang Chen

Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. V281-V293 ◽  
Author(s):  
Qiang Zhao ◽  
Qizhen Du ◽  
Xufei Gong ◽  
Xiangyang Li ◽  
Liyun Fu ◽  
...  

Simultaneous-source acquisition has attracted more and more attention from geophysicists because of its cost savings, but it also brings challenges that have never been addressed before. Deblending of simultaneous-source data is usually considered an underdetermined inverse problem, which can be effectively solved with a least-squares (LS) iterative procedure balancing data consistency (an ℓ2-norm) against regularization (an ℓ1-norm or ℓ0-norm). However, for abnormal noise that follows a non-Gaussian distribution and possesses high-amplitude features (e.g., erratic noise, swell noise, and power-line noise), the ℓ2-norm is a nonrobust statistic that can easily lead to suboptimal deblended results. Although abnormal noise can first be attenuated in the common-source domain, applying a coherency-based filter is still challenging when the receiver or crossline sampling is sparse, as is common in ocean-bottom-node (OBN) acquisition. To address this problem, we developed a normalized shaping regularization that makes inversion-based deblending robust when abnormal noise is present. Its robustness comes from the normalized shaping operator defined by the confidence interval of the normal distribution, which reduces the abnormal risk to a normal level so that the assumption of LS shaping regularization is satisfied. As a special case, the proposed approach reverts to classic LS shaping regularization once the normalized coefficient is large enough. Experimental results on synthetic and field data indicate that the proposed method restores the separated records from blended data at essentially the same convergence rate as LS shaping regularization in the abnormal-noise-free scenario, but achieves better deblending performance and less energy leakage when abnormal noise exists.
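The robustness mechanism, down-weighting residuals that fall outside a normal-distribution confidence interval, can be illustrated on a toy estimation problem (this is not the paper's normalized shaping algorithm; the weighting rule and robust scale estimate are common robust-statistics choices used here for illustration):

```python
import numpy as np

# Toy problem: estimate a constant level from data with high-amplitude
# erratic noise. Residuals outside a normal confidence interval are
# down-weighted so the outliers cannot dominate the least-squares fit.
rng = np.random.default_rng(5)
level = 2.0
d = level + 0.1 * rng.standard_normal(100)
d[:5] += 50.0                           # erratic, high-amplitude noise

def robust_level(d, c=1.96, n_iter=20):
    m = np.median(d)
    for _ in range(n_iter):
        r = d - m
        sigma = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale
        w = np.minimum(1.0, c * sigma / np.maximum(np.abs(r), 1e-12))
        m = np.sum(w * d) / np.sum(w)
    return m

ls = d.mean()                  # plain least squares: biased by outliers
rob = robust_level(d)
print(abs(rob - level) < abs(ls - level))  # robust estimate is closer
```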


2013 ◽  
Vol 14 (2) ◽  
pp. 143-154
Author(s):  
Alexander Krainyukov ◽  
Valery Kutev

The improvement of data processing for pavement structure evaluation by subsurface radar probing is discussed. An iterative procedure based on a genetic algorithm is used to solve the inverse problem in the frequency domain. To improve the effectiveness of the data processing, a modified genetic algorithm that adapts the search range of the pavement parameters is proposed. Results are presented for the reconstruction of the electro-physical characteristics of a model five-layered pavement structure.
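A genetic-algorithm inversion with an adaptive search range can be sketched generically (a toy forward model and parameterization, not the radar-probing formulation of the paper; all names and constants here are hypothetical):

```python
import numpy as np

# Fit two hypothetical layer parameters by minimizing a data misfit with
# selection, blend crossover, and mutation within a shrinking range.
rng = np.random.default_rng(6)
true_params = np.array([3.5, 0.12])          # e.g., amplitude and decay scale
t = np.linspace(0, 1, 50)

def forward(p):                              # toy forward model
    return p[0] * np.exp(-t / p[1])

d_obs = forward(true_params)

lo = np.array([1.0, 0.01])
hi = np.array([10.0, 0.5])
pop = lo + (hi - lo) * rng.random((40, 2))
for gen in range(60):
    misfit = np.array([np.linalg.norm(forward(p) - d_obs) for p in pop])
    parents = pop[np.argsort(misfit)[:10]]   # selection: keep the fittest
    # Crossover: random blends of two parents
    i = rng.integers(0, 10, (40, 2))
    alpha = rng.random((40, 1))
    pop = alpha * parents[i[:, 0]] + (1 - alpha) * parents[i[:, 1]]
    # Mutation within an adaptively shrinking search range
    span = (hi - lo) * 0.5 ** (gen / 10)
    pop += 0.1 * span * rng.standard_normal((40, 2))
    pop = np.clip(pop, lo, hi)
    pop[0] = parents[0]                      # elitism: keep the best so far

best = pop[0]
print(np.linalg.norm(forward(best) - d_obs))  # small final misfit
```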


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. V1-V11 ◽  
Author(s):  
Amr Ibrahim ◽  
Mauricio D. Sacchi

We adopted the robust Radon transform to eliminate erratic incoherent noise that arises in common-receiver gathers when simultaneous-source data are acquired. The proposed robust Radon transform is posed as an inverse problem using an ℓ1 misfit that is not sensitive to erratic noise. This permitted us to design Radon algorithms capable of eliminating incoherent noise in common-receiver gathers. We also compared nonrobust and robust Radon transforms implemented via a quadratic (ℓ2) or a sparse (ℓ1) penalty term in the cost function. The results demonstrate the importance of incorporating a robust misfit functional in the Radon transform to cope with simultaneous-source interferences. Synthetic and real data examples show that the robust Radon transform produces more accurate data estimates than the least-squares and sparse Radon transforms.
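An ℓ1 misfit of this kind is commonly minimized with iteratively reweighted least squares (IRLS); the sketch below uses a generic matrix in place of an actual Radon operator (sizes, noise model, and weights are illustrative assumptions):

```python
import numpy as np

# IRLS sketch of an l1 (robust) misfit: clean equations dominate, and
# rows with large residuals (the erratic interference) are down-weighted.
rng = np.random.default_rng(7)
nd, nm = 120, 10
A = rng.standard_normal((nd, nm))            # stand-in for a Radon operator
m_true = rng.standard_normal(nm)
d = A @ m_true
d[:8] += 30.0 * rng.standard_normal(8)       # erratic interference

# Nonrobust l2 solution: biased by the erratic samples
m_l2, *_ = np.linalg.lstsq(A, d, rcond=None)

# Robust l1 misfit via iteratively reweighted least squares
m = m_l2.copy()
for _ in range(30):
    r = d - A @ m
    w = 1.0 / np.sqrt(np.maximum(np.abs(r), 1e-8))   # IRLS weights for l1
    m, *_ = np.linalg.lstsq(A * w[:, None], w * d, rcond=None)

print(np.linalg.norm(m - m_true) < np.linalg.norm(m_l2 - m_true))  # True
```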


2009 ◽  
Vol 2009 ◽  
pp. 1-11 ◽  
Author(s):  
Rolando Grave de Peralta ◽  
Olaf Hauk ◽  
Sara L. Gonzalez

A tomography of neural sources could be constructed from EEG/MEG recordings once the neuroelectromagnetic inverse problem (NIP) is solved. Unfortunately, the NIP lacks a unique solution, and additional constraints are therefore needed to achieve uniqueness. Researchers are then confronted with the dilemma of choosing one solution on the basis of the advantages publicized by its authors. This study aims to guide such choices by clarifying what is hidden behind inverse solutions oversold on their apparently optimal ability to localize single sources. Here, we introduce an inverse solution (ANA) attaining perfect localization of single sources to illustrate how spurious sources emerge and destroy the reconstruction of simultaneously active sources. Although ANA is probably the simplest and most robust alternative for data generated by a single dominant source plus noise, the main contribution of this manuscript is to show that zero localization error for single sources is a trivial and largely uninformative property, unable to predict the performance of an inverse solution in the presence of simultaneously active sources. We recommend, as the most logical strategy for solving the NIP, the incorporation of sound additional a priori information about the neural generators that supplements the information contained in the data.
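The underlying non-uniqueness of the NIP is easy to demonstrate: any null-space component of the leadfield can be added to a source distribution without changing the recordings (a toy random leadfield with hypothetical dimensions, not a realistic head model):

```python
import numpy as np

# With fewer sensors than sources, the leadfield has a null space, so
# distinct source patterns produce identical recordings.
rng = np.random.default_rng(8)
n_sensors, n_sources = 20, 100
L = rng.standard_normal((n_sensors, n_sources))   # hypothetical leadfield

j_true = np.zeros(n_sources)
j_true[3] = 1.0                                   # one active source
data = L @ j_true

# Add a null-space component: the recordings do not change
_, _, Vt = np.linalg.svd(L)
j_alt = j_true + 5.0 * Vt[-1]                     # Vt[20:] spans the null space
print(np.allclose(L @ j_alt, data))               # True: same data, different sources

# A minimum-norm (pseudoinverse) solution picks one unique answer, but
# smears the point source over many locations
j_mn = np.linalg.pinv(L) @ data
print(np.count_nonzero(np.abs(j_mn) > 1e-3) > 1)  # True: energy is spread out
```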

