Normalized shaping regularization for robust separation of blended data

Geophysics, 2019, Vol. 84 (5), pp. V281-V293. Author(s): Qiang Zhao, Qizhen Du, Xufei Gong, Xiangyang Li, Liyun Fu, ...

Simultaneous source acquisition has attracted increasing attention from geophysicists because of its cost savings, but it also brings challenges that have not been addressed before. Deblending of simultaneous source data is usually considered to be an underdetermined inverse problem, which can be effectively solved with a least-squares (LS) iterative procedure that balances data consistency (L2-norm) against regularization (L1-norm or L0-norm). However, in the presence of abnormal noise that follows a non-Gaussian distribution and has high-amplitude features (e.g., erratic noise, swell noise, and power-line noise), the L2-norm is a nonrobust statistic that can easily lead to suboptimal deblended results. Although abnormal noise can first be attenuated in the common-source domain, applying a coherency-based filter remains challenging under sparse receiver or crossline sampling, as commonly found in ocean-bottom-node (OBN) acquisition. To address this problem, we have developed a normalized shaping regularization that makes inversion-based deblending robust when abnormal noise exists. Its robustness comes from a normalized shaping operator defined by the confidence interval of the normal distribution, which reduces abnormal residuals to a normal level so that the assumption of LS shaping regularization is satisfied. As a special case, the proposed approach reverts to classic LS shaping regularization once the normalization coefficient is large enough. Experimental results on synthetic and field data indicate that the proposed method restores the separated records from blended data at essentially the same convergence rate as LS shaping regularization in the abnormal-noise-free scenario, but achieves better deblending performance and less energy leakage when abnormal noise exists.
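
As a point of reference, the iteration described above can be sketched as follows, assuming generic user-supplied blending/adjoint operators and a coherency-promoting shaping operator; the MAD-based scale estimate and the 95% confidence bound (k = 1.96) are illustrative choices, not the authors' exact definitions:

```python
import numpy as np

def robust_shaping_deblend(d, blend, adjoint, shape, n_iter=50, k=1.96):
    """Deblend by shaping-regularized iteration with a normalized residual:
    residual amplitudes outside a normal-distribution confidence bound are
    clipped back to a normal level before the LS shaping update."""
    x = adjoint(d)                              # pseudodeblended starting model
    for _ in range(n_iter):
        r = d - blend(x)                        # blending residual
        mad = np.median(np.abs(r - np.median(r)))
        bound = k * 1.4826 * mad                # e.g., 95% interval for k = 1.96
        r = np.clip(r, -bound, bound)           # suppress abnormal amplitudes
        x = shape(x + adjoint(r))               # LS shaping step
    return x
```

For a large bound (large k), the clipping never activates and the loop reduces to plain LS shaping regularization, matching the special case noted in the abstract.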

Geophysics, 2012, Vol. 77 (3), pp. A9-A12. Author(s): Kees Wapenaar, Joost van der Neut, Jan Thorbecke

Deblending of simultaneous-source data is usually considered to be an underdetermined inverse problem, which can be solved by an iterative procedure under additional constraints such as sparsity and coherency. By exploiting the fact that seismic data are spatially band-limited, deblending of densely sampled sources can instead be carried out as a direct inversion process without imposing these constraints. We applied the method to numerically modeled data, and it suppressed the crosstalk well when the blended data consisted of responses to adjacent, densely sampled sources.
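
A minimal per-frequency sketch of this direct-inversion idea, assuming a generic blending matrix `Gamma` and restricting the unknowns to the spatially band-limited signal cone (all names and the operator construction are illustrative, not the authors' exact formulation):

```python
import numpy as np

def direct_deblend(b, Gamma, dx, k_max):
    """Per-frequency direct deblending: represent the unknown source
    wavefield by wavenumber components inside |k| <= k_max only, which
    makes the blended system directly invertible by least squares."""
    n_src = Gamma.shape[1]
    F = np.fft.fft(np.eye(n_src), axis=0, norm="ortho")  # DFT matrix over sources
    k = 2 * np.pi * np.fft.fftfreq(n_src, d=dx)          # wavenumber grid
    basis = F.conj().T[:, np.abs(k) <= k_max]            # band-limited spatial basis
    coeff, *_ = np.linalg.lstsq(Gamma @ basis, b, rcond=None)
    return basis @ coeff                                  # deblended source wavefield
```

Dense source sampling is what makes the restricted system well-posed: with enough blended observations per retained wavenumber, no sparsity or coherency prior is needed.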


Geophysics, 2021, pp. 1-56. Author(s): Breno Bahia, Rongzhi Lin, Mauricio Sacchi

Denoisers can help solve inverse problems via a recently proposed framework known as regularization by denoising (RED). The RED approach defines the regularization term of the inverse problem via explicit denoising engines. Simultaneous-source separation techniques, being themselves a combination of inversion and denoising methods, provide a formidable field in which to explore RED. We investigate the applicability of RED to simultaneous-source data processing and introduce a deblending algorithm named REDeblending (RDB). The formulation permits developing deblending algorithms in which the user can select any denoising engine that satisfies the RED conditions. Two popular denoisers, frequency-wavenumber thresholding and singular spectrum analysis, are tested, although the method is not limited to them. We offer numerical examples on blended data to showcase the performance of RDB.
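
Under the RED conditions, the gradient of the regularizer (lam/2) * x^T (x - denoise(x)) is simply lam * (x - denoise(x)), so a basic gradient-descent deblender can be sketched as below; the operator names, step size, and the specific RDB update rule are assumptions, and `denoise` stands for any qualifying engine (e.g., f-k thresholding):

```python
import numpy as np

def redeblend(d, blend, adjoint, denoise, n_iter=100, mu=0.5, lam=0.1):
    """Gradient-descent deblending with a RED regularizer: minimize
    0.5 * ||blend(x) - d||^2 + (lam/2) * x^T (x - denoise(x))."""
    x = adjoint(d)                               # pseudodeblended start
    for _ in range(n_iter):
        grad = adjoint(blend(x) - d) + lam * (x - denoise(x))
        x = x - mu * grad
    return x
```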


2015, Vol. 2015, pp. 1-20. Author(s): Wanyang Dai

We prove the global risk optimality of a hedging strategy for a contingent claim, constructed explicitly (or, rather, semiexplicitly) for an incomplete financial market with external risk factors driven by non-Gaussian Ornstein-Uhlenbeck (NGOU) processes. Analytical and numerical examples are presented to illustrate the effectiveness of our optimal strategy. Our study connects our financial system to existing general semimartingale-based discussions by justifying the required conditions. More precisely, three steps are involved. First, we prove that the no-arbitrage condition holds for our financial market; this condition is used as an assumption in existing discussions. In doing so, we explicitly construct the square-integrable density process of the variance-optimal martingale measure (VOMM). Second, we derive a backward stochastic differential equation (BSDE) with jumps for the mean-value process of a given contingent claim. The unique existence of an adapted strong solution to the BSDE is proved under suitable terminal conditions, including both European call and put options as special cases. Third, by combining the solution of the BSDE with the VOMM, we justify the global risk optimality of our hedging strategy.
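
The external risk factors here are NGOU processes: mean-reverting processes driven by a non-Gaussian Lévy subordinator rather than Brownian motion. Purely as a point of reference, the following is a minimal simulation sketch of one such factor, assuming a compound-Poisson subordinator with exponential jump sizes (an illustrative choice, not the paper's model):

```python
import numpy as np

def simulate_ngou(x0, lam, jump_rate, jump_mean, dt, n_steps, seed=0):
    """Euler scheme for dX_t = -lam * X_t dt + dZ_t, where Z is a
    compound-Poisson subordinator with exponential jump sizes."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        jumps = rng.exponential(jump_mean, rng.poisson(jump_rate * dt)).sum()
        x[t + 1] = x[t] - lam * x[t] * dt + jumps  # mean reversion + positive jumps
    return x

# Example: a positive, mean-reverting, heavy-tailed risk-factor path.
path = simulate_ngou(x0=1.0, lam=2.0, jump_rate=5.0, jump_mean=0.2,
                     dt=1e-3, n_steps=10_000)
```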


2015, Vol. 28 (23), pp. 9166-9187. Author(s): Prashant D. Sardeshmukh, Gilbert P. Compo, Cécile Penland

Given the reality of anthropogenic global warming, it is tempting to seek an anthropogenic component in any recent change in the statistics of extreme weather. This paper cautions that such efforts may, however, lead to wrong conclusions if the distinctively skewed and heavy-tailed aspects of the probability distributions of daily weather anomalies are ignored or misrepresented. Departures of several standard deviations from the mean, although rare, are far more common in such a distinctively non-Gaussian world than they are in a Gaussian world. This further complicates the problem of detecting changes in tail probabilities from historical records of limited length and accuracy. A possible solution is to exploit the fact that the salient non-Gaussian features of the observed distributions are captured by so-called stochastically generated skewed (SGS) distributions that include Gaussian distributions as special cases. SGS distributions are associated with damped linear Markov processes perturbed by asymmetric stochastic noise and as such represent the simplest physically based prototypes of the observed distributions. The tails of SGS distributions can also be directly linked to generalized extreme value (GEV) and generalized Pareto (GP) distributions. The Markov process model can be used to provide rigorous confidence intervals and to investigate temporal persistence statistics. The procedure is illustrated for assessing changes in the observed distributions of daily wintertime indices of large-scale atmospheric variability in the North Atlantic and North Pacific sectors over the period 1872–2011. No significant changes in these indices are found from the first to the second half of the period.
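
The SGS family arises from a damped linear Markov process with both additive and state-dependent (multiplicative) noise. A minimal Euler-Maruyama sampling sketch of such a process follows, with illustrative parameter values (not those fitted in the paper); its stationary density is skewed and heavy-tailed, and reduces to Gaussian when the multiplicative term vanishes:

```python
import numpy as np

def simulate_sgs(n_steps, lam=1.0, E=0.4, g=1.0, b=0.5, dt=0.01, seed=0):
    """Sample dx = -lam*x dt + (E*x + g) dW1 + b dW2: a damped linear
    Markov process with state-dependent noise. For E = 0 the stationary
    distribution is Gaussian; for E != 0 it is skewed and heavy-tailed."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_steps)
    for t in range(n_steps - 1):
        dw1, dw2 = rng.normal(0.0, np.sqrt(dt), size=2)
        x[t + 1] = x[t] - lam * x[t] * dt + (E * x[t] + g) * dw1 + b * dw2
    return x
```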


Geophysics, 2016, Vol. 81 (3), pp. V213-V225. Author(s): Shaohuan Zu, Hui Zhou, Yangkang Chen, Shan Qu, Xiaofeng Zou, ...

We have designed a periodically varying code that avoids the problem of local coherency and distributes the interference uniformly within a given range; hence, it is better at suppressing incoherent interference (blending noise) and preserving coherent useful signals than a random dithering code. We have also devised a new form of the iterative method to remove the interference generated by simultaneous-source acquisition. In each iteration, we estimated the interference using the blending operator following the proposed formula and then subtracted it from the pseudodeblended data. To further eliminate the incoherent interference and constrain the inversion, the data were then transformed to an auxiliary sparse domain, where a thresholding operator was applied. During the iterations, the threshold was decreased from its largest value to zero following an exponential function; this exponentially decreasing threshold gradually passes the deblended data to a more acceptable model subspace. Two numerically blended synthetic data sets and one numerically blended field data set from an ocean-bottom cable demonstrate the usefulness of the proposed method and the better performance of the periodically varying code over the traditional random dithering code.
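
The iteration just described can be sketched as follows, assuming a user-supplied operator that predicts the interference from the current estimate and a generic sparse transform pair (e.g., f-k or curvelet); the decay constant and operator names are illustrative assumptions:

```python
import numpy as np

def iterative_deblend(d_pseudo, interference_of, fwd, inv, n_iter=30):
    """Iterative deblending sketch: subtract the interference predicted by
    the blending operator from the pseudodeblended data, then hard-threshold
    in an auxiliary sparse domain with an exponentially decreasing threshold."""
    x = np.zeros_like(d_pseudo)
    t_max = np.abs(fwd(d_pseudo)).max()            # start at the largest coefficient
    for it in range(n_iter):
        coef = fwd(d_pseudo - interference_of(x))  # remove estimated interference
        tau = t_max * np.exp(-6.0 * it / max(n_iter - 1, 1))  # decays toward zero
        x = inv(coef * (np.abs(coef) > tau))       # keep strong, coherent energy
    return x
```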


Geophysics, 2012, Vol. 77 (4), pp. V123-V131. Author(s): Shoudong Huo, Yi Luo, Panos G. Kelamis

Simultaneous source acquisition technology, also referred to as "blended acquisition," involves recording two or more shots simultaneously. Although the recorded data contain crosstalk from different shots, conventional processing procedures can still produce acceptable images for interpretation. This is due to the power of the stacking process applied to blended data, with its increased data redundancy and the inherent time delays between the various shots. It is nevertheless desirable to separate the blended data into single-shot gathers and reduce the crosstalk noise, both to achieve the highest seismic image quality and for standard prestack processing such as filtering, statics computation, and velocity analysis. This study introduces a new and simple multidirectional vector-median filter (MD-VMF) to separate blended seismic shot gathers. The method extends the well-known conventional median filter from a scalar implementation to a vector version. More specifically, a vector median filter is applied along many trial directions, and the median vector is chosen from among the results. We demonstrate the effectiveness of the proposed MD-VMF on simulated data generated by blending synthetic and real marine seismic data.
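
A minimal sketch of the idea on a 2D gather (time samples x traces): treat each direction-aligned trace as a vector, take the vector median (the member minimizing total distance to the others) per trial slope, then across slopes. Windowing details and the slope search are assumptions, not the authors' exact implementation:

```python
import numpy as np

def vector_median(vectors):
    """Vector median: the member minimizing total L2 distance to the others."""
    d = np.linalg.norm(vectors[:, None, :] - vectors[None, :, :], axis=-1).sum(axis=1)
    return vectors[np.argmin(d)]

def md_vmf(gather, slopes, half=2):
    """Multidirectional vector-median filter sketch: for each trace, collect
    neighboring traces shifted along each trial slope (samples/trace), take
    the vector median per direction, then the vector median across directions."""
    nt, nx = gather.shape
    out = np.zeros_like(gather)
    for ix in range(nx):
        cands = []
        for p in slopes:                           # trial directions
            vecs = []
            for j in range(-half, half + 1):
                jx = np.clip(ix + j, 0, nx - 1)
                vecs.append(np.roll(gather[:, jx], -int(round(p * j))))
            cands.append(vector_median(np.array(vecs)))
        out[:, ix] = vector_median(np.array(cands))
    return out
```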


Geophysics, 2019, Vol. 84 (3), pp. V219-V231. Author(s): Junhai Cao, Eric Verschuur, Hanming Gu, Lie Li

Blended, or simultaneous-source, shooting is becoming more widely used in seismic exploration and monitoring because it can provide significant uplift in acquisition quality and economic efficiency. Effective deblending techniques are essential for making use of existing processing and imaging methodologies. When dealing with coarsely and/or irregularly sampled blended data, the aliasing noise of the incomplete data affects the deblending process, and the crosstalk in the blended data also has a negative influence on data reconstruction. Thus, we have developed a joint deblending and data-reconstruction method using the double-focal transformation to eliminate both blending noise and aliasing noise in coarse blended data. Numerically blended synthetic and field-data examples demonstrate its validity for deblending and data reconstruction. We also investigate the effect of random noise on the recovery process; the results show that the algorithm obtains optimal results when a denoising step is applied before deblending and data reconstruction.
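
Posed jointly, the inverse problem synthesizes the dense, unblended data from transform coefficients and maps them through blending plus sampling to the acquired data. The sketch below uses a generic sparse transform pair in place of the paper's double-focal operators (a simplifying assumption; all operator names are illustrative):

```python
import numpy as np

def joint_deblend_recon(d, A, A_adj, fwd, inv, n_iter=40):
    """Joint deblending + reconstruction by iterative soft thresholding:
    A combines blending with the coarse/irregular sampling, and fwd/inv
    are a transform pair in which the dense unblended data are sparse."""
    c = fwd(A_adj(d))                              # initial coefficients
    tau0 = 0.9 * np.abs(c).max()
    for it in range(n_iter):
        c = c + fwd(A_adj(d - A(inv(c))))          # gradient step on the misfit
        tau = tau0 * (1.0 - it / n_iter)           # decreasing threshold
        mag = np.abs(c)
        c = np.where(mag > tau, c * (1.0 - tau / np.maximum(mag, 1e-12)), 0.0)
    return inv(c)                                  # dense, deblended estimate
```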


Geophysics, 2016, Vol. 81 (3), pp. B55-B64. Author(s): Laust B. Pedersen, Mehrdad Bastani

Poisson's theorem, relating components of the magnetic field to components of the gradient of the gravity vector under the assumption of a common source, has been cast into a general form. A given magnetization distribution in the terrain or in the underlying crust is propagated into the corresponding magnetic field through the gravity gradient tensor. Conversely, measured magnetic field anomalies and measured gravity gradient tensor anomalies can be used to estimate the unknown magnetization vectors without knowledge of the geometry of the sources. We have tested the method on recently acquired data over a greenstone belt in Northern Sweden. The topographic relief was sufficiently variable to dominate the measured gravity gradient tensor. In practice, we concentrated on areas where the norm of the gravity gradient tensor reached a maximum, so that there was a better chance of identifying isolated sources with well-defined density and magnetization. We surrounded the selected points by a small window and used all the data lying within that window to estimate the magnetization vectors. We compared the estimated amplitudes and directions of magnetization with those measured on selected rock samples from the area and found relatively modest agreement. We interpret this as the result of two effects: (1) measured magnetizations are generally lower than those estimated by this method, which we believe is related to the biased collection of samples in the field owing to the small number of outcrops in most parts of the area, and (2) the analysis is biased toward high-amplitude magnetic anomalies; i.e., the estimation procedure works best for high-amplitude magnetic anomalies, in which case the influence of neighboring anomalies is reduced. The estimated magnetization directions show a strong dominance of remanent magnetization over induced magnetization, in agreement with laboratory measurements on rock samples from the area.
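
Poisson's relation for a common source links the magnetic field linearly to the gravity gradient tensor acting on the magnetization vector. A minimal sketch of the windowed least-squares estimation this suggests, assuming the form B = (mu0 / (4*pi*G*rho)) * Gamma @ m with sign/constant conventions and background-field handling simplified (not the authors' exact scheme):

```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability (SI)
G = 6.674e-11               # gravitational constant (SI)

def estimate_magnetization(B_obs, Gamma_obs, rho):
    """Stack all (B, Gamma) pairs inside the analysis window and solve the
    overdetermined linear system for the magnetization vector m.
    B_obs: list of 3-vectors; Gamma_obs: list of 3x3 tensors; rho: density."""
    scale = MU0 / (4.0 * np.pi * G * rho)
    A = np.vstack([scale * np.asarray(g) for g in Gamma_obs])   # (3N, 3) design
    b = np.concatenate([np.asarray(bb) for bb in B_obs])        # (3N,) observations
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m                                                    # estimated magnetization
```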


Geophysics, 2005, Vol. 70 (2), pp. S43-S59. Author(s): Egil Holvik, Lasse Amundsen

This paper shows that Betti's reciprocity theorem gives an integral equation procedure to eliminate from the physical multicomponent-source, multicomponent-receiver seismic measurements the effect of the physical source radiation pattern and the response of the physical overburden (that is, the medium above the receiver plane). The physical multicomponent sources are assumed to be orthogonally aligned anywhere above the multicomponent-receiver depth level. Other than the position of the sources, no source characteristics are required. The method, denoted the Betti designature/elastic demultiple, has the following additional characteristics: it preserves primary amplitudes while eliminating all waves scattered from the overburden; it requires no knowledge of the medium below the receiver level; it requires no knowledge of the medium above the receiver level; it requires information only of the local density and elastic wave propagation velocities at the receiver level to decompose the physical seismic measurements into upgoing and downgoing waves. Following the Betti designature/elastic demultiple step is an elastic wavefield decomposition step that decomposes the data into PP-, PS-, SP-, and SS-wave responses that would be recorded from idealized compressional-wave and shear-wave sources and receivers. The combined elastic wavefield decomposition on the source and receiver side gives data equivalent to data from a hypothetical survey with overburden absent, with single-component compressional and shear-wave sources, and single-component compressional and shear-wave receivers. When the medium is horizontally layered, the Betti designature/elastic demultiple scheme followed by the elastic source-receiver decomposition scheme greatly simplifies and is conveniently implemented as deterministic multidimensional deconvolution and elastic source-receiver wavefield decomposition of common-source gathers (or common-receiver gathers when source array variations are negligible). Betti designature/elastic demultiple followed by source-receiver wavefield decomposition applies to three different seismic experiments: a 9-component (9C) land seismic experiment, a 12-component (12C) ocean-bottom seismic experiment, and an 18-component (18C) borehole seismic experiment. For the land and ocean-bottom seismic experiments, an additional geophone should be deployed below the zero-offset geophone to predict the source-induced vertical traction vector at the source location. A numerical example for the 12C ocean-bottom seismic experiment over a horizontally layered medium validates the Betti designature/elastic demultiple scheme.
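For the horizontally layered case, the scheme reduces to deterministic multidimensional deconvolution (MDD) of common-source gathers. A minimal per-frequency sketch, assuming the downgoing (D) and upgoing (U) wavefields have already been obtained by decomposition and arranged as receivers-by-sources matrices; the matrix conventions and damping are illustrative, and the paper's elastic decomposition steps are not reproduced:

```python
import numpy as np

def mdd_per_frequency(U, D, eps=1e-3):
    """Solve X @ D = U for the overburden-free response X by damped least
    squares: X = U D^H (D D^H + eps I)^(-1), one frequency slice at a time."""
    DH = D.conj().T
    return U @ DH @ np.linalg.inv(D @ DH + eps * np.eye(D.shape[0]))
```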

