Simultaneous deconvolution and wavelet inversion as a global optimization

Geophysics ◽  
1999 ◽  
Vol 64 (4) ◽  
pp. 1108-1115 ◽  
Author(s):  
Warren T. Wood

Estimates of the source wavelet and band‐limited earth reflectivity are obtained simultaneously from an optimization of deconvolution outputs, similar to minimum‐entropy deconvolution (MED). The only inputs required beyond the observed seismogram are wavelet length and an inversion parameter (cooling rate). The objective function to be minimized is a measure of the spikiness of the deconvolved seismogram. I assume that the wavelet whose deconvolution from the data results in the most spike‐like trace is the best wavelet estimate. Because this is a highly nonlinear problem, simulated annealing is used to solve it. The procedure yields excellent results on synthetic data and disparate field data sets, is robust in the presence of noise, and is fast enough to operate in a desktop computer environment.
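A minimal sketch of the loop this abstract describes, assuming a varimax-style spikiness norm, stabilized frequency-domain deconvolution, and a geometric cooling schedule (illustrative choices, not necessarily the paper's exact ones):

```python
# Wavelet estimation by maximizing the spikiness of the deconvolved
# trace, in the spirit of the abstract above. The norm, stabilization,
# and cooling schedule are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def spikiness(x):
    """Varimax-style norm: large when energy sits in a few spikes."""
    p = x**2
    return np.sum(p**2) / (np.sum(p)**2 + 1e-12)

def deconvolve(trace, wavelet, eps=1e-3):
    """Frequency-domain deconvolution with white-noise stabilization."""
    n = len(trace)
    W = np.fft.rfft(wavelet, n)
    D = np.fft.rfft(trace, n)
    return np.fft.irfft(D * np.conj(W) / (np.abs(W)**2 + eps), n)

def anneal(trace, nw=32, n_iter=5000, t0=1.0, cooling=0.999):
    """Simulated annealing over wavelet samples; `cooling` is the rate."""
    w = rng.standard_normal(nw) * 0.1
    w[nw // 2] = 1.0                      # start near a spike
    obj = spikiness(deconvolve(trace, w))
    best, best_obj, t = w.copy(), obj, t0
    for _ in range(n_iter):
        cand = w + rng.standard_normal(nw) * 0.05 * t
        cand_obj = spikiness(deconvolve(trace, cand))
        # Metropolis acceptance: always take improvements, sometimes not.
        if cand_obj > obj or rng.random() < np.exp((cand_obj - obj) / t):
            w, obj = cand, cand_obj
            if obj > best_obj:
                best, best_obj = w.copy(), obj
        t *= cooling                      # geometric cooling
    return best
```

A synthetic check would convolve a sparse reflectivity with a known wavelet and verify that the recovered wavelet deconvolves the trace back to near-spikes.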

Geophysics ◽  
2017 ◽  
Vol 82 (5) ◽  
pp. W31-W45 ◽  
Author(s):  
Necati Gülünay

The old technology f-x deconvolution stands for f-x domain prediction filtering. Early versions of it are known to create signal leakage during their application. There have been recent papers in geophysical publications comparing f-x deconvolution results with the new technologies being proposed. These comparisons will be most effective if the best existing f-x deconvolution algorithms are used. This paper describes common f-x deconvolution algorithms and studies the signal leakage occurring during their application on simple models, which will hopefully provide a benchmark for readers in choosing f-x algorithms for comparison. The f-x deconvolution algorithms can be classified by their use of data, which leads to transient or transient-free matrices and hence windowed or nonwindowed autocorrelations, respectively. They can also be classified by the direction in which they predict: forward design and apply; forward design and apply followed by backward design and apply; forward design and apply followed by application of a conjugated forward filter in the backward direction; and simultaneous forward and backward design and apply, known as noncausal filter design. All of the algorithm types mentioned above are tested, and the results of their analysis are provided on noise-free and noisy synthetic data sets: a single dipping event, a single dipping event with a simple amplitude variation with offset, and three dipping events. Finally, the results of applying the selected algorithms to field data are provided.
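For concreteness, here is a sketch of the simplest family above (forward design and apply), with a complex prediction filter designed per frequency slice by damped least squares; the filter length, damping, and pass-through of short slices are assumptions:

```python
# Illustrative f-x prediction filtering, forward design-and-apply only.
import numpy as np

def fx_decon(data, flt_len=4, eps=1e-4):
    """data: (n_t, n_x) gather. Returns the predictable (signal) part."""
    n_t, n_x = data.shape
    spec = np.fft.rfft(data, axis=0)           # t-x -> f-x domain
    out = np.zeros_like(spec)
    for k in range(spec.shape[0]):             # each frequency slice
        s = spec[k]
        rows = n_x - flt_len
        if rows < flt_len:                     # too few traces: pass through
            out[k] = s
            continue
        # Design matrix: predict trace i from the flt_len traces before it.
        A = np.array([s[i:i + flt_len] for i in range(rows)])
        b = s[flt_len:]
        g = np.linalg.solve(A.conj().T @ A + eps * np.eye(flt_len),
                            A.conj().T @ b)    # damped least squares
        pred = np.zeros(n_x, dtype=complex)
        pred[flt_len:] = A @ g                 # forward prediction only;
        out[k] = pred                          # edge traces stay unpredicted
    return np.fft.irfft(out, n=n_t, axis=0)    # back to t-x
```

The backward and conjugate-backward variants discussed above differ mainly in how a second pass reuses or redesigns the filter.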


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. V213-V225 ◽  
Author(s):  
Shaohuan Zu ◽  
Hui Zhou ◽  
Yangkang Chen ◽  
Shan Qu ◽  
Xiaofeng Zou ◽  
...  

We have designed a periodically varying code that avoids the problem of local coherency and makes the interference distribute uniformly over a given range; hence, it is better at suppressing incoherent interference (blending noise) and preserving coherent useful signals than a random dithering code. We have also devised a new form of the iterative method to remove the interference generated by simultaneous-source acquisition. In each iteration, we estimate the interference using the blending operator following the proposed formula and then subtract the interference from the pseudodeblended data. To further eliminate the incoherent interference and constrain the inversion, the data are then transformed to an auxiliary sparse domain, where a thresholding operator is applied. During the iterations, the threshold decreases from its largest value to zero following an exponential function. The exponentially decreasing threshold aims to gradually pass the deblended data to a more acceptable model subspace. Two numerically blended synthetic data sets and one numerically blended field data set from an ocean-bottom cable demonstrate the usefulness of the proposed method and the better performance of the periodically varying code over the traditional random dithering code.
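A minimal sketch of the iteration, assuming a 2D FFT as the auxiliary sparse domain and a hypothetical `blend` callable that models the crosstalk; the exponentially decaying threshold follows the abstract:

```python
# Iterative deblending: estimate interference, subtract it from the
# pseudodeblended data, threshold in a sparse domain, repeat.
import numpy as np

def deblend(pseudo, blend, n_iter=50, tau_min_frac=1e-3):
    """pseudo: pseudodeblended gather; blend(x): predicted crosstalk of x."""
    model = np.zeros_like(pseudo)
    tau0 = None
    for i in range(n_iter):
        residual = pseudo - blend(model)       # interference removed
        coeff = np.fft.fft2(residual)          # to the sparse domain
        if tau0 is None:
            tau0 = 0.9 * np.abs(coeff).max()
        # Exponential decay from tau0 toward ~0 over the iterations.
        tau = tau0 * np.exp(np.log(tau_min_frac) * i / (n_iter - 1))
        coeff[np.abs(coeff) < tau] = 0.0       # hard thresholding
        model = np.real(np.fft.ifft2(coeff))
    return model
```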


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. S87-S100 ◽  
Author(s):  
Hao Hu ◽  
Yike Liu ◽  
Yingcai Zheng ◽  
Xuejian Liu ◽  
Huiyi Lu

Least-squares migration (LSM) can effectively mitigate the limitations of finite seismic acquisition, balance the subsurface illumination, and improve the spatial resolution of the image, but it requires iterations of migration and demigration to obtain the desired subsurface reflectivity model. The computational efficiency and accuracy of the migration and demigration operators are therefore crucial. We have tested the feasibility of using Gaussian beams as the wavefield extrapolation operator for LSM, an approach denoted least-squares Gaussian beam migration. The method combines the advantages of LSM with the efficiency of the Gaussian beam propagator. Numerical evaluations, including two synthetic data sets and one marine field data set, illustrate that the proposed approach yields amplitude-balanced images and broadens the bandwidth of the migrated images, in particular for the low-wavenumber components.
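The LSM framework the abstract builds on is compact to state; below, `L` and `Lt` are hypothetical demigration and migration callables (Gaussian-beam operators in the paper), and steepest descent with an exact line search stands in for the authors' solver:

```python
# Generic least-squares migration: minimize ||L m - d||^2 over the
# reflectivity m by iterating demigration (L) and migration (Lt).
import numpy as np

def lsm(d, L, Lt, n_iter=20):
    m = Lt(d)                                   # migrated image as start
    for _ in range(n_iter):
        r = L(m) - d                            # demigrate; data residual
        g = Lt(r)                               # migrate the residual
        Lg = L(g)
        alpha = np.vdot(g, g) / (np.vdot(Lg, Lg) + 1e-12)  # line search
        m = m - np.real(alpha) * g              # descent step
    return m
```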


Geophysics ◽  
1994 ◽  
Vol 59 (6) ◽  
pp. 938-945 ◽  
Author(s):  
Mauricio D. Sacchi ◽  
Danilo R. Velis ◽  
Alberto H. Comínguez

A method for reconstructing the reflectivity spectrum using the minimum-entropy criterion is presented. The algorithm (FMED) is compared with classical minimum-entropy deconvolution (MED) as well as with the linear-programming (LP) and autoregressive (AR) approaches. MED maximizes an entropy norm with respect to the coefficients of a linear operator that deconvolves the seismic trace. By comparison, the approach presented here maximizes the norm with respect to the missing frequencies of the reflectivity series spectrum. This procedure reduces to a nonlinear algorithm that can deconvolve band-limited data, avoiding the inherent limitations of linear operators. The proposed method is illustrated with a variety of synthetic examples, and field data are also used to test the algorithm. The results show that the proposed method is an effective way to process band-limited data. The FMED and LP methods arise from similar conceptions: both seek an extremum of a particular norm subject to frequency constraints. The LP approach solves the linear-programming problem using an adaptation of the simplex method, which is a very expensive procedure. FMED uses only two fast Fourier transforms (FFTs) per iteration; hence, the computational cost of the inversion is reduced.
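A loose stand-in for the two-FFT-per-iteration structure, written as band-constrained spectral extrapolation; the cubing step below promotes spikiness and is only an illustrative proxy for the entropy-norm update in the paper:

```python
# FMED-like iteration: update the missing frequencies while honoring
# the observed band; one inverse and one forward FFT per iteration.
import numpy as np

def fmed_like(trace, band_mask, n_iter=100):
    """band_mask: boolean over rfft bins, True where data were observed."""
    spec_obs = np.fft.rfft(trace)
    spec = spec_obs.copy()
    for _ in range(n_iter):
        r = np.fft.irfft(spec, len(trace))     # FFT 1: to time
        r = r**3                               # spikiness-promoting step
        r /= np.max(np.abs(r)) + 1e-12         # keep amplitudes bounded
        spec = np.fft.rfft(r)                  # FFT 2: to frequency
        spec[band_mask] = spec_obs[band_mask]  # enforce the observed band
    return np.fft.irfft(spec, len(trace))
```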


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. S197-S205 ◽  
Author(s):  
Zhaolun Liu ◽  
Abdullah AlTheyab ◽  
Sherif M. Hanafy ◽  
Gerard Schuster

We have developed a methodology for detecting near-surface heterogeneities by naturally migrating backscattered surface waves in controlled-source data. The near-surface heterogeneities must be located within a depth of approximately one-third the dominant wavelength λ of the strong surface-wave arrivals. This natural migration method does not require knowledge of the near-surface phase-velocity distribution because it uses the recorded data to approximate the Green’s functions for migration. Prior to migration, the backscattered data are separated from the original records, and the band-pass filtered data are migrated to give an estimate of the migration image at a depth of approximately one-third λ. Each band-passed data set gives a migration image at a different depth. Results with synthetic data and field data recorded over known faults validate the effectiveness of this method. Migrating the surface waves in recorded 2D and 3D data sets accurately reveals the locations of known faults. The limitation of this method is that it requires a dense array of receivers with a geophone interval less than approximately one-half λ.
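Schematically, natural migration correlates the backscattered data with recorded traces that stand in for the Green's functions; the sketch below is a deliberately simplified dot-product imaging condition with assumed array shapes:

```python
# Natural migration of backscattered surface waves, highly schematic:
# the recorded data approximate the Green's functions, so no velocity
# model is needed. Shapes and indexing are illustrative assumptions.
import numpy as np

def natural_migration(backscatter, records):
    """backscatter, records: (n_src, n_rec, n_t) arrays.
    Each receiver location doubles as a trial scatterer position."""
    n_src, n_rec, n_t = records.shape
    image = np.zeros(n_rec)
    for x in range(n_rec):
        # Zero-lag correlation of the backscattered field with the
        # trace recorded at x (the data-derived Green's function).
        image[x] = np.sum(backscatter * records[:, [x], :])
    return image
```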


Geophysics ◽  
2008 ◽  
Vol 73 (1) ◽  
pp. R1-R9 ◽  
Author(s):  
Danilo R. Velis

Sparse-spike deconvolution can be viewed as an inverse problem in which the locations and amplitudes of a number of spikes (reflectivity) are estimated from noisy data (seismic traces). The main objective is to find the least number of spikes that, when convolved with the available band-limited seismic wavelet estimate, fit the data within a given tolerance error (misfit). The detection of the spikes’ time lags is a highly nonlinear optimization problem that can be solved using very fast simulated annealing (SA). Amplitudes are easily estimated using linear least squares at each SA iteration. At this stage, quadratic regularization is used to stabilize the solution, to reduce its nonuniqueness, and to provide meaningful reflectivity sequences, thus avoiding the need to constrain the spikes’ time lags and/or amplitudes to force valid solutions. Impedance constraints can also be included at this stage, providing the low frequencies required to recover the acoustic impedance. One advantage of the proposed method over other sparse-spike deconvolution techniques is that the uncertainty of the obtained solutions can be estimated stochastically. Further, errors in the phase of the wavelet estimate are tolerated, because an optimum constant-phase shift is obtained to calibrate the effective wavelet present in the data. Results using synthetic data (including simulated data for the Marmousi2 model) and 3D field data show that physically meaningful high-resolution sparse-spike sections can be derived from band-limited noisy data, even when the available wavelet estimate is inaccurate.
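The linear inner step is easy to sketch: with candidate spike lags fixed by the annealing search, amplitudes follow from damped least squares (the regularization weight `mu` is an assumption):

```python
# Amplitude estimation for fixed spike time lags, as in each SA step.
import numpy as np

def spike_amplitudes(trace, wavelet, lags, mu=1e-2):
    """Solve min ||G a - trace||^2 + mu ||a||^2 for amplitudes a."""
    n = len(trace)
    G = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):             # one shifted wavelet per spike
        m = min(len(wavelet), n - lag)
        G[lag:lag + m, j] = wavelet[:m]
    a = np.linalg.solve(G.T @ G + mu * np.eye(len(lags)), G.T @ trace)
    return a, G @ a                            # amplitudes, fitted trace
```

The outer annealing loop then scores each candidate set of lags by the misfit between `G @ a` and the trace.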


2017 ◽  
Author(s):  
Ankit Agrawal ◽  
Snehal V. Sambare ◽  
Leelavati Narlikar ◽  
Rahul Siddharthan

We present THiCweed, a new approach to analyzing transcription factor binding data from high-throughput chromatin-immunoprecipitation sequencing (ChIP-seq) experiments. THiCweed clusters bound regions using a divisive hierarchical clustering approach based on sequence similarity within sliding windows, exploring both strands. THiCweed is specially geared towards data containing mixtures of motifs, which present a challenge to traditional motif-finders. Our implementation is significantly faster than standard motif-finding programs, able to process 30,000 peaks in 1-2 hours on a single CPU core of a desktop computer. On synthetic data containing mixtures of motifs, it is as accurate as or more accurate than all other tested programs. THiCweed performs best with large “window” sizes (≥50 bp), much longer than typical binding sites (7-15 base pairs). On real data it successfully recovers literature motifs, but it also uncovers complex sequence characteristics in flanking DNA, variant motifs, and secondary motifs even when they occur in <5% of the input, all of which appear biologically relevant. We also find recurring sequence patterns across diverse ChIP-seq data sets, possibly related to chromatin architecture and looping. THiCweed thus goes beyond traditional motif-finding to give new insights into genomic TF binding complexity.
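As a toy illustration of the strand-aware, window-based similarity that drives such clustering (match counting here is an assumption, not THiCweed's actual scoring):

```python
# Best-matching sliding-window similarity between two sequences,
# considering both strands of the second sequence.
def revcomp(s):
    return s.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def window_similarity(a, b, w=50):
    """Max match count over all length-w window pairs of a and b."""
    best = 0
    for bb in (b, revcomp(b)):                 # forward and reverse strand
        for i in range(len(a) - w + 1):
            for j in range(len(bb) - w + 1):
                score = sum(x == y for x, y in zip(a[i:i + w], bb[j:j + w]))
                best = max(best, score)
    return best
```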


Geophysics ◽  
2009 ◽  
Vol 74 (1) ◽  
pp. E75-E91 ◽  
Author(s):  
Gong Li Wang ◽  
Carlos Torres-Verdín ◽  
Jesús M. Salazar ◽  
Benjamin Voss

In addition to reliability and stability, the efficiency and expediency of inversion methods have long been a strong concern for their routine application by well-log interpreters. We have developed and successfully validated a new inversion method to estimate 2D parametric spatial distributions of electrical resistivity from array-induction measurements acquired in a vertical well. The central component of the method is an efficient approximation to the Fréchet derivatives in which both the incident and adjoint fields are precomputed and kept unchanged during inversion. To further enhance the overall efficiency of the inversion, we combined the new approximation with both the improved numerical mode-matching method and domain decomposition. Examples of application with synthetic data sets show that the new method is computationally efficient and capable of retrieving the original model resistivities even in the presence of noise, performing equally well at both high and low contrasts of formation resistivity. In thin resistive beds, the new inversion method estimates more accurate resistivities than standard commercial deconvolution software. We also considered examples of application with field data sets, which confirm that the new method can successfully process a large data set that includes 200 beds in approximately [Formula: see text] of CPU time on a desktop computer. In addition to 2D parametric spatial distributions of electrical resistivity, the new inversion method provides a qualitative indicator of the uncertainty of the estimated parameters based on the estimator’s covariance matrix. The uncertainty estimator provides a qualitative measure of the nonuniqueness of the estimated resistivity parameters when the data misfit lies within the measurement error (noise).
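The parametric Gauss-Newton update and the covariance-based uncertainty indicator can be sketched as follows; `forward` and `jacobian` (the latter standing in for the fixed-field Fréchet approximation) are hypothetical callables:

```python
# One damped Gauss-Newton step and a crude covariance-based
# uncertainty indicator for the estimated parameters.
import numpy as np

def gauss_newton_step(p, d_obs, forward, jacobian, lam=1e-2):
    J = jacobian(p)                            # approximate Frechet derivatives
    r = d_obs - forward(p)                     # data residual
    H = J.T @ J + lam * np.eye(len(p))         # damped normal equations
    dp = np.linalg.solve(H, J.T @ r)
    return p + dp, H, r

def parameter_uncertainty(H, r, n_data):
    """Sqrt of the diagonal of the estimator covariance (qualitative)."""
    sigma2 = np.dot(r, r) / max(n_data - 1, 1) # crude noise variance
    return np.sqrt(np.diag(sigma2 * np.linalg.inv(H)))
```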


Geophysics ◽  
2021 ◽  
pp. 1-47
Author(s):  
N. A. Vinard ◽  
G. G. Drijkoningen ◽  
D. J. Verschuur

Hydraulic fracturing plays an important role in the extraction of resources from unconventional reservoirs. The microseismic activity arising during hydraulic fracturing operations needs to be monitored both to improve productivity and to make decisions about mitigation measures. Recently, deep learning methods have been investigated to localize earthquakes given field-data waveforms as input. For optimal results, these methods require large field data sets that cover the entire region of interest. In practice, such data sets are often scarce. To overcome this shortcoming, we propose initially to use a (large) synthetic data set with full waveforms to train a U-Net that reconstructs the source location as a 3D Gaussian distribution. As the field data set for our study, we use data recorded during hydraulic fracturing operations in Texas. Synthetic waveforms were modeled using a velocity model from the site that was also used for a conventional diffraction-stacking (DS) approach. To increase the U-Net's ability to localize seismic events, we augmented the synthetic data with different techniques, including the addition of field noise. We select the best-performing U-Net using 22 events that were previously identified as confidently localized by DS and apply that U-Net to all 1245 events. We compare our predicted locations to the DS locations and to the DS locations refined by a relative location (DSRL) method. The U-Net-based locations are better constrained in depth than those from DS, and the mean hypocenter difference with respect to the DSRL locations is 163 m. This shows the potential of using synthetic data to complement or replace field data for training. Furthermore, after training, the method returns the source locations in near real time given the full waveforms, alleviating the need to pick arrival times.
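The training-label construction implied above is simple to sketch: encode the hypocenter as a 3D Gaussian on the model grid (grid shape and width `sigma` are assumptions); at inference, the predicted location is the argmax of the network output:

```python
# Build a 3D Gaussian training label centered on the true hypocenter.
import numpy as np

def gaussian_label(shape, hypocenter, sigma=2.0):
    """shape: (nz, ny, nx) grid; hypocenter: (iz, iy, ix) grid indices."""
    zz, yy, xx = np.indices(shape)
    d2 = ((zz - hypocenter[0])**2 + (yy - hypocenter[1])**2
          + (xx - hypocenter[2])**2)
    label = np.exp(-d2 / (2.0 * sigma**2))
    return label / label.max()                 # peak of 1 at the source

def predicted_location(output):
    """Recover the hypocenter as the argmax of the U-Net output volume."""
    return np.unravel_index(np.argmax(output), output.shape)
```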


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
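A schematic of the classical bootstrap (BS) idea, with a generic multiplicative-update NMF standing in for ME-2's weighted least-squares engine; matching factors across bootstrap runs, which EPA PMF handles explicitly, is glossed over here:

```python
# Bootstrap uncertainty for a nonnegative factor model X ~ G @ F.
import numpy as np

rng = np.random.default_rng(1)

def nmf(X, k, n_iter=200, eps=1e-9):
    n, m = X.shape
    G, F = rng.random((n, k)), rng.random((k, m))
    for _ in range(n_iter):                    # Lee-Seung updates
        F *= (G.T @ X) / (G.T @ G @ F + eps)
        G *= (X @ F.T) / (G @ F @ F.T + eps)
    return G, F

def bootstrap_profiles(X, k, n_boot=50):
    """Refit on resampled rows; the spread of F reflects BS uncertainty."""
    profiles = []
    for _ in range(n_boot):
        idx = rng.integers(0, X.shape[0], X.shape[0])  # resample samples
        profiles.append(nmf(X[idx], k)[1])
    return np.array(profiles)                  # (n_boot, k, m)
```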

