Minimum entropy deconvolution with frequency‐domain constraints

Geophysics ◽ 1994 ◽ Vol 59 (6) ◽ pp. 938-945
Author(s): Mauricio D. Sacchi ◽ Danilo R. Velis ◽ Alberto H. Comínguez

A method for reconstructing the reflectivity spectrum using the minimum entropy criterion is presented. The algorithm (FMED) described is compared with classical minimum entropy deconvolution (MED) as well as with the linear programming (LP) and autoregressive (AR) approaches. The MED is performed by maximizing an entropy norm with respect to the coefficients of a linear operator that deconvolves the seismic trace. By comparison, the approach presented here maximizes the norm with respect to the missing frequencies of the reflectivity series spectrum. This procedure reduces to a nonlinear algorithm that is able to carry out the deconvolution of band-limited data, avoiding the inherent limitations of linear operators. The proposed method is illustrated with a variety of synthetic examples, and field data are also used to test the algorithm. The results show that the proposed method is an effective way to process band-limited data. The FMED and the LP arise from similar conceptions: both methods seek an extremum of a particular norm subject to frequency constraints. In the LP approach, the problem is solved using an adaptation of the simplex method, which is a very expensive procedure. The FMED uses only two fast Fourier transforms (FFTs) per iteration; hence, the computational cost of the inversion is reduced.
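The following is a minimal Python sketch of the idea, assuming a varimax-style entropy norm and a Gerchberg-type spectral-extrapolation loop; the function names, the cubing nonlinearity, and the energy renormalization are illustrative choices rather than the authors' FMED algorithm, although the loop shares the two-FFTs-per-iteration cost structure noted in the abstract.

```python
import numpy as np

def varimax_norm(x):
    """Varimax-style entropy norm: larger values indicate a spikier series."""
    x = np.asarray(x, dtype=float)
    return np.sum(x**4) / (np.sum(x**2)**2 + 1e-12)

def band_limited_spiking_deconvolution(trace, band_mask, n_iter=50):
    """Illustrative reconstruction of missing frequencies from band-limited data.

    trace     : observed band-limited trace
    band_mask : boolean array, True at frequencies where the spectrum is known
    Each pass uses one forward and one inverse FFT (two FFTs per iteration).
    """
    known_spectrum = np.fft.fft(trace) * band_mask
    x = trace.copy()
    for _ in range(n_iter):
        y = x**3                                    # spikiness-promoting nonlinearity
        Y = np.fft.fft(y)
        Y = np.where(band_mask, known_spectrum, Y)  # re-impose observed frequencies
        x = np.real(np.fft.ifft(Y))
        x *= np.linalg.norm(trace) / (np.linalg.norm(x) + 1e-12)  # keep energy fixed
    return x
```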

Geophysics ◽ 1999 ◽ Vol 64 (4) ◽ pp. 1108-1115
Author(s): Warren T. Wood

Estimates of the source wavelet and band-limited earth reflectivity are obtained simultaneously from an optimization of deconvolution outputs, similar to minimum-entropy deconvolution (MED). The only inputs required beyond the observed seismogram are the wavelet length and an inversion parameter (the cooling rate). The objective function to be minimized is a measure of the spikiness of the deconvolved seismogram. I assume that the wavelet whose deconvolution from the data results in the most spike-like trace is the best wavelet estimate. Because this is a highly nonlinear problem, simulated annealing is used to solve it. The procedure yields excellent results on synthetic data and on disparate field data sets, is robust in the presence of noise, and is fast enough to run in a desktop computer environment.
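Below is a hedged Python sketch of the general scheme, assuming a varimax-style spikiness measure, water-level frequency-domain deconvolution, and a basic Metropolis-type annealing loop; the perturbation size, water level, and cooling schedule are illustrative assumptions, not the parameters used in the paper.

```python
import numpy as np

def spikiness(x):
    """Varimax-style measure: larger means more spike-like."""
    return np.sum(x**4) / (np.sum(x**2)**2 + 1e-12)

def deconvolve(trace, wavelet, water_level=1e-3):
    """Frequency-domain deconvolution with a simple water-level stabilizer."""
    n = len(trace)
    W = np.fft.fft(wavelet, n)
    D = np.fft.fft(trace, n)
    denom = np.abs(W)**2 + water_level * np.max(np.abs(W)**2)
    return np.real(np.fft.ifft(D * np.conj(W) / denom))

def anneal_wavelet(trace, wavelet_length, n_steps=5000, t0=1.0, cooling=0.999, seed=0):
    """Search for the wavelet whose deconvolution yields the spikiest trace."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(wavelet_length)
    obj = -spikiness(deconvolve(trace, w))
    best_w, best_obj, temp = w.copy(), obj, t0
    for _ in range(n_steps):
        w_new = w + 0.1 * rng.standard_normal(wavelet_length)  # random perturbation
        obj_new = -spikiness(deconvolve(trace, w_new))
        # Metropolis rule: always accept improvements, sometimes accept worse moves.
        if obj_new < obj or rng.random() < np.exp((obj - obj_new) / temp):
            w, obj = w_new, obj_new
            if obj < best_obj:
                best_w, best_obj = w.copy(), obj
        temp *= cooling  # the "cooling rate" inversion parameter
    return best_w
```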


Geophysics ◽ 2017 ◽ Vol 82 (1) ◽ pp. S51-S59
Author(s): Dmitrii Merzlikin ◽ Sergey Fomel

Diffraction imaging aims to emphasize small subsurface objects, such as faults, fracture swarms, and channels. As in classical reflection imaging, velocity analysis is crucial for accurate diffraction imaging. Path-summation migration provides an imaging method that produces an image of the subsurface without picking a velocity model. Previous methods of path-summation imaging involve a discrete summation of the images corresponding to all possible migration velocity distributions within a predefined integration range and thus incur a significant computational cost. We have developed a direct analytical formula for path-summation imaging based on the continuous integration of the images along the velocity dimension, which reduces the cost to that of only two fast Fourier transforms. The analytical approach also enables automatic migration velocity extraction from diffractions using a double path-summation migration framework. Synthetic and field data examples confirm the efficiency of the proposed techniques.
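For orientation, the path-summation image can be written as a continuous integral over migration velocity; previous methods evaluate the discrete sum on the right, whereas the contribution described above evaluates the continuous integral analytically at the cost of two FFTs. The notation below is illustrative: M(x; v) denotes the image migrated with a constant velocity v over the integration range [v_a, v_b].

```latex
\begin{equation}
  I(\mathbf{x}) \;=\; \int_{v_a}^{v_b} M(\mathbf{x};\, v)\,\mathrm{d}v
  \;\approx\; \sum_{k=1}^{N} M(\mathbf{x};\, v_k)\,\Delta v .
\end{equation}
```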


Author(s): Nils Weidmann ◽ Anthony Anjorin

In the field of Model-Driven Engineering, Triple Graph Grammars (TGGs) play an important role as a rule-based means of implementing consistency management. From a declarative specification of a consistency relation, several operations, including forward and backward transformations, (concurrent) synchronisation, and consistency checks, can be automatically derived. For TGGs to be applicable in realistic application scenarios, expressiveness in terms of supported language features is very important. A TGG tool is schema compliant if it can take domain constraints, such as multiplicity constraints in a meta-model, into account when performing consistency management tasks. To guarantee schema compliance, most TGG tools allow application conditions to be attached as necessary to relevant rules. This strategy is problematic for at least two reasons. First, ensuring compliance with a sufficiently expressive schema for all previously mentioned derived operations is still an open challenge; to the best of our knowledge, all existing TGG tools support only a very restricted subset of application conditions. Second, it is conceptually demanding for the user to indirectly specify domain constraints as application conditions, especially because this has to be completely revisited every time the TGG or a domain constraint is changed. While domain constraints can in theory be automatically transformed to obtain the required set of application conditions, this has only been achieved for TGGs with a very limited subset of domain constraints. To address these limitations, this paper proposes a search-based strategy for achieving schema compliance. We show that all correctness and completeness properties, previously proven in a setting without domain constraints, still hold when schema compliance is to be additionally guaranteed. An implementation and experimental evaluation are provided to support our claim of practical applicability.


Author(s): Daniel Blatter ◽ Anandaroop Ray ◽ Kerry Key

Bayesian inversion of electromagnetic data produces crucial uncertainty information on inferred subsurface resistivity. Due to their high computational cost, however, Bayesian inverse methods have largely been restricted to computationally expedient 1D resistivity models. In this study, we successfully demonstrate, for the first time, a fully 2D, trans-dimensional Bayesian inversion of magnetotelluric data. We render this problem computationally tractable by using a stochastic interpolation algorithm known as a Gaussian process to achieve a parsimonious parametrization of the model vis-a-vis the dense parameter grids used in numerical forward-modeling codes. The Gaussian process links a trans-dimensional, parallel-tempered Markov chain Monte Carlo sampler, which explores the parsimonious model space, to MARE2DEM, an adaptive finite-element forward solver. MARE2DEM computes the model response using a dense parameter mesh with resistivity assigned via the Gaussian process model. We demonstrate the new trans-dimensional Gaussian process sampler by inverting both synthetic and field magnetotelluric data for 2D models of electrical resistivity, with the field data example converging within 10 days on 148 cores, a non-negligible but tractable computational cost. For the field data inversion, our algorithm achieves a parameter reduction of over 32x compared with the fixed parameter grid used for the MARE2DEM regularized inversion. Resistivity probability distributions computed from the ensemble of models produced by the inversion yield credible intervals and interquartile plots that quantitatively show the nonlinear 2D uncertainty in model structure. This uncertainty could then be propagated to other physical properties that impact resistivity, including bulk composition, porosity, and pore-fluid content.
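As a rough illustration of the parametrization step, the sketch below performs plain Gaussian-process mean interpolation from a handful of sparse nodes onto a dense mesh; the squared-exponential kernel, its hyperparameters, and the log-resistivity convention are assumptions made for illustration and are not tied to the MARE2DEM setup or the authors' sampler.

```python
import numpy as np

def sq_exp_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 2D points."""
    d2 = np.sum((A[:, None, :] - B[None, :, :])**2, axis=-1)
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_interpolate(node_xy, node_logrho, mesh_xy, length_scale=2.0, noise=1e-6):
    """Map sparse node values (the trans-dimensional parameters) onto a dense mesh.

    node_xy     : (m, 2) positions of the sparse nodes proposed by the sampler
    node_logrho : (m,)   log-resistivity values at those nodes
    mesh_xy     : (n, 2) cell centers of the dense forward-modeling mesh
    Returns the GP mean prediction of log-resistivity on the dense mesh.
    """
    K_nn = sq_exp_kernel(node_xy, node_xy, length_scale) + noise * np.eye(len(node_xy))
    K_mn = sq_exp_kernel(mesh_xy, node_xy, length_scale)
    return K_mn @ np.linalg.solve(K_nn, node_logrho)
```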


Geophysics ◽ 2014 ◽ Vol 79 (1) ◽ pp. IM1-IM9
Author(s): Nathan Leon Foks ◽ Richard Krahenbuhl ◽ Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, because most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology whereby the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using only 1%–5% of the data.
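A minimal Python sketch of the idea follows, using a local-gradient threshold as an illustrative stand-in for the adaptive sampling criterion; the stride and percentile threshold are arbitrary assumptions.

```python
import numpy as np

def adaptive_downsample(data, keep_stride=4, threshold=None):
    """Illustrative adaptive downsampling of gridded potential-field data.

    Keeps every sample where local variability is high (anomalies) and only
    every keep_stride-th sample in smooth or quiet regions.
    data : 2D array of gridded magnetic (or gravity) values
    """
    gy, gx = np.gradient(data)
    roughness = np.hypot(gx, gy)
    if threshold is None:
        threshold = np.percentile(roughness, 90)  # keep the roughest ~10% densely

    keep = roughness >= threshold                 # dense coverage of anomalies
    coarse = np.zeros_like(keep)
    coarse[::keep_stride, ::keep_stride] = True   # sparse coverage of quiet regions
    mask = keep | coarse

    rows, cols = np.nonzero(mask)
    return rows, cols, data[mask]                 # retained sample locations and values
```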


Geophysics ◽ 2018 ◽ Vol 83 (2) ◽ pp. V99-V113
Author(s): Zhong-Xiao Li ◽ Zhen-Chun Li

After multiple prediction, adaptive multiple subtraction is essential for the success of multiple removal. The 3D blind separation of convolved mixtures (3D BSCM) method, which is effective for adaptive multiple subtraction, requires solving an optimization problem containing L1-norm minimization constraints on primaries with the iterative reweighted least-squares (IRLS) algorithm. The 3D BSCM method can separate primaries and multiples better than the 1D/2D BSCM method and the method with energy minimization constraints on primaries. However, the 3D BSCM method has a high computational cost because the IRLS algorithm achieves nonquadratic optimization by solving an LS optimization problem in each iteration. A faster 3D BSCM method is therefore desirable. To make the method more practical for field data processing, the fast iterative shrinkage-thresholding algorithm (FISTA) is introduced into the 3D BSCM method. The proximity operator of FISTA solves the L1-norm minimization problem efficiently. We demonstrate that our FISTA-based 3D BSCM method achieves accuracy in estimating primaries similar to that of the reference IRLS-based 3D BSCM method. Furthermore, our FISTA-based 3D BSCM method reduces computation time by approximately 60% compared with the reference IRLS-based 3D BSCM method in the synthetic and field data examples.
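For reference, the sketch below shows generic FISTA with the L1 proximity operator (soft thresholding) applied to a least-squares misfit; the matrix A and the regularization weight are placeholders rather than the 3D BSCM operators, so this illustrates only the algorithmic ingredient introduced in the abstract.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximity operator of the L1 norm (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista_l1(A, b, lam, n_iter=200):
    """Minimize 0.5*||A x - b||_2^2 + lam*||x||_1 with FISTA."""
    L = np.linalg.norm(A, 2)**2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)   # proximal gradient step
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t**2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```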


2021 ◽ Vol 28 (2) ◽ pp. 163-182
Author(s): José L. Simancas-García ◽ Kemel George-González

Shannon’s sampling theorem is one of the most important results of modern signal theory. It describes the reconstruction of any band-limited signal from its uniformly spaced samples, provided the sampling rate is at least twice the signal bandwidth. On the other hand, although less well known, there is the discrete sampling theorem, proved by Cooley while he was working on the development of an algorithm to speed up the calculation of the discrete Fourier transform. Cooley showed that a sampled signal can be resampled by selecting a smaller number of samples, which reduces computational cost, and that the original sampled signal can then be reconstructed using a reverse process. In principle, the two theorems are not related. However, in this paper we show that in the context of Non-Standard Analysis (NSA) and the hyperreal number system R, the two theorems are equivalent. The difference between them becomes a matter of scale. With the scale changes that the hyperreal number system allows, the discrete variables and functions become continuous, and Shannon’s sampling theorem emerges from the discrete sampling theorem.
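For completeness, Shannon’s reconstruction formula for a band-limited signal with bandwidth B, sampled at interval T ≤ 1/(2B), is:

```latex
\begin{equation}
  x(t) \;=\; \sum_{n=-\infty}^{\infty} x(nT)\,
             \operatorname{sinc}\!\left(\frac{t - nT}{T}\right),
  \qquad \operatorname{sinc}(u) = \frac{\sin(\pi u)}{\pi u}.
\end{equation}
```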


Author(s): Tobias Leibner ◽ Mario Ohlberger

In this contribution we derive and analyze a new numerical method for kinetic equations based on a variable transformation of the moment approximation. Classical minimum-entropy moment closures are a class of reduced models for kinetic equations that conserve many of the fundamental physical properties of solutions. However, their practical use is limited by their high computational cost, as an optimization problem has to be solved for every cell in the space-time grid. In addition, the implementation of numerical solvers for these models is hampered by the fact that the optimization problems are only well-defined if the moment vectors stay within the realizable set. For the same reason, further reducing these models by, e.g., reduced-basis methods is not a simple task. Our new method overcomes these disadvantages of classical approaches. The transformation is performed on the semi-discretized level, which makes it applicable to a wide range of kinetic schemes and replaces the nonlinear optimization problems with the inversion of a positive-definite Hessian matrix. As a result, the new scheme avoids the realizability-related problems. Moreover, a discrete entropy law can be enforced by modifying the time-stepping scheme. Our numerical experiments demonstrate that the new method is often several times faster than the standard optimization-based scheme.
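The structural point can be sketched in a few lines of Python under strong simplifying assumptions (a 1D velocity domain, two moments, a Maxwell-Boltzmann-type exponential ansatz, Gauss-Legendre quadrature); this is not the authors' scheme, but it shows how the transformed update needs only a symmetric positive-definite Hessian solve per step instead of a per-cell entropy minimization.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Illustrative M1-type setting: velocities v in [-1, 1], moment basis m(v) = (1, v),
# exponential ansatz f(v) = exp(alpha . m(v)).
V_NODES, V_WEIGHTS = np.polynomial.legendre.leggauss(40)

def basis(v):
    return np.stack([np.ones_like(v), v])              # shape (2, n_quad)

def moments_and_hessian(alpha):
    """u(alpha) = <m exp(alpha.m)> and H(alpha) = <m m^T exp(alpha.m)>, which is SPD."""
    m = basis(V_NODES)
    f = np.exp(alpha @ m)                               # ansatz density at quadrature nodes
    u = (m * f) @ V_WEIGHTS
    H = (m * (f * V_WEIGHTS)) @ m.T
    return u, H

def transformed_euler_step(alpha, rhs_in_u, dt):
    """One explicit step in the transformed (multiplier) variables.

    The semi-discrete moment update du/dt = rhs becomes H(alpha) dalpha/dt = rhs,
    so each step requires one SPD linear solve instead of an entropy minimization.
    """
    _, H = moments_and_hessian(alpha)
    dalpha = cho_solve(cho_factor(H), rhs_in_u)
    return alpha + dt * dalpha
```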

