Some practical aspects of prestack waveform inversion using a genetic algorithm: An example from the east Texas Woodbine gas sand

Geophysics ◽  
1999 ◽  
Vol 64 (2) ◽  
pp. 326-336 ◽  
Author(s):  
Subhashis Mallick

In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte Carlo-type inversion that uses a natural analogy to the biological process of evolution. When GA is cast into a Bayesian framework, a priori information about the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models that best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas sand data set from East Texas. A comparison of the two demonstrates that prestack inversion reveals detailed stratigraphic features of the subsurface that are not visible in the poststack result.
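The workflow lends itself to a compact sketch. Below is a minimal, illustrative GA in Python (not Mallick's implementation): a hypothetical two-parameter AVO-style forward model stands in for full prestack synthetic modeling, and Gibbs-style weights accumulated over all visited models give a crude estimate of one marginal PPD.

```python
import numpy as np

rng = np.random.default_rng(0)

angles = np.radians(np.arange(0.0, 40.0, 5.0))   # incidence angles
m_true = np.array([0.10, -0.20])                 # intercept, gradient (invented)

def forward(m):
    # Two-term AVO-style approximation: R(theta) = A + B sin^2(theta)
    return m[..., 0:1] + m[..., 1:2] * np.sin(angles) ** 2

data = forward(m_true[None, :])[0] + rng.normal(0, 0.01, angles.size)

lo, hi = -0.5, 0.5                               # a priori bounds on both parameters
n_pop, sigma = 60, 0.01

def neg_log_like(pop):
    resid = forward(pop) - data
    return np.sum(resid ** 2, axis=1) / (2 * sigma ** 2)

pop = rng.uniform(lo, hi, (n_pop, 2))
visited, weights = [], []
for _ in range(80):                              # generations
    E = neg_log_like(pop)
    visited.append(pop.copy())
    weights.append(np.exp(-(E - E.min())))       # crude Gibbs-style PPD weights
    i, j = rng.integers(0, n_pop, (2, n_pop))    # tournament selection
    parents = np.where((E[i] < E[j])[:, None], pop[i], pop[j])
    mates = parents[rng.permutation(n_pop)]      # blend crossover
    alpha = rng.uniform(0, 1, (n_pop, 1))
    pop = alpha * parents + (1 - alpha) * mates
    pop += rng.normal(0, 0.02, pop.shape)        # mutation
    pop = np.clip(pop, lo, hi)

V, W = np.vstack(visited), np.concatenate(weights)
hist, edges = np.histogram(V[:, 1], bins=40, weights=W, density=True)
k = np.argmax(hist)
print("marginal PPD peak for the gradient term: %.3f (true %.2f)"
      % (0.5 * (edges[k] + edges[k + 1]), m_true[1]))
```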

Geophysics ◽  
2010 ◽  
Vol 75 (6) ◽  
pp. WB225-WB234 ◽  
Author(s):  
Juefu Wang ◽  
Mark Ng ◽  
Mike Perz

We propose a greedy inversion method for a spatially localized, high-resolution Radon transform. The kernel of the method is a conventional iterative algorithm, conjugate gradient (CG), but it is applied adaptively in amplitude-prioritized local model spaces. The adaptive inversion introduces a coherence-oriented mechanism that enhances the focusing of significant model parameters, thereby increasing model resolution and convergence rate. We apply this idea to a time-space-domain local linear Radon transform for data interpolation. The local Radon transform iteratively applies spatially localized forward and adjoint Radon operators to fit the input data. Optimal local Radon panels can be found via a subspace algorithm that promotes sparsity in the model, and the missing data can be predicted from the resulting local Radon panels. The subspacing strategy greatly reduces the cost of computing local Radon coefficients and thereby the total cost of inversion. The method can handle irregular and regular geometries and significant spatial aliasing. Using three simple synthetic data sets, we compare the performance of our method with that of a popular interpolation method, minimum weighted norm Fourier interpolation, and show the advantage of the new algorithm in interpolating spatially aliased data. We also test the algorithm on 2D synthetic data and a field data set. Both tests show that the algorithm is a robust antialiasing tool, although it cannot completely recover missing, strongly curved events.
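A toy sketch of the amplitude-prioritized greedy subspace idea (not the authors' code): a small dense time-space/Radon operator built from nearest-sample shifts, adjoint-driven picking of the strongest coefficient, and a least-squares refit standing in for CG within the growing subspace.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny t-x gather (nt samples, nx traces) and a linear Radon dictionary over
# np_ slopes; forward operator L applies nearest-sample time shifts.
nt, nx, np_ = 64, 12, 31
dt, dx = 0.004, 25.0
x = (np.arange(nx) - nx // 2) * dx               # offsets (m)
slopes = np.linspace(-4e-4, 4e-4, np_)           # ray parameters (s/m)

L = np.zeros((nt * nx, nt * np_))                # data idx jt*nx+ix, model idx it*np_+ip
for ip, p in enumerate(slopes):
    for ix in range(nx):
        shift = int(round(p * x[ix] / dt))
        for it in range(nt):
            jt = it + shift
            if 0 <= jt < nt:
                L[jt * nx + ix, it * np_ + ip] = 1.0

# Two sparse linear events in the Radon domain plus noise.
m_true = np.zeros(nt * np_)
m_true[20 * np_ + 8] = 1.0
m_true[35 * np_ + 22] = -0.7
d = L @ m_true + rng.normal(0, 0.01, nt * nx)

# Greedy subspace inversion: the adjoint picks the strongest coefficient, then
# a least-squares refit (standing in for CG) updates the model on that support.
support, r = [], d.copy()
for _ in range(4):
    g = L.T @ r                                  # amplitude-prioritized pick
    g[support] = 0.0
    support.append(int(np.argmax(np.abs(g))))
    m_s, *_ = np.linalg.lstsq(L[:, support], d, rcond=None)
    r = d - L[:, support] @ m_s

print("picked coefficients:", sorted(support))
print("true coefficients:  ", [20 * np_ + 8, 35 * np_ + 22])
```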


Author(s):  
Leila Taghizadeh ◽  
Ahmad Karimi ◽  
Clemens Heitzinger

Abstract. The main goal of this paper is to develop forward and inverse modeling of the Coronavirus (COVID-19) pandemic using novel computational methodologies in order to accurately estimate and predict the pandemic, thereby supporting governmental decisions on effective protective measures and the prevention of new outbreaks. To this end, we use the logistic equation and the SIR system of ordinary differential equations to model the spread of the COVID-19 pandemic. For the inverse modeling, we propose Bayesian inversion techniques, which are robust and reliable approaches, to estimate the unknown parameters of the epidemiological models. We use an adaptive Markov-chain Monte-Carlo (MCMC) algorithm to estimate the a posteriori probability distributions and confidence intervals of the unknown model parameters as well as of the reproduction number. Furthermore, we present a fatality analysis for COVID-19 in Austria, which is also of importance for governmental protective decision making. We perform our analyses on the publicly available data for Austria to estimate the main epidemiological model parameters and to study the effectiveness of the protective measures taken by the Austrian government. The estimated parameters and the analysis of fatalities provide useful information for decision makers and make it possible to perform more realistic forecasts of future outbreaks.
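As a rough illustration of the approach (with invented synthetic counts, not the Austrian data, and a fixed-step random-walk Metropolis sampler in place of the authors' adaptive MCMC), the following sketch fits SIR parameters and reports a posterior interval for the reproduction number.

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)

# SIR dynamics: dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I
def sir(t, y, beta, gamma, N):
    S, I, R = y
    return [-beta * S * I / N, beta * S * I / N - gamma * I, gamma * I]

N, I0, days = 8.9e6, 100.0, 60                   # population (roughly Austria), seed
t_eval = np.arange(days)

def infected(beta, gamma):
    sol = solve_ivp(sir, (0, days), [N - I0, I0, 0.0], t_eval=t_eval,
                    args=(beta, gamma, N), rtol=1e-6)
    return sol.y[1]

beta_t, gamma_t = 0.35, 0.12                     # invented "true" parameters
obs = infected(beta_t, gamma_t) * np.exp(rng.normal(0, 0.05, days))

def log_post(theta):
    beta, gamma = theta
    if not (0.0 < beta < 2.0 and 0.0 < gamma < 1.0):
        return -np.inf                           # flat prior on a bounded box
    resid = np.log(obs) - np.log(infected(beta, gamma) + 1e-9)
    return -0.5 * np.sum(resid ** 2) / 0.05 ** 2

theta = np.array([0.3, 0.1])
lp, chain = log_post(theta), []
for _ in range(5000):                            # random-walk Metropolis
    prop = theta + rng.normal(0, [0.01, 0.005])
    lp_p = log_post(prop)
    if np.log(rng.uniform()) < lp_p - lp:
        theta, lp = prop, lp_p
    chain.append(theta.copy())

samples = np.array(chain)[1000:]                 # discard burn-in
R0 = samples[:, 0] / samples[:, 1]               # reproduction number beta/gamma
print("R0 posterior mean %.2f, 95%% interval (%.2f, %.2f)"
      % (R0.mean(), *np.percentile(R0, [2.5, 97.5])))
```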


2019 ◽  
Vol 7 (2) ◽  
pp. SB23-SB31
Author(s):  
Chang Li ◽  
Mark Meadows ◽  
Todd Dygert

We have developed a new trace-based, warping least-squares inversion method to quantify 4D velocity changes. There are two steps to solve for these velocity changes: (1) dynamic warping with phase constraints to align the baseline and monitor traces and (2) least-squares inversion for 4D velocity changes incorporating the time shifts and 4D amplitude differences (computed after trace alignment by warping). We have demonstrated this new inversion workflow using simple synthetic layered models. For the noise-free case, phase-constrained warping is superior to standard, amplitude-based warping by improving trace alignment, resulting in more accurate inverted velocity changes (less than 1% error). For synthetic data with 6% rms noise, inverted velocity changes are reasonably accurate (less than 10% error). Additional inversion tests with migrated finite-difference data shot over a realistic anticline model result in less than 10% error. The inverted velocity changes on a 4D field data set from the Gulf of Mexico are more interpretable and consistent with the dynamic reservoir model than those estimated from the conventional time-strain method.
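The second step has a compact linear-algebra core. The sketch below skips the warping step by generating time shifts directly from the standard relation Δt(t) = -∫(Δv/v)dt and inverts them for the velocity change with smoothness-regularized least squares; the model, noise level, and regularization weight are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Interval velocity changes dv/v produce cumulative time shifts
# tau(t) = -integral_0^t (dv/v) dt'; discretized, tau = G m with G a scaled
# lower-triangular integration operator.
nt, dt = 200, 0.004
dvv_true = np.zeros(nt)
dvv_true[80:120] = 0.03                          # 3% velocity increase in one zone

G = -dt * np.tril(np.ones((nt, nt)))
tau_obs = G @ dvv_true + rng.normal(0, 5e-5, nt) # shifts from trace alignment

# Smoothness-regularized least squares: min |G m - tau|^2 + lam |D m|^2
D = np.diff(np.eye(nt), 2, axis=0)               # second-difference roughness
lam = 1e-7
m = np.linalg.solve(G.T @ G + lam * D.T @ D, G.T @ tau_obs)
print("mean inverted dv/v in the changed zone: %.3f (true 0.030)"
      % m[80:120].mean())
```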


Geophysics ◽  
2002 ◽  
Vol 67 (6) ◽  
pp. 1753-1768 ◽  
Author(s):  
Yuji Mitsuhata ◽  
Toshihiro Uchida ◽  
Hiroshi Amano

Interpretation of controlled‐source electromagnetic (CSEM) data is usually based on 1‐D inversions, whereas data from direct current (dc) resistivity and magnetotelluric (MT) measurements are commonly interpreted by 2‐D inversions. We have developed an algorithm to invert frequency‐domain vertical magnetic data generated by a grounded‐wire source for a 2‐D model of the earth (a so‐called 2.5‐D inversion). To stabilize the inversion, we adopt a smoothness constraint on the model parameters and adjust the regularization parameter objectively using a statistical criterion. A test using synthetic data from a realistic model reveals that data from a single source are insufficient to recover an acceptable result. In contrast, the joint use of data generated by a left‐side source and a right‐side source dramatically improves the inversion result. We applied our inversion algorithm to a field data set that was transformed from long‐offset transient electromagnetic (LOTEM) data acquired in a Japanese oil and gas field. As with the synthetic data set, the inversion of the joint data set converged automatically and provided a better model than inversion of the data from either source alone. In addition, our 2.5‐D inversion accounted for the reversals in the LOTEM measurements, which is impossible with 1‐D inversions. The shallow parts (above about 1 km depth) of the final model obtained by our 2.5‐D inversion agree well with those of a 2‐D inversion of MT data.
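The idea of tuning the regularization weight with a statistical criterion can be sketched on a generic linear problem. Here generalized cross-validation (GCV) stands in for the paper's criterion, with a first-difference smoothness constraint; all sizes and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear inverse problem d = G m + noise, solved with a smoothness
# constraint; the regularization weight is picked by minimizing the GCV score.
n, p = 80, 60
G = rng.normal(0, 1, (n, p)) / np.sqrt(n)
m_true = np.sin(np.linspace(0, 3 * np.pi, p))    # smooth target model
d = G @ m_true + rng.normal(0, 0.05, n)
D = np.diff(np.eye(p), 1, axis=0)                # first-difference roughness

def solve(lam):
    A = G.T @ G + lam * D.T @ D
    m = np.linalg.solve(A, G.T @ d)
    H = G @ np.linalg.solve(A, G.T)              # influence matrix
    gcv = n * np.sum((d - G @ m) ** 2) / (n - np.trace(H)) ** 2
    return m, gcv

lams = np.logspace(-4, 2, 25)
scores = [solve(l)[1] for l in lams]
lam_best = lams[int(np.argmin(scores))]
m_hat, _ = solve(lam_best)
print("chosen lambda: %.3g, model rms error: %.3f"
      % (lam_best, np.sqrt(np.mean((m_hat - m_true) ** 2))))
```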


2019 ◽  
Vol 68 (1) ◽  
pp. 29-46 ◽  
Author(s):  
Elisabeth Dietze ◽  
Michael Dietze

Abstract. The analysis of grain-size distributions has a long tradition in Quaternary Science and disciplines studying Earth surface and subsurface deposits. The decomposition of multi-modal grain-size distributions into inherent subpopulations, commonly termed end-member modelling analysis (EMMA), is increasingly recognised as a tool to infer the underlying sediment sources, transport and (post-)depositional processes. Most of the existing deterministic EMMA approaches are only able to deliver one out of many possible solutions, thereby ignoring the uncertainty in the model parameters. Here, we provide user-friendly computational protocols that support deterministic as well as robust (i.e. explicitly accounting for incomplete knowledge about input parameters in a probabilistic approach) EMMA, in the free and open software framework of R. In addition, and going beyond previous validation tests, we compare the performance of available grain-size EMMA algorithms using four real-world sediment types, covering a wide range of grain-size distribution shapes (alluvial fan, dune, loess and floodplain deposits). These were randomly mixed in the lab to produce a synthetic data set. Across all algorithms, the original data set was modelled with mean R2 values of 0.868 to 0.995 and mean absolute deviation (MAD) values of 0.06 % vol to 0.34 % vol. The original grain-size distribution shapes were modelled as end-member loadings with mean R2 values of 0.89 to 0.99 and MAD of 0.04 % vol to 0.17 % vol. End-member scores reproduced the original mixing ratios in the synthetic data set with mean R2 values of 0.68 to 0.93 and MAD of 0.1 % vol to 1.6 % vol. Depending on the validation criteria, all models provided reliable estimates of the input data, and each of the models exhibits individual strengths and weaknesses. Only robust EMMA allowed uncertainties of the end-members to be objectively estimated and expert knowledge to be included in the end-member definition. Yet, end-member interpretation should carefully consider the geological and sedimentological meaningfulness in terms of sediment sources, transport and deposition as well as post-depositional alteration of grain sizes. EMMA might also be powerful in other geoscientific contexts where the goal is to unmix sources and processes from compositional data sets.
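For a flavor of end-member unmixing (not the authors' R protocols), the sketch below mixes three synthetic Gaussian "grain-size" end members, then unmixes with plain non-negative matrix factorization via multiplicative updates, a simple non-negative stand-in for EMMA's eigenspace approach.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic data: n samples, each a convex mixture of q Gaussian-shaped
# end-member grain-size distributions over g size classes.
g, q, n = 50, 3, 40
x = np.linspace(0, 10, g)                        # grain-size classes (phi units)
EM = np.exp(-0.5 * ((x[None, :] - np.array([[2.0], [5.0], [8.0]])) / 0.8) ** 2)
EM /= EM.sum(axis=1, keepdims=True)              # rows sum to 1 (distributions)
S = rng.dirichlet(np.ones(q), n)                 # true mixing ratios (scores)
X = np.clip(S @ EM + rng.normal(0, 1e-4, (n, g)), 1e-9, None)

# Unmixing via NMF multiplicative updates (a non-negative stand-in for EMMA).
W = rng.uniform(0.1, 1.0, (n, q))
H = rng.uniform(0.1, 1.0, (q, g))
for _ in range(2000):
    H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-12)

s = H.sum(axis=1)                                # renormalize loadings to
H /= s[:, None]                                  # distributions, scores absorb s
W *= s[None, :]
r2 = 1.0 - np.sum((X - W @ H) ** 2) / np.sum((X - X.mean()) ** 2)
print("variance explained by %d end members: %.4f" % (q, r2))
```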


Geophysics ◽  
2006 ◽  
Vol 71 (3) ◽  
pp. J41-J50 ◽  
Author(s):  
Tim van Zon ◽  
Kabir Roy-Chowdhury

Structural inversion of gravity data (deriving robust images of the subsurface by delineating lithotype boundaries using density anomalies) is an important goal in a range of exploration settings (e.g., ore bodies, salt flanks). Application of conventional inversion techniques in such cases, using L2-norms and regularization, produces smooth results and is thus suboptimal. We investigate an L1-norm-based approach which yields structural images without the need for explicit regularization. The density distribution of the subsurface is modeled with a uniform grid of cells. The density of each cell is inverted by minimizing the L1-norm of the data misfit using linear programming (LP) while satisfying a priori density constraints. The estimate of the noise level in a given data set is used to qualitatively determine an appropriate parameterization. The 2.5D and 3D synthetic tests adequately reconstruct the structure of the test models. The quality of the inversion depends upon a good prior estimate of the minimum depth of the anomalous body. A comparison of our results with those of truncated singular value decomposition (TSVD) on a noisy synthetic data set favors the LP-based method. There are two advantages in using LP for structural inversion of gravity data. First, it offers a natural way to incorporate a priori information regarding the model parameters. Second, it produces subsurface images with sharp boundaries (structure).
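The LP formulation is easy to reproduce on a toy problem. In the sketch below, a schematic kernel (physical constants dropped) maps a row of buried cells to surface gravity, and the L1 misfit is minimized with scipy's linprog under a priori density bounds; the variable layout is [densities, misfit slack variables].

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)

# Toy setup: a row of buried cells at fixed depth mapped to surface gravity by
# a schematic kernel (constants dropped); invert the cell densities.
n_obs, n_cell, depth = 30, 20, 30.0
xo = np.linspace(-100, 100, n_obs)               # observation points (m)
xc = np.linspace(-80, 80, n_cell)                # cell centers (m)
K = depth / ((xo[:, None] - xc[None, :]) ** 2 + depth ** 2) ** 1.5

rho_true = np.zeros(n_cell)
rho_true[8:12] = 500.0                           # anomalous density (kg/m^3)
d = K @ rho_true + rng.normal(0, 0.01, n_obs)

# LP variables z = [rho, t]; minimize sum(t) subject to |K rho - d| <= t,
# with a priori bounds 0 <= rho <= 1000 as the density constraints.
c = np.concatenate([np.zeros(n_cell), np.ones(n_obs)])
A = np.block([[K, -np.eye(n_obs)], [-K, -np.eye(n_obs)]])
b = np.concatenate([d, -d])
bounds = [(0, 1000)] * n_cell + [(0, None)] * n_obs
res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
print("recovered densities (true anomaly 500 in cells 8-11):")
print(np.round(res.x[:n_cell]))
```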


Geophysics ◽  
2003 ◽  
Vol 68 (3) ◽  
pp. 996-1007 ◽  
Author(s):  
Fabio Caratori Tontini ◽  
Osvaldo Faggioni ◽  
Nicolò Beverini ◽  
Cosmo Carmisciano

We describe an inversion method for 3D geomagnetic data based on approximating the source distribution by positivity-constrained Gaussian functions. In this way, smoothness and positivity are automatically imposed on the source without any subjective input from the user apart from the number of functions to use. The algorithm has been tested with synthetic data in order to resolve sources at very different depths, using data from one measurement plane only. The forward modeling is based on a prismatic cell parameterization, but the algebraic nonuniqueness is reduced because a relationship among the cells, expressed by the Gaussian envelope, is assumed to describe the spatial variation of the source distribution. We assume that there is no remanent magnetization and that the magnetic data are produced by induced magnetization only, neglecting any demagnetization effects. The algorithm proceeds by minimizing a χ² misfit function between real and predicted data using a nonlinear Levenberg‐Marquardt iteration scheme, easily implemented on a desktop PC, without any additional regularization. We demonstrate the robustness and utility of the method using synthetic data corrupted by pseudorandom noise and a real field data set.
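A one-Gaussian, profile-only caricature of the approach: cell susceptibilities follow a single Gaussian envelope, a schematic kernel replaces the prismatic forward modeling, and scipy's Levenberg-Marquardt solver fits the envelope parameters. Positivity is not imposed explicitly here, unlike in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)

# Cell susceptibilities along a buried line follow one Gaussian envelope;
# a schematic kernel maps cells (at 10 m depth) to a surface profile.
xo = np.linspace(-50, 50, 60)                    # observation profile (m)
xc = np.linspace(-40, 40, 80)                    # cell centers (m)
K = 10.0 / ((xo[:, None] - xc[None, :]) ** 2 + 10.0 ** 2)

def envelope(p, x):
    A, mu, sig = p                               # amplitude, center, width
    return A * np.exp(-0.5 * ((x - mu) / sig) ** 2)

p_true = [2.0, 5.0, 8.0]
d = K @ envelope(p_true, xc) + rng.normal(0, 0.02, xo.size)

def residual(p):
    return K @ envelope(p, xc) - d

fit = least_squares(residual, x0=[1.0, 0.0, 5.0], method="lm")
print("fitted (A, mu, sigma):", np.round(fit.x, 2), "true:", p_true)
```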


Geophysics ◽  
1985 ◽  
Vol 50 (11) ◽  
pp. 1701-1720 ◽  
Author(s):  
Glyn M. Jones ◽  
D. B. Jovanovich

A new technique is presented for the inversion of head‐wave traveltimes to infer near‐surface structure. Traveltimes computed along intersecting pairs of refracted rays are used to reconstruct the shape of the first refracting horizon beneath the surface and variations in refractor velocity along this boundary. The information derived can be used as the basis for further processing, such as the calculation of near‐surface static delays. One advantage of the method is that the shape of the refractor is determined independently of the refractor velocity. With multifold coverage, rapid lateral changes in refractor geometry or velocity can be mapped. Two examples of the inversion technique are presented: one uses a synthetic data set; the other is drawn from field data shot over a deep graben filled with sediment. The results obtained using the synthetic data validate the method and support the conclusions of an error analysis, in which errors in the refractor velocity determined using receivers to the left and right of the shots are of opposite sign. The true refractor velocity therefore falls between the two sets of estimates. The refraction image obtained by inversion of the set of field data is in good agreement with a constant‐velocity reflection stack and illustrates that the ray inversion method can handle large lateral changes in refractor velocity or relief.
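The error-analysis conclusion (the true refractor velocity is bracketed by the downdip and updip apparent velocities) is easy to verify numerically. The sketch below synthesizes head-wave traveltimes over a planar dipping refractor and averages the apparent slownesses; all velocities and geometry are invented.

```python
import numpy as np

rng = np.random.default_rng(8)

# Head waves over a planar refractor dipping at angle delta: the downdip and
# updip apparent slownesses are sin(theta_c +/- delta)/v1, and their mean
# recovers approximately 1/v2.
v1, v2 = 1500.0, 3000.0                          # overburden / refractor velocity
delta = np.radians(5.0)                          # refractor dip
theta = np.arcsin(v1 / v2)                       # critical angle
h = 50.0                                         # perpendicular depth at the shot
x = np.linspace(200, 1000, 20)                   # offsets (m)

t0 = 2 * h * np.cos(theta) / v1                  # intercept time
t_down = t0 + x * np.sin(theta + delta) / v1 + rng.normal(0, 1e-3, x.size)
t_up = t0 + x * np.sin(theta - delta) / v1 + rng.normal(0, 1e-3, x.size)

sd = np.polyfit(x, t_down, 1)[0]                 # downdip apparent slowness (s/m)
su = np.polyfit(x, t_up, 1)[0]                   # updip apparent slowness
v_est = 2.0 / (sd + su)                          # mean-slowness velocity estimate
print("apparent velocities %.0f / %.0f m/s bracket the estimate %.0f (true %.0f)"
      % (1 / sd, 1 / su, v_est, v2))
```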


2018 ◽  
Vol 48 (2) ◽  
pp. 161-178 ◽  
Author(s):  
Mohammed Tlas ◽  
Jamal Asfahani

Abstract. A simple method is proposed for interpreting residual gravity anomalies due to simple geometric models such as a semi-infinite vertical rod, an infinite horizontal rod, and a sphere. The method is based on quadratic curve regression to estimate the model parameters: the depth from the surface to the center of the buried structure (sphere or infinite horizontal rod) or to its top (semi-infinite vertical rod), the amplitude coefficient, and the horizontal location along the residual gravity anomaly profile. The method was first tested on a synthetic data set contaminated with Gaussian white noise to demonstrate its capability and reliability; the estimated parameter values are very close to the assumed true values. The validity of the method is then demonstrated on a synthetic data set and three real data sets from Cuba, Sweden, and Iran, with acceptable agreement between the results of this method and the available field information.
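For the sphere case, the quadratic-regression trick can be written in a few lines: g(x) = A((x - x0)^2 + z^2)^(-3/2) implies that g^(-2/3) is a quadratic in x, so a single polyfit yields the depth, horizontal location, and amplitude coefficient. The numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(9)

# Sphere anomaly: g^(-2/3) = A^(-2/3) * ((x - x0)^2 + z^2) is quadratic in x.
A_true, x0_true, z_true = 4.0e4, 10.0, 25.0      # amplitude, location, depth
x = np.linspace(-60, 80, 60)
g = A_true * ((x - x0_true) ** 2 + z_true ** 2) ** -1.5
g *= 1 + rng.normal(0, 0.01, x.size)             # 1% multiplicative noise

a, b, c = np.polyfit(x, g ** (-2.0 / 3.0), 2)    # quadratic curve regression
x0 = -b / (2 * a)                                # horizontal location
z = np.sqrt(c / a - x0 ** 2)                     # depth to center
A = a ** -1.5                                    # amplitude coefficient
print("estimated x0 %.1f, depth %.1f, amplitude %.0f" % (x0, z, A))
```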


2009 ◽  
Vol 6 (2) ◽  
pp. 2367-2413
Author(s):  
T. Bulatewicz ◽  
W. Jin ◽  
S. Staggenborg ◽  
S. Lauwo ◽  
M. Miller ◽  
...  

Abstract. Near-term consumption of groundwater for irrigated agriculture in the High Plains Aquifer supports a dynamic bio-socio-economic system, all parts of which will be impacted by a future transition to sustainable usage that matches natural recharge rates. Plants are the foundation of this system and so generic plant models suitable for coupling to representations of other component processes (hydrologic, economic, etc.) are key elements of needed stakeholder decision support systems. This study explores utilization of the Environmental Policy Integrated Climate (EPIC) model to serve in this role. Calibration required many facilities of a fully deployed decision support system: geo-referenced databases of crop (corn, sorghum, alfalfa, and soybean), soil, weather, and water-use data (4931 well-years), interfacing heterogeneous software components, and massively parallel processing (3.8×10⁹ model runs). Bootstrap probability distributions for ten model parameters were obtained for each crop by entropy maximization via the genetic algorithm. The relative errors in yield and water estimates based on the parameters are analyzed by crop, the level of aggregation (county- or well-level), and the degree of independence between the data set used for estimation and the data being predicted.
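The bootstrap idea, stripped of EPIC and the parallel machinery, reduces to resampling records and refitting. The sketch below uses a deliberately simplistic linear yield-response model on synthetic "well-year" records to show how parameter confidence intervals emerge; all data and the model form are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic stand-in for well-year records: irrigation water vs. crop yield.
n = 300
water = rng.uniform(100, 600, n)                 # irrigation (mm)
yield_obs = 2.0 + 0.012 * water + rng.normal(0, 0.8, n)   # yield (t/ha)

# Bootstrap: resample records with replacement, refit, collect parameters.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    slope, intercept = np.polyfit(water[idx], yield_obs[idx], 1)
    boots.append((slope, intercept))
boots = np.array(boots)

lo, hi = np.percentile(boots[:, 0], [2.5, 97.5])
print("water-response slope: %.4f t/ha/mm, 95%% CI (%.4f, %.4f)"
      % (boots[:, 0].mean(), lo, hi))
```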

