Warping least-squares inversion for 4D velocity change: Gulf of Mexico case study

2019 ◽  
Vol 7 (2) ◽  
pp. SB23-SB31
Author(s):  
Chang Li ◽  
Mark Meadows ◽  
Todd Dygert

We have developed a new trace-based, warping least-squares inversion method to quantify 4D velocity changes. The method solves for these velocity changes in two steps: (1) dynamic warping with phase constraints to align the baseline and monitor traces, and (2) least-squares inversion for the 4D velocity changes, incorporating both the time shifts and the 4D amplitude differences computed after trace alignment by warping. We demonstrate this inversion workflow on simple synthetic layered models. For the noise-free case, phase-constrained warping aligns the traces better than standard amplitude-based warping, yielding more accurate inverted velocity changes (less than 1% error). For synthetic data with 6% rms noise, the inverted velocity changes remain reasonably accurate (less than 10% error). Additional inversion tests with migrated finite-difference data shot over a realistic anticline model also result in less than 10% error. On a 4D field data set from the Gulf of Mexico, the inverted velocity changes are more interpretable and more consistent with the dynamic reservoir model than those estimated with the conventional time-strain method.
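As a rough illustration of step (1) and of the conventional time-strain baseline the method is compared against (the paper's joint inversion of time shifts and 4D amplitude differences is not reproduced here), below is a minimal NumPy sketch of amplitude-based dynamic warping followed by a time-strain estimate of dv/v. The trace parameters, sample interval, and the 1% velocity slowdown are all invented for illustration:

```python
import numpy as np

def dtw_shifts(base, monitor, max_lag=20):
    """Integer-sample time shifts aligning monitor to base via dynamic
    warping, with the shift allowed to change by at most one sample
    between neighboring samples (a smoothness constraint)."""
    n = len(base)
    lags = np.arange(-max_lag, max_lag + 1)
    err = np.full((n, lags.size), np.inf)
    for j, lag in enumerate(lags):            # alignment error per (sample, lag)
        idx = np.arange(n) + lag
        ok = (idx >= 0) & (idx < n)
        err[ok, j] = (base[ok] - monitor[idx[ok]]) ** 2
    acc = err.copy()                          # accumulate errors forward in time
    for i in range(1, n):
        for j in range(lags.size):
            lo, hi = max(j - 1, 0), min(j + 2, lags.size)
            acc[i, j] = err[i, j] + acc[i - 1, lo:hi].min()
    path = np.empty(n, dtype=int)             # backtrack the optimal lag path
    path[-1] = int(acc[-1].argmin())
    for i in range(n - 2, -1, -1):
        lo = max(path[i + 1] - 1, 0)
        hi = min(path[i + 1] + 2, lags.size)
        path[i] = lo + int(acc[i, lo:hi].argmin())
    return lags[path]

# Toy traces: the monitor is the baseline delayed below a "reservoir top",
# mimicking a 1% velocity slowdown (all values assumed).
dt = 0.004
t = np.arange(400) * dt
rng = np.random.default_rng(0)
base = np.convolve(rng.normal(size=t.size), np.hanning(9), mode="same")
delay = np.where(t > 0.8, 0.01 * (t - 0.8), 0.0)
monitor = np.interp(t - delay, t, base)

tshift = dtw_shifts(base, monitor) * dt
dvv = -np.gradient(tshift, dt)   # conventional time-strain estimate of dv/v
```

The paper's phase constraint and least-squares step would replace the final gradient with a regularized inversion that also honors the 4D amplitude differences.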

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced: only a limited number of diagonals in the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared with a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
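For intuition about the damped least-squares solve and the limited-diagonal Hessian approximation, here is a hypothetical NumPy sketch with a Gaussian smoothing matrix standing in for the extrapolation operator. The real method computes the retained diagonals directly rather than forming the full Hessian, and includes data weights omitted here:

```python
import numpy as np

def keep_diagonals(H, k):
    """Zero all but the main diagonal and k diagonals on each side of H."""
    idx = np.arange(H.shape[0])
    return np.where(np.abs(idx[:, None] - idx[None, :]) <= k, H, 0.0)

rng = np.random.default_rng(1)
n = 200
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
G = np.exp(-((i - j) / 5.0) ** 2)           # stand-in extrapolation operator
d = G @ np.sin(0.2 * np.arange(n)) + 0.01 * rng.normal(size=n)

eps = 1e-3                                   # damping (prewhitening) weight
H = G.T @ G                                  # full Hessian: the costly object
m_full = np.linalg.solve(H + eps * np.eye(n), G.T @ d)
m_band = np.linalg.solve(keep_diagonals(H, 20) + eps * np.eye(n), G.T @ d)
print(np.linalg.norm(m_band - m_full) / np.linalg.norm(m_full))
```

Keeping more diagonals admits steeper dips at higher cost, which mirrors the controllable dip limitation described in the abstract.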


2017 ◽  
Vol 5 (3) ◽  
pp. SJ81-SJ90 ◽  
Author(s):  
Kainan Wang ◽  
Jesse Lomask ◽  
Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often manually tied to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We describe a modification of DTW, the blocked dynamic warping (BDW) method, that generates an automatic, optimal well tie honoring geologically consistent velocity constraints. Consequently, it produces updated velocities that are more realistic than those of other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, the algorithm returns an automatically updated time-depth curve and an updated interval-velocity model that retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting interval-velocity changes to coincide with the initial input blocking. We demonstrate the application of the BDW technique on a synthetic example and a field data set.
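A minimal sketch of the blocking idea, assuming the raw shifts come from an ordinary DTW tie and the layer tops are given: refitting the shift curve with one straight line per block keeps the implied interval-velocity update constant (or linearly variable) inside each layer. All values below are invented for illustration:

```python
import numpy as np

def block_shifts(t, raw_shifts, tops):
    """Refit raw DTW shifts with one line per geologic block.  A linear
    time shift inside a layer corresponds to a constant relative
    velocity update there, the BDW-style constraint."""
    edges = np.concatenate(([t[0]], np.asarray(tops), [t[-1]]))
    blocked = np.empty_like(raw_shifts)
    for a, b in zip(edges[:-1], edges[1:]):
        sel = (t >= a) & (t <= b)
        blocked[sel] = np.polyval(np.polyfit(t[sel], raw_shifts[sel], 1), t[sel])
    return blocked

# Toy example: noisy shifts from a hypothetical well tie, two layer tops.
t = np.linspace(0.0, 2.0, 500)
rng = np.random.default_rng(2)
raw = np.piecewise(t, [t < 0.7, (t >= 0.7) & (t < 1.4), t >= 1.4],
                   [lambda x: 0.002 * x,
                    lambda x: 0.0014 + 0.01 * (x - 0.7),
                    lambda x: 0.0084 - 0.004 * (x - 1.4)])
raw += 0.0015 * rng.normal(size=t.size)
blocked = block_shifts(t, raw, tops=[0.7, 1.4])
# Per-layer velocity update implied by the fit: v_new = v_old / (1 + d(shift)/dt)
```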


Geophysics ◽  
2016 ◽  
Vol 81 (3) ◽  
pp. S87-S100 ◽  
Author(s):  
Hao Hu ◽  
Yike Liu ◽  
Yingcai Zheng ◽  
Xuejian Liu ◽  
Huiyi Lu

Least-squares migration (LSM) can mitigate the limitations of finite seismic acquisition, balance the subsurface illumination, and improve the spatial resolution of the image, but it requires iterations of migration and demigration to obtain the desired subsurface reflectivity model. The computational efficiency and accuracy of the migration and demigration operators are crucial for applying the algorithm. We test the feasibility of using Gaussian beams as the wavefield-extrapolation operator for LSM, an approach we denote least-squares Gaussian beam migration. Our method combines the advantages of LSM with the efficiency of the Gaussian beam propagator. Our numerical evaluations, including two synthetic data sets and one marine field data set, illustrate that the proposed approach yields amplitude-balanced images and broadens the bandwidth of the migrated images, in particular for the low-wavenumber components.
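The LSM iteration itself can be sketched with generic linear operators: a stand-in matrix plays the role of Gaussian-beam demigration, its adjoint plays migration, and an iterative least-squares solver alternates the two. This is not the authors' beam propagator, only the outer loop:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

rng = np.random.default_rng(3)
n_model, n_data = 300, 500
A = rng.normal(size=(n_data, n_model)) / np.sqrt(n_data)  # toy Born matrix

L = LinearOperator((n_data, n_model),
                   matvec=lambda m: A @ m,     # demigration (forward modeling)
                   rmatvec=lambda d: A.T @ d)  # migration (adjoint)

m_true = np.zeros(n_model)
m_true[::25] = 1.0                             # sparse reflectivity spikes
d = L.matvec(m_true) + 0.01 * rng.normal(size=n_data)

m_mig = L.rmatvec(d)                 # one adjoint pass = conventional migration
m_lsm = lsqr(L, d, iter_lim=30)[0]   # iterated migration/demigration = LSM
```

In the paper, the efficiency gain comes from implementing `matvec`/`rmatvec` with Gaussian beams instead of the dense matrix used here.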


2020 ◽  
Vol 633 ◽  
pp. A46
Author(s):  
L. Siltala ◽  
M. Granvik

Context. The bulk density of an asteroid informs us about its interior structure and composition. To constrain the bulk density, one needs an estimate of the asteroid's mass, which is obtained by analyzing the asteroid's gravitational interaction with another object, such as a second asteroid during a close encounter. Mass estimates have typically been obtained with linearized least-squares methods, even though this family of methods cannot properly describe non-Gaussian parameter distributions. In addition, the uncertainties reported for asteroid masses in the literature are sometimes inconsistent with each other and are suspected to be unrealistically low.
Aims. We aim to present a Markov-chain Monte Carlo (MCMC) algorithm for the asteroid mass-estimation problem based on asteroid-asteroid close encounters. We verify that our algorithm works correctly by applying it to synthetic data sets, and we use astrometry available through the Minor Planet Center to estimate masses for a select few example cases, comparing our results with those reported in the literature.
Methods. Our mass-estimation method is based on the robust adaptive Metropolis algorithm, which has been implemented in the OpenOrb asteroid-orbit-computation software. The method has the built-in capability to analyze multiple perturbing asteroids and test asteroids simultaneously.
Results. We find that our mass estimates for the synthetic data sets are fully consistent with the ground truth. The nominal masses for the real example cases typically agree with the literature but tend to have greater uncertainties than recently reported values. Possible reasons for this include different astrometric data sets and weights, different test asteroids, different force models, or different algorithms. For (16) Psyche, the target of NASA's Psyche mission, our maximum-likelihood mass is approximately 55% of that reported in the literature. Such a low mass would imply a bulk density significantly lower than previously expected, arguing against the theory that (16) Psyche is the metallic core of a protoplanet. We note, however, that the masses reported in recent literature remain within our 3-sigma limits.
Conclusions. The new MCMC mass-estimation algorithm performs as expected, but a rigorous comparison with results from a least-squares algorithm on exactly the same data set remains to be done. The question of uncertainties relative to other algorithms, and the correlations of observations, also warrant further investigation.
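A toy version of the sampler: a plain random-walk Metropolis step with a simple Robbins-Monro step-size adaptation stands in for the robust adaptive Metropolis used in OpenOrb, and the linear mass-to-residual model is purely an assumption:

```python
import numpy as np

rng = np.random.default_rng(4)

def metropolis(log_post, m0, n_steps, step=0.5, target=0.234):
    """Random-walk Metropolis with crude step-size adaptation toward a
    target acceptance rate (the paper uses Vihola's robust adaptive
    Metropolis, implemented in OpenOrb)."""
    chain = np.empty(n_steps)
    m, lp = m0, log_post(m0)
    for i in range(n_steps):
        prop = m + step * rng.normal()
        lp_prop = log_post(prop)
        accepted = np.log(rng.uniform()) < lp_prop - lp
        if accepted:
            m, lp = prop, lp_prop
        step *= np.exp((float(accepted) - target) / np.sqrt(i + 1.0))
        chain[i] = m
    return chain

# Toy problem: astrometric residuals respond linearly to perturber mass
# through an invented sensitivity kernel.
sens = np.linspace(0.0, 1.0, 50)
obs = 2.3 * sens + 0.05 * rng.normal(size=sens.size)

def log_post(mass):
    if mass <= 0.0:                     # positivity prior on the mass
        return -np.inf
    return -0.5 * np.sum(((obs - mass * sens) / 0.05) ** 2)

chain = metropolis(log_post, m0=1.0, n_steps=20000)
print(chain[5000:].mean(), chain[5000:].std())  # posterior mean, 1-sigma
```

Unlike a linearized least-squares fit, the chain's histogram captures any non-Gaussian shape of the posterior, which is the motivation stated in the abstract.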


Geophysics ◽  
2003 ◽  
Vol 68 (3) ◽  
pp. 996-1007 ◽  
Author(s):  
Fabio Caratori Tontini ◽  
Osvaldo Faggioni ◽  
Nicolò Beverini ◽  
Cosmo Carmisciano

We describe an inversion method for 3D geomagnetic data based on approximating the source distribution by positivity-constrained Gaussian functions. In this way, smoothness and positivity are automatically imposed on the source without any subjective input from the user apart from selecting the number of functions to use. The algorithm has been tested with synthetic data to resolve sources at very different depths, using data from one measurement plane only. The forward modeling is based on prismatic-cell parameterization, but the algebraic nonuniqueness is reduced because a relationship among the cells, expressed by the Gaussian envelope, is assumed to describe the spatial variation of the source distribution. We assume that there is no remanent magnetization and that the magnetic data are produced by induced magnetization only, neglecting any demagnetization effects. The algorithm proceeds by minimizing a χ² misfit function between observed and predicted data with a nonlinear Levenberg-Marquardt iteration scheme, easily implemented on a desktop PC, without any additional regularization. We demonstrate the robustness and utility of the method using synthetic data corrupted by pseudorandom noise and a real field data set.
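The parameterization and the Levenberg-Marquardt minimization can be sketched as follows, with the caveat that this toy forward model maps Gaussian sources directly to a 1D data profile rather than through the prismatic-cell magnetic kernel of the paper; positivity is imposed by squaring the amplitudes:

```python
import numpy as np
from scipy.optimize import least_squares

x = np.linspace(-10.0, 10.0, 200)       # observation profile (arbitrary units)

def forward(params):
    """Field predicted by a sum of Gaussian sources.  params holds
    (amplitude, position, width) triplets; squaring the amplitude
    enforces positivity of the source distribution."""
    a, p, w = params.reshape(-1, 3).T
    return ((a ** 2)[:, None]
            * np.exp(-((x[None, :] - p[:, None]) / w[:, None]) ** 2)).sum(axis=0)

rng = np.random.default_rng(5)
true = np.array([1.5, -3.0, 2.0, 1.0, 4.0, 1.0])   # two sources (invented)
d_obs = forward(true) + 0.02 * rng.normal(size=x.size)

fit = least_squares(lambda p: forward(p) - d_obs,
                    x0=np.array([1.0, -2.0, 1.5, 0.8, 3.0, 1.5]),
                    method="lm")        # Levenberg-Marquardt chi^2 minimization
print(fit.x.reshape(-1, 3))
```

The user-chosen quantity, as in the paper, is only the number of Gaussian functions (here two).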


Geophysics ◽  
1985 ◽  
Vol 50 (11) ◽  
pp. 1701-1720 ◽  
Author(s):  
Glyn M. Jones ◽  
D. B. Jovanovich

A new technique is presented for the inversion of head‐wave traveltimes to infer near‐surface structure. Traveltimes computed along intersecting pairs of refracted rays are used to reconstruct the shape of the first refracting horizon beneath the surface and variations in refractor velocity along this boundary. The information derived can be used as the basis for further processing, such as the calculation of near‐surface static delays. One advantage of the method is that the shape of the refractor is determined independently of the refractor velocity. With multifold coverage, rapid lateral changes in refractor geometry or velocity can be mapped. Two examples of the inversion technique are presented: one uses a synthetic data set; the other is drawn from field data shot over a deep graben filled with sediment. The results obtained using the synthetic data validate the method and support the conclusions of an error analysis, in which errors in the refractor velocity determined using receivers to the left and right of the shots are of opposite sign. The true refractor velocity therefore falls between the two sets of estimates. The refraction image obtained by inversion of the set of field data is in good agreement with a constant‐velocity reflection stack and illustrates that the ray inversion method can handle large lateral changes in refractor velocity or relief.
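The bracketing behavior described in the error analysis follows from the classic two-shot dipping-refractor relations, which are not the paper's ray-pair method but illustrate why the left- and right-shot estimates err in opposite directions (the numbers below are invented):

```python
import numpy as np

def refractor_from_apparent(v1, v_down, v_up):
    """Classic dipping-refractor relations: apparent velocities shot
    downdip (v_down) and updip (v_up) bracket the true refractor
    velocity, which is recovered from the critical angle theta_c."""
    theta_c = 0.5 * (np.arcsin(v1 / v_down) + np.arcsin(v1 / v_up))
    dip = 0.5 * (np.arcsin(v1 / v_down) - np.arcsin(v1 / v_up))
    return v1 / np.sin(theta_c), np.degrees(dip)

v2, dip = refractor_from_apparent(v1=1500.0, v_down=2800.0, v_up=3600.0)
print(v2, dip)   # ~3143 m/s, between the 2800 and 3600 apparent values
```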


Geophysics ◽  
1999 ◽  
Vol 64 (2) ◽  
pp. 326-336 ◽  
Author(s):  
Subhashis Mallick

In this paper, a prestack inversion method using a genetic algorithm (GA) is presented, and issues relating to the implementation of prestack GA inversion in practice are discussed. GA is a Monte Carlo-type inversion that uses a natural analogy to the biological evolution process. When GA is cast into a Bayesian framework, a priori information on the model parameters and the physics of the forward problem are used to compute synthetic data. These synthetic data can then be matched with observations to obtain approximate estimates of the marginal a posteriori probability density (PPD) functions in the model space. Plots of these PPD functions allow an interpreter to choose models that best describe the specific geologic setting and lead to an accurate prediction of seismic lithology. Poststack inversion and prestack GA inversion were applied to a Woodbine gas-sand data set from East Texas. A comparison of the two demonstrates that prestack inversion reveals detailed stratigraphic features of the subsurface that are not visible on the poststack inversion.
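A minimal GA loop of the kind described, with a two-term AVO approximation standing in for the full prestack forward model (an assumption, not the paper's modeling); the spread of the final population plays the role of the marginal PPD estimates:

```python
import numpy as np

rng = np.random.default_rng(6)

def misfit(m, d_obs, fwd):
    return np.sum((d_obs - fwd(m)) ** 2, axis=-1)

def ga(d_obs, fwd, bounds, n_pop=100, n_gen=60, p_mut=0.1):
    """Minimal genetic algorithm: fitness-proportional selection,
    single-point crossover, and Gaussian mutation within bounds."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(n_pop, lo.size))
    for _ in range(n_gen):
        fit = np.exp(-misfit(pop, d_obs, fwd))      # Bayesian-style weight
        parents = pop[rng.choice(n_pop, size=(n_pop, 2), p=fit / fit.sum())]
        cut = rng.integers(1, lo.size, size=n_pop)[:, None]
        mask = np.arange(lo.size)[None, :] < cut
        pop = np.where(mask, parents[:, 0], parents[:, 1])   # crossover
        mut = rng.random(pop.shape) < p_mut
        pop = np.clip(pop + mut * rng.normal(0.0, 0.05 * (hi - lo), pop.shape),
                      lo, hi)
    return pop

theta = np.linspace(0.0, np.deg2rad(40.0), 20)
fwd = lambda m: m[..., :1] + m[..., 1:2] * np.sin(theta) ** 2   # two-term AVO
d_obs = fwd(np.array([0.1, -0.2])) + 0.005 * rng.normal(size=theta.size)
final = ga(d_obs, fwd, (np.array([-0.5, -0.5]), np.array([0.5, 0.5])))
# Histograms of final[:, 0] and final[:, 1] approximate the marginal PPDs.
```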


2020 ◽  
Vol 10 (14) ◽  
pp. 4798
Author(s):  
Naín Vera ◽  
Carlos Couder-Castañeda ◽  
Jorge Hernández ◽  
Alfredo Trujillo-Alcántara ◽  
Mauricio Orozco-del-Castillo ◽  
...  

Potential-field-data imaging of complex geological features in deepwater salt-tectonic regions of the Gulf of Mexico remains an open and active research field. Seismic imaging methods still lack resolution below and around allochthonous salt bodies. In this work, we present a novel three-dimensional potential-field-data simultaneous inversion method for imaging salt features. The approach incorporates a growth algorithm for source estimation, which progressively recovers geological structures by exploring a constrained parameter space; the restrictions are derived from a priori geological knowledge of the study area. The algorithm is tested with synthetic data corresponding to a real, complex salt-tectonic geological setting commonly found in exploration areas of the deepwater Gulf of Mexico. Because of the huge amount of data involved in three-dimensional inversion of potential-field data, parallel computing techniques become mandatory; to alleviate the computational burden, we present an easy-to-implement parallelization strategy for the inversion scheme based on OpenMP directives. The methodology was applied to invert and integrate gravity, magnetic, and full-tensor-gradient data from the study area.
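The growth idea can be sketched for a single field: starting from an empty model, the cell whose density increment most reduces the misfit is added at each step. The point-mass gravity kernel and all values below are invented, and the candidate scan inside the loop is the kind of embarrassingly parallel work the authors assign to OpenMP threads (NumPy vectorizes it here):

```python
import numpy as np

def grow_sources(d_obs, G, n_grow, rho_step):
    """Greedy 'growth' inversion sketch: repeatedly add density to the
    single cell that most reduces the data misfit ||r||^2."""
    m = np.zeros(G.shape[1])
    r = d_obs.copy()                     # current data residual
    col_norm2 = np.sum(G ** 2, axis=0)
    for _ in range(n_grow):
        # misfit reduction from adding rho_step to each candidate cell;
        # this scan is the loop parallelized with OpenMP in the paper
        gain = 2.0 * rho_step * (G.T @ r) - rho_step ** 2 * col_norm2
        if gain.max() <= 0.0:
            break                        # no candidate improves the fit
        k = int(np.argmax(gain))
        m[k] += rho_step
        r -= rho_step * G[:, k]
    return m

rng = np.random.default_rng(7)
xs = np.linspace(0.0, 10.0, 40)                  # station x-coordinates
xc, zc = np.meshgrid(np.linspace(0.0, 10.0, 20), np.linspace(0.5, 5.0, 10))
# point-mass vertical-gravity kernel, cell -> station (toy units)
G = zc.ravel() / ((xs[:, None] - xc.ravel()) ** 2 + zc.ravel() ** 2) ** 1.5
m_true = np.zeros(200)
m_true[95] = 50.0                                # one dense buried body
d_obs = G @ m_true + 0.01 * rng.normal(size=xs.size)
m_est = grow_sources(d_obs, G, n_grow=60, rho_step=1.0)
```

The paper's simultaneous version couples gravity, magnetic, and full-tensor-gradient kernels in the same growth loop, with geological constraints restricting which cells may grow.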

