Least‐squares DMO and migration

Geophysics ◽  
2000 ◽  
Vol 65 (5) ◽  
pp. 1364-1371 ◽  
Author(s):  
Shuki Ronen ◽  
Christopher L. Liner

Conventional processes, such as Kirchhoff dip moveout (DMO) and prestack full migration, are based on independent imaging of subsets of the data before stacking or amplitude variation with offset (AVO) analysis. Least-squares DMO (LSDMO) and least-squares migration (LSMig) are a developing family of processing methods based on inversion of reverse DMO and demigration operators. LSDMO and LSMig find the earth model that best fits the data and any a priori assumptions, which can be imposed as constraints. Such inversions are more computationally intensive, but they have significant advantages over conventional processing when applied to irregularly sampled data. Various conventional processes are approximations of the inversions in LSDMO and LSMig. Often, processing is equivalent to applying the transpose of a matrix that LSDMO/LSMig inverts. Such transpose processing is accurate when the data sampling is adequate. In practice, costly survey design, real-time coverage quality control, in-fill acquisition, redundancy editing, and prestack interpolation are used to create a survey geometry for which the transpose is a good approximation of the inverse. Normalized DMO and migration are approximately equivalent to applying the above transpose processing followed by a diagonal correction. In most cases, however, the required correction is not actually diagonal. In such cases LSDMO and LSMig can produce earth models with higher resolution and higher fidelity than normalized DMO and migration. The promise of LSMig and LSDMO is reduced acquisition cost, improved resolution, and reduced acquisition footprint. The computational cost and, more importantly, the turnaround time are major factors in the commercialization of these methods. With parallel computing, these methods are now becoming practical.
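As a hedged illustration of the transpose-versus-inverse distinction (a toy 1D sketch with an assumed smoothing-and-irregular-sampling operator, not the paper's operators), one can compare the transpose image, its diagonally normalized version, and a constrained least-squares model:

```python
import numpy as np

# Toy "acquisition" operator A: averages the model over a short window at
# irregular positions, so A^T A is not diagonal and a diagonal correction
# cannot fully undo it. All sizes and kernels here are assumptions.
rng = np.random.default_rng(0)
n = 120
x = np.arange(n)
centers = np.sort(rng.choice(n - 10, size=50, replace=False)) + 5
A = np.exp(-0.5 * ((x[None, :] - centers[:, None]) / 2.0) ** 2)
A /= A.sum(axis=1, keepdims=True)              # each row is a local average

m_true = np.sin(2 * np.pi * x / 30.0)          # synthetic "earth model"
d = A @ m_true                                 # irregularly sampled data

m_adj = A.T @ d                                # transpose ("conventional")
diag = np.clip(np.diag(A.T @ A), 1e-2, None)   # floor avoids blow-up where
m_norm = m_adj / diag                          # coverage is near zero
eps = 1e-3                                     # a priori damping constraint
m_ls = np.linalg.solve(A.T @ A + eps * np.eye(n), A.T @ d)  # LSMig analogue

for name, m in [("transpose", m_adj), ("normalized", m_norm),
                ("least-squares", m_ls)]:
    print(f"{name:>13s}: model error {np.linalg.norm(m - m_true):.3f}")
```

Because the toy operator mixes neighboring model samples, dividing by diag(AᵀA) cannot undo it fully, which mirrors the abstract's point that the required correction is not actually diagonal.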

Geophysics ◽  
2000 ◽  
Vol 65 (4) ◽  
pp. 1195-1209 ◽  
Author(s):  
Bertrand Duquet ◽  
Kurt J. Marfurt ◽  
Joe A. Dellinger

Because of its computational efficiency, prestack Kirchhoff depth migration is currently one of the most popular algorithms used in 2-D and 3-D subsurface depth imaging. Nevertheless, Kirchhoff algorithms in their typical implementation produce less than ideal results in complex terranes where multipathing from the surface to a given image point may occur, and beneath fast carbonates, salt, or volcanics through which ray‐theoretical energy cannot penetrate to illuminate underlying slower‐velocity sediments. To evaluate the likely effectiveness of a proposed seismic‐acquisition program, we could perform a forward‐modeling study, but this can be expensive. We show how Kirchhoff modeling can be defined as the mathematical transpose of Kirchhoff migration. The resulting Kirchhoff modeling algorithm has the same low computational cost as Kirchhoff migration and, unlike expensive full acoustic or elastic wave‐equation methods, only models the events that Kirchhoff migration can image. Kirchhoff modeling is also a necessary element of constrained least‐squares Kirchhoff migration. We show how including a simple a priori constraint during the inversion (that adjacent common‐offset images should be similar) can greatly improve the resulting image by partially compensating for irregularities in surface sampling (including missing data), as well as for irregularities in ray coverage due to strong lateral variations in velocity and our failure to account for multipathing. By allowing unstacked common‐offset gathers to become interpretable, the additional cost of constrained least‐squares migration may be justifiable for velocity analysis and amplitude‐variation‐with‐offset studies. One useful by‐product of least‐squares migration is an image of the subsurface illumination for each offset. If the data are sufficiently well sampled (so that including the constraint term is not necessary), the illumination can instead be calculated directly and used to balance the result of conventional migration, obtaining most of the advantages of least‐squares migration for only about twice the cost of conventional migration.
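The claim that Kirchhoff modeling is the mathematical transpose of Kirchhoff migration can be checked numerically with a dot-product test, ⟨Am, d⟩ = ⟨m, Aᵀd⟩. Below is a minimal constant-velocity, zero-offset sketch (the grids and velocity are assumptions for illustration only):

```python
import numpy as np

# Kirchhoff "modeling" spreads each image point along its diffraction
# hyperbola; "migration" sums data along the same hyperbola. Implemented
# with identical traveltime logic, they are exact transposes.
nt, nx, nz = 64, 32, 32
v, dt, dx, dz = 2000.0, 0.004, 10.0, 10.0      # assumed constant velocity

def hyperbola_t(iz, ix_img, ix_rec):
    z = (iz + 1) * dz
    h = (ix_rec - ix_img) * dx
    return 2.0 * np.hypot(z, h) / v            # two-way traveltime

def modeling(img):                             # forward: image -> data
    data = np.zeros((nt, nx))
    for iz in range(nz):
        for ixi in range(nx):
            for ixr in range(nx):
                it = int(round(hyperbola_t(iz, ixi, ixr) / dt))
                if it < nt:
                    data[it, ixr] += img[iz, ixi]
    return data

def migration(data):                           # transpose: data -> image
    img = np.zeros((nz, nx))
    for iz in range(nz):
        for ixi in range(nx):
            for ixr in range(nx):
                it = int(round(hyperbola_t(iz, ixi, ixr) / dt))
                if it < nt:
                    img[iz, ixi] += data[it, ixr]
    return img

rng = np.random.default_rng(1)
m = rng.standard_normal((nz, nx))
d = rng.standard_normal((nt, nx))
print(f"<Am, d>   = {np.vdot(modeling(m), d):.6e}")
print(f"<m, A^T d> = {np.vdot(m, migration(d)):.6e}")   # should match
```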


2020 ◽  
Vol 37 (3) ◽  
pp. 449-465 ◽  
Author(s):  
Jeffrey J. Early ◽  
Adam M. Sykulski

A comprehensive method is provided for smoothing noisy, irregularly sampled data with non-Gaussian noise using smoothing splines. We demonstrate how the spline order and tension parameter can be chosen a priori from physical reasoning. We also show how to allow for non-Gaussian noise and outliers that are typical in global positioning system (GPS) signals. We demonstrate the effectiveness of our methods on GPS trajectory data obtained from oceanographic floating instruments known as drifters.
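A loose sketch of this kind of robust spline smoothing, using SciPy's smoothing spline plus an iteratively reweighted pass to downweight outliers (the paper's tension parameter and noise model are not reproduced; noise levels and weight rules here are assumptions):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

sigma = 0.05                                   # assumed measurement noise
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0, 10, 200))           # irregular sample times
y = np.sin(t) + sigma * rng.standard_normal(t.size)
out = rng.random(t.size) < 0.05                # ~5% gross outliers,
y[out] += 3.0 * rng.standard_normal(out.sum()) # e.g. bad GPS fixes

w = np.ones(t.size) / sigma                    # start from Gaussian weights
for _ in range(3):                             # IRLS: downweight outliers
    spl = UnivariateSpline(t, y, w=w, k=3, s=t.size)
    r = (y - spl(t)) / sigma                   # standardized residuals
    w = np.minimum(1.0, 2.0 / np.maximum(np.abs(r), 1e-9)) / sigma  # Huber-like

tg = np.linspace(0, 10, 500)
print(f"max error vs truth: {np.abs(spl(tg) - np.sin(tg)).max():.3f}")
```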


2020 ◽  
Vol 54 (2) ◽  
pp. 649-677 ◽  
Author(s):  
Abdul-Lateef Haji-Ali ◽  
Fabio Nobile ◽  
Raúl Tempone ◽  
Sören Wolfers

Weighted least squares polynomial approximation uses random samples to determine projections of functions onto spaces of polynomials. It has been shown that, using an optimal distribution of sample locations, the number of samples required to achieve quasi-optimal approximation in a given polynomial subspace scales, up to a logarithmic factor, linearly in the dimension of this space. However, in many applications, the computation of samples includes a numerical discretization error. Obtaining polynomial approximations with a single-level method can therefore become prohibitively expensive, as it requires a sufficiently large number of samples, each computed with a sufficiently small discretization error. As a solution to this problem, we propose a multilevel method that utilizes samples computed with different accuracies and is able to match the accuracy of single-level approximations at reduced computational cost. We derive complexity bounds under certain assumptions about polynomial approximability and sample work. Furthermore, we propose an adaptive algorithm for situations in which such assumptions cannot be verified a priori. Finally, we provide an efficient algorithm for sampling from the optimal distributions and an analysis of computationally favorable alternative distributions. Numerical experiments underscore the practical applicability of our method.
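A minimal sketch of the multilevel idea under an assumed cost/accuracy model: each correction between consecutive discretization levels is fitted by weighted least squares, with fewer random samples spent on the more expensive, finer levels. The weight used here is a heuristic stand-in, not the optimal one analyzed in the paper:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def f_h(x, h):                      # stand-in "solver": f plus O(h) error
    return np.exp(x) + h * np.cos(5 * x)

def wls_cheb(x, y, deg):
    w = (1.0 - x**2) ** 0.25        # heuristic weight for arcsine nodes
    V = C.chebvander(x, deg)        # (the paper derives the optimal one)
    c, *_ = np.linalg.lstsq(w[:, None] * V, w * y, rcond=None)
    return c

rng = np.random.default_rng(3)
levels = [(0.1, 400), (0.05, 100), (0.025, 25)]   # (h, n samples): finer
coef, prev_h = np.zeros(9), None                  # levels get fewer draws
for h, n in levels:
    x = np.cos(np.pi * rng.random(n))             # arcsine-distributed nodes
    y = f_h(x, h) - (f_h(x, prev_h) if prev_h is not None else 0.0)
    coef = coef + wls_cheb(x, y, 8)               # telescoping corrections
    prev_h = h

xg = np.linspace(-1, 1, 201)
print(f"multilevel WLS max error: {np.abs(C.chebval(xg, coef) - np.exp(xg)).max():.2e}")
```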


Geophysics ◽  
2003 ◽  
Vol 68 (5) ◽  
pp. 1633-1638 ◽  
Author(s):  
Yanghua Wang

The spectrum of a discrete Fourier transform (DFT) is estimated by linear inversion and used to produce the desired seismic traces, with regular spatial sampling, from an irregularly sampled data set. The essence of such a wavefield-reconstruction method is to solve the DFT inverse problem with a particular constraint that imposes a sparseness criterion on the least-squares solution. A working definition of the sparseness constraint is presented to improve stability and efficiency. A sparseness measure is then used to quantify the relative sparseness of the two DFT spectra obtained from inversion with and without the sparseness constraint; it is a pragmatic indicator of the degree of sparseness needed for wavefield reconstruction. For seismic trace regularization, an antialiasing condition must be fulfilled for the regularized trace interval, whereas optimal trace coordinates in the output can be obtained by minimizing the distances between the newly generated traces and the original traces in the input. Application to real seismic data reveals the effectiveness of the technique and the significance of the sparseness constraint in the least-squares solution.
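A compact sketch of this style of inversion, assuming a FOCUSS-type reweighting as the sparseness constraint (the paper's specific constraint definition and antialias rule are not reproduced):

```python
import numpy as np

# Estimate spectrum m from traces at irregular positions x via d = F m,
# with iteratively reweighted least squares promoting sparseness, then
# resample the wavefield on a regular grid.
rng = np.random.default_rng(4)
nk = 64
k = np.arange(-nk // 2, nk // 2)               # wavenumber grid
x = np.sort(rng.uniform(0.0, 1.0, 40))         # irregular trace positions
F = np.exp(2j * np.pi * np.outer(x, k))        # irregular "DFT" matrix
d = np.exp(2j * np.pi * 5 * x) + 0.5 * np.exp(-2j * np.pi * 12 * x)

W = np.ones(nk)                                # sparseness via reweighting
for _ in range(10):                            # FOCUSS-style iterations
    u, *_ = np.linalg.lstsq(F * W[None, :], d, rcond=None)
    m = W * u
    W = np.sqrt(np.abs(m) + 1e-8)

x_reg = np.linspace(0.0, 1.0, 64, endpoint=False)      # regular grid
d_reg = np.exp(2j * np.pi * np.outer(x_reg, k)) @ m    # regularized traces
print("dominant wavenumbers:", np.sort(k[np.argsort(np.abs(m))[-2:]]))
```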


Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. R15-R27 ◽  
Author(s):  
Hassan Khaniani ◽  
John C. Bancroft ◽  
Eric von Lunen

We have studied elastic wave scattering and iterative inversion in the context of the Kirchhoff approximation. Compared with the Born approximation, the approach is more consistent with the weak-contrast reflectivity functions of the Zoeppritz equations. To reduce the computational cost of inversion, we demonstrate the use of amplitude-variation-with-offset (AVO) analysis, prestack time migration (PSTM), and the corresponding forward modeling in an iterative scheme. The forward-modeling and migration/inversion operators are based on the double-square-root (DSR) equations of PSTM and on linearized reflectivity functions. All operators involved in the inversion, including the background model for DSR and AVO, are defined in P-to-P traveltime and are updated at each iteration. Our method is practical for real-data applications because all of its operators are standard, routinely applied methods. We have evaluated the inversion on synthetic and real data using the waveform characteristics of P-to-P and P-to-S data.
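For reference, the linearized (weak-contrast) P-to-P reflectivity used by AVO schemes of this kind is the Aki-Richards approximation to the Zoeppritz equations; a sketch with hypothetical layer properties:

```python
import numpy as np

def rpp_aki_richards(theta, vp1, vp2, vs1, vs2, rho1, rho2):
    """Weak-contrast P-P reflection coefficient at incidence angle theta
    (radians), Aki-Richards form; inputs are upper/lower layer properties."""
    vp, vs = 0.5 * (vp1 + vp2), 0.5 * (vs1 + vs2)    # background averages
    rho = 0.5 * (rho1 + rho2)
    dvp, dvs, drho = vp2 - vp1, vs2 - vs1, rho2 - rho1
    s2 = np.sin(theta) ** 2
    return (0.5 * (1 - 4 * (vs / vp) ** 2 * s2) * drho / rho
            + dvp / (2 * vp * np.cos(theta) ** 2)
            - 4 * (vs / vp) ** 2 * s2 * dvs / vs)

# Hypothetical interface: shale over slightly faster, denser sand.
theta = np.radians(np.arange(0, 41, 10))
print(rpp_aki_richards(theta, 3000, 3200, 1500, 1700, 2300, 2400).round(4))
```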


2013 ◽  
Vol 3 (1) ◽  
pp. 368-372
Author(s):  
A. Zahedi ◽  
M. H. Kahaei

In this paper, a new method for frequency estimation from irregularly sampled data is proposed. In contrast with previous sparsity-based methods, where the sparsity constraint is applied to a least-squares fitting problem, the proposed method is based on a sparsity-constrained weighted least-squares problem. The resulting problem is solved iteratively, with the solution obtained at each iteration used to determine the weights of the least-squares fitting term at the next iteration. This weighting of the least-squares fitting term enhances the performance of the method. Simulation results verify that the proposed method can detect spectral peaks using a very short data record. Compared with its predecessors, it is less likely to miss actual spectral peaks or to exhibit spurious ones.
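One plausible reading of such a scheme, sketched under assumptions (the grid, weight formulas, and detection threshold are all illustrative, not the authors' choices): a sparsity-promoting reweighting on the coefficients is combined with fitting-term weights derived from the previous iteration's residuals:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0.0, 1.0, 30))                 # short, irregular record
f = 0.5 * np.arange(64)                                # frequency grid (Hz)
A = np.exp(2j * np.pi * np.outer(t, f))                # sinusoid atoms
y = np.exp(2j * np.pi * 7 * t) + 0.8 * np.exp(2j * np.pi * 23 * t)
y += 0.05 * (rng.standard_normal(30) + 1j * rng.standard_normal(30))

w_fit = np.ones(30)                # weights on the least-squares fit term
w_sp = np.ones(64)                 # reweighting that promotes sparsity
x = np.zeros(64, complex)
for _ in range(8):
    Aw = (w_fit[:, None] * A) * w_sp[None, :]
    u, *_ = np.linalg.lstsq(Aw, w_fit * y, rcond=None)
    x = w_sp * u
    w_sp = np.sqrt(np.abs(x) + 1e-8)                   # sparsity weights
    r = np.abs(A @ x - y)                              # current residuals
    w_fit = 1.0 / (1.0 + (r / (np.median(r) + 1e-12)) ** 2)  # fit weights

print("detected frequencies:", f[np.abs(x) > 0.3])     # expect 7.0 and 23.0
```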


Geophysics ◽  
2018 ◽  
Vol 83 (3) ◽  
pp. V157-V170 ◽  
Author(s):  
Ebrahim Ghaderpour ◽  
Wenyuan Liao ◽  
Michael P. Lamoureux

Spatial transformation of an irregularly sampled data series to a regularly sampled data series is a challenging problem in many areas such as seismology. Discrete Fourier analysis is limited to regularly sampled data series; the least-squares spectral analysis (LSSA), on the other hand, can analyze an irregularly sampled data series. Although the LSSA method takes into account the correlation among the sinusoidal basis functions of irregularly spaced series, it still suffers from spectral leakage: energy leaks from one spectral peak into another. We have developed an iterative method called antileakage LSSA to attenuate the spectral leakage and consequently regularize irregular data series. In this method, we first search for the spectral peak with the highest energy and remove (suppress) it from the original data series. In the next step, we search for a new peak with the highest energy in the residual data series and remove the new and the old components simultaneously from the original data series using a least-squares method. We repeat this procedure until all significant spectral peaks are estimated and removed simultaneously from the original data series. In addition, we address random noise attenuation in the data series by applying a certain confidence level for significant peaks in the spectrum. We demonstrate the robustness of our method on irregularly sampled synthetic and real data sets, and we compare the results with the antileakage Fourier transform and the arbitrarily sampled Fourier transform.
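A minimal sketch of the antileakage loop described above (the frequency grid, stopping rule, and significance test are simplified assumptions rather than the paper's):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0.0, 1.0, 80))         # irregular sampling times
y = (np.sin(2 * np.pi * 6 * t) + 0.6 * np.cos(2 * np.pi * 17 * t)
     + 0.1 * rng.standard_normal(80))

freqs = np.arange(1, 40)                       # candidate frequencies (Hz)

def atoms(f):                                  # cosine/sine pair at f
    return np.column_stack([np.cos(2 * np.pi * f * t),
                            np.sin(2 * np.pi * f * t)])

picked = []
resid = y.copy()
for _ in range(6):
    def captured_energy(f):                    # LS fit of one frequency
        G = atoms(f)
        c, *_ = np.linalg.lstsq(G, resid, rcond=None)
        return np.linalg.norm(G @ c)
    best = max(freqs, key=captured_energy)     # strongest remaining peak
    if best not in picked:
        picked.append(best)
    G = np.hstack([atoms(f) for f in picked])  # joint re-fit of all picks
    c, *_ = np.linalg.lstsq(G, y, rcond=None)
    resid = y - G @ c                          # suppress them simultaneously
    if np.linalg.norm(resid) < 0.2 * np.linalg.norm(y):  # crude stop rule
        break

print("estimated significant frequencies:", sorted(picked))
```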


Geophysics ◽  
2008 ◽  
Vol 73 (2) ◽  
pp. T1-T10 ◽  
Author(s):  
Gerrit Toxopeus ◽  
Jan Thorbecke ◽  
Kees Wapenaar ◽  
Steen Petersen ◽  
Evert Slob ◽  
...  

The simulation of migrated and inverted data is hampered by the high computational cost of generating 3D synthetic data, followed by processes of migration and inversion. For example, simulating the migrated seismic signature of subtle stratigraphic traps demands the expensive exercise of 3D forward modeling, followed by 3D migration of the synthetic seismograms. This computational cost can be overcome using a strategy for simulating migrated and inverted data by filtering a geologic model with 3D spatial-resolution and angle filters, respectively. A key property of the approach is this: the geologic model that describes a target zone is decoupled from the macrovelocity model used to compute the filters. The process enables a target-oriented approach, by which a geologically detailed earth model describing a reservoir is adjusted without having to recalculate the filters. Because a spatial-resolution filter combines the results of the modeling and migration operators, the simulated images can be compared directly to a real migration image. We decompose the spatial-resolution filter into two parts and show that applying one of those parts produces output directly comparable to 1D inverted real data. Two-dimensional synthetic examples that include seismic uncertainties demonstrate the usefulness of the approach. Results from a real data example show that horizontal smearing, which is not simulated by the 1D convolution model, is essential to understanding the seismic expression of the deformation related to sulfate dissolution and karst collapse.
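A hedged sketch of the filtering strategy: here the spatial-resolution filter is simply assumed to be a separable vertical-wavelet-times-lateral-smear kernel, rather than computed from a macrovelocity model as in the paper:

```python
import numpy as np
from scipy.signal import fftconvolve

nz, nx = 128, 128                              # depth x lateral samples
model = np.zeros((nz, nx))                     # geologic reflectivity model
model[40, :] = 1.0                             # continuous flat reflector
model[80, 30:100] = -0.8                       # truncated reflector (a "trap")

dz = np.arange(-32, 33) * 0.002                # vertical axis of the filter
fpeak = 30.0                                   # assumed Ricker peak frequency
ricker = (1 - 2 * (np.pi * fpeak * dz) ** 2) * np.exp(-(np.pi * fpeak * dz) ** 2)
lx = np.arange(-10, 11)
lateral = np.exp(-0.5 * (lx / 4.0) ** 2)       # horizontal smearing kernel
lateral /= lateral.sum()
res_filter = np.outer(ricker, lateral)         # separable resolution filter

simulated = fftconvolve(model, res_filter, mode="same")  # "migrated" image
print("simulated migrated image:", simulated.shape,
      "peak-to-peak", round(float(np.ptp(simulated)), 3))
```

The horizontal smearing kernel is what a 1D convolution model omits; widening it blurs the edges of the truncated reflector, which is the effect the real-data example attributes to dissolution-related deformation.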


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has experienced great development thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. To this end, it is crucial to solve the mapping and localization problems with accuracy and acceptable computational cost. The use of omnidirectional vision systems has emerged as a robust choice thanks to the large quantity of information they can extract from the environment. The images must be processed to obtain relevant information that permits solving the mapping and localization problems robustly. The classical frameworks to address these problems are based on the extraction, description, and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics: describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit a deeper study of their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, in terms of both accuracy and computational cost. Several sets of images captured in a real environment are used for this purpose, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions, and noise.
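A toy illustration of the global-appearance idea (the descriptor and images below are placeholders, not one of the six techniques evaluated): each image is described as a whole by a coarse, normalized intensity grid, and localization reduces to a nearest-neighbor search over the map:

```python
import numpy as np

def global_descriptor(img, cell=8):
    """Average-pool the whole image into a coarse grid and normalize."""
    h, w = img.shape
    d = img[:h - h % cell, :w - w % cell]
    d = d.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))
    v = d.ravel()
    return (v - v.mean()) / (v.std() + 1e-9)   # crude lighting invariance

rng = np.random.default_rng(7)
map_imgs = [rng.random((64, 256)) for _ in range(20)]   # stand-in panoramas
map_desc = np.stack([global_descriptor(im) for im in map_imgs])

query = map_imgs[7] + 0.1 * rng.random((64, 256))       # noisy revisit
q = global_descriptor(query)
dists = np.linalg.norm(map_desc - q, axis=1)            # compare whole images
print("localized at map node:", int(np.argmin(dists)))  # expect node 7
```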

