Target-oriented wavefield tomography using synthesized Born data

Geophysics ◽  
2011 ◽  
Vol 76 (5) ◽  
pp. WB191-WB207 ◽  
Author(s):  
Yaxun Tang ◽  
Biondo Biondi

We present a new strategy for efficient wave-equation migration-velocity analysis in complex geological settings. The proposed strategy has two main steps: simulating a new data set using an initial unfocused image and performing wavefield-based tomography using this data set. We demonstrate that the new data set can be synthesized by generalized Born wavefield modeling for a specific target region where velocities are inaccurate. We also show that, because of the target-oriented modeling strategy, the new data set can be much smaller than the original one, yet it contains the velocity information necessary for successful velocity analysis. These features make the new data set suitable for target-oriented, fast, and interactive velocity model building. We demonstrate the performance of our method on both a synthetic data set and a field data set acquired from the Gulf of Mexico, where we update the subsalt velocity in a target-oriented fashion and obtain a subsalt image with improved continuity, a higher signal-to-noise ratio, and flattened angle-domain common-image gathers.
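
As a rough illustration of the idea of synthesizing a small, target-oriented data set from an image, the sketch below performs a ray-Born demigration with straight-ray traveltimes in a constant background: only scatterers inside the target window contribute to the synthesized records. The function names, Ricker wavelet, and constant-velocity ray approximation are illustrative assumptions, not the generalized Born wavefield modeling used in the paper.

```python
# Minimal ray-Born "demigration" sketch: synthesize single-scattered data from a
# reflectivity image confined to a target window, using straight-ray traveltimes
# in a constant background velocity. Illustration only; the paper uses
# generalized Born *wavefield* modeling, not the ray approximation below.
import numpy as np

def ricker(t, f0=15.0):
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def born_synthesize(image, dx, dz, v0, srcs, recs, t, f0=15.0):
    """image: (nz, nx) reflectivity, nonzero only inside the target window."""
    data = np.zeros((len(srcs), len(recs), len(t)))
    iz, ix = np.nonzero(image)                    # scatterers in the target only
    zs, xs = iz * dz, ix * dx
    for i, sx in enumerate(srcs):
        ts = np.hypot(xs - sx, zs) / v0           # source-to-scatterer time
        for j, rx in enumerate(recs):
            tr = np.hypot(xs - rx, zs) / v0       # scatterer-to-receiver time
            for amp, ttot in zip(image[iz, ix], ts + tr):
                data[i, j] += amp * ricker(t - ttot, f0)
    return data

# Toy example: one target reflector segment at 1 km depth.
nz, nx, dz, dx = 101, 201, 10.0, 10.0
img = np.zeros((nz, nx))
img[100, 80:120] = 1.0
t = np.arange(0.0, 2.0, 0.004)
d = born_synthesize(img, dx, dz, 2000.0, srcs=np.arange(0.0, 2000.0, 200.0),
                    recs=np.arange(0.0, 2000.0, 100.0), t=t)
print(d.shape)   # (10, 20, 500): a small, target-oriented data set
```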

Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. U1-U8 ◽  
Author(s):  
Bingbing Sun ◽  
Tariq Alkhalifah

Macro-velocity model building is important for subsequent prestack depth migration and full-waveform inversion. Wave-equation migration velocity analysis uses the band-limited waveform to invert for velocity, and the inversion is normally implemented by focusing the subsurface-offset common-image gathers. We reexamine this concept from a different perspective: in the subsurface-offset domain, using extended Born modeling, the recorded data can be considered invariant with respect to simultaneous perturbations of the virtual-source positions and the velocity. A linear system connecting the perturbation of the virtual-source positions and the velocity perturbation is derived and then solved with the conjugate gradient method. In theory, the perturbation of the virtual-source positions is given by the Rytov approximation; compared with the Born approximation, this relaxes the dependency on amplitude and makes the proposed method more applicable to real data. We demonstrate the effectiveness of the approach by applying it to isotropic and vertical transverse isotropic (VTI) synthetic data. A real data set example verifies the robustness of the proposed method.
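
The abstract mentions solving the linear system with the conjugate gradient method; below is a generic conjugate-gradient least-squares (CGLS) loop of that kind, with the forward operator and its adjoint left as abstract callables. The operator that actually links virtual-source position perturbations to velocity is specific to the paper and is not reproduced here; a random matrix stands in for it in the smoke test.

```python
# Generic CGLS loop for solving min_x ||A x - b||_2 when A is only available
# as matrix-vector products (A and its adjoint At). The tomographic operator
# itself is problem-specific and not reproduced here.
import numpy as np

def cgls(A, At, b, niter=50, tol=1e-8):
    x = np.zeros(At(b).shape)
    r = b - A(x)
    s = At(r)
    p = s.copy()
    gamma = np.dot(s, s)
    for _ in range(niter):
        q = A(p)
        alpha = gamma / np.dot(q, q)
        x += alpha * p
        r -= alpha * q
        s = At(r)
        gamma_new = np.dot(s, s)
        if np.sqrt(gamma_new) < tol:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Smoke test with a random matrix standing in for the linearized operator.
rng = np.random.default_rng(0)
M = rng.normal(size=(120, 40))
x_true = rng.normal(size=40)
x_est = cgls(lambda v: M @ v, lambda v: M.T @ v, M @ x_true)
print(np.allclose(x_est, x_true, atol=1e-6))   # expect True
```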


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
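
To make the back-projection step concrete, here is a schematic SIRT-style update that spreads each ray's residual over the model cells it crosses, weighted by path length. The straight-ray, fixed-cell setup and the specific scaling are illustrative assumptions; the paper back projects residual velocity corrections along migrated raypaths in heterogeneous media.

```python
# Schematic SIRT-style back projection: distribute each ray's residual
# traveltime (or residual-moveout-derived correction) over the cells it
# crosses, weighted by path length, and iterate.
import numpy as np

def backproject(L, dt, niter=20):
    """L: (nrays, ncells) ray-length matrix; dt: (nrays,) residuals.
    Returns a slowness perturbation per cell."""
    ds = np.zeros(L.shape[1])
    row_norm = np.maximum((L ** 2).sum(axis=1), 1e-12)
    col_hits = np.maximum((L > 0).sum(axis=0), 1)
    for _ in range(niter):
        resid = dt - L @ ds                 # unexplained residual per ray
        update = L.T @ (resid / row_norm)   # back project along each ray
        ds += update / col_hits             # average over the rays hitting a cell
    return ds

# Toy usage: residuals generated from a uniform slowness error.
nrays, ncells = 200, 50
L = np.random.default_rng(1).uniform(0.0, 0.1, size=(nrays, ncells))
dt = L @ np.full(ncells, 2e-5)
print(backproject(L, dt)[:5])               # per-cell slowness perturbation estimates
```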


Geophysics ◽  
2020 ◽  
Vol 85 (5) ◽  
pp. U109-U119
Author(s):  
Pengyu Yuan ◽  
Shirui Wang ◽  
Wenyi Hu ◽  
Xuqing Wu ◽  
Jiefu Chen ◽  
...  

A deep-learning-based workflow is proposed in this paper to solve the first-arrival picking problem for near-surface velocity model building. Traditional methods, such as the short-term average/long-term average method, perform poorly when the signal-to-noise ratio is low or near-surface geologic structures are complex. This challenging task is formulated as a segmentation problem accompanied by a novel postprocessing approach that identifies picks along the segmentation boundary. The workflow includes three parts: a deep U-net for segmentation, a recurrent neural network (RNN) for picking, and a weight-adaptation approach for generalizing to new data sets. In particular, we evaluate the importance of selecting a proper loss function for training the network. Instead of taking an end-to-end approach to the picking problem, we emphasize the performance gain obtained by using an RNN to optimize the picking. Finally, we adopt a simple transfer learning scheme and test its robustness via the weight-adaptation approach to maintain picking performance on new data sets. Our tests on synthetic data sets reveal the advantage of our workflow over existing deep-learning methods that focus only on segmentation performance. Our tests on field data sets illustrate that a good postprocessing picking step is essential for correcting segmentation errors and that the overall workflow is efficient in minimizing human intervention for the first-arrival picking task.
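
As a rough stand-in for the postprocessing step, the sketch below extracts a pick per trace from a segmentation probability map by locating the first sample classified as "after the break" and median-filtering the picks laterally. The paper uses a recurrent network for this step; the threshold-and-filter version here, with its assumed parameter names, only illustrates why a dedicated picking pass on top of the raw segmentation is needed.

```python
# Minimal stand-in for the postprocessing step: per-trace boundary picks from a
# segmentation probability map (nsamples x ntraces), plus a lateral median
# filter to suppress isolated segmentation errors.
import numpy as np
from scipy.signal import medfilt

def picks_from_segmentation(prob, dt, threshold=0.5, smooth=9):
    """prob[i, j]: probability that sample i of trace j lies after the break."""
    above = prob >= threshold
    first = np.where(above.any(axis=0), above.argmax(axis=0), -1)  # first "after" sample
    valid = first >= 0
    first = first.astype(float)
    first[valid] = medfilt(first[valid], kernel_size=smooth)       # lateral smoothing
    return np.where(valid, first * dt, np.nan)                     # pick times in seconds

# Toy probability map: a dipping first break plus noise.
ns, ntr = 400, 60
truth = (50 + 2 * np.arange(ntr)).astype(int)
prob = (np.arange(ns)[:, None] >= truth[None, :]).astype(float)
prob += 0.2 * np.random.default_rng(0).normal(size=prob.shape)
print(picks_from_segmentation(np.clip(prob, 0.0, 1.0), dt=0.002)[:5])
```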


2021 ◽  
Vol 40 (6) ◽  
pp. 460-463
Author(s):  
Lionel J. Woog ◽  
Anthony Vassiliou ◽  
Rodney Stromberg

In seismic data processing, static corrections for near-surface velocities are derived from first-break picking. The quality of the static corrections is paramount to developing an accurate shallow velocity model, a model that in turn greatly impacts the subsequent seismic processing steps. Because even small errors in first-break picking can greatly impact seismic velocity model building, it is necessary to pick high-quality traveltimes. Whereas various artificial intelligence-based methods have been proposed to automate the process for data with medium to high signal-to-noise ratio (S/N), these methods are not applicable to low-S/N data, which still require intensive labor from skilled operators. We successfully replace 160 hours of skilled human work with 10 hours of processing on a single NVIDIA Quadro P6000 graphics processing unit by reducing the number of human picks from the usual 5%–10% to 0.19% of available gathers. High-quality inferred picks are generated by convolutional neural network-based machine learning trained on the human picks.


Geophysics ◽  
2008 ◽  
Vol 73 (6) ◽  
pp. S241-S249 ◽  
Author(s):  
Xiao-Bi Xie ◽  
Hui Yang

We have derived a broadband sensitivity kernel that relates the residual moveout (RMO) in prestack depth migration (PSDM) to velocity perturbations in the migration-velocity model. We compare the kernel with the RMO directly measured from the migration image; the consistency between the sensitivity kernel and the measured sensitivity map validates the theory and the numerical implementation. Based on this broadband sensitivity kernel, we propose a new tomography method for migration-velocity analysis and updating, specifically for shot-record PSDM and shot-index common-image gathers, so that time-consuming angle-domain analysis is not required. We use a fast one-way propagator and the multiple-forward-scattering, single-backscattering approximation to calculate the sensitivity kernel. Using synthetic data sets, we successfully invert velocity perturbations from the migration RMO. This wave-equation-based method naturally incorporates wave phenomena and is best paired with wave-equation migration for velocity analysis. In addition, the new method maintains the simplicity of ray-based velocity analysis, with the more accurate sensitivity kernels replacing the rays.
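
The tomography takes RMO measurements from shot-index common-image gathers as input. One simple way to quantify the depth shift of a reflector across shot indices is a crosscorrelation lag against a reference trace; this is an illustrative choice with assumed parameters, not necessarily how the paper measures RMO.

```python
# In a shot-index CIG the same reflector imaged from different shots appears at
# slightly different depths when the velocity is wrong. A crude measurement of
# that shift: crosscorrelation lag between a reference trace and each shot's
# image trace at the CIG location.
import numpy as np

def rmo_by_crosscorrelation(cig, dz, max_lag=20, ref_index=0):
    """cig: (nshots, nz) image traces at one CIG location."""
    ref = cig[ref_index]
    lags = np.arange(-max_lag, max_lag + 1)
    shifts = []
    for trace in cig:
        cc = [np.dot(np.roll(ref, lag), trace) for lag in lags]
        shifts.append(lags[int(np.argmax(cc))] * dz)
    return np.array(shifts)          # depth shift per shot index (the RMO curve)

# Toy CIG: a pulse whose depth drifts linearly with shot index.
nz, dz, nshots = 300, 5.0, 24
z = np.arange(nz) * dz
cig = np.array([np.exp(-((z - (750.0 + 3.0 * i)) / 25.0) ** 2) for i in range(nshots)])
print(rmo_by_crosscorrelation(cig, dz)[:6])   # roughly 0, 5, 5, 10, 10, 15 m
```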


Geophysics ◽  
2004 ◽  
Vol 69 (5) ◽  
pp. 1283-1298 ◽  
Author(s):  
Biondo Biondi ◽  
William W. Symes

We analyze the kinematic properties of offset‐domain common image gathers (CIGs) and angle‐domain CIGs (ADCIGs) computed by wavefield‐continuation migration. Our results are valid regardless of whether the CIGs were obtained with the correct migration velocity. They thus can be used as a theoretical basis for developing migration velocity analysis (MVA) methods that exploit the velocity information contained in ADCIGs. We demonstrate that in an ADCIG cube, the image point lies on the normal to the apparent reflector dip that passes through the point where the source ray intersects the receiver ray. The image‐point position on the normal depends on the velocity error; when the velocity is correct, the image point coincides with the point where the source ray intersects the receiver ray. Starting from this geometric result, we derive an analytical expression for the expected movements of the image points in ADCIGs as functions of the traveltime perturbation caused by velocity errors. By applying this analytical result and assuming stationary raypaths (i.e., small velocity errors), we then derive two expressions for the residual moveout (RMO) function in ADCIGs. We verify our theoretical results and test the accuracy of the proposed RMO functions by analyzing the migration results of a synthetic data set with a wide range of reflector dips. Our kinematic analysis also leads to the development of a new method for computing ADCIGs when significant geological dips cause strong artifacts in the ADCIGs computed by conventional methods. The proposed method is based on the computation of offset‐domain CIGs along the vertical‐offset axis and on the “optimal” combination of these new CIGs with conventional CIGs. We demonstrate the need for and the advantages of the proposed method on a real data set acquired in the North Sea.
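
For readers who want the objects above in concrete form, the sketch below converts a subsurface-offset CIG into an ADCIG with the commonly used slant-stack construction based on tan(γ) ≈ -∂z/∂h (sign convention assumed, in the spirit of Sava and Fomel). This is the conventional construction whose dip-related artifacts motivate the paper's vertical-offset alternative, not the new method itself.

```python
# Slant stack over subsurface offset h: for each trial angle gamma, sum the
# offset-domain CIG I(z, h) along lines z(h) = z0 - h * tan(gamma).
import numpy as np

def offset_to_angle_cig(cig, z, h, angles_deg):
    """cig: (nz, nh) subsurface-offset CIG; returns (nz, nangles) ADCIG."""
    adcig = np.zeros((len(z), len(angles_deg)))
    for ia, ang in enumerate(np.deg2rad(angles_deg)):
        p = np.tan(ang)
        for ih, hh in enumerate(h):
            zs = z - p * hh                                   # depths sampled for this offset
            adcig[:, ia] += np.interp(zs, z, cig[:, ih], left=0.0, right=0.0)
    return adcig

# Toy CIG: a perfectly focused event (all energy at h = 0) at z = 500 m.
nz, nh = 200, 41
z = np.arange(nz) * 5.0
h = (np.arange(nh) - nh // 2) * 10.0
cig = np.zeros((nz, nh))
cig[100, nh // 2] = 1.0
ad = offset_to_angle_cig(cig, z, h, angles_deg=np.arange(-40, 41, 5))
print(ad[100])   # flat (constant) across angle, as expected for a focused event
```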


Geophysics ◽  
2003 ◽  
Vol 68 (4) ◽  
pp. 1331-1339 ◽  
Author(s):  
Tariq Alkhalifah

Prestack migration velocity analysis in the time domain reduces the velocity-depth ambiguity that usually hampers prestack depth-migration velocity analysis. In prestack τ migration velocity analysis, we keep the interval velocity model and the output images in vertical time. This allows us to avoid placing reflectors at erroneous depths during the velocity analysis process and, thus, to avoid slowing its convergence toward the true velocity model. Using a 1D velocity update scheme, the prestack τ migration velocity analysis performed well on synthetic data from a model with a complex near-surface velocity; accurate velocity information and images were obtained with this time-domain method. Problems occurred only in resolving a thin layer, where the low resolution and fold of the synthetic data made it practically impossible to estimate the velocity accurately. The 1D approach also provided reasonable results for synthetic data from the Marmousi model. Despite the complexity of this model, the τ-domain implementation of prestack migration velocity analysis converged to a generally reasonable result, which includes properly imaging the elusive top-of-the-reservoir layer.
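
As a generic example of interval-velocity estimation carried out entirely in vertical time, a textbook Dix-type conversion from rms velocities picked at vertical times is sketched below; this is a standard time-domain relation, not necessarily the 1D update scheme used in the paper.

```python
# Dix-type interval velocities from rms velocities picked at vertical (two-way)
# times. Shown only as an example of a 1D velocity estimate living in vertical
# time; the paper's tau-domain update scheme is not reproduced here.
import numpy as np

def dix_interval_velocity(t0, v_rms):
    """t0: vertical two-way times of picks (s); v_rms: rms velocities (m/s)."""
    t0 = np.asarray(t0, dtype=float)
    v_rms = np.asarray(v_rms, dtype=float)
    num = np.diff(v_rms ** 2 * t0)
    den = np.diff(t0)
    return np.sqrt(np.maximum(num / den, 0.0))   # guard against negative Dix values

print(dix_interval_velocity([0.5, 1.0, 1.6], [1800.0, 2000.0, 2300.0]))
```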


Geophysics ◽  
2019 ◽  
Vol 85 (1) ◽  
pp. V33-V43 ◽  
Author(s):  
Min Jun Park ◽  
Mauricio D. Sacchi

Velocity analysis typically requires significant manual effort and can be time-consuming when performed manually; methods have therefore been proposed to automate the process. We have developed a convolutional neural network (CNN) to estimate stacking velocities directly from the semblance. Our CNN model uses two images as a single input for training: an entire semblance (guide image) and a small patch (target image) extracted from the semblance at a specific time step. The labels for each input are the root-mean-square velocities. We generate the training data set using synthetic data. After training the CNN model with synthetic data, we test the trained model on another synthetic data set that was not used in the training step. The results indicate that the model can predict a consistent velocity model. We also notice that when the input data are very different from those used for training, the CNN model rarely picks the correct velocities. In this case, we adopt transfer learning to update the trained model (base model) with a small portion of the target data and thereby improve the accuracy of the predicted velocity model. A marine data set from the Gulf of Mexico is used to validate the new model; the updated model performed a reasonable velocity analysis in seconds.
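
Since the network's input is a semblance panel, a bare-bones NMO semblance computation from a CMP gather is sketched below so the "guide image" and "target patch" have a concrete origin. The window length, lack of tapering, and the single-event toy gather are simplifying assumptions.

```python
# Bare-bones NMO semblance panel from a CMP gather: for each trial velocity,
# NMO-correct the gather and compute the classical semblance ratio over a
# sliding time window.
import numpy as np

def semblance(cmp_gather, offsets, dt, velocities, window=11):
    """cmp_gather: (nt, noffsets); returns (nt, nvel) semblance panel."""
    nt, nx = cmp_gather.shape
    t0 = np.arange(nt) * dt
    panel = np.zeros((nt, len(velocities)))
    for iv, v in enumerate(velocities):
        nmo = np.zeros_like(cmp_gather)
        for ix, x in enumerate(offsets):
            t = np.sqrt(t0 ** 2 + (x / v) ** 2)                    # NMO traveltime
            nmo[:, ix] = np.interp(t, t0, cmp_gather[:, ix], left=0.0, right=0.0)
        num = np.convolve(nmo.sum(axis=1) ** 2, np.ones(window), mode="same")
        den = np.convolve((nmo ** 2).sum(axis=1), np.ones(window), mode="same")
        panel[:, iv] = num / (nx * den + 1e-12)
    return panel

# Toy gather: one hyperbolic event with true stacking velocity 2000 m/s.
dt, nt = 0.004, 500
offsets = np.arange(0.0, 2000.0, 100.0)
gather = np.zeros((nt, len(offsets)))
for ix, x in enumerate(offsets):
    t_evt = np.sqrt(1.0 ** 2 + (x / 2000.0) ** 2)
    gather[int(round(t_evt / dt)), ix] = 1.0
vels = np.arange(1500.0, 2600.0, 100.0)
s = semblance(gather, offsets, dt, vels)
print(vels[s[int(1.0 / dt)].argmax()])   # should peak near 2000 m/s at t0 = 1 s
```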

