Seismic demigration/migration in the curvelet domain

Geophysics, 2008, Vol. 73 (2), pp. S35-S46. Author(s): Hervé Chauris, Truong Nguyen

Curvelets can represent local plane waves. They efficiently decompose seismic images and possibly imaging operators. We study how curvelets are distorted after demigration followed by migration in a different velocity model. We show that for small local velocity perturbations, the demigration/migration is reduced to a simple morphing of the initial curvelet. The derivation of the expected curvature of the curvelets shows that it is easier to sparsify the demigration/migration operator than the migration operator. An application on a 2D synthetic data set, generated in a smooth heterogeneous velocity model and with a complex reflectivity, demonstrates the usefulness of curvelets to predict what a migrated image would become in a locally different velocity model without the need for remigrating the full input data set. Curvelets are thus well suited to study the sensitivity of a prestack depth-migrated image with respect to the heterogeneous velocity model used for migration.
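The central claim above is that, for a small local velocity perturbation, demigration followed by remigration acts on each curvelet as a simple geometric morphing, essentially a shift plus a rotation of a local plane wave. The sketch below illustrates only that idea on a Gaussian-windowed plane-wave atom; the atom and the shift/rotation amounts are our illustrative assumptions, not the paper's operator.

```python
# Illustrative sketch only: "morphing" a curvelet-like atom (a Gaussian-
# windowed local plane wave) by a vertical shift plus a rotation, the
# first-order effect the abstract attributes to demigration/migration
# under a small velocity perturbation. All parameter values are assumptions.
import numpy as np
from scipy.ndimage import rotate, shift

def plane_wave_atom(n=128, wavenumber=0.15, dip_deg=30.0, sigma=12.0):
    """Stand-in for a single curvelet: a windowed local plane wave."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    d = np.deg2rad(dip_deg)
    phase = wavenumber * (x * np.cos(d) + y * np.sin(d))
    window = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return window * np.cos(2.0 * np.pi * phase)

def morph(atom, dz=4.0, ddip_deg=-3.0):
    """Depth shift (the event moves) plus a dip rotation of the atom."""
    moved = shift(atom, (dz, 0.0), order=3, mode="constant")
    return rotate(moved, ddip_deg, reshape=False, order=3, mode="constant")

predicted = morph(plane_wave_atom())  # the image atom in the perturbed model
```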

Geophysics, 2005, Vol. 70 (1), pp. S1-S17. Author(s): Alison E. Malcolm, Maarten V. de Hoop, Jérôme H. Le Rousseau

Reflection seismic data continuation is the computation of data at source and receiver locations that differ from those in the original data, using whatever data are available. We develop a general theory of data continuation in the presence of caustics and illustrate it with three examples: dip moveout (DMO), azimuth moveout (AMO), and offset continuation. This theory does not require knowledge of the reflector positions. We construct the output data set from the input through the composition of three operators: an imaging operator, a modeling operator, and a restriction operator. This results in a single operator that maps directly from the input data to the desired output data. We use the calculus of Fourier integral operators to develop this theory in the presence of caustics. For both DMO and AMO, we compute impulse responses in a constant-velocity model and in a more complicated model in which caustics arise. This analysis reveals errors that can be introduced by assuming, for example, a model with a constant vertical velocity gradient when the true model is laterally heterogeneous. Data continuation uses as input a subset (common offset, common angle) of the available data, which may introduce artifacts in the continued data. One could suppress these artifacts by stacking over a neighborhood of input data (using a small range of offsets or angles, for example). We test data continuation on synthetic data from a model known to generate imaging artifacts. We show that stacking over input scattering angles suppresses artifacts in the continued data.
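The composition described here (restriction after modeling after imaging, collapsed into one operator that maps input data directly to continued data) can be made concrete with placeholder linear operators. The matrices below are random stand-ins for the actual Fourier integral operators; only the composition structure is the point.

```python
# Sketch of the three-operator composition; the matrices are placeholders
# (assumptions), not actual imaging/modeling/restriction operators.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_img, n_out = 60, 40, 25            # sizes are arbitrary

imaging = rng.normal(size=(n_img, n_in))   # input data -> image (no reflector knowledge needed)
modeling = rng.normal(size=(n_in, n_img))  # image -> data over the full geometry
restriction = np.eye(n_in)[:n_out]         # keep only the desired output source/receiver locations

# Single operator mapping the input data directly to the continued output data:
continuation = restriction @ modeling @ imaging
d_out = continuation @ rng.normal(size=n_in)
```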


Geophysics, 2020, Vol. 85 (1), pp. V33-V43. Author(s): Min Jun Park, Mauricio D. Sacchi

Velocity analysis can be a time-consuming task when performed manually, and although methods have been proposed to automate it, these typically still require significant manual effort. We have developed a convolutional neural network (CNN) to estimate stacking velocities directly from the semblance. Our CNN model uses two images as a single training input: one is the entire semblance (guide image), and the other is a small patch (target image) extracted from the semblance at a specific time step. The label for each input pair is the root-mean-square velocity. We generate the training data set using synthetic data. After training the CNN model with synthetic data, we test the trained model on another synthetic data set that was not used in the training step. The results indicate that the model can predict a consistent velocity model. We also noticed that when the input data differ markedly from those used for training, the CNN model can hardly pick the correct velocities. In this case, we adopt transfer learning to update the trained model (base model) with a small portion of the target data to improve the accuracy of the predicted velocity model. A marine data set from the Gulf of Mexico is used to validate our new model. The updated model performed a reasonable velocity analysis in seconds.
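A minimal two-input regression network in the spirit of the description above (full semblance as a guide image, a small patch at one time step as the target, one RMS velocity as the label) might look like the following PyTorch sketch. The architecture, sizes, and layer counts are our assumptions; the paper specifies its own.

```python
# Minimal sketch, not the paper's architecture: two image inputs -> one
# RMS velocity. For transfer learning, one would freeze guide_net/patch_net
# and fine-tune the head on a small portion of the target data.
import torch
import torch.nn as nn

class TwoInputVelocityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.guide_net = nn.Sequential(             # entire semblance panel
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 16*4*4 features
        self.patch_net = nn.Sequential(             # patch at one time step
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())   # -> 8*4*4 features
        self.head = nn.Linear(16 * 16 + 8 * 16, 1)   # regress one velocity

    def forward(self, guide, patch):
        feats = torch.cat([self.guide_net(guide), self.patch_net(patch)], dim=1)
        return self.head(feats)

model = TwoInputVelocityCNN()
vrms = model(torch.randn(2, 1, 128, 64), torch.randn(2, 1, 16, 64))
```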


Geophysical Journal International, 2019, Vol. 217 (3), pp. 1727-1741. Author(s): D. W. Vasco, Seiji Nakagawa, Petr Petrov, Greg Newman

We introduce a new approach for locating earthquakes using arrival times derived from waveforms. The most costly computational step of the algorithm scales as the number of stations in the active seismographic network. In this approach, a variation on existing grid-search methods, a series of full-waveform simulations is conducted for all receiver locations, with sources positioned successively at each station. The traveltime field over the region of interest is calculated by applying a phase-picking algorithm to the numerical wavefields produced by each simulation. An event is located by subtracting the stored traveltime field from the arrival time at each station. This provides a shifted and time-reversed traveltime field for each station. The shifted and time-reversed fields all approach the origin time of the event at the source location. The mean or median value at the source location thus approximates the event origin time. Measures of dispersion about this mean or median time at each grid point, such as the sample standard error and the average deviation, are minimized at the correct source position. Uncertainty in the event position is given by the contours of standard error defined over the grid. An application of this technique to a synthetic data set indicates that the approach provides stable locations even when the traveltimes are contaminated by velocity model errors and by additive random noise containing a significant number of outliers. The waveform-based method outperforms one based on the eikonal equation for a velocity model with rapid spatial variations in properties due to layering. A comparison with conventional location algorithms in both laboratory and field settings demonstrates that the technique performs at least as well as existing methods.
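Once the traveltime fields have been simulated and stored, the location step reduces to simple array arithmetic over the grid. A minimal sketch of that step follows, with synthetic placeholder fields; in the method itself they come from the full-waveform simulations.

```python
# Hedged sketch of the grid-search step: shift and time-reverse each
# station's stored traveltime field, then find the grid node where the
# fields agree (minimum dispersion). Placeholder inputs, not real data.
import numpy as np

def locate(arrival_times, traveltime_fields):
    """arrival_times: (n_sta,); traveltime_fields: (n_sta, nz, nx)."""
    origin_fields = arrival_times[:, None, None] - traveltime_fields
    spread = origin_fields.std(axis=0)             # dispersion across stations
    iz, ix = np.unravel_index(np.argmin(spread), spread.shape)
    t0 = np.median(origin_fields[:, iz, ix])       # robust origin-time estimate
    return (iz, ix), t0

# Toy check: at the true node, arrival = t0 + traveltime, so the spread -> 0.
rng = np.random.default_rng(0)
fields = rng.uniform(1.0, 3.0, size=(5, 20, 20))
picks = 4.0 + fields[:, 7, 12]
print(locate(picks, fields))                       # ((7, 12), 4.0)
```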


Geophysics, 2019, Vol. 84 (3), pp. R411-R427. Author(s): Gang Yao, Nuno V. da Silva, Michael Warner, Di Wu, Chenhao Yang

Full-waveform inversion (FWI) is a promising technique for recovering earth models in exploration geophysics and global seismology. FWI is generally formulated as the minimization of an objective function, defined as the L2-norm of the data residuals. The nonconvex nature of this objective function is one of the main obstacles to the successful application of FWI. A key manifestation of this nonconvexity is cycle skipping, which happens if the predicted data are more than half a cycle away from the recorded data. We have developed the concept of intermediate data for tackling cycle skipping. This intermediate data set is created to sit between the predicted and recorded data, and it is less than half a cycle away from the predicted data. Inverting the intermediate data rather than the cycle-skipped recorded data can then circumvent cycle skipping. We applied this concept to invert cycle-skipped first arrivals. First, we picked the first breaks of the predicted data and the recorded data. Second, we linearly scaled the time difference between the two first breaks of each shot down into a series of time shifts, one for each trace in the shot, with the maximum shift kept below half a cycle. Third, we shifted the predicted data by the corresponding time shifts to create the intermediate data. Finally, we inverted the intermediate data rather than the recorded data. Because the intermediate data are not cycle-skipped and contain the traveltime information of the recorded data, FWI with intermediate data updates the background velocity model in the correct direction. It thus produces a background velocity model accurate enough for conventional FWI to rebuild the intermediate- and short-wavelength components of the velocity model. Our numerical examples using synthetic data validate the intermediate-data concept for tackling cycle skipping and demonstrate its effectiveness when applied to first arrivals.
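The construction of the intermediate data for one shot can be sketched directly from the three steps above. The scaling rule below (shrink all first-break differences so the largest stays under half a dominant period) is our reading of the description; the exact scaling in the paper may differ.

```python
# Sketch, with an assumed scaling rule: shift predicted traces toward the
# recorded first breaks, capping the largest shift at half a dominant period.
import numpy as np

def intermediate_data(pred, fb_pred, fb_rec, dt, f_dom):
    """pred: (n_traces, n_t) predicted gather; fb_*: first-break times (s)."""
    half_cycle = 0.5 / f_dom
    dt_fb = fb_rec - fb_pred                       # traveltime misfit per trace
    scale = min(1.0, half_cycle / (np.abs(dt_fb).max() + 1e-12))
    t = np.arange(pred.shape[1]) * dt
    out = np.empty_like(pred)
    for i, s in enumerate(dt_fb * scale):          # out(t) = pred(t - s)
        out[i] = np.interp(t, t + s, pred[i], left=0.0, right=0.0)
    return out
```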


Interpretation, 2017, Vol. 5 (3), pp. SJ81-SJ90. Author(s): Kainan Wang, Jesse Lomask, Felix Segovia

Well-log-to-seismic tying is a key step in many interpretation workflows for oil and gas exploration. Synthetic seismic traces from the wells are often manually tied to seismic data; this process can be very time consuming and, in some cases, inaccurate. Automatic methods, such as dynamic time warping (DTW), can match synthetic traces to seismic data. Although these methods are extremely fast, they tend to create interval velocities that are not geologically realistic. We have modified DTW to create a blocked dynamic warping (BDW) method. BDW generates an automatic, optimal well tie that honors geologically consistent velocity constraints and consequently yields updated velocities that are more realistic than those from other methods. BDW constrains the updated velocity to be constant or linearly variable inside each geologic layer. With an optimal correlation between synthetic seismograms and surface seismic data, the algorithm returns an automatically updated time-depth curve and an updated interval-velocity model that retains the original geologic velocity boundaries. In other words, the algorithm finds the optimal solution for tying the synthetic to the seismic data while restricting interval-velocity changes to coincide with the initial input blocking. We demonstrate the BDW technique on a synthetic example and a field data set.
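For reference, the unconstrained DTW core that BDW builds on fits in a few lines; the blocked constraint (constant or linearly varying velocity inside each geologic layer) would be imposed on top of this alignment and is not implemented in this sketch.

```python
# Plain DTW between a synthetic trace and a seismic trace: a sketch of the
# starting point for BDW, without the blocked velocity constraint.
import numpy as np

def dtw_path(synthetic, seismic):
    n, m = len(synthetic), len(seismic)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):                      # accumulate alignment cost
        for j in range(1, m + 1):
            cost = (synthetic[i - 1] - seismic[j - 1]) ** 2
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m                          # backtrack the optimal path
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]                              # alignment pairs, in order
```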


Geophysics, 1993, Vol. 58 (1), pp. 91-100. Author(s): Claude F. Lafond, Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
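The final back-projection step, spreading each residual velocity correction along its raypath in the manner of tomographic reconstruction, can be sketched as a path-length-weighted accumulation over model cells. The data structures are our assumptions.

```python
# Sketch of tomographic-style back projection: each residual correction is
# distributed over the cells its ray crosses, weighted by segment length.
import numpy as np

def backproject(residuals, ray_cells, ray_lengths, model_shape):
    """residuals: (n_rays,) corrections; ray_cells[i]: flat cell indices
    crossed by ray i (unique per ray); ray_lengths[i]: segment lengths."""
    num = np.zeros(np.prod(model_shape))
    den = np.zeros(np.prod(model_shape))
    for r, cells, lengths in zip(residuals, ray_cells, ray_lengths):
        num[cells] += r * lengths
        den[cells] += lengths
    update = np.where(den > 0.0, num / np.maximum(den, 1e-12), 0.0)
    return update.reshape(model_shape)             # velocity update per cell
```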


Geophysics, 2011, Vol. 76 (5), pp. WB109-WB118. Author(s): Jonathan Liu, Gopal Palacharla

Kirchhoff-type prestack depth migration is the most popular method for outputting offset gathers for velocity-model updating because of its flexibility and efficiency. However, conventional implementations of Kirchhoff migration use only single arrivals, which limits their ability to image complex structures such as subsalt areas. We use the beam methodology to develop a multiarrival Kirchhoff beam migration. The theory and algorithm of our beam migration are analogous to those of Gaussian beam migration, but we focus on attaining kinematic accuracy and implementation efficiency. The input wavefield of every common-offset panel is decomposed into local plane waves at beam centers on the acquisition surface by local slant stacking. Each plane wave contributes a potential single arrival to Kirchhoff migration. In this way, our method is able to handle multiarrivals caused by model complexity and, therefore, to overcome the limitation of conventional single-arrival Kirchhoff migration. The choice of beam width is critical to the implementation of beam migration. We provide a formula for the optimal beam width that achieves both accuracy and efficiency when the velocity model is reasonably smooth. The resulting structural imaging in subsalt and other structurally complex areas is of better quality than that from single-arrival Kirchhoff migration.
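The local slant stacking that feeds the beams is the one step here that is easy to make concrete. Below is a minimal tau-p decomposition at a single beam center; the trace geometry and normalization are assumptions.

```python
# Sketch: decompose traces near one beam center into local plane waves
# indexed by ray parameter p (a simple tau-p / slant stack).
import numpy as np

def local_slant_stack(traces, offsets, dt, p_values):
    """traces: (n_traces, n_t); offsets: trace distances from the beam
    center; returns a tau-p panel of shape (n_p, n_t)."""
    n_tr, n_t = traces.shape
    t = np.arange(n_t) * dt
    panel = np.zeros((len(p_values), n_t))
    for ip, p in enumerate(p_values):
        for x, trace in zip(offsets, traces):      # sum along t = tau + p*x
            panel[ip] += np.interp(t + p * x, t, trace, left=0.0, right=0.0)
    return panel / n_tr
```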


Geophysics, 2010, Vol. 75 (6), pp. WB95-WB102. Author(s): William Curry

Many interpolation methods are effective with regularly sampled or randomly sampled data, whereas the spatial sampling of seismic reflectivity data is typically neither regular nor random. Fourier-radial adaptive thresholding (FRAT) is a sparsity-promoting method in which the interpolated result is sparse in the frequency-wavenumber (f-k) domain and is coherent in a manner consistent with a collection of unaliased plane waves. The sparsity and the desired pattern in the f-k domain are promoted by iterative soft thresholding and adaptive weighting; data in the f-k domain are transformed to polar coordinates and then low-pass filtered along the radial axis to generate the nonlinear weight. FRAT interpolates data that are randomly sampled and aliased, i.e., where the minimum distance between adjacent traces is greater than the Nyquist sampling interval. A conventional approach to this problem is to apply a cascade of two procedures: first a sparsity-based method, such as projection onto convex sets (POCS), to interpolate the data onto a regularly sampled but aliased grid, followed by a "beyond aliasing" approach, such as Gülünay f-k interpolation, to further interpolate the regularly sampled POCS result. In a simple synthetic example of two dipping plane waves with irregular, aliased sampling, FRAT outperformed this cascaded approach. In another experiment, the Sigsbee2A prestack synthetic data set was sampled using the source geometry from a 3D offshore survey, where POCS has difficulty with the semiregularity of the sampling pattern. FRAT produced results superior to those of POCS both before and after the data were migrated.
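One weighted soft-thresholding pass of a FRAT-like iteration can be sketched as follows. The true FRAT weight is built adaptively by low-pass filtering the spectrum along the radial axis in polar coordinates; the fixed isotropic taper below is a crude stand-in used only to keep the sketch self-contained.

```python
# Sketch of one sparsity-promoting iteration in the f-k domain. The weight
# here is a fixed radial taper, not FRAT's adaptive polar-domain weight.
import numpy as np

def radial_taper(shape, cutoff=0.35):
    f = np.fft.fftfreq(shape[0])[:, None]
    k = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-(np.hypot(f, k) / cutoff) ** 2)

def weighted_soft_threshold_pass(data, mask, thresh):
    """data: traces with zeros at missing positions; mask: 1 where sampled."""
    spec = np.fft.fft2(data)
    mag = np.abs(spec)
    spec *= radial_taper(data.shape) * np.maximum(mag - thresh, 0.0) / (mag + 1e-12)
    est = np.real(np.fft.ifft2(spec))
    return np.where(mask.astype(bool), data, est)  # reinsert observed samples
```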


Geophysics, 2011, Vol. 76 (5), pp. WB191-WB207. Author(s): Yaxun Tang, Biondo Biondi

We present a new strategy for efficient wave-equation migration-velocity analysis in complex geological settings. The proposed strategy has two main steps: simulating a new data set using an initial unfocused image and performing wavefield-based tomography using this data set. We demonstrate that the new data set can be synthesized by applying generalized Born wavefield modeling to a specific target region where the velocities are inaccurate. We also show that the new data set can be much smaller than the original one because of the target-oriented modeling strategy, yet it contains the velocity information necessary for successful velocity analysis. These features make the new data set well suited for target-oriented, fast, and interactive velocity model building. We demonstrate the performance of our method on both a synthetic data set and a field data set acquired in the Gulf of Mexico, where we update the subsalt velocity in a target-oriented fashion and obtain a subsalt image with improved continuity, a higher signal-to-noise ratio, and flatter angle-domain common-image gathers.
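The structure of the first step, synthesizing a small target-only data set by applying a modeling operator to the masked, unfocused image, can be shown with a placeholder linear operator. The random matrix and sizes below are assumptions; the actual operator is generalized Born wavefield modeling.

```python
# Placeholder sketch of target-oriented data synthesis: model data only
# from the target portion of the unfocused image. L is an assumption.
import numpy as np

rng = np.random.default_rng(1)
n_data, n_image = 200, 120
L = rng.normal(size=(n_data, n_image))     # stand-in for generalized Born modeling

target_mask = np.zeros(n_image)
target_mask[40:80] = 1.0                   # region where velocities are inaccurate

image = rng.normal(size=n_image)           # initial (unfocused) image
d_target = L @ (target_mask * image)       # smaller data set for tomography
```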


Geophysics, 2017, Vol. 82 (4), pp. S307-S314. Author(s): Yibo Wang, Yikang Zheng, Qingfeng Xue, Xu Chang, Tong W. Fei, ...

In the migration of multiples, reverse time migration (RTM) is superior to other migration algorithms because it can handle steeply dipping structures and offer high-resolution images of the complex subsurface. However, RTM results based on the two-way wave equation contain high-amplitude, low-frequency noise and false images generated by improper wave paths in migration velocity models with sharp interfaces or strong velocity gradients. To improve the imaging quality of RTM of multiples, we separate the upgoing and downgoing waves in the propagation of the source and receiver wavefields, using a complex function involving the Hilbert transform for wavefield decomposition. Our approach is cost-effective and avoids the large storage of wavefield snapshots required by the conventional wavefield-separation technique. We applied migration of multiples with wavefield decomposition to a simple two-layer model and the Sigsbee 2B synthetic data set. The results demonstrate that the proposed approach significantly improves the image generated by migration of multiples.
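The decomposition idea, using the Hilbert transform to form an analytic (complex) wavefield and then splitting it by the sign of the vertical wavenumber, can be sketched as below. Sign conventions and array layout are our assumptions.

```python
# Sketch of up/down separation via the analytic signal: the Hilbert
# transform in time suppresses negative frequencies, and the sign of the
# vertical wavenumber then splits up- from downgoing energy.
import numpy as np
from scipy.signal import hilbert

def updown_separate(wavefield):
    """wavefield: (n_t, n_z) for one lateral position. Returns (up, down);
    up + down reconstructs the input. Sign convention is an assumption."""
    analytic = hilbert(wavefield, axis=0)          # one-sided in frequency
    spec = np.fft.fft(analytic, axis=1)            # to vertical wavenumber kz
    kz = np.fft.fftfreq(wavefield.shape[1])
    down = np.real(np.fft.ifft(spec * (kz[None, :] >= 0.0), axis=1))
    up = np.real(np.fft.ifft(spec * (kz[None, :] < 0.0), axis=1))
    return up, down
```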

