First arrival Q tomography based on an adjoint-state method

Author(s): Xinwei Huang, Zhenbo Guo, Huawei Zhou, Yubo Yue

Under the assumption of invariant ray paths in a weakly dissipative (high quality factor Q) subsurface medium, a tomographic inversion approach composed of two cascaded applications of first-arrival traveltime and Q tomography is proposed for compensating amplitude loss caused by near-surface anomalies, such as unconsolidated soils or overburden gas clouds. To improve computational efficiency, both tomography methods are implemented with an adjoint-state technique. First-arrival traveltime tomography is performed first to provide an inverted velocity model, which serves as one of the inputs to the subsequent first-arrival Q tomography. The synthetic first breaks generated from the inverted velocity model are then used as a stable guide for locating the time windows of the first-arrival waveforms that contain the attenuation-related time information. The attenuated time is estimated through a logarithmic spectral-ratio linear regression that accounts for the frequency-dependent propagation responses of different wave types. The estimated attenuated times are applied to reference signals to generate synthetic attenuated seismic data in the time domain, and their discrepancies with the real data are evaluated using similarity coefficients. The estimates with the larger similarity values are selected as the optimal attenuated-time inputs for the subsequent Q tomographic inversion. Examples of both synthetic and field data reveal the feasibility and potential of this method.
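The logarithmic spectral-ratio regression at the core of the attenuated-time estimate can be sketched in a few lines. The snippet below is a minimal illustration under simplifying assumptions (a single windowed first-arrival trace, a chosen reference signal, and an arbitrary 5-60 Hz regression band); it is not the authors' implementation.

```python
# Minimal sketch: estimate the attenuated time t* from the slope of
# ln|A(f)/A_ref(f)| = -pi * f * t* + const over a chosen frequency band.
# The window limits, band, and names are illustrative assumptions.
import numpy as np

def estimate_t_star(trace, reference, dt, fmin=5.0, fmax=60.0):
    """Return t* from a windowed first-arrival trace and a reference signal."""
    n = max(len(trace), len(reference))
    freqs = np.fft.rfftfreq(n, d=dt)
    amp = np.abs(np.fft.rfft(trace, n))
    amp_ref = np.abs(np.fft.rfft(reference, n))

    band = (freqs >= fmin) & (freqs <= fmax) & (amp > 0) & (amp_ref > 0)
    log_ratio = np.log(amp[band] / amp_ref[band])

    slope, _ = np.polyfit(freqs[band], log_ratio, 1)   # slope = -pi * t*
    return -slope / np.pi
```

Synthetic attenuated data for the similarity test could then be generated by applying the corresponding exp(-pi f t*) decay to the reference spectrum before comparison with the recorded waveforms.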

Geophysics, 2011, Vol. 76 (5), pp. B187-B198
Author(s): Kumar Ramachandran, Gilles Bellefleur, Tom Brent, Michael Riedel, Scott Dallimore

A 3D seismic survey (Mallik 3D), covering [Formula: see text] in the Mackenzie Delta area of Canada's north, was conducted by industry in 2002. Numerous lakes and marine inundation create a complex near-surface structure in the permafrost terrain. Much of the near subsurface remains frozen, but significant melt zones exist, particularly beneath perennially unfrozen water bodies. The result is an irregular distribution of permafrost ice that creates a complex pattern of low- and high-frequency near-surface velocity variations, which induce significant traveltime distortions in surface seismic data. A high-resolution 3D traveltime tomography study was carried out to map the permafrost velocity structure using first-arrival traveltimes picked from the 3D seismic shot records. Approximately 900,000 traveltime picks from 3167 shots were used in the inversion. Tomographic inversion of the first-arrival traveltimes resulted in a smooth velocity model for the upper 200 m of the subsurface. Ray coverage in the model is excellent down to 200 m, providing effective control for estimating velocities through tomographic inversion. Horizontal and vertical checkerboard resolution tests confirm the robustness of the velocity model in detailing small-scale velocity variations. Well velocities were used to validate the tomographic velocities. The tomographic velocities do not show a systematic correlation with the well velocities. The velocity model clearly images the permafrost velocity structure in the lateral and vertical directions. It is inferred from the velocity model that the permafrost structure in the near subsurface is discontinuous. Extensions of surface water bodies in depth, characterized by low P-wave velocities, are well imaged by the velocity model. Deep lakes with unfrozen water, inferred from the tomographic velocity model, correlate with areas of strong amplitude blanking and frequency attenuation observed in the processed reflection seismic stack sections.
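As a side note on the checkerboard resolution tests mentioned above, the perturbation model for such a test can be built along the following lines; the grid dimensions, cell size, and +/-5% amplitude are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch: build an alternating +/- checkerboard perturbation of a
# background velocity model; synthetic traveltimes computed through it would
# then be inverted to check how well the pattern is recovered.
import numpy as np

def checkerboard(shape, cell, amplitude=0.05):
    """Multiplicative perturbation of alternating +/- blocks of `cell` nodes."""
    idx = [np.arange(n) // cell for n in shape]
    ix, iy, iz = np.meshgrid(*idx, indexing="ij")
    sign = np.where((ix + iy + iz) % 2 == 0, 1.0, -1.0)
    return 1.0 + amplitude * sign

# Hypothetical background: upper ~200 m on a coarse grid (values are made up)
background = np.full((100, 100, 40), 2500.0)          # m/s
perturbed = background * checkerboard(background.shape, cell=10)
```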


Geophysics, 2021, pp. 1-50
Author(s): German Garabito, José Silas dos Santos Silva, Williams Lima

In land seismic data processing, the prestack time migration (PSTM) image remains the standard imaging output, but a reliable migrated image of the subsurface depends on the accuracy of the migration velocity model. We have adopted two new algorithms for time-domain migration velocity analysis based on wavefield attributes of the common-reflection-surface (CRS) stack method. These attributes, extracted from multicoverage data, have been successfully applied to build velocity models in the depth domain through tomographic inversion of the normal-incidence-point (NIP) wave. However, there is no practical and reliable method for determining an accurate and geologically consistent time-migration velocity model from these CRS attributes. We introduce an interactive method to determine the migration velocity model in the time domain, based on the application of NIP-wave attributes and the CRS stacking operator for diffractions, to generate synthetic diffractions on the reflection events of the zero-offset (ZO) CRS stacked section. Poststack time migration (post-STM) is then applied to the ZO data containing the diffractions using a set of constant velocities, and the migration velocities are selected through a focusing analysis of the simulated diffractions. We also introduce an algorithm to automatically calculate the migration velocity model from the CRS attributes picked for the main reflection events in the ZO data. We evaluate the precision of our diffraction-focusing velocity analysis and the automatic velocity-calculation algorithm using two synthetic models. We also applied them to real 2D land data of low quality and low fold to estimate the time-domain migration velocity model. The velocity models obtained through our methods were validated by applying them in the Kirchhoff PSTM of the real data, in which the velocity model from the diffraction-focusing analysis provided significant improvements in the quality of the migrated image compared with the legacy image and with the migrated image obtained using the automatically calculated velocity model.
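The constant-velocity focusing scan described above can be illustrated with a toy zero-offset Kirchhoff-style migration and a simple sparsity measure of the migrated image. The aperture, focusing criterion, and geometry below are assumptions for illustration only and do not reproduce the CRS-based procedure of the paper.

```python
# Minimal sketch: migrate a zero-offset section with each trial velocity by
# plain diffraction summation, then rank velocities by how well the image
# focuses (a kurtosis-like sparsity measure).
import numpy as np

def const_velocity_post_stm(section, dx, dt, velocity, aperture=50):
    """Zero-offset Kirchhoff time migration with a single constant velocity."""
    nx, nt = section.shape
    migrated = np.zeros_like(section, dtype=float)
    t0 = np.arange(nt) * dt
    for ix0 in range(nx):
        for ix in range(max(0, ix0 - aperture), min(nx, ix0 + aperture + 1)):
            offset = (ix - ix0) * dx
            # Two-way diffraction traveltime for zero-offset geometry
            t = np.sqrt(t0**2 + (2.0 * offset / velocity) ** 2)
            it = np.rint(t / dt).astype(int)
            valid = it < nt
            migrated[ix0, valid] += section[ix, it[valid]]
    return migrated

def focusing_measure(image):
    """Better-focused (sparser) images score higher."""
    a = image.ravel()
    return np.sum(a**4) / (np.sum(a**2) ** 2 + 1e-12)

def scan_velocities(section, dx, dt, velocities):
    scores = [focusing_measure(const_velocity_post_stm(section, dx, dt, v))
              for v in velocities]
    return velocities[int(np.argmax(scores))]
```

In practice the focusing measure would be evaluated in windows around the simulated diffractions rather than over the whole image.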


Geophysics, 2019, Vol. 84 (4), pp. Q27-Q36
Author(s): Lele Zhang, Jan Thorbecke, Kees Wapenaar, Evert Slob

We have developed a scheme that retrieves primary reflections in the two-way traveltime domain by filtering the data. The data have their own filter that removes internal multiple reflections, whereas the amplitudes of the retrieved primary reflections are compensated for two-way transmission losses. Application of the filter does not require any model information. It consists of convolutions and correlations of the data with itself. A truncation in the time domain is applied after each convolution or correlation. The retrieved data set can be used as the input to construct a better velocity model than the one that would be obtained by working directly with the original data and to construct an enhanced subsurface image. Two 2D numerical examples indicate the effectiveness of the method. We have studied bandwidth limitations by analyzing the effects of a thin layer. The presence of refracted and scattered waves is a known limitation of the method, and we studied it as well. Our analysis indicates that a thin layer is treated as a more complicated reflector, and internal multiple reflections related to the thin layer are properly removed. We found that the presence of refracted and scattered waves generates artifacts in the retrieved data.
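The abstract describes the filter as convolutions and correlations of the data with itself, each followed by a time-domain truncation. The sketch below only illustrates those primitive operations on single traces; it is not the full primary-retrieval scheme, and the truncation time here is arbitrary.

```python
# Minimal sketch of the building blocks: time-domain convolution and
# correlation of traces, each followed by zeroing beyond a truncation time.
import numpy as np

def convolve_and_truncate(a, b, dt, t_trunc):
    """Causal convolution of two traces, zeroed beyond the truncation time."""
    out = np.convolve(a, b)[: len(a)] * dt
    out[int(t_trunc / dt):] = 0.0
    return out

def correlate_and_truncate(a, b, dt, t_trunc):
    """Cross-correlation, non-negative lags only, zeroed beyond t_trunc."""
    full = np.correlate(a, b, mode="full") * dt
    out = full[len(b) - 1:][: len(a)]
    out[int(t_trunc / dt):] = 0.0
    return out
```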


Author(s): Gleb S. Chernyshov, Anton A. Duchkov, Aleksander A. Nikitin, Ivan Yu. Kulakov, ...

The problem of tomographic inversion is non-unique and requires regularization to be solved in a stable manner. It is highly nontrivial to choose between the various regularization approaches or to tune the regularization parameters themselves. We study the influence of one particular regularization parameter on the resolution and accuracy of tomographic inversion for near-surface model building. We also propose an additional regularization parameter that allows the accuracy of model building to be increased.
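A common way to see the role of such a regularization parameter is a damped/smoothed least-squares formulation, sketched below with a second-difference roughness penalty; the operator and weight are illustrative assumptions rather than the specific parameters studied by the authors.

```python
# Minimal sketch: solve min ||G m - d||^2 + w^2 ||L m||^2, where the weight w
# trades off data fit against model roughness in the tomographic update.
import numpy as np

def regularized_update(G, d, n_cells, reg_weight):
    """Return the slowness update m for ray matrix G and residual vector d."""
    # Simple 1D second-difference operator as the roughness penalty L
    L = (np.diag(-2.0 * np.ones(n_cells))
         + np.diag(np.ones(n_cells - 1), 1)
         + np.diag(np.ones(n_cells - 1), -1))
    A = np.vstack([G, reg_weight * L])
    b = np.concatenate([d, np.zeros(n_cells)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m
```

Sweeping `reg_weight` over several orders of magnitude and inspecting the recovered models is the usual way to expose the resolution-accuracy trade-off the abstract refers to.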


Geophysics, 2009, Vol. 74 (6), pp. WCB1-WCB10
Author(s): Cédric Taillandier, Mark Noble, Hervé Chauris, Henri Calandra

Classical algorithms used for traveltime tomography are not necessarily well suited to handling very large seismic data sets or to taking advantage of current supercomputers. The classical approach to first-arrival traveltime tomography was revisited with a simple gradient-based method that avoids ray tracing and estimation of the Fréchet derivative matrix. The key point is the derivation of the gradient of the misfit function by the adjoint-state technique. The adjoint-state method is very attractive from a numerical point of view because its cost is equivalent to that of solving the forward-modeling problem, whatever the size of the input data and the number of unknown velocity parameters. An application to a 2D synthetic data set demonstrated the ability of the algorithm to image near-surface velocities with strong vertical and lateral variations and revealed the potential of the method.
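The cost argument can be illustrated on a deliberately simple 1D toy problem: when traveltimes are a causal recursion in slowness, the gradient of the misfit follows from a single reverse (adjoint) recursion, with no ray tracing and no Fréchet matrix. The sketch below is only an illustration of that principle, not the 2D eikonal-based implementation of the paper; the model, step size, and iteration count are arbitrary.

```python
# Minimal sketch of the adjoint-state idea on a 1D vertical-column toy:
# the gradient costs the same as one forward modeling, whatever the model size.
import numpy as np

def forward_traveltimes(slowness, dz):
    """Forward state equation: cumulative traveltime to each depth sample."""
    return np.cumsum(slowness) * dz

def adjoint_gradient(slowness, dz, observed):
    """Gradient of 0.5*||t - t_obs||^2 via one backward (adjoint) recursion."""
    residual = forward_traveltimes(slowness, dz) - observed
    lam = np.cumsum(residual[::-1])[::-1]     # adjoint variable
    return lam * dz

# Hypothetical usage: a few fixed-step steepest-descent iterations
dz, n = 10.0, 50
true_slowness = 1.0 / np.linspace(1500.0, 3000.0, n)   # s/m
t_obs = forward_traveltimes(true_slowness, dz)
model = np.full(n, 1.0 / 2000.0)
for _ in range(500):
    model -= 1.0e-6 * adjoint_gradient(model, dz, t_obs)
```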


Geophysics, 1992, Vol. 57 (11), pp. 1482-1492
Author(s): James L. Simmons, Milo M. Backus

A linearized tomographic-inversion algorithm estimates the near-surface slowness anomalies present in a conventional shallow-marine seismic reflection data set. First-arrival time residuals are the data to be inverted. The anomalies are treated as perturbations relative to a known, laterally invariant reference velocity model. Below the sea floor, the reference model varies smoothly with depth; consequently, the first arrivals are considered to be diving waves. In the offset-midpoint domain, the geometric patterns of traveltime perturbations produced by the anomalies resemble hyperbolas. Based on simple ray theory, these geometric patterns are predictable and can be used to relate the unknown model to the data. The assumption of a laterally invariant reference model permits an efficient solution in the offset-wavenumber domain, which is obtained in a single step using conventional least squares. The tomographic image shows the vertical-traveltime perturbations associated with the anomalies as a function of midpoint at a number of depths. As implemented, the inverse problem is inherently stable. The first arrivals sample the subsurface to a maximum depth of roughly 500 m (≈ one-fifth of the spread length). The model is parameterized as fifteen 20-m-thick layers spanning a depth range of 80–380 m. One-way vertical-traveltime delays as large as 10 ms are estimated. Assuming that these time delays are distributed over the entire 20-m-thick layers, velocities much slower than the water velocity are implied for the anomalies. Maps of the tomographic images show the spatial location and orientation of the anomalies throughout the prospect for the upper 400 m. Each line is processed independently, and the results are corroborated to a high degree at the line intersections.
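The wavenumber-domain decoupling that the laterally invariant reference model makes possible can be sketched as independent small least-squares problems, one per midpoint wavenumber. The `kernels` callable below is a hypothetical placeholder for the ray-theoretical traveltime patterns the paper derives, and the damping value is an assumption.

```python
# Minimal sketch: Fourier transform the residuals over midpoint and solve a
# small damped least-squares system independently at each wavenumber.
import numpy as np

def invert_per_wavenumber(residuals, kernels, damping=1e-3):
    """residuals: (n_midpoints, n_offsets) first-arrival time residuals.
    kernels: callable k -> (n_offsets, n_layers) modeling matrix at wavenumber k.
    Returns (n_midpoints, n_layers) vertical-traveltime perturbations."""
    n_mid, _ = residuals.shape
    d_k = np.fft.fft(residuals, axis=0)          # to offset-wavenumber domain
    wavenumbers = np.fft.fftfreq(n_mid)
    m_k = []
    for ik, k in enumerate(wavenumbers):
        A = kernels(k)
        AtA = A.conj().T @ A + damping * np.eye(A.shape[1])
        m_k.append(np.linalg.solve(AtA, A.conj().T @ d_k[ik]))
    return np.real(np.fft.ifft(np.array(m_k), axis=0))   # back to midpoints
```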


Geophysics, 2020, Vol. 85 (5), pp. U109-U119
Author(s): Pengyu Yuan, Shirui Wang, Wenyi Hu, Xuqing Wu, Jiefu Chen, ...

A deep-learning-based workflow is proposed in this paper to solve the first-arrival picking problem for near-surface velocity model building. Traditional methods, such as the short-term-average/long-term-average method, perform poorly when the signal-to-noise ratio is low or the near-surface geologic structures are complex. This challenging task is formulated as a segmentation problem, accompanied by a novel postprocessing approach that identifies picks along the segmentation boundary. The workflow includes three parts: a deep U-net for segmentation, a recurrent neural network (RNN) for picking, and a weight-adaptation approach for generalization to new data sets. In particular, we have evaluated the importance of selecting a proper loss function for training the network. Instead of taking an end-to-end approach to the picking problem, we emphasize the performance gain obtained by using an RNN to optimize the picking. Finally, we adopt a simple transfer-learning scheme and test its robustness via the weight-adaptation approach to maintain the picking performance on new data sets. Our tests on synthetic data sets reveal the advantage of our workflow over existing deep-learning methods that focus only on segmentation performance. Our tests on field data sets illustrate that a good postprocessing picking step is essential for correcting segmentation errors and that the overall workflow is efficient in minimizing human intervention for the first-arrival picking task.
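The boundary-picking step that follows segmentation can be sketched as below for a probability map that is near 1 after the first arrival and near 0 before it; the threshold and median smoothing are illustrative stand-ins for the RNN-based picking the paper actually uses.

```python
# Minimal sketch: convert a segmentation probability map into one first-arrival
# pick per trace by locating the class transition, then lightly smoothing.
import numpy as np

def picks_from_probability(prob, dt, threshold=0.5, smooth=5):
    """Return a first-arrival time per trace from a (n_samples, n_traces) map."""
    above = prob >= threshold
    # First sample on each trace where the map switches to the "after" class
    first_idx = np.argmax(above, axis=0).astype(float)
    first_idx[~above.any(axis=0)] = np.nan          # no pick on dead traces
    # Light median smoothing across neighboring traces to suppress outliers
    padded = np.pad(first_idx, smooth // 2, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, smooth)
    return np.nanmedian(windows, axis=1) * dt
```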

