Nonlinear 3-D traveltime inversion of crosshole data with an application in the area of the Middle Ural Mountains

Geophysics, 2001, Vol. 66 (2), pp. 627-636
Author(s): Pantelis M. Soupios, Constantinos B. Papazachos, Christopher Juhlin, Gregory N. Tsokas

This paper deals with the problem of nonlinear seismic velocity estimation from first-arrival traveltimes obtained from crosshole and downhole experiments in three dimensions. A standard tomographic procedure is applied, based on the representation of the crosshole area as a set of cells, each assigned an initial slowness. For the forward modeling, the raypath matrix is computed using the revisited ray bending method, supplemented by an approximate computation of the first Fresnel zone at each point of the ray, hence using physical and not only mathematical rays. Since 3-D ray tracing is incorporated, the inversion technique is nonlinear. Velocity images are obtained by a constrained least-squares inversion scheme using both “damping” and “smoothing” factors, whose appropriate values are chosen using criteria such as the L-curve. The tomographic approach is improved by incorporating a priori information about the media to be imaged into the inversion scheme. This improvement in imaging is achieved by projecting a desirable solution onto the null space of the inversion and adding this null-space contribution to the standard non-null-space inversion solution. The efficiency of the inversion scheme is verified through a series of tests with synthetic data. Moreover, an application to real data from the area of the Ural Mountains demonstrates that the proposed technique produces more realistic velocity models than those obtained by other standard approaches.
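
For readers who want the flavor of the constrained inversion described above, the sketch below sets up the damped and smoothed least-squares problem as a single augmented linear system; the first-difference smoother, the function name, and the scalar weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def damped_smoothed_lsq(G, d, eps, lam):
    """Constrained least squares: minimize
    ||G m - d||^2 + eps^2 ||m||^2 + lam^2 ||L m||^2
    by stacking the damping and smoothing terms under the raypath matrix G."""
    n = G.shape[1]
    # Simple first-difference operator as a 1-D smoothness constraint
    L = np.eye(n - 1, n, k=1) - np.eye(n - 1, n)
    A = np.vstack([G, eps * np.eye(n), lam * L])
    b = np.concatenate([d, np.zeros(n), np.zeros(n - 1)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m  # cell slownesses (or slowness perturbations, depending on setup)
```

Scanning `eps` and `lam` over a range and plotting data misfit against solution norm traces out the L-curve whose corner the abstract uses to pick the factors.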

Geophysics, 1993, Vol. 58 (1), pp. 91-100
Author(s): Claude F. Lafond, Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
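
The back-projection step has the same structure as a classic SIRT-style tomographic update. The following is a minimal sketch under strong simplifying assumptions (cell-based model, precomputed ray-cell intersection lengths); it is not the authors' wavefront-analysis machinery.

```python
import numpy as np

def backproject_residuals(ray_cells, ray_lengths, residuals, n_cells,
                          step=0.5):
    """SIRT-style back projection: distribute each residual along its
    raypath in proportion to the path length per cell, then average
    over all rays crossing each cell. Assumes each ray lists a cell
    at most once."""
    num = np.zeros(n_cells)
    den = np.zeros(n_cells)
    for cells, lengths, res in zip(ray_cells, ray_lengths, residuals):
        num[cells] += lengths * (res / lengths.sum())
        den[cells] += lengths
    update = np.zeros(n_cells)
    hit = den > 0
    update[hit] = step * num[hit] / den[hit]
    return update  # slowness perturbation per cell
```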


Geophysics, 2008, Vol. 73 (5), pp. VE205-VE210
Author(s): Maria Cameron, Sergey Fomel, James Sethian

The objective was to build an efficient algorithm (1) to estimate seismic velocity from time-migration velocity and (2) to convert time-migrated images to depth. We established theoretical relations between the time-migration velocity and the seismic velocity in two and three dimensions using paraxial ray-tracing theory. The relation in two dimensions implies that the conventional Dix velocity is the ratio of the interval seismic velocity and the geometric spreading of image rays. We formulated an inverse problem of finding seismic velocity from the Dix velocity and developed a numerical procedure for solving it. The procedure consists of two steps: (1) computation of the geometric spreading of image rays and the true seismic velocity in time-domain coordinates from the Dix velocity; (2) conversion of the true seismic velocity from the time domain to the depth domain and computation of the transition matrices from time-domain coordinates to depth. For step 1, we derived a partial differential equation (PDE) in two and three dimensions relating the Dix velocity and the geometric spreading of image rays to be found. This is a nonlinear elliptic PDE. The physical setting allows us to pose a Cauchy problem for it. This problem is ill posed, but we can solve it numerically in two ways on the required time interval, if it is sufficiently short. One way is a finite-difference scheme inspired by the Lax-Friedrichs method. The second is a spectral Chebyshev method. For step 2, we developed an efficient Dijkstra-like solver motivated by Sethian’s fast marching method. We tested the numerical procedures on a synthetic data example and applied them to a field data example. We demonstrated that the algorithms produce a significantly more accurate estimate of seismic velocity than conventional Dix inversion. This velocity estimate can be used as a reasonable first guess in building velocity models for depth imaging.
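
Two of the relations referenced above can be written out: the conventional Dix inversion, and the paper's stated 2-D link between Dix velocity and interval velocity, where Q denotes the geometric spreading of the image ray (the symbols here are our labels, not necessarily the authors' notation).

```latex
% Conventional Dix inversion: interval velocity from rms velocities
v_{\mathrm{int},n}^{2}
  = \frac{t_{n}\,v_{\mathrm{rms},n}^{2} - t_{n-1}\,v_{\mathrm{rms},n-1}^{2}}
         {t_{n} - t_{n-1}}
% 2-D relation stated in the abstract: Dix velocity equals the interval
% velocity divided by the geometric spreading Q of the image ray
v_{\mathrm{Dix}}(x_{0},t_{0})
  = \frac{v\bigl(x(x_{0},t_{0}),\,z(x_{0},t_{0})\bigr)}{Q(x_{0},t_{0})}
```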


2021, Vol. 22 (1)
Author(s): João Lobo, Rui Henriques, Sara C. Madeira

Abstract Background Three-way data have gained popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed on real data without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of providing the ground truth (triclustering solution) as output. Results G-Tric can replicate real-world datasets and create new ones that match researchers’ needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions Triclustering evaluation using G-Tric makes it possible to combine intrinsic and extrinsic metrics, yielding more reliable comparisons of solutions. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric’s potential to advance the triclustering state of the art by easing the evaluation of the quality of new triclustering approaches.
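
As an illustration of what "planting" a tricluster in a numeric three-way array means, the numpy sketch below builds a background tensor, overwrites a random observations × features × contexts subspace with a shifted pattern, and injects missing values. The shapes, rates, and variable names are arbitrary choices, and this is not the G-Tric API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Background: 100 observations x 20 features x 8 contexts, N(0, 1)
data = rng.normal(0.0, 1.0, size=(100, 20, 8))

# Plant a constant-pattern tricluster on a randomly chosen subspace
obs = rng.choice(100, size=10, replace=False)
feats = rng.choice(20, size=4, replace=False)
ctxs = rng.choice(8, size=3, replace=False)
data[np.ix_(obs, feats, ctxs)] = 3.0 + rng.normal(0.0, 0.1, (10, 4, 3))

# Corrupt a fraction of the planted cells with missing values
# (noise and error rates would be handled analogously)
block = data[np.ix_(obs, feats, ctxs)]
block[rng.random((10, 4, 3)) < 0.05] = np.nan
data[np.ix_(obs, feats, ctxs)] = block

# Ground truth = the planted index sets, analogous to G-Tric's output
truth = {"observations": obs, "features": feats, "contexts": ctxs}
```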


Geophysics, 2021, pp. 1-35
Author(s): M. Javad Khoshnavaz

Building an accurate velocity model plays a vital role in routine seismic imaging workflows. Normal-moveout-based velocity analysis is a popular method for building such models. However, traditional velocity analysis methodologies are generally not capable of handling amplitude variations across moveout curves, specifically the polarity reversals caused by amplitude-versus-offset anomalies. I present a normal-moveout-based velocity analysis approach that circumvents this shortcoming by modifying the conventional semblance function to include polarity and amplitude correction terms, computed using the correlation coefficients of the seismic traces in the velocity analysis scanning window with a reference trace. The proposed workflow is therefore suitable for any class of amplitude-versus-offset effects. The approach is demonstrated on four synthetic data examples under different conditions and on a field data set consisting of a common-midpoint gather. The lateral resolution enhancement provided by the proposed workflow is evaluated by comparing its results with those of the conventional semblance and of three semblance-based velocity analysis algorithms developed to address amplitude variations across moveout curves caused by seismic attenuation and class II amplitude-versus-offset anomalies. According to the obtained results, the proposed workflow is superior to all the compared workflows in handling such anomalies.
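
A minimal sketch of the idea: conventional semblance measures the ratio of coherent to total energy, and a correlation-derived sign per trace can make polarity-reversed traces add constructively. The polarity weighting below is our simplification in the spirit of the abstract, not the published correction terms.

```python
import numpy as np

def semblance(gather):
    """Conventional semblance of an NMO-corrected gather
    (rows: time samples, columns: offsets): ratio of coherent
    to total energy inside the scanning window."""
    num = gather.sum(axis=1) ** 2
    den = gather.shape[1] * (gather ** 2).sum(axis=1)
    return num.sum() / max(den.sum(), 1e-12)

def polarity_weighted_semblance(gather, ref=0):
    """Sketch of the abstract's idea (not the published correction):
    flip each trace by the sign of its correlation with a reference
    trace so that polarity-reversed traces add constructively."""
    r = gather[:, ref]
    c = np.array([np.corrcoef(r, gather[:, i])[0, 1]
                  for i in range(gather.shape[1])])
    signs = np.where(np.nan_to_num(c) < 0.0, -1.0, 1.0)
    return semblance(gather * signs)
```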


2020
Author(s): Nicola Zoppetti, Simone Ceccherini, Flavio Barbara, Samuele Del Bianco, Marco Gai, ...

Remote sounding of atmospheric composition makes use of satellite measurements with very heterogeneous characteristics. In particular, the determination of vertical profiles of gases in the atmosphere can be performed using measurements acquired in different spectral bands and with different observation geometries. The most rigorous way to combine heterogeneous measurements of the same quantity into a single Level 2 (L2) product is simultaneous retrieval. The main drawback of simultaneous retrieval is its complexity, due to the necessity of embedding the forward models of different instruments into the same retrieval application. To overcome this shortcoming, we developed a data fusion method, referred to as Complete Data Fusion (CDF), to provide an efficient and adaptable alternative to simultaneous retrieval. In general, the CDF input is any number of profiles retrieved with the optimal estimation technique, characterized by their a priori information, covariance matrix (CM), and averaging kernel (AK) matrix. The output of the CDF is a single product, also characterized by an a priori, a CM, and an AK matrix, which collects all the available information content. To account for the geo-temporal differences and different vertical grids of the profiles being fused, a coincidence error and an interpolation error have to be included in the error budget.

In the first part of the work, the CDF method is applied to ozone profiles simulated in the thermal infrared and ultraviolet bands, according to the specifications of the Sentinel 4 (geostationary) and Sentinel 5 (low Earth orbit) missions of the Copernicus program. The simulated data were produced in the context of the Advanced Ultraviolet Radiation and Ozone Retrieval for Applications (AURORA) project, funded by the European Commission in the framework of the Horizon 2020 program. The use of synthetic data and the assumption of negligible systematic error in the simulated measurements allow studying the behavior of the CDF under ideal conditions. Synthetic data also allow evaluating the performance of the algorithm in terms of differences between the products of interest and the reference truth, represented by the atmospheric scenario used to simulate the L2 products. This analysis aims at demonstrating the potential benefits of the CDF for the synergy of products measured by different platforms in a realistic near-future scenario, when the Sentinel 4 and 5/5p ozone profiles will be available.

In the second part of this work, the CDF is applied to a set of real ozone measurements acquired by GOME-2 onboard the MetOp-B platform. The quality of the CDF products, obtained for the first time from operational products, is compared with that of the original GOME-2 products. This aims to demonstrate the concrete applicability of the CDF to real data and its possible use to generate Level 3 (or higher) gridded products.

The results discussed in this presentation offer a first consolidated picture of the actual and potential value of an innovative technique for post-retrieval processing and generation of Level 3 (or higher) products from the atmospheric Sentinel data.
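
The weighting at the core of such fusion can be sketched as an inverse-covariance combination of profiles on a common grid. The full CDF additionally propagates averaging kernels, a priori information, and the coincidence and interpolation errors mentioned above, all of which are omitted in this simplified sketch.

```python
import numpy as np

def fuse_profiles(profiles, covs):
    """Precision-weighted (inverse-covariance) combination of vertical
    profiles retrieved on a common grid; a deliberately reduced stand-in
    for the CDF, which also handles AK matrices and a priori terms."""
    precisions = [np.linalg.inv(S) for S in covs]
    S_fused = np.linalg.inv(sum(precisions))          # fused covariance
    x_fused = S_fused @ sum(P @ x for P, x in zip(precisions, profiles))
    return x_fused, S_fused
```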


Geophysics, 1994, Vol. 59 (2), pp. 297-308
Author(s): Pierre D. Thore, Eric de Bazelaire, Marisha P. Rays

We compare the three-term traveltime equation to the normal moveout (NMO) equation for several synthetic data sets to analyze whether it is worth making the additional computational effort in the stacking process within various exploration contexts. In our evaluation we selected two criteria: (1) the quality of the stacked image, and (2) the reliability of the stacking parameters and their usefulness for further computation, such as interval velocity estimation. We simulated the stacking process very precisely, despite using only the traveltimes and not the full waveform data. The procedure searches for maximum coherency along the traveltime curve rather than performing a least-squares regression to it. This technique, which we call Gaussian-weighted least squares, avoids most of the shortcomings of the least-squares method. Our conclusions are the following: (1) The three-term equation gives a better stack than regular NMO; the increase in stacking energy can be more than 30 percent. (2) The calculation of interval velocities using a Dix formula rewritten for the three-parameter equation is much more stable and accurate than the standard Dix formula. (3) The search for the three parameters is feasible in an efficient way, since the shifted hyperbola requires only static corrections rather than dynamic ones. (4) Noise alters the parameters of the maximum-energy stack in a way that depends on the noise type. The estimates obtained remain accurate enough for interval velocity estimation (where only two parameters are needed), but the use of all three parameters in direct inversion may be hazardous because of noise corruption. These conclusions should, however, be verified on real data examples.
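
For reference, the two traveltime parameterizations being compared are commonly written as follows (the notation is ours); the shifted-hyperbola form carries three parameters, and its moveout can be applied as a static shift plus a hyperbola in the shifted time, which is why only static corrections are needed.

```latex
% Two-term (hyperbolic) normal moveout
t^{2}(x) = t_{0}^{2} + \frac{x^{2}}{v_{\mathrm{nmo}}^{2}}
% Shifted hyperbola with three parameters (t_{0}, \tau_{p}, v_{p})
t(x) = \bigl(t_{0} - \tau_{p}\bigr)
     + \sqrt{\tau_{p}^{2} + \frac{x^{2}}{v_{p}^{2}}}
```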


Geophysics, 2008, Vol. 73 (5), pp. VE35-VE38
Author(s): Jonathan Liu, Lorie Bear, Jerry Krebs, Raffaella Montelli, Gopal Palacharla

We have developed a new method to build seismic velocity models for complex structures. In our approach, we use a spatially nonuniform parameterization of the velocity model in tomography and a uniform grid representation of the same velocity model in ray tracing to generate the linear system of tomographic equations. Subsequently, a matrix transformation is applied to this system of equations to produce a new linear system of tomographic equations based on the nonuniform parameterization. In this way, we improve the stability of the tomographic inversion without adding computational cost. We tested the effectiveness of our process on a 3D synthetic data example.
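
Schematically, if B interpolates the nonuniform node values m′ onto the uniform ray-tracing grid (m = Bm′), the matrix transformation amounts to solving the system below. The variable names and the absence of regularization are our simplifications, not the authors' code.

```python
import numpy as np

def transform_system(G, B, d):
    """Schematic matrix transformation: G holds the tomographic equations
    on the uniform ray-tracing grid, and B interpolates nonuniform node
    values m_prime onto that grid (m = B @ m_prime). Solving
    (G @ B) m_prime = d works with fewer, better-constrained unknowns
    without re-deriving G."""
    A = G @ B
    m_prime, *_ = np.linalg.lstsq(A, d, rcond=None)
    return m_prime  # uniform-grid model for the next pass: m = B @ m_prime
```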


2019
Author(s): Ahmad Ilham

Determining the number of clusters in k-Means is one of the most popular problems among data mining researchers, because the required information is difficult to determine from the data a priori, so clustering results are suboptimal and quickly become trapped in local minima. Automatic clustering methods based on evolutionary computation (EC) can address this k-Means problem. The automatic clustering differential evolution (ACDE) method is one of the most popular EC approaches because it can handle high-dimensional data and improve k-Means clustering performance where cluster validity values are low. However, determining the activation threshold for k in ACDE still depends on user judgment, so the process of determining the number of k-Means clusters is not yet efficient. In this study, ACDE is improved using the u-control chart (UCC) method, which is shown to solve the k-Means problem automatically and efficiently. The proposed method is evaluated on standard datasets, both synthetic data and real data (iris, glass, wine, vowel, ruspini) from the UCI machine learning repository, using the Davies-Bouldin index (DBI) and the cosine similarity measure (CS) for evaluation. The results indicate that the UCC method successfully improves the k-Means method, with lowest objective values of DBI and CS of 0.470 and 0.577, respectively; the lowest objective value indicates the best method. The proposed method performs well compared with other current methods, such as genetic clustering for unknown k (GCUK), dynamic clustering PSO (DCPSO), and automatic clustering based on differential evolution combined with k-Means for crisp clustering (ACDE), on almost all DBI and CS evaluations. It can be concluded that the UCC method corrects the weakness of the ACDE method in determining the number of k-Means clusters by setting the k activation threshold automatically.
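
The UCC thresholding itself is not spelled out in the abstract. As a minimal, assumed illustration of the evaluation side, the scikit-learn sketch below scores candidate cluster counts on the iris data with the Davies-Bouldin index (lower is better), one of the two measures used above.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import davies_bouldin_score

X = load_iris().data

# Score candidate cluster counts with the Davies-Bouldin index
scores = {}
for k in range(2, 11):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)

best_k = min(scores, key=scores.get)
print(best_k, scores[best_k])
```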


Geophysics, 1998, Vol. 63 (6), pp. 2054-2062
Author(s): Irene Kelly, Larry R. Lines

Accurate imaging of seismic reflectors with depth migration requires accurate velocity models. In frontier areas with few well constraints, velocity estimation generally involves the use of methods such as normal moveout analysis, seismic traveltime tomography, or iterative prestack depth migration. These techniques can be effective, but may also be expensive or time‐consuming. In situations where we have information on formation tops from a series of wells which intersect seismic reflectors, we use a least‐squares optimization method to estimate velocity models. This method produces velocity models that optimize depth migrations in terms of well constraints by using least‐squares inversion to match the depth migration images to formation tops. The well log information is used to optimize poststack migration, thereby eliminating some of the time and expense of velocity analysis. In addition to applying an inversion method which optimizes depth migration in terms of formation tops, we can use a sensitivity analysis method of “most‐squares inversion” to explore a range of velocity models which provide mathematically acceptable solutions. This sensitivity analysis quantifies the expected result that our velocity estimates are generally less reliable for thin beds than for thick beds. The proposed optimization method is shown to be successful on synthetic and real data cases from the Hibernia Field of offshore Newfoundland.
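
Under a deliberately crude vertical-ray assumption (not the authors' migration-coupled operator), matching migrated depths to well tops reduces to a small linear least-squares problem, as this sketch shows.

```python
import numpy as np

def velocities_from_tops(T, z_tops):
    """Schematic vertical-ray model: the migrated depth to reflector k is
    z[k] = sum_i T[k, i] * v[i], with T[k, i] the one-way vertical
    traveltime spent in layer i. Matching depths to formation tops from
    wells then yields the layer velocities by linear least squares."""
    v, *_ = np.linalg.lstsq(T, z_tops, rcond=None)
    return v
```

The "most-squares" sensitivity analysis mentioned above would then explore how far v can move while keeping the depth misfit mathematically acceptable; thin layers (small entries in T) leave the corresponding velocities weakly constrained, consistent with the abstract's conclusion.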


Geophysics, 2003, Vol. 68 (3), pp. 1008-1021
Author(s): Frederic Billette, Soazig Le Bégat, Pascal Podvin, Gilles Lambaré

Stereotomography is a new velocity estimation method. This tomographic approach aims at retrieving subsurface velocities from prestack seismic data. In addition to traveltimes, the slopes of locally coherent events are picked simultaneously in common-offset, common-source, common-receiver, and common-midpoint gathers. Because the picking is performed on locally coherent events, the events do not need to be interpreted in terms of reflection from given interfaces; they may represent diffractions or reflections from anywhere in the image. In the high-frequency approximation, each of these events corresponds to a ray trajectory in the subsurface. Stereotomography consists of picking and analyzing these events to update both the associated ray paths and the velocity model. In this paper, we describe the implementation of two critical features needed to put stereotomography into practice: an automatic picking tool and a robust multiscale iterative inversion technique. Applications to 2D reflection seismics are presented on synthetic data and on a 2D line extracted from a 3D towed-streamer survey shot in West Africa for TotalFinaElf. The examples demonstrate that the method requires only minor human intervention and rapidly converges to a geologically plausible velocity model in these two very different and complex velocity regimes. The quality of the velocity models is verified by prestack depth migration results.
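
In the usual stereotomography notation (assumed here; the abstract does not spell it out), each picked event contributes a data quintuple, and the model couples the velocity field with one ray-pair segment per event.

```latex
% Data: one quintuple per picked locally coherent event
% (source and receiver positions, the two local slopes, two-way time)
d = \bigl(S,\; R,\; p_{S},\; p_{R},\; T_{SR}\bigr)
% Model: velocity field plus one ray-pair segment per event
m = \Bigl(v(\mathbf{x});\;
      \bigl(\mathbf{X},\, \beta_{S},\, \beta_{R},\, T_{S},\, T_{R}\bigr)\Bigr)
```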

