Resolution in crosswell traveltime tomography: The dependence on illumination

Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. W1-W12 ◽  
Author(s):  
Renato R. S. Dantas ◽  
Walter E. Medeiros

The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known result but one that is seldom well exemplified. We have revisited resolution in the 2D case using a simple geometric approach based on the angular-aperture distribution and the properties of the Radon transform. We found analytically that if an isolated interface had dips contained in the angular-aperture limits, it could be reconstructed using just one particular projection. By inversion of synthetic data, we found that a slowness field could be approximately reconstructed from a set of projections if the interfaces delimiting the slowness field had dips contained in the available angular apertures. Isolated artifacts might be present when the dip is near the illumination limit. Conversely, if an interface is interpretable from a tomogram, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region, it is diffusely imaged, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts might be present. Again, conversely, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly could be an artifact. These results are typical of ill-posed inverse problems: there is no guarantee that the image corresponds to the true distribution. The limitations due to illumination may not be overcome by the use of constraints. Crosswell tomograms derived with sparsity constraints, using the discrete cosine transform and Daubechies bases, essentially reproduce the same features seen in tomograms obtained with the smoothness constraint. Interpretation must take into account a priori information and the particular limitations due to illumination, as we have illustrated with a real-data case.
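
As a rough illustration of the geometric argument above, the sketch below (not from the paper; the well separation, source/receiver depths, and the straight-ray assumption are arbitrary choices) computes the angular aperture of a crosswell geometry and checks whether a given interface dip falls inside it:

```python
# Hedged sketch: estimate the angular aperture of a crosswell survey from
# straight-ray geometry and check whether a given interface dip falls inside it.
# Geometry values and the straight-ray assumption are illustrative, not the paper's.
import numpy as np

def angular_aperture(src_depths, rec_depths, well_separation):
    """Return (min, max) ray angle in degrees, measured from the horizontal,
    over all straight source-receiver rays."""
    angles = []
    for zs in src_depths:
        for zr in rec_depths:
            angles.append(np.degrees(np.arctan2(zr - zs, well_separation)))
    return min(angles), max(angles)

def dip_is_illuminated(dip_deg, aperture):
    """A planar interface can be 'seen' by at least one projection if its dip
    lies within the available angular aperture (the paper's geometric argument)."""
    lo, hi = aperture
    return lo <= dip_deg <= hi

src = np.linspace(0.0, 500.0, 26)    # hypothetical source depths (m)
rec = np.linspace(0.0, 500.0, 26)    # hypothetical receiver depths (m)
ap = angular_aperture(src, rec, well_separation=100.0)
print("aperture (deg):", ap)
print("30-deg dip illuminated?", dip_is_illuminated(30.0, ap))
print("85-deg dip illuminated?", dip_is_illuminated(85.0, ap))  # near-vertical edges are poorly resolved
```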

Geophysics ◽  
2008 ◽  
Vol 73 (5) ◽  
pp. VE337-VE351 ◽  
Author(s):  
Kenneth P. Bube ◽  
Robert T. Langan

In most geometries in which seismic-traveltime tomography is applied (e.g., crosswell, surface-reflection, and VSP), determination of the slowness field using only traveltimes is not a well-conditioned problem. Nonuniqueness is common. Even when the slowness field is uniquely determined, small changes in measured traveltimes can cause large errors in the computed slowness field. A priori information often is available — well logs, initial rough estimates of slowness from structural geology, etc. — and can be incorporated into a traveltime-inversion algorithm by using penalty terms. To further regularize the problem, smoothing constraints also can be incorporated using penalty terms by penalizing derivatives of the slowness field. What weights to use on the penalty terms is a major decision, particularly the smoothing-penalty weights. We use a continuation approach in selecting the smoothing-penalty weights. Instead of using fixed smoothing-penalty weights, we decrease them step by step, using the slowness model computed with the previous, larger weights as the initial slowness model for the next step with the new, smaller weights. This continuation approach can solve synthetic problems more accurately than does one that uses fixed smoothing-penalty weights, and it appears to yield more features of interest in real-data applications of traveltime tomography. We have formulated guidelines for making the many choices needed to implement this continuation strategy effectively and have developed specific choices for crosswell-traveltime tomography.
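
A minimal sketch of the continuation idea, assuming a generic linear tomography matrix G and a first-difference roughness operator (all names and the weight schedule are illustrative stand-ins, not the authors' choices): each solve with a smaller smoothing weight is warm-started with the model from the previous, more heavily smoothed solve.

```python
# Hedged sketch of the continuation strategy: solve the penalized problem
#   min ||G m - t||^2 + w^2 ||D m||^2
# for a decreasing sequence of smoothing weights w, warm-starting each solve
# with the previous model. G, t, and the weight schedule are synthetic placeholders.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 200                                   # number of slowness cells
rng = np.random.default_rng(0)
G = rng.random((300, n))                  # stand-in tomography matrix (ray lengths)
m_true = np.sin(np.linspace(0, 3 * np.pi, n)) + 2.0
t = G @ m_true + 0.01 * rng.standard_normal(300)

D = diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))   # first-difference (roughness) operator

m = np.zeros(n)                           # initial model for the largest weight
for w in [100.0, 30.0, 10.0, 3.0, 1.0]:   # continuation: step the weight down
    A = G.T @ G + (w ** 2) * (D.T @ D).toarray()
    b = G.T @ t
    m, _ = cg(A, b, x0=m)                 # previous model is the starting guess
    print(f"w = {w:6.1f}   data misfit = {np.linalg.norm(G @ m - t):.4f}")
```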


Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. S11-S18 ◽  
Author(s):  
Juefu Wang ◽  
Mauricio D. Sacchi

We propose a new scheme for high-resolution amplitude-variation-with-ray-parameter (AVP) imaging that uses nonquadratic regularization. We pose migration as an inverse problem and propose a cost function that uses a priori information about common-image gathers (CIGs). In particular, we introduce two regularization constraints: smoothness along the offset-ray-parameter axis and sparseness in depth. The two-step regularization yields high-resolution CIGs with robust estimates of AVP. We use an iterative reweighted least-squares conjugate gradient algorithm to minimize the cost function of the problem. We test the algorithm with synthetic data (a wedge model and the Marmousi data set) and a real data set (Erskine area, Alberta). Tests show our method helps to enhance the vertical resolution of CIGs and improves amplitude accuracy along the ray-parameter direction.
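
The following is a minimal iterative reweighted least-squares (IRLS) sketch for a cost combining a quadratic smoothness term with a sparseness term approximated by reweighting; the operator L stands in for the migration operator and the trade-off parameters are assumed, so this illustrates the algorithmic idea rather than the authors' implementation.

```python
# Hedged IRLS sketch for a cost of the form
#   ||L m - d||^2 + mu_s ||Dp m||^2 + mu_1 ||m||_1,
# with the L1 (sparseness) term handled by reweighting. L, d, and the weights
# are synthetic stand-ins for the migration operator and CIG data.
import numpy as np

rng = np.random.default_rng(1)
n, k = 120, 80
L = rng.standard_normal((k, n))           # stand-in forward operator
m_true = np.zeros(n); m_true[[20, 60, 90]] = [1.0, -0.7, 0.5]   # sparse reflectivity
d = L @ m_true + 0.01 * rng.standard_normal(k)

Dp = np.eye(n) - np.eye(n, k=1)           # first-difference operator (smoothness axis)
mu_s, mu_1, eps = 0.1, 0.5, 1e-4

m = np.zeros(n)
for _ in range(15):                       # IRLS outer iterations
    W = np.diag(1.0 / np.sqrt(m ** 2 + eps))   # reweighting approximates the L1 norm
    A = L.T @ L + mu_s * (Dp.T @ Dp) + mu_1 * W
    m = np.linalg.solve(A, L.T @ d)       # each subproblem is quadratic
print("significant coefficients at indices:", np.flatnonzero(np.abs(m) > 0.1))
```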


2012 ◽  
Vol 5 (4) ◽  
pp. 831-841 ◽  
Author(s):  
B. Funke ◽  
T. von Clarmann

Abstract. Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach ten percent or more. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in an unpredictable manner. This complication arises mainly because, in logarithmic retrievals, the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.
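
A compact illustration of the averaging bias (not the system simulator used in the paper): for a lognormally distributed abundance, averaging in log space and transforming back underestimates the linear mean, and the gap grows with the natural variability.

```python
# Hedged illustration: for a lognormal abundance distribution, exp(mean(log x))
# recovers the median, not the mean, so it is biased low relative to the linear
# mean; the bias grows with the variability parameter sigma (assumed values).
import numpy as np

rng = np.random.default_rng(2)
for sigma in (0.1, 0.5, 1.0):             # increasing local natural variability
    x = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)   # "true" abundances
    linear_mean = x.mean()
    log_mean = np.exp(np.log(x).mean())   # average in log space, then exponentiate
    bias = (log_mean - linear_mean) / linear_mean
    print(f"sigma={sigma:3.1f}  linear mean={linear_mean:.3f}  "
          f"exp(mean(log))={log_mean:.3f}  relative bias={bias:+.1%}")
```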


Author(s):  
Ye Zhang ◽  
Dmitry V. Lukyanenko ◽  
Anatoly G. Yagola

Abstract. In this article, we consider an inverse problem for an integral equation of convolution type in the multidimensional case. This problem is severely ill-posed. To deal with it, we propose a new method that uses a priori information in the form of a sourcewise representation and is based on optimal recovery theory. The regularization and optimization properties of this method are proved. An optimal minimal a priori error of the problem is found. Moreover, a so-called optimal regularized approximate solution and its corresponding error estimate are considered. The efficiency and applicability of the method are demonstrated in a numerical example of an image deblurring problem with noisy data.
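
For orientation only, the sketch below shows why the convolution-type problem is severely ill-posed and how a standard Tikhonov-regularized inverse stabilizes it; it is not the optimal-recovery-based method proposed in the article, and the kernel, noise level, and regularization parameter are assumptions.

```python
# Hedged sketch: naive Fourier-domain deconvolution amplifies noise (ill-posedness),
# while a Tikhonov-regularized inverse keeps it under control. This is a standard
# illustration, not the article's optimal-recovery method.
import numpy as np

rng = np.random.default_rng(3)
n = 256
x = np.zeros(n); x[100:130] = 1.0; x[180:185] = 2.0      # simple "true" signal
t = np.arange(n) - n // 2
kernel = np.exp(-0.5 * (t / 4.0) ** 2); kernel /= kernel.sum()   # assumed Gaussian blur
K = np.fft.fft(np.fft.ifftshift(kernel))
y = np.real(np.fft.ifft(K * np.fft.fft(x))) + 0.01 * rng.standard_normal(n)  # blurred + noisy

Y = np.fft.fft(y)
naive = np.real(np.fft.ifft(Y / K))                       # unregularized: noise-dominated
alpha = 1e-2                                              # regularization parameter (assumed)
tikhonov = np.real(np.fft.ifft(np.conj(K) * Y / (np.abs(K) ** 2 + alpha)))

print("error, naive inverse   :", np.linalg.norm(naive - x))
print("error, Tikhonov inverse:", np.linalg.norm(tikhonov - x))
```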


2011 ◽  
Vol 4 (6) ◽  
pp. 7159-7183 ◽  
Author(s):  
B. Funke ◽  
T. von Clarmann

Abstract. Calculation of mean trace gas contributions from profiles obtained by retrievals of the logarithm of the abundance, rather than retrievals of the abundance itself, is prone to biases. By means of a system simulator, biases of linear versus logarithmic averaging were evaluated for both maximum likelihood and maximum a posteriori retrievals, for various signal-to-noise ratios and atmospheric variabilities. These biases can easily reach several tens of percent. As a rule of thumb, we found for maximum likelihood retrievals that linear averaging better represents the true mean value in cases of large local natural variability and high signal-to-noise ratios, while for small local natural variability logarithmic averaging is often superior. In the case of maximum a posteriori retrievals, the mean is dominated by the a priori information used in the retrievals, and the method of averaging is of minor concern. For larger natural variabilities, the appropriateness of one or the other method of averaging depends on the particular case because the various biasing mechanisms partly compensate in a hardly predictable manner. This complication arises mainly because, in logarithmic retrievals, the weight of the prior information depends on the abundance of the gas itself. No simple rule was found for which kind of averaging is superior; instead of suggesting simple recipes, we cannot do much more than create awareness of the traps related to averaging of mixing ratios obtained from logarithmic retrievals.


Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.
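
A minimal sketch of the back-projection step described above, with an arbitrary cell geometry, damping factor, and residual value (assumptions, not the authors' implementation): a residual correction is spread over the cells crossed by a ray in proportion to the path length in each cell, as in tomographic reconstruction.

```python
# Hedged sketch of back-projecting a residual correction along a raypath
# (SIRT-style). Cell geometry, the residual, and the damping are illustrative.
import numpy as np

n_cells = 50
slowness = np.full(n_cells, 0.5)                     # current model (s/km), assumed

# One "ray": indices of the cells it crosses and the ray length in each cell (km)
ray_cells = np.array([3, 4, 5, 6, 7, 8])
ray_lengths = np.array([0.2, 0.5, 0.6, 0.6, 0.5, 0.2])

residual = 0.015                                     # traveltime-equivalent residual (s), assumed
damping = 0.5                                        # step-length control

# Distribute the residual along the ray in proportion to path length
update = damping * residual * ray_lengths / (ray_lengths @ ray_lengths)
slowness[ray_cells] += update
print("updated cells:", np.round(slowness[ray_cells], 4))
```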


Geophysics ◽  
2013 ◽  
Vol 78 (6) ◽  
pp. J87-J98 ◽  
Author(s):  
Felipe F. Melo ◽  
Valeria C. F. Barbosa ◽  
Leonardo Uieda ◽  
Vanderlei C. Oliveira Jr. ◽  
João B. C. Silva

We have developed a new method that drastically reduces the number of source-location estimates in Euler deconvolution to only one per anomaly. Our method employs the analytical estimators of the base level and of the horizontal and vertical source positions in Euler deconvolution as functions of the x- and y-coordinates of the observations. By assuming any tentative structural index (defining the geometry of the sources), our method automatically locates plateaus, on the maps of the horizontal-coordinate estimates, indicating consistent estimates that are very close to the true corresponding coordinates. These plateaus are located in the neighborhood of the highest values of the anomaly and contrast with the estimates that form inclined planes at the anomaly borders. The plateaus are located automatically on the maps of the horizontal-coordinate estimates by fitting a first-degree polynomial to these estimates in a moving-window scheme spanning all estimates. The positions where the angular-coefficient estimates are closest to zero identify the plateaus of the horizontal-coordinate estimates. The sample means of these horizontal-coordinate estimates are the best horizontal location estimates. After mapping each plateau, our method takes as the best structural index the one that yields the minimum correlation between the total-field anomaly and the estimated base level over each plateau. Using the estimated structural index for each plateau, our approach extracts the vertical-coordinate estimates over the corresponding plateau. The sample means of these estimates are the best depth location estimates in our method. When applied to synthetic data, our method yielded good results for bodies producing weakly and moderately interfering anomalies. A test on real data over intrusions in the Goiás Alkaline Province, Brazil, retrieved sphere-like sources, suggesting 3D bodies.
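
A minimal 1D sketch of the plateau-detection step (the paper works on 2D maps; the profile, window size, and slope threshold below are illustrative assumptions): fit a first-degree polynomial in a moving window and keep the positions where the angular coefficient is closest to zero.

```python
# Hedged sketch: detect the plateau in a profile of horizontal-coordinate Euler
# estimates by moving-window first-degree polynomial fits; near-zero slopes flag
# the plateau, whose sample mean gives the location estimate. Toy data only.
import numpy as np

rng = np.random.default_rng(4)
pos = np.arange(200, dtype=float)                         # observation positions
x_est = np.where((pos > 80) & (pos < 120), 100.0, pos)    # plateau near the true x0 = 100
x_est = x_est + 0.3 * rng.standard_normal(pos.size)       # noisy Euler estimates

win = 11
slopes = np.full(pos.size, np.nan)
for i in range(win // 2, pos.size - win // 2):
    sl = slice(i - win // 2, i + win // 2 + 1)
    slope, _ = np.polyfit(pos[sl], x_est[sl], 1)          # first-degree polynomial fit
    slopes[i] = slope

plateau = np.abs(slopes) < 0.05                           # near-zero angular coefficient
print("best horizontal location estimate:", x_est[plateau].mean())
```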


2020 ◽  
Author(s):  
Nicola Zoppetti ◽  
Simone Ceccherini ◽  
Flavio Barbara ◽  
Samuele Del Bianco ◽  
Marco Gai ◽  
...  

Remote sounding of atmospheric composition makes use of satellite measurements with very heterogeneous characteristics. In particular, vertical profiles of atmospheric gases can be determined from measurements acquired in different spectral bands and with different observation geometries. The most rigorous way to combine heterogeneous measurements of the same quantity into a single Level 2 (L2) product is simultaneous retrieval. The main drawback of simultaneous retrieval is its complexity, due to the need to embed the forward models of different instruments in the same retrieval application. To overcome this shortcoming, we developed a data fusion method, referred to as Complete Data Fusion (CDF), as an efficient and adaptable alternative to simultaneous retrieval. In general, the CDF input is any number of profiles retrieved with the optimal estimation technique, each characterized by its a priori information, covariance matrix (CM), and averaging kernel (AK) matrix. The output of the CDF is a single product, also characterized by an a priori, a CM, and an AK matrix, which collects all the available information content. To account for the geo-temporal differences and the different vertical grids of the profiles being fused, a coincidence error and an interpolation error have to be included in the error budget.

In the first part of the work, the CDF method is applied to ozone profiles simulated in the thermal infrared and ultraviolet bands, according to the specifications of the Sentinel 4 (geostationary) and Sentinel 5 (low Earth orbit) missions of the Copernicus program. The simulated data were produced in the context of the Advanced Ultraviolet Radiation and Ozone Retrieval for Applications (AURORA) project, funded by the European Commission in the framework of the Horizon 2020 program. The use of synthetic data and the assumption of negligible systematic error in the simulated measurements allow the behavior of the CDF to be studied in ideal conditions. Synthetic data also allow the performance of the algorithm to be evaluated in terms of differences between the products of interest and the reference truth, represented by the atmospheric scenario used to simulate the L2 products. This analysis aims at demonstrating the potential benefits of the CDF for the synergy of products measured by different platforms in a realistic near-future scenario, when the Sentinel 4 and 5/5P ozone profiles will be available.

In the second part of this work, the CDF is applied to a set of real ozone measurements acquired by GOME-2 onboard the MetOp-B platform. The quality of the CDF products, obtained for the first time from operational products, is compared with that of the original GOME-2 products. This aims to demonstrate the concrete applicability of the CDF to real data and its possible use to generate Level-3 (or higher) gridded products.

The results discussed in this presentation offer a first consolidated picture of the actual and potential value of an innovative technique for post-retrieval processing and generation of Level-3 (or higher) products from the atmospheric Sentinel data.
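
As a sketch of the kind of combination the CDF performs, the following example merges two optimal-estimation retrievals characterized by their a priori, covariance matrices, and averaging kernels; the matrix algebra follows the general optimal-estimation framework, and the function name and toy numbers are assumptions, so it should not be read as the authors' exact CDF equations.

```python
# Hedged numpy sketch of an averaging-kernel-aware combination of two optimal
# estimation retrievals, in the spirit of the Complete Data Fusion described
# above. Matrix names and toy values are assumptions, not the authors' data.
import numpy as np

def complete_fusion(profiles, akernels, noise_cms, x_a, S_a):
    """Fuse retrievals (x_i, A_i, S_i) sharing the a priori (x_a, S_a)."""
    n = x_a.size
    Sa_inv = np.linalg.inv(S_a)
    lhs = Sa_inv.copy()
    rhs = Sa_inv @ x_a
    for x_i, A_i, S_i in zip(profiles, akernels, noise_cms):
        alpha_i = x_i - (np.eye(n) - A_i) @ x_a      # remove the a priori contribution
        Si_inv = np.linalg.inv(S_i)
        lhs += A_i.T @ Si_inv @ A_i
        rhs += A_i.T @ Si_inv @ alpha_i
    S_f = np.linalg.inv(lhs)                          # fused covariance matrix
    return S_f @ rhs, S_f, S_f @ (lhs - Sa_inv)       # fused profile, CM, AK

# Two toy 3-level ozone retrievals with complementary vertical sensitivity
x_a = np.array([2.0, 5.0, 7.0]); S_a = 4.0 * np.eye(3)
A1 = np.diag([0.9, 0.6, 0.2]); A2 = np.diag([0.2, 0.6, 0.9])
S1 = S2 = 0.1 * np.eye(3)
x_true = np.array([2.5, 5.5, 7.5])
x1 = A1 @ x_true + (np.eye(3) - A1) @ x_a             # noise-free synthetic retrievals
x2 = A2 @ x_true + (np.eye(3) - A2) @ x_a
x_f, S_f, A_f = complete_fusion([x1, x2], [A1, A2], [S1, S2], x_a, S_a)
print("fused profile:", np.round(x_f, 3))
```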


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. WB19-WB35 ◽  
Author(s):  
Cyril Schamper ◽  
Fayçal Rejiba ◽  
Roger Guérin

Electromagnetic induction (EMI) methods are widely used to determine the distribution of electrical conductivity and are well adapted to the delimitation of aquifers and clayey layers because the electromagnetic field is strongly perturbed by conductive media. The multicomponent EMI device that was used allowed the three components of the secondary magnetic field (radial, tangential, and vertical) to be measured at 10 frequencies ranging from 110 Hz to 56 kHz in a single sounding, with offsets ranging from 20 to 400 m. In a continuing endeavor to improve the reliability with which thickness and conductivity are inverted, we focused our research on the use of components other than the vertical magnetic field Hz. Because a separate sensitivity analysis suggests that the radial component is more sensitive than the vertical component to variations in the thickness of a near-surface conductive layer, we developed an inversion tool able to perform single-sounding and laterally constrained 1D interpretation of both components jointly, associated with an adapted random-search algorithm for single-sounding processing when almost no a priori information is available. Considering the complementarity of the radial and vertical components, inversion tests on clean and noisy synthetic data showed an improvement in the definition of the thickness of a near-surface conductive layer. This inversion code was applied to the karst site of the basin of Fontaine-Sous-Préaux, near Rouen (northwest of France). Comparison with an electrical resistivity tomography tends to confirm the reliability of the interpretation of the EMI data with the developed inversion tool.
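
A minimal sketch of the single-sounding random-search strategy, using a toy closed-form stand-in for the 1D EM forward response (a real solver would be far more involved); the parameter bounds, frequencies, and noise level are assumptions.

```python
# Hedged sketch: minimise a joint misfit over radial and vertical components with
# a simple random search over layer thickness and conductivity. The forward model
# below is a toy stand-in, NOT a real 1D EM induction solver.
import numpy as np

rng = np.random.default_rng(5)

def toy_forward(thickness, conductivity, freqs):
    """Stand-in responses for the radial and vertical components (hypothetical)."""
    h_r = conductivity * thickness * np.sqrt(freqs) * 1e-3
    h_z = conductivity * np.log1p(thickness) / np.sqrt(freqs)
    return h_r, h_z

freqs = np.geomspace(110.0, 56_000.0, 10)            # 10 frequencies, 110 Hz to 56 kHz
true_hr, true_hz = toy_forward(8.0, 30.0, freqs)     # "observed" data: 8 m thick, 30 mS/m layer
obs_hr = true_hr + 0.01 * rng.standard_normal(10)
obs_hz = true_hz + 0.01 * rng.standard_normal(10)

best, best_misfit = None, np.inf
for _ in range(20_000):                              # pure random search (no a priori model)
    thk = rng.uniform(1.0, 30.0)                     # thickness (m), assumed bounds
    sig = rng.uniform(1.0, 100.0)                    # conductivity (mS/m), assumed bounds
    hr, hz = toy_forward(thk, sig, freqs)
    misfit = np.sum((hr - obs_hr) ** 2) + np.sum((hz - obs_hz) ** 2)   # joint two-component fit
    if misfit < best_misfit:
        best, best_misfit = (thk, sig), misfit
print("recovered (thickness, conductivity):", np.round(best, 2))
```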

