High-resolution wave-equation amplitude-variation-with-ray-parameter (AVP) imaging with sparseness constraints

Geophysics ◽  
2007 ◽  
Vol 72 (1) ◽  
pp. S11-S18 ◽  
Author(s):  
Juefu Wang ◽  
Mauricio D. Sacchi

We propose a new scheme for high-resolution amplitude-variation-with-ray-parameter (AVP) imaging that uses nonquadratic regularization. We pose migration as an inverse problem and propose a cost function that uses a priori information about common-image gathers (CIGs). In particular, we introduce two regularization constraints: smoothness along the offset-ray-parameter axis and sparseness in depth. The two-step regularization yields high-resolution CIGs with robust estimates of AVP. We use an iterative reweighted least-squares conjugate gradient algorithm to minimize the cost function of the problem. We test the algorithm with synthetic data (a wedge model and the Marmousi data set) and a real data set (Erskine area, Alberta). Tests show our method helps to enhance the vertical resolution of CIGs and improves amplitude accuracy along the ray-parameter direction.
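The reweighting idea behind the two-step regularization can be sketched with a toy iteratively reweighted least-squares (IRLS) loop. This is a minimal sketch under stated assumptions: a small dense operator `G` stands in for the wave-equation migration operator, a direct solve stands in for the conjugate-gradient inner loop, and `lam`/`eps` are illustrative values, not the paper's.

```python
import numpy as np

# Hypothetical sparse-spike inverse problem: recover a sparse model m
# from data d = G m + noise, with IRLS approximating an L1 (sparseness)
# penalty. G, lam, and eps are illustrative stand-ins.
rng = np.random.default_rng(0)
n, p = 100, 80
G = rng.standard_normal((n, p))            # forward operator (hypothetical)
m_true = np.zeros(p)
m_true[[5, 30, 55]] = [2.0, -1.5, 1.0]     # sparse "reflectivity" in depth
d = G @ m_true + 0.01 * rng.standard_normal(n)

lam, eps = 0.5, 1e-3
m = np.zeros(p)
for _ in range(10):                         # IRLS outer iterations
    w = 1.0 / (np.abs(m) + eps)             # reweighting makes the quadratic
    # penalty lam * sum(w_i * m_i^2) behave like an L1 penalty
    m = np.linalg.solve(G.T @ G + lam * np.diag(w), G.T @ d)

print(np.count_nonzero(np.abs(m) > 0.1))    # number of significant coefficients
```

Entries off the true support are driven toward zero by the growing weights, while the large spikes are barely penalized, which is the mechanism that sharpens vertical resolution in the CIGs.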

Geophysics ◽  
1993 ◽  
Vol 58 (1) ◽  
pp. 91-100 ◽  
Author(s):  
Claude F. Lafond ◽  
Alan R. Levander

Prestack depth migration still suffers from the problems associated with building appropriate velocity models. The two main after‐migration, before‐stack velocity analysis techniques currently used, depth focusing and residual moveout correction, have found good use in many applications but have also shown their limitations in the case of very complex structures. To address this issue, we have extended the residual moveout analysis technique to the general case of heterogeneous velocity fields and steep dips, while keeping the algorithm robust enough to be of practical use on real data. Our method is not based on analytic expressions for the moveouts and requires no a priori knowledge of the model, but instead uses geometrical ray tracing in heterogeneous media, layer‐stripping migration, and local wavefront analysis to compute residual velocity corrections. These corrections are back projected into the velocity model along raypaths in a way that is similar to tomographic reconstruction. While this approach is more general than existing migration velocity analysis implementations, it is also much more computer intensive and is best used locally around a particularly complex structure. We demonstrate the technique using synthetic data from a model with strong velocity gradients and then apply it to a marine data set to improve the positioning of a major fault.


Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. W1-W12 ◽  
Author(s):  
Renato R. S. Dantas ◽  
Walter E. Medeiros

The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known but seldom-exemplified result. We have revisited resolution in the 2D case using a simple geometric approach based on the angular aperture distribution and the properties of the Radon transform. We found analytically that if an isolated interface has dips contained within the angular aperture limits, it can be reconstructed using just one particular projection. By inversion of synthetic data, we found that a slowness field can be approximately reconstructed from a set of projections if the interfaces delimiting the slowness field have dips contained within the available angular apertures. On the one hand, isolated artifacts may be present when the dip is near the illumination limit. On the other hand, in the inverse sense, if an interface is interpretable from a tomogram, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region, it is diffusely imaged, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts may be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because the anomaly could be an artifact. These results are typical of ill-posed inverse problems: there is no guarantee that the reconstruction corresponds to the true distribution. The limitations due to illumination may not be solved by the use of constraints: crosswell tomograms derived with sparsity constraints, using the discrete cosine transform and Daubechies bases, essentially reproduce the same features seen in tomograms obtained with a smoothness constraint. Interpretation must therefore take into consideration a priori information and the particular limitations due to illumination, as we illustrate with a real data case.


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the expense of computing the Hessian, so an efficient approximation is introduced that computes only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
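The diagonal-limited Hessian idea can be sketched as follows. This is a minimal illustration under stated assumptions: a small dense operator `A` with rapidly decaying off-diagonal coupling stands in for the extrapolation operator, and `mu` and the bandwidth are illustrative values, not the paper's.

```python
import numpy as np

# Damped least squares with a banded approximation of the Hessian A^T A.
# A, mu, and the bandwidth are illustrative stand-ins, not the paper's
# extrapolation operators.
n = 60
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.5 * np.abs(i - j))          # smoothing-type operator, fast decay
m_true = np.sin(np.linspace(0.0, 3.0 * np.pi, n))
d = A @ m_true                             # noise-free synthetic data

mu = 1e-3                                  # damping weight
H = A.T @ A                                # full Hessian (costly in general)
H_band = np.where(np.abs(i - j) <= 10, H, 0.0)  # keep only 21 central diagonals

m_full = np.linalg.solve(H + mu * np.eye(n), A.T @ d)   # full inversion
m_band = np.linalg.solve(H_band + mu * np.eye(n), A.T @ d)  # cheap approximation
```

Because the operator's energy concentrates near the main diagonal, the banded Hessian recovers nearly the same model at a fraction of the storage and computation, mirroring the cost/accuracy trade-off the abstract describes.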


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. E293-E299
Author(s):  
Jorlivan L. Correa ◽  
Paulo T. L. Menezes

Synthetic data generated from geoelectric earth models are a powerful tool for evaluating the effectiveness of a controlled-source electromagnetic (CSEM) workflow a priori. Marlim R3D (MR3D) is an open-source, complex, and realistic geoelectric model for CSEM simulations of the postsalt turbiditic reservoirs at the Brazilian offshore margin. We have developed a 3D CSEM finite-difference time-domain forward study to generate the full-azimuth CSEM data set for the MR3D earth model. To that end, we designed a full-azimuth survey with 45 towlines striking in the north–south and east–west directions over a total of 500 receivers evenly spaced at 1 km intervals along the rugged seafloor of the MR3D model. To correctly represent the thin, disconnected, and complex geometries of the studied reservoirs, we built a finely discretized mesh of [Formula: see text] cells, for a total of approximately 90 million cells. We computed the six electromagnetic field components (Ex, Ey, Ez, Hx, Hy, and Hz) at six frequencies in the range of 0.125–1.25 Hz. To mimic the noise in real CSEM data, we added multiplicative noise with a 1% standard deviation to the data. Both CSEM data sets (noise free and noise added), with inline and broadside geometries, are distributed for research or commercial use, under a Creative Commons license, on the Zenodo platform.
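The noise model can be sketched in a few lines. The decaying amplitude curve and the seed are illustrative stand-ins for a real field component, not the MR3D data:

```python
import numpy as np

# 1% multiplicative Gaussian noise applied to a hypothetical clean
# field-component curve (illustrative, not the MR3D amplitudes).
rng = np.random.default_rng(42)
ex_clean = 1e-12 * np.exp(-0.005 * np.arange(1000))   # hypothetical amplitudes (V/m)
ex_noisy = ex_clean * rng.normal(loc=1.0, scale=0.01, size=ex_clean.shape)

relative_error = ex_noisy / ex_clean - 1.0
print(relative_error.std())   # close to 0.01, i.e., a 1% standard deviation
```

Multiplicative (rather than additive) noise keeps the relative error level constant across the many orders of magnitude an amplitude curve spans, which is why it is the usual choice for mimicking CSEM measurement noise.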


2020 ◽  
Vol 223 (3) ◽  
pp. 1565-1583
Author(s):  
Hoël Seillé ◽  
Gerhard Visser

SUMMARY Bayesian inversion of magnetotelluric (MT) data is a powerful but computationally expensive approach to estimating the subsurface electrical conductivity distribution and its associated uncertainty. Approximating the Earth's subsurface with 1-D physics considerably speeds up calculation of the forward problem, making the Bayesian approach tractable, but can lead to biased results when the assumption is violated. We propose a methodology to quantitatively compensate for the bias caused by the 1-D Earth assumption within a 1-D trans-dimensional Markov chain Monte Carlo sampler. Our approach determines site-specific likelihood functions, calculated using a dimensionality discrepancy error model derived by a machine learning algorithm trained on a set of synthetic 3-D conductivity training images. This is achieved by exploiting known dimensionality properties of the MT phase tensor. A complex synthetic model that mimics a sedimentary basin environment illustrates the ability of our workflow to reliably estimate uncertainty in the inversion results, even in the presence of strong 2-D and 3-D effects. Using this dimensionality discrepancy error model, we demonstrate that on this synthetic data set our workflow outperforms the existing practice of using constant errors in 80 per cent of the cases. Finally, our workflow is benchmarked against real data acquired in Queensland, Australia, and accurately detects the depth to basement.


2014 ◽  
Vol 18 (12) ◽  
pp. 5219-5237 ◽  
Author(s):  
S. Ferrant ◽  
S. Gascoin ◽  
A. Veloso ◽  
J. Salmon-Monviola ◽  
M. Claverie ◽  
...  

Abstract. The growing availability of high-resolution satellite image series offers new opportunities in agro-hydrological research and modeling. We investigated the possibilities they offer for improving the simulation of crop-growth dynamics with the distributed agro-hydrological model topography-based nitrogen transfer and transformation (TNT2). We used a leaf area index (LAI) map series derived from 105 Formosat-2 (F2) images covering the period 2006–2010. The TNT2 model (Beaujouan et al., 2002), calibrated against discharge and in-stream nitrate fluxes for the period 1985–2001, was tested on the 2005–2010 data set (climate, land use, agricultural practices, and discharge and nitrate fluxes at the outlet). Data from the first year (2005) were used to initialize the hydrological model. A priori agricultural practices obtained from an extensive field survey, such as seeding date, crop cultivar, and amount of fertilizer, were used as input variables. Continuous values of LAI as a function of cumulative daily temperature were obtained at the crop-field level by fitting a double logistic equation to the discrete satellite-derived LAI. Model predictions of LAI dynamics using the a priori input parameters displayed temporal shifts relative to the observed LAI profiles, irregularly distributed in space (between crop fields) and time (between years). We therefore developed an optimization method that resets the seeding date at the crop-field level to efficiently minimize this temporal shift and better fit the simulated crop growth to both the spatial observations and crop production. This optimization of simulated LAI has a negligible impact on water budgets at the catchment scale (1 mm yr−1 on average) but a noticeable impact on in-stream nitrogen fluxes (around 12%), which is of interest when considering nitrate stream contamination issues and the objectives of TNT2 modeling.
This study demonstrates the potential contribution of the forthcoming high spatial and temporal resolution products from the Sentinel-2 satellite mission for improving agro-hydrological modeling by constraining the spatial representation of crop productivity.
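The double logistic fit described above can be sketched with a standard least-squares curve fit. The parameterization (a plateau amplitude, two slopes, and two inflection points in cumulative degree-days) and every numerical value below are illustrative assumptions, not those of the TNT2 study:

```python
import numpy as np
from scipy.optimize import curve_fit

# Double logistic LAI model: a rising sigmoid (green-up) minus a later
# sigmoid (senescence), as a function of cumulative daily temperature t.
# All parameter values are hypothetical.
def double_logistic(t, k, a, t_grow, b, t_sen):
    growth = 1.0 / (1.0 + np.exp(-a * (t - t_grow)))
    senescence = 1.0 / (1.0 + np.exp(-b * (t - t_sen)))
    return k * (growth - senescence)

t = np.linspace(0.0, 2000.0, 41)               # cumulative degree-days
true_params = (4.0, 0.01, 600.0, 0.008, 1500.0)
rng = np.random.default_rng(3)
lai_obs = double_logistic(t, *true_params) + 0.1 * rng.standard_normal(t.size)

# Fit the continuous curve to the discrete "satellite-derived" LAI samples.
popt, _ = curve_fit(double_logistic, t, lai_obs,
                    p0=(3.0, 0.02, 500.0, 0.02, 1400.0))
```

Once fitted per field, shifting `t_grow` is the kind of lever a seeding-date reset exploits: it translates the green-up phase along the thermal-time axis without changing the plateau LAI.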


2019 ◽  
Vol 11 (3) ◽  
pp. 249 ◽  
Author(s):  
Pejman Rasti ◽  
Ali Ahmad ◽  
Salma Samiei ◽  
Etienne Belin ◽  
David Rousseau

In this article, we assess the value of the recently introduced multiscale scattering transform for texture classification, applied for the first time in plant science. The scattering transform is shown to outperform not only monoscale approaches (gray-level co-occurrence matrix, local binary patterns) but also multiscale approaches (wavelet decomposition) that lack its combinatory steps. The data-set size regime in which the scattering transform also outperforms a standard CNN architecture is evaluated (approximately 10^4 instances). An approach to optimally designing the scattering transform based on energy contrast is provided. This is illustrated on the hard, open problem of weed detection in high-density crops, viewed from above in intensity images. An annotated synthetic data set, available in the form of a data challenge, and a simulator are provided for reproducible science (https://uabox.univ-angers.fr/index.php/s/iuj0knyzOUgsUV9). The scattering transform, trained only on synthetic data, achieves an accuracy of 85% when tested on real data.
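One of the monoscale baselines named above can be sketched in pure NumPy: the contrast feature of a distance-1, angle-0 gray-level co-occurrence matrix (GLCM), which reduces to the mean squared difference between horizontally adjacent pixels. The two synthetic textures are illustrative assumptions, not images from the data challenge:

```python
import numpy as np

rng = np.random.default_rng(0)
smooth = rng.integers(100, 120, size=(64, 64))   # low-contrast texture
rough = rng.integers(0, 256, size=(64, 64))      # high-contrast texture

def glcm_contrast(img):
    # Contrast of the distance-1, angle-0 co-occurrence matrix equals the
    # mean squared gray-level difference between horizontal neighbours.
    diff = img[:, 1:].astype(np.int64) - img[:, :-1].astype(np.int64)
    return float((diff ** 2).mean())

print(glcm_contrast(smooth) < glcm_contrast(rough))   # True
```

A single such statistic is computed at one fixed scale and orientation; the scattering transform instead cascades wavelet moduli across scales, which is what the combinatory steps add.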


2020 ◽  
Author(s):  
Nicola Zoppetti ◽  
Simone Ceccherini ◽  
Flavio Barbara ◽  
Samuele Del Bianco ◽  
Marco Gai ◽  
...  

Remote sounding of atmospheric composition makes use of satellite measurements with very heterogeneous characteristics. In particular, vertical profiles of atmospheric gases can be determined from measurements acquired in different spectral bands and with different observation geometries. The most rigorous way to combine heterogeneous measurements of the same quantity into a single Level 2 (L2) product is simultaneous retrieval. Its main drawback is complexity, because the forward models of the different instruments must be embedded in the same retrieval application. To overcome this shortcoming, we developed a data fusion method, referred to as Complete Data Fusion (CDF), as an efficient and adaptable alternative to simultaneous retrieval. In general, the CDF input is any number of profiles retrieved with the optimal estimation technique, each characterized by its a priori information, covariance matrix (CM), and averaging kernel (AK) matrix. The CDF output is a single product, also characterized by an a priori, a CM, and an AK matrix, which collects all the available information content. To account for the geo-temporal differences and different vertical grids of the fused profiles, a coincidence error and an interpolation error must be included in the error budget.

In the first part of the work, the CDF method is applied to ozone profiles simulated in the thermal infrared and ultraviolet bands, according to the specifications of the Sentinel 4 (geostationary) and Sentinel 5 (low Earth orbit) missions of the Copernicus program. The simulated data were produced in the context of the Advanced Ultraviolet Radiation and Ozone Retrieval for Applications (AURORA) project, funded by the European Commission in the framework of the Horizon 2020 program.
The use of synthetic data and the assumption of negligible systematic error in the simulated measurements allow the behavior of the CDF to be studied under ideal conditions. Synthetic data also allow the performance of the algorithm to be evaluated in terms of differences between the products of interest and the reference truth, represented by the atmospheric scenario used to simulate the L2 products. This analysis aims to demonstrate the potential benefits of the CDF for the synergy of products measured by different platforms in a realistic near-future scenario, when the Sentinel 4 and 5/5p ozone profiles will be available.

In the second part of this work, the CDF is applied to a set of real ozone measurements acquired by GOME-2 onboard the MetOp-B platform. The quality of the CDF products, obtained for the first time from operational products, is compared with that of the original GOME-2 products, demonstrating the concrete applicability of the CDF to real data and its possible use to generate Level-3 (or higher) gridded products.

The results discussed in this presentation offer a first consolidated picture of the actual and potential value of an innovative technique for post-retrieval processing and the generation of Level-3 (or higher) products from the atmospheric Sentinel data.
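As a toy illustration of why fusing retrievals gains information content, consider inverse-variance weighting of two independent unbiased estimates of the same quantity. This is only the scalar core of optimal-estimation fusion; the actual CDF additionally handles averaging kernels, a priori terms, and the coincidence and interpolation errors discussed above. All numbers are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)
truth = 300.0                               # hypothetical ozone partial column (DU)
sigma1, sigma2 = 10.0, 6.0                  # hypothetical per-instrument noise levels
x1 = truth + sigma1 * rng.standard_normal(5000)
x2 = truth + sigma2 * rng.standard_normal(5000)

w1, w2 = 1.0 / sigma1**2, 1.0 / sigma2**2
fused = (w1 * x1 + w2 * x2) / (w1 + w2)     # inverse-variance weighting
sigma_fused = (w1 + w2) ** -0.5             # predicted noise of the fused estimate

print(fused.std() < min(sigma1, sigma2))    # True
```

The fused scatter falls below that of either input, which is the basic reason a single fused product can carry all the available information content of its inputs.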


2020 ◽  
Author(s):  
Linus Shihora ◽  
Henryk Dobslaw

The Atmosphere and Ocean De-Aliasing Level-1B (AOD1B) product provides a priori information about temporal variations in the Earth's gravity field caused by global mass variability in the atmosphere and ocean, and it is routinely used as a background model in satellite gravimetry. The current version 06 provides Stokes coefficients expanded up to d/o 180 every 3 hours. It is based on ERA-Interim and the ECMWF operational model for the atmosphere, and on simulations with the global ocean general circulation model MPIOM, consistently forced with fields from the same atmospheric data set.

We here present preliminary numerical experiments in the development of a new release 07 of AOD1B. The experiments are performed with the TP10 configuration of MPIOM and include (I) new hourly atmospheric forcing based on the new ERA-5 reanalysis from ECMWF; (II) an improved bathymetry around Antarctica, including cavities under the ice shelves; and (III) an explicit implementation of the feedback effects of self-attraction and loading on ocean dynamics. The simulated ocean bottom pressure variability is discussed with respect to AOD1B version 06 as well as in situ ocean observations. A preliminary time series of hourly AOD1B-like coefficients for the year 2019 that incorporates the above-mentioned improvements will be made available for testing purposes.


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. WB19-WB35 ◽  
Author(s):  
Cyril Schamper ◽  
Fayçal Rejiba ◽  
Roger Guérin

Electromagnetic induction (EMI) methods are widely used to determine the distribution of electrical conductivity and are well adapted to the delimitation of aquifers and clayey layers because the electromagnetic field is strongly perturbed by conductive media. The multicomponent EMI device that we used allowed the three components of the secondary magnetic field (the radial [Formula: see text], the tangential [Formula: see text], and the vertical [Formula: see text]) to be measured at 10 frequencies ranging from 110 Hz to 56 kHz in a single sounding, with offsets ranging from 20 to 400 m. In a continuing endeavor to improve the reliability with which thickness and conductivity are inverted, we focused our research on the use of components other than the vertical magnetic field Hz. Because a separate sensitivity analysis of [Formula: see text] and [Formula: see text] suggested that [Formula: see text] is more sensitive to variations in the thickness of a near-surface conductive layer, we developed an inversion tool able to perform single-sounding and laterally constrained 1D interpretation of both components jointly, together with an adapted random-search algorithm for single-sounding processing when almost no a priori information is available. Given the complementarity of the [Formula: see text] and [Formula: see text] components, inversion tests on clean and noisy synthetic data showed an improvement in the definition of the thickness of a near-surface conductive layer. This inversion code was applied to the karst site of the basin of Fontaine-Sous-Préaux, near Rouen (northwest France). Comparison with an electrical resistivity tomography tends to confirm the reliability of the interpretation of the EMI data with the developed inversion tool.

