Nested anisotropic geostatistical gridding of airborne geophysical data

Geophysics ◽  
2021 ◽  
pp. 1-56
Author(s):  
Aaron Davis

Airborne geophysical surveys routinely collect data along traverse lines at along-line sample spacings that are two or more orders of magnitude smaller than the separation between lines. Grids and maps interpolated from such surveys can suffer from aliasing; features that cross flight lines can exhibit boudinage or string-of-beads artefacts. These boudinage effects can be addressed by novel gridding methods. Following developments in geostatistics, a non-stationary nested anisotropic gridding scheme is proposed that accommodates local anisotropy in survey data. Computation is reduced by placing anchor points throughout the interpolation region; these carry localised anisotropy information that is propagated across the survey area with a smoothing kernel. Additional anisotropy may be required at certain locations in the region to be gridded. A model selection scheme is proposed that employs Laplace approximations to determine whether increased model complexity is supported by the surrounding data. The efficacy of the method is shown using a synthetic data set obtained from satellite imagery: a pseudo geophysical survey is created from the image and reconstructed with the method above. Two case histories from airborne geophysical surveys conducted in Western Australia are selected for further elucidation. The first illustrates improved gridding of the depth of palaeochannels interpreted from along-line conductivity-depth models of a regional airborne electromagnetic survey in the Mid-West. The second shows how improvements can be made in producing grids of aeromagnetic data and inverted electrical conductivity from an airborne electromagnetic survey conducted in the Pilbara. In both case histories, nested anisotropic kriging reduces the expression of boudinage patterns and sharpens cross-line features in the final gridded products, permitting increased confidence in interpretations based on such products.
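
As a rough sketch of the anchor-point mechanism described above, the following Python fragment interpolates a local anisotropy field (orientation angle and axis ratio) from anchor points with a Gaussian smoothing kernel, then uses it in a locally rotated covariance. The kernel bandwidth, the exponential covariance, and the evaluation of anisotropy at the pair midpoint are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_anisotropy(x, anchors, angles, ratios, bandwidth=500.0):
    """Kernel-smoothed anisotropy (angle, ratio) at location x (2-vector)."""
    d2 = np.sum((anchors - x) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / bandwidth ** 2)
    w /= w.sum()
    # Average orientations on doubled angles to avoid the 180-degree ambiguity.
    angle = 0.5 * np.arctan2(w @ np.sin(2 * angles), w @ np.cos(2 * angles))
    return angle, w @ ratios

def anisotropic_cov(x1, x2, anchors, angles, ratios, sill=1.0, rng=2000.0):
    """Exponential covariance with anisotropy taken at the pair midpoint."""
    angle, ratio = local_anisotropy(0.5 * (x1 + x2), anchors, angles, ratios)
    c, s = np.cos(angle), np.sin(angle)
    h = np.array([[c, s], [-s, c]]) @ (x2 - x1)  # rotate into anisotropy frame
    h[1] /= ratio                                # ratio = minor/major range
    return sill * np.exp(-np.linalg.norm(h) / rng)
```

A kriging system built from such a covariance then varies its orientation smoothly across the survey, which is what suppresses the cross-line beading.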

Geophysics ◽  
2015 ◽  
Vol 80 (1) ◽  
pp. E11-E21 ◽  
Author(s):  
Julien Guillemoteau ◽  
Pascal Sailhac ◽  
Charles Boulanger ◽  
Jérémie Trules

Ground loop-loop electromagnetic surveys are often conducted so as to fulfill the low-induction-number condition. To image the distribution of electrical conductivity inside the ground, it is then necessary to collect a multioffset data set. We considered that less time-consuming constant-offset measurements can also reach this objective. This can be achieved by performing multifrequency soundings, as is commonly done in the airborne electromagnetic method. Ground multifrequency soundings have to be interpreted carefully because they contain high-induction-number data. These data are interpreted in two steps. First, the in-phase and out-of-phase data are converted into robust apparent conductivities valid for all induction numbers. Second, the apparent-conductivity data are inverted in 1D and 2D to obtain the true distribution of the ground conductivity. For the inversion, we used a general half-space Jacobian for the apparent conductivity valid for all induction numbers. The method was applied and validated on synthetic data computed with the full Maxwell theory, and then applied to field data acquired at the Provins test site in the Paris Basin, France. The result revealed good agreement with borehole and geologic information, demonstrating the applicability of our method.
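
A minimal sketch of the first step, under the assumption that a full-Maxwell half-space forward model is available and that the chosen response component is monotonic in conductivity over the bracket; `halfspace_response` and the bracketing limits are placeholders, not the authors' code.

```python
from scipy.optimize import brentq

def apparent_conductivity(measured, freq, offset, halfspace_response):
    """Invert one real-valued response component (e.g., the out-of-phase
    ratio) for the half-space conductivity that reproduces it, so the
    result stays valid outside the low-induction-number regime."""
    def misfit(sigma):
        return halfspace_response(sigma, freq, offset) - measured
    return brentq(misfit, 1e-4, 10.0)  # bracket in S/m; assumes monotonicity
```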


2006 ◽  
Vol 53 (1) ◽  
pp. 85-92 ◽  
Author(s):  
O. Bernard ◽  
B. Chachuat ◽  
A. Hélias ◽  
J. Rodriguez

In this paper we propose a methodology to determine the structure of the pseudo-stoichiometric coefficient matrix K in a mass-balance-based model, i.e., the maximal number of biomasses that must be taken into account to reproduce an available data set. It consists of estimating the number of reactions that must be taken into account to represent the main mass transfers within the bioreactor, which provides the dimension of K. The method is applied to data from an anaerobic digestion process and shows that even a model including a single biomass is sufficient. We then apply the same method to synthetic data generated with the complex ADM1 model, showing that the main model features can be obtained with two biomasses.
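
One common way to carry out this estimate, sketched below under the assumption that the reaction term dominates the measured concentration variations, is to count the singular values of the data matrix that rise above the noise floor; the 5% threshold is an illustrative choice, not the paper's criterion.

```python
import numpy as np

def number_of_reactions(X, rel_tol=0.05):
    """X: (n_times, n_species) matrix of measured concentration variations.
    The numerical rank of the centred data bounds the dimension of K."""
    s = np.linalg.svd(X - X.mean(axis=0), compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))  # singular values above noise floor
```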


Geophysics ◽  
2006 ◽  
Vol 71 (6) ◽  
pp. G301-G312 ◽  
Author(s):  
Ross Brodie ◽  
Malcolm Sambridge

We have developed a holistic method for simultaneously calibrating, processing, and inverting frequency-domain airborne electromagnetic data. A spline-based, 3D, layered conductivity model covering the complete survey area was recovered through inversion of the entire raw airborne data set and available independent conductivity and interface-depth data. The holistic inversion formulation includes a mathematical model to account for systematic calibration errors such as incorrect gain and zero-level drift. By taking these elements into account in the inversion, the need to preprocess the airborne data prior to inversion is eliminated. Conventional processing schemes involve the sequential application of a number of calibration corrections, with data from each frequency treated separately, followed by inversion of each multifrequency sample in isolation from other samples. By simultaneously considering all of the available information in a holistic inversion, we are able to exploit interfrequency and spatial-coherency characteristics of the data. The formulation ensures that the conductivity and calibration models are optimal with respect to the airborne data and prior information. It also avoids the interfrequency inconsistency and multistage error propagation that stem from the sequential nature of conventional processing schemes. We confirm that accurate conductivity and calibration parameter values are recovered from holistic inversion of synthetic data sets, and we demonstrate that the results from holistic inversion of raw survey data are superior to the output of conventional 1D inversion of final processed data. In addition to the technical benefits, we expect that holistic inversion will reduce costs by avoiding the expensive calibration-processing-recalibration paradigm. Further savings may be made because the specific high-altitude zero-level observations needed for conventional processing may not be required.
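
The core idea can be sketched as a residual function in which calibration parameters are appended to the model vector and estimated jointly with conductivity; `em_forward`, the single gain, and the linear drift term are simplifying assumptions standing in for the paper's full parameterisation.

```python
import numpy as np
from scipy.optimize import least_squares

def holistic_residuals(m, times, data, n_cond, em_forward):
    """Model vector m = [conductivity parameters, gain, drift rate]."""
    cond = m[:n_cond]
    gain, drift_rate = m[n_cond], m[n_cond + 1]
    predicted = gain * em_forward(cond, times) + drift_rate * times
    return predicted - data

# e.g. least_squares(holistic_residuals, m0, args=(times, data, n_cond, fwd))
```

Because gain and drift enter the same residual as the conductivity parameters, the solver trades them off against each other, which is what removes the separate preprocessing stage.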


Author(s):  
Raul E. Avelar ◽  
Karen Dixon ◽  
Boniphace Kutela ◽  
Sam Klump ◽  
Beth Wemple ◽  
...  

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) for adjusting its SPFs for use in a given jurisdiction. Critically, the quality of the calibration procedure must be assessed before the calibrated SPFs are used. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs against a large Mississippi safety database to examine the relations between multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess the overall quality of a calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order of importance: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended for comprehensively assessing the quality of calibrated intersection SPFs.
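
A composite index of this kind reduces, in its simplest form, to a weighted combination of standardized metrics; the sketch below is a generic illustration with hypothetical weights, not the weighting the authors derived from their factor analysis.

```python
import numpy as np

def calibration_index(metric_matrix, weights):
    """metric_matrix: (n_calibrations, n_metrics); weights: (n_metrics,).
    Metrics are z-scored so different scales can be combined; sign
    conventions (whether high is good) must be fixed beforehand."""
    z = (metric_matrix - metric_matrix.mean(0)) / metric_matrix.std(0)
    return z @ (weights / weights.sum())
```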


Water ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 107
Author(s):  
Elahe Jamalinia ◽  
Faraz S. Tehrani ◽  
Susan C. Steele-Dunne ◽  
Philip J. Vardon

Climatic conditions and vegetation cover influence the water flux in a dike, and potentially the dike's stability. A comprehensive numerical simulation is computationally too expensive for near real-time analysis of a dike network. This study therefore investigates a random forest (RF) regressor as a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set comprising features that can be observed from a dike surface, with the calculated factor of safety (FoS) as the target variable. The data set before 2018 is split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for data in the test set (before 2018). However, the trained model shows lower performance on the evaluation set (after 2018) when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can be used to determine dike stability for conditions similar to the training data, and could be used to identify vulnerable locations in a dike network for further examination.
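
A minimal sketch of the surrogate workflow with scikit-learn; the file name and feature columns are hypothetical stand-ins for the surface-observable features in the paper.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Hypothetical daily simulation output: surface-observable features plus FoS.
df = pd.read_csv("dike_simulation_daily.csv", parse_dates=["date"])
train = df[df["date"] < "2018-01-01"]  # hold out post-2018 for evaluation

X_tr, X_te, y_tr, y_te = train_test_split(
    train[["precipitation", "evapotranspiration", "leaf_area_index"]],
    train["fos"], test_size=0.2, random_state=0)

rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("test R^2:", rf.score(X_te, y_te))
```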


Author(s):  
M G Persova ◽  
Y G Soloveichik ◽  
D V Vagin ◽  
D S Kiselev ◽  
O S Trubacheva ◽  
...  

Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced in which only a limited number of diagonals of the operators involved are computed. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
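
In matrix form the estimate is m = (G^H W G + eps*I)^(-1) G^H W d. The sketch below forms the full Hessian for clarity and then keeps only a few central diagonals, which corresponds to the approximation the abstract refers to (a practical code would avoid building the full operator); G and W are placeholder extrapolation and weighting operators.

```python
import numpy as np
from scipy.sparse import diags

def approx_datum(G, W, d, eps=1e-3, n_diags=5):
    """Weighted, damped least squares with a banded Hessian approximation."""
    H = G.conj().T @ (W @ G)            # full Hessian, formed here for clarity
    offsets = list(range(-(n_diags // 2), n_diags // 2 + 1))
    H_band = diags([np.diag(H, k) for k in offsets], offsets).toarray()
    rhs = G.conj().T @ (W @ d)
    return np.linalg.solve(H_band + eps * np.eye(H.shape[0]), rhs)
```

Keeping only near diagonals is what limits the recoverable dips: wider bands admit steeper dips at higher cost, which is the controllable trade-off noted above.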


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
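
The classical bootstrap component can be sketched as follows, with scikit-learn's NMF standing in for the PMF/ME-2 solver (which it is not) and non-negative data assumed; in practice, bootstrap factors must also be matched to the base-run factors before the spread is summarized.

```python
import numpy as np
from sklearn.decomposition import NMF

def bootstrap_profiles(X, n_factors=3, n_boot=50, seed=0):
    """Refit the factorization on row-resampled data; the spread of the
    resulting factor profiles approximates their uncertainty."""
    rng = np.random.default_rng(seed)
    profiles = []
    for _ in range(n_boot):
        rows = rng.integers(0, X.shape[0], X.shape[0])  # resample samples
        model = NMF(n_components=n_factors, init="nndsvda", max_iter=500)
        model.fit(X[rows])
        profiles.append(model.components_)
    return np.array(profiles)  # (n_boot, n_factors, n_species)
```

DISP, by contrast, perturbs individual factor elements and refits to probe rotational ambiguity, which pure resampling cannot capture.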


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. C81-C92 ◽  
Author(s):  
Helene Hafslund Veire ◽  
Hilde Grude Borgos ◽  
Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. To utilize information about pressure- and saturation-related changes in reservoir modeling and simulation, the uncertainty in the estimations must be quantified. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF), providing estimates of uncertainties as well as direct estimates of the properties. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model, and PP reflection-coefficient differences are used to establish a likelihood model linking reservoir variables and time-lapse seismic data. The methodology incorporates correlation between the different variables of the model as well as spatial dependencies for each variable. In addition, possible bottlenecks causing large uncertainties in the estimations can be identified through sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
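
If the likelihood is linearized and both prior and noise are taken as Gaussian, the Bayesian solution has the closed form sketched below, where the posterior covariance directly supplies the uncertainty estimates; G, Cm, and Ce are placeholders for the rock-physics-derived operators in the paper.

```python
import numpy as np

def gaussian_posterior(G, d, m0, Cm, Ce):
    """Posterior mean/covariance of m (e.g., [dP, dS]) given d = G m + noise,
    prior m ~ N(m0, Cm), and noise ~ N(0, Ce)."""
    Cm_inv, Ce_inv = np.linalg.inv(Cm), np.linalg.inv(Ce)
    C_post = np.linalg.inv(G.T @ Ce_inv @ G + Cm_inv)
    m_post = C_post @ (G.T @ Ce_inv @ d + Cm_inv @ m0)
    return m_post, C_post
```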

