Inversion of ground constant offset loop-loop electromagnetic data for a large range of induction numbers

Geophysics · 2015 · Vol 80 (1) · pp. E11-E21
Author(s): Julien Guillemoteau, Pascal Sailhac, Charles Boulanger, Jérémie Trules

Ground loop-loop electromagnetic surveys are often conducted so as to fulfill the low-induction-number condition. Imaging the distribution of electric conductivity inside the ground then requires a multioffset data set. We considered that less time-consuming constant-offset measurements can also reach this objective, by performing multifrequency soundings of the kind commonly used in the airborne electromagnetic method. Ground multifrequency soundings must be interpreted carefully because they contain high-induction-number data. We interpret these data in two steps. First, the in-phase and out-of-phase data are converted into robust apparent conductivities valid for all induction numbers. Second, the apparent conductivity data are inverted in 1D and 2D to obtain the true distribution of the ground conductivity. For the inversion, we used a general half-space Jacobian for the apparent conductivity, valid for all induction numbers. The method was applied and validated on synthetic data computed with the full Maxwell theory, and then applied to field data acquired at the Provins test site in the Paris Basin, France. The result shows good agreement with borehole and geologic information, demonstrating the applicability of our method.
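For context, the classic low-induction-number (LIN) conversion of a quadrature field ratio into apparent conductivity can be sketched as below; the paper's contribution is a more robust conversion valid at all induction numbers, so this formula is only the limiting case (the 9 kHz frequency, 1 m offset, and 10 ppm response are illustrative values, not the paper's).

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def lin_apparent_conductivity(quad_ratio, freq_hz, offset_m):
    """Low-induction-number conversion of the quadrature (out-of-phase)
    secondary/primary field ratio into apparent conductivity (S/m).
    Valid only when the induction number is small; the paper replaces
    this with a conversion valid for all induction numbers."""
    omega = 2.0 * math.pi * freq_hz
    return 4.0 * quad_ratio / (omega * MU0 * offset_m ** 2)

# Example: a 10 ppm quadrature response at 9 kHz with a 1 m coil offset
sigma_a = lin_apparent_conductivity(10e-6, 9000.0, 1.0)
```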

Geophysics · 2021 · pp. 1-56
Author(s): Aaron Davis

Airborne geophysical surveys routinely collect data along traverse lines at sample spacings two or more orders of magnitude smaller than the separation between lines. Grids and maps interpolated from such surveys can suffer aliasing; features that cross flight lines can exhibit boudinage or string-of-beads artefacts. These effects can be addressed by novel gridding methods. Following developments in geostatistics, a non-stationary nested anisotropic gridding scheme is proposed that accommodates local anisotropy in survey data. Computation is reduced by placing anchor points throughout the interpolation region that carry localised anisotropy information, which is propagated across the survey area with a smoothing kernel. Additional anisotropy can be required at certain locations in the region to be gridded; a model selection scheme is proposed that employs Laplace approximations to determine whether increased model complexity is supported by the surrounding data. The efficacy of the method is shown using a synthetic data set obtained from satellite imagery: a pseudo geophysical survey is created from the image and reconstructed with the proposed method. Two case histories from airborne geophysical surveys conducted in Western Australia are selected for further elucidation. The first illustrates improved gridding of palaeochannel depths interpreted from along-line conductivity-depth models of a regional airborne electromagnetic survey in the Mid-West. The second shows how grids of aeromagnetic data and inverted electrical conductivity from an airborne electromagnetic survey in the Pilbara can be improved. In both case histories, nested anisotropic kriging reduces the expression of boudinage patterns and sharpens cross-line features in the final gridded products, permitting increased confidence in interpretations based on those products.
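A minimal sketch of the anchor-point idea, assuming a Gaussian smoothing kernel and orientation-only anisotropy (the paper's scheme is richer, with nested structures and anisotropy ratios):

```python
import numpy as np

def propagate_anisotropy(anchor_xy, anchor_theta, grid_xy, kernel_width):
    """Spread locally estimated anisotropy orientations (theta, radians)
    from sparse anchor points to grid nodes with a Gaussian smoothing
    kernel -- a stand-in for the propagation step described in the text.
    Orientations are averaged via doubled-angle vectors so that theta
    and theta + pi count as the same direction."""
    d2 = ((grid_xy[:, None, :] - anchor_xy[None, :, :]) ** 2).sum(axis=-1)
    w = np.exp(-0.5 * d2 / kernel_width ** 2)        # (n_grid, n_anchor)
    w /= w.sum(axis=1, keepdims=True)
    c = w @ np.cos(2.0 * anchor_theta)
    s = w @ np.sin(2.0 * anchor_theta)
    return 0.5 * np.arctan2(s, c)

anchors = np.array([[0.0, 0.0], [10.0, 0.0]])
thetas = np.array([0.3, 0.3])                        # same orientation at both
grid = np.array([[5.0, 0.0], [0.0, 4.0]])
smoothed = propagate_anisotropy(anchors, thetas, grid, kernel_width=5.0)
```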


Geophysics · 1985 · Vol 50 (11) · pp. 1701-1720
Author(s): Glyn M. Jones, D. B. Jovanovich

A new technique is presented for the inversion of head‐wave traveltimes to infer near‐surface structure. Traveltimes computed along intersecting pairs of refracted rays are used to reconstruct the shape of the first refracting horizon beneath the surface and variations in refractor velocity along this boundary. The information derived can be used as the basis for further processing, such as the calculation of near‐surface static delays. One advantage of the method is that the shape of the refractor is determined independently of the refractor velocity. With multifold coverage, rapid lateral changes in refractor geometry or velocity can be mapped. Two examples of the inversion technique are presented: one uses a synthetic data set; the other is drawn from field data shot over a deep graben filled with sediment. The results obtained using the synthetic data validate the method and support the conclusions of an error analysis, in which errors in the refractor velocity determined using receivers to the left and right of the shots are of opposite sign. The true refractor velocity therefore falls between the two sets of estimates. The refraction image obtained by inversion of the set of field data is in good agreement with a constant‐velocity reflection stack and illustrates that the ray inversion method can handle large lateral changes in refractor velocity or relief.
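One simple way to exploit the opposite-signed errors, shown here as an illustrative assumption rather than the authors' exact scheme, is to average the two slownesses, i.e., take the harmonic mean of the left and right velocity estimates:

```python
def reconcile_refractor_velocity(v_left, v_right):
    """Combine refractor velocities estimated from receivers to the left
    and right of the shots. Because their errors have opposite sign, the
    true velocity lies between them; averaging the slownesses (harmonic
    mean of the velocities) is one simple reconciliation."""
    return 2.0 * v_left * v_right / (v_left + v_right)

# Illustrative values: left and right estimates bracketing the truth
v = reconcile_refractor_velocity(2800.0, 3200.0)  # m/s
```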


Geophysics · 2005 · Vol 70 (4) · pp. G77-G85
Author(s): Daniel Sattel

Zohdy's method for the inversion of dc-resistivity data has been adapted to the inversion of airborne electromagnetic (AEM) data. AEM responses are first transformed into apparent-conductivity depth profiles, followed by an iterative adjustment of layer thicknesses and interval conductivities. The start model, including the number of layers, is determined from the data. This approach optimizes model flexibility without the need for parameter regularization. Results from Zohdy's inversion applied to TEMPEST, GEOTEM, and [Formula: see text] data acquired in a range of conductivity scenarios including the Bull Creek prospect in Queensland, Australia; the Boteti area, Botswana; and the Reid-Mahaffy test site in Ontario, Canada, show well-delineated target zones. A comparison with Occam's inversion shows good agreement between the conductivity-depth models recovered by the two methods, with Zohdy's inversion being 25 to 80 times faster.
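The core of a Zohdy-style scheme is a multiplicative update of each layer conductivity by the ratio of observed to modelled apparent conductivity. A toy sketch, with a placeholder forward model standing in for the actual AEM transform:

```python
import numpy as np

def zohdy_update(sigma, sigma_a_obs, forward):
    """One Zohdy-style iteration: multiplicatively correct each layer
    conductivity by the ratio of observed to modelled apparent
    conductivity. `forward` maps a layer model to apparent
    conductivities; here it is a placeholder for the AEM transform."""
    return sigma * sigma_a_obs / forward(sigma)

def toy_forward(s):
    """Toy forward model: apparent conductivity as the running mean of
    the layer conductivities (an assumption made only so the sketch
    runs end to end)."""
    return np.cumsum(s) / np.arange(1, s.size + 1)

sigma = np.full(4, 0.05)                    # start model, S/m
obs = np.array([0.10, 0.08, 0.06, 0.05])    # "observed" apparent conductivity
for _ in range(100):
    sigma = zohdy_update(sigma, obs, toy_forward)
```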


Geophysics · 2012 · Vol 77 (4) · pp. WB59-WB69
Author(s): Leif H. Cox, Glenn A. Wilson, Michael S. Zhdanov

Time-domain airborne surveys gather hundreds of thousands of multichannel, multicomponent samples. The volume of data and other complications have made 1D inversions and transforms the only viable method to interpret these data, in spite of their limitations. We have developed a practical methodology to perform full 3D inversions of entire time- or frequency-domain airborne electromagnetic (AEM) surveys. Our methodology is based on the concept of a moving footprint, which reduces the computation requirements by several orders of magnitude. The 3D AEM responses and sensitivities are computed using a frequency-domain total-field integral equation technique. For time-domain AEM responses and sensitivities, the frequency-domain responses and sensitivities are transformed to the time domain via a cosine transform and convolution with the system waveform. We demonstrate the efficiency of our methodology with a model study relevant to the Abitibi greenstone belt and a case study from the Reid-Mahaffy test site in Ontario, Canada, which provided an excellent practical opportunity to compare 3D inversions for different AEM systems. In particular, we compared 3D inversions of VTEM-35 (time-domain helicopter), MEGATEM II (time-domain fixed-wing), and DIGHEM (frequency-domain helicopter) data. Our comparison showed that each system is able to image the conductive overburden and, to varying degrees, detect and delineate the bedrock conductors; as expected, the DIGHEM system best resolved the conductive overburden, whereas the time-domain systems most clearly delineated the bedrock conductors. Our comparisons of the helicopter and fixed-wing time-domain systems revealed that the often-cited disadvantages of a fixed-wing system (i.e., response asymmetry) are not inherent in the system, but rather reflect a limitation of the 1D interpretation methods used to date.
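The frequency-to-time conversion mentioned above rests on a cosine transform. A sketch of the bare operation, validated against a known analytic transform pair (production codes use optimized digital filters rather than brute-force quadrature):

```python
import numpy as np

def inverse_cosine_transform(F, omega, t):
    """Evaluate f(t) = (2/pi) * integral of F(w) cos(w t) dw by
    trapezoidal quadrature on a uniform frequency grid -- the operation
    that maps frequency-domain responses into the time domain."""
    g = F(omega) * np.cos(omega * t)
    dw = omega[1] - omega[0]
    return (2.0 / np.pi) * (g.sum() - 0.5 * (g[0] + g[-1])) * dw

# Validate against the analytic pair F(w) = a/(a^2 + w^2) <-> f(t) = exp(-a*t)
a = 3.0
omega = np.linspace(0.0, 2000.0, 200001)
f_est = inverse_cosine_transform(lambda w: a / (a * a + w * w), omega, t=0.5)
```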


2020 · Vol 66 (257) · pp. 373-385
Author(s): María Belén Heredia, Nicolas Eckert, Clémentine Prieur, Emmanuel Thibert

Physically-based avalanche propagation models must still be locally calibrated to provide robust predictions, e.g. in long-term forecasting and subsequent risk assessment. Friction parameters cannot be measured directly and need to be estimated from observations. Rich and diverse data are now increasingly available from test sites, but for measurements made along the flow path, potential autocorrelation should be explicitly accounted for. To this aim, this work proposes a comprehensive Bayesian calibration and statistical model selection framework. As a proof of concept, the framework was applied to an avalanche sliding-block model with the standard Voellmy friction law and high-rate photogrammetric images. An avalanche released at the Lautaret test site and a synthetic data set based on that avalanche are used to test the approach and illustrate its benefits. Results demonstrate (1) the efficiency of the proposed calibration scheme, and (2) that including autocorrelation in the statistical modelling definitely improves the accuracy of both parameter estimation and velocity predictions. Our approach could be extended without loss of generality to the calibration of any avalanche dynamics model from any type of measurement stemming from the same avalanche flow.
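One standard way to include autocorrelation in the error model, offered here as a plausible sketch rather than the authors' exact formulation, is an AR(1) residual likelihood:

```python
import numpy as np

def ar1_loglik(residuals, sigma, phi):
    """Exact Gaussian log-likelihood of calibration residuals under an
    AR(1) autocorrelation model (coefficient phi) -- the kind of error
    structure needed for measurements made along the avalanche flow.
    With phi = 0 this reduces to the usual independent-error likelihood."""
    r = np.asarray(residuals, dtype=float)
    s2 = sigma ** 2
    # Stationary marginal for the first residual
    ll = -0.5 * (np.log(2 * np.pi * s2 / (1 - phi ** 2))
                 + r[0] ** 2 * (1 - phi ** 2) / s2)
    # Whitened one-step innovations for the rest
    innov = r[1:] - phi * r[:-1]
    ll += -0.5 * np.sum(np.log(2 * np.pi * s2) + innov ** 2 / s2)
    return ll

# With phi = 0 this matches the sum of independent N(0, 1) log-densities
ll = ar1_loglik([0.5, -0.2, 0.1], sigma=1.0, phi=0.0)
```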


Geophysics · 2018 · Vol 83 (5) · pp. E357-E369
Author(s): Kyubo Noh, Seokmin Oh, Soon Jee Seol, Joongmoo Byun

We have developed two inversion workflows that sequentially recover conductivity and susceptibility models from a frequency-domain controlled-source electromagnetic data set. Both workflows start with a conductivity inversion that uses an electromagnetic (EM) kernel and the out-of-phase component data, which are mainly sensitive to conductivity, followed by a susceptibility inversion using the in-phase component data. The difference between the two workflows lies in the susceptibility inversion algorithm: one uses an EM kernel with the conductivity model as the input model; the other uses a magnetostatic kernel and the conductivity model to generate the appropriate input data. Because the input data for magnetostatic inversion should not contain the EM induction effect, the in-phase induction effect is simulated with the conductivity model obtained by inverting the out-of-phase data and subtracted from the observed in-phase data, producing an "induction-subtracted" in-phase data set that becomes the input to the magnetostatic inversion. For the magnetostatic inversion, we used a linear magnetostatic kernel to enable rapid computation. We then applied the two workflows to a field data set from a DIGHEM survey and successfully reconstructed conductivity and susceptibility models in two zones where conductive and susceptible anomalies are present. One important finding is that the susceptibility inversion results from the two workflows are very similar; however, significant computation time can be saved with the linear magnetostatic inversion. These results show how conductivity and susceptibility models can be well imaged with a sequential inversion workflow and how magnetostatic inversion can be used efficiently for airborne EM data inversion.
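The induction-subtraction step can be sketched in one line; the values below are illustrative, not taken from the DIGHEM data set:

```python
def induction_subtracted_inphase(inphase_obs, inphase_modelled):
    """Subtract the modelled EM-induction contribution (computed from
    the conductivity model recovered in the out-of-phase inversion) from
    the observed in-phase data; the remainder is attributed to magnetic
    susceptibility and becomes the input to the magnetostatic inversion."""
    return [obs - mod for obs, mod in zip(inphase_obs, inphase_modelled)]

# Illustrative ppm values only
residual = induction_subtracted_inphase([120.0, 95.0, 60.0], [40.0, 30.0, 15.0])
```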


Geophysics · 2006 · Vol 71 (6) · pp. G301-G312
Author(s): Ross Brodie, Malcolm Sambridge

We have developed a holistic method for simultaneously calibrating, processing, and inverting frequency-domain airborne electromagnetic data. A spline-based, 3D, layered conductivity model covering the complete survey area was recovered through inversion of the entire raw airborne data set together with available independent conductivity and interface-depth data. The holistic inversion formulation includes a mathematical model to account for systematic calibration errors such as incorrect gain and zero-level drift. By taking these elements into account in the inversion, the need to preprocess the airborne data prior to inversion is eliminated. Conventional processing schemes involve the sequential application of a number of calibration corrections, with data from each frequency treated separately, followed by inversion of each multifrequency sample in isolation from the other samples. By simultaneously considering all of the available information in a holistic inversion, we are able to exploit interfrequency and spatial-coherency characteristics of the data. The formulation ensures that the conductivity and calibration models are optimal with respect to the airborne data and prior information. It also avoids the interfrequency inconsistency and multistage error propagation that stem from the sequential nature of conventional processing schemes. We confirm that accurate conductivity and calibration parameter values are recovered from holistic inversion of synthetic data sets, and we demonstrate that the results from holistic inversion of raw survey data are superior to the output of conventional 1D inversion of final processed data. In addition to the technical benefits, we expect that holistic inversion will reduce costs by avoiding the expensive calibration-processing-recalibration paradigm. Furthermore, savings may also be made because the specific high-altitude zero-level observations needed for conventional processing may not be required.
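A minimal sketch of the kind of calibration-error model described above, assuming a multiplicative gain error and a linearly drifting zero level (the parameter names and drift form are illustrative, not the paper's):

```python
def apply_calibration_model(d_true, gain, zero_level, drift_rate, t):
    """Forward model of systematic calibration errors that a holistic
    inversion can estimate alongside conductivity: a multiplicative gain
    error plus a zero level that drifts linearly in time."""
    return [gain * d + zero_level + drift_rate * ti
            for d, ti in zip(d_true, t)]

# A 2% gain error and a slowly drifting zero level (illustrative values)
observed = apply_calibration_model([100.0, 200.0], gain=1.02,
                                   zero_level=3.0, drift_rate=0.5,
                                   t=[10.0, 20.0])
```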


Geophysics · 2016 · Vol 81 (5) · pp. E389-E400
Author(s): Juerg Hauser, James Gunning, David Annetts

Probabilistic inversion of airborne electromagnetic data is often approximated by a layered earth using a computationally efficient 1D kernel. If the underlying framework accounts for prior beliefs on spatial correlation, the inversion will be able to recover spatially coherent interfaces and associated uncertainties. Greenfield exploration using airborne electromagnetic data, however, often seeks to identify discrete economic targets. In mature exploration provinces, such bodies are frequently obscured by thick, conductive regolith, and the response of such economic basement conductors presents a challenge to any layered-earth inversion. A well-known, computationally efficient way to approximate the response of a basement conductor is to use a thin plate. Here we have extended a Bayesian parametric bootstrap approach so that the basement of a spatially varying layered earth can contain a thin plate. The resulting Bayesian framework allows for the inversion of basement conductors and associated uncertainties and, more importantly, the use of model selection concepts to determine whether the data support a basement conductor model. Recovered maps of basement conductor probabilities show the expected patterns in uncertainty, for example, a decrease in target probability with increasing depth. Such maps of target probabilities generated using the thin-plate approximation are a potentially valuable source of information for the planning of exploration activity, such as the targeting of drillholes to confirm the existence of a discrete conductor in a greenfield exploration scenario. We have used a field data set from northwest Queensland, Australia, to illustrate how the approach allows inversion for a basement conductor and related uncertainties in a spatially variable layered earth, using the information from multiple survey lines and prior beliefs about geology.
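The model-selection question, "do the data support adding a basement plate?", can be illustrated with a simple information criterion. The paper uses a Bayesian parametric bootstrap rather than the BIC shown here, and the numbers are invented:

```python
import math

def bic(rss, n_data, n_params):
    """Bayesian information criterion for Gaussian residuals -- a simple
    stand-in for the model-selection step of deciding whether the data
    justify a basement plate's extra parameters."""
    return n_data * math.log(rss / n_data) + n_params * math.log(n_data)

# Accept the plate only if it lowers the criterion despite its
# extra parameters (all numbers illustrative)
bic_layered = bic(rss=4.0, n_data=200, n_params=12)
bic_plate = bic(rss=2.5, n_data=200, n_params=21)
prefer_plate = bic_plate < bic_layered
```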


Author(s): Raul E. Avelar, Karen Dixon, Boniphace Kutela, Sam Klump, Beth Wemple, ...

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust its SPFs for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before the calibrated SPFs are used. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years since the publication of the HSM 1st edition, and the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the results of calibrating multiple intersection SPFs against a large Mississippi safety database to examine the relationships among multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess the overall quality of a calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. The paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended for comprehensively assessing the quality of calibrated intersection SPFs.
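Once metric weights are available (from factor loadings or otherwise), the composite index is just a weighted average of rescaled metrics. The weights and metric values below are illustrative, not the authors' results:

```python
def calibration_quality_index(metrics, weights):
    """Combine several goodness-of-fit metrics into one score, in the
    spirit of the factor-analysis-based index in the text. Each metric
    is assumed pre-rescaled to [0, 1] with 1 = best."""
    total = sum(weights.values())
    return sum(weights[k] * metrics[k] for k in weights) / total

# Illustrative rescaled metric values and weights reflecting the
# reported importance ordering (CURE deviation weighted highest)
metrics = {"cure_dev": 0.9, "mad": 0.8, "mod_r2": 0.7, "cal_factor": 0.95}
weights = {"cure_dev": 4, "mad": 3, "mod_r2": 2, "cal_factor": 1}
index = calibration_quality_index(metrics, weights)
```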


Water · 2021 · Vol 13 (1) · pp. 107
Author(s): Elahe Jamalinia, Faraz S. Tehrani, Susan C. Steele-Dunne, Philip J. Vardon

Climatic conditions and vegetation cover influence the water flux in a dike, and potentially its stability. A comprehensive numerical simulation is computationally too expensive for near real-time analysis of a dike network. Therefore, this study investigates a random forest (RF) regressor as a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009–2019) are used to create a synthetic data set, comprising features that can be observed from the dike surface, with the calculated factor of safety (FoS) as the target variable. The data set before 2018 is split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS for data in the test set (before 2018). However, the trained model performs less well on the evaluation set (after 2018) when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can be used to determine dike stability for conditions similar to the training data, and could help identify vulnerable locations in a dike network for further examination.
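The chronological split is the key experimental-design point: random shuffling would leak future conditions into training and overstate skill. A sketch with synthetic stand-in data (the feature matrix and FoS relation are invented for illustration):

```python
import numpy as np

def chronological_split(dates, X, y, train_end):
    """Split a daily time series of dike features and FoS targets into a
    training period and a strictly later evaluation period, mirroring
    the fit-before-2018 / evaluate-after setup described in the text."""
    train = dates < train_end
    return (X[train], y[train]), (X[~train], y[~train])

dates = np.arange("2009-01-01", "2019-01-01", dtype="datetime64[D]")
X = np.random.default_rng(0).normal(size=(dates.size, 3))
y = X @ np.array([0.5, -0.2, 0.1]) + 1.5            # stand-in FoS
(train_X, train_y), (eval_X, eval_y) = chronological_split(
    dates, X, y, np.datetime64("2018-01-01"))
```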

