Cooperative constrained inversion of multiple electromagnetic data sets

Geophysics
2014
Vol 79 (4)
pp. B173-B185
Author(s):  
Michael S. McMillan ◽  
Douglas W. Oldenburg

We evaluated a method for cooperatively inverting multiple electromagnetic (EM) data sets with bound constraints to produce a consistent 3D resistivity model with improved resolution. Field data from the Antonio gold deposit in Peru and synthetic data were used to demonstrate this technique. We first separately inverted field airborne time-domain EM (AEM), controlled-source audio-frequency magnetotellurics (CSAMT), and direct current (DC) resistivity measurements. Each individual inversion recovered a resistor related to gold-hosted silica alteration within a relatively conductive background. The outline of the resistor in each inversion was in reasonable agreement with the mapped extent of known near-surface silica alteration. Variations between the resistors recovered by the individual 3D inversion models motivated a subsequent cooperative method, in which AEM data were inverted sequentially with a combined CSAMT and DC data set. This cooperative approach was first applied to a synthetic inversion over an Antonio-like simulated resistivity model, and the result was both qualitatively and quantitatively closer to the true synthetic model than those of the individual inversions. Using the same cooperative method, field data were inverted to produce a model that defined the target resistor while agreeing with all data sets. To test the benefit of borehole constraints, synthetic boreholes were added to the inversion as upper and lower bounds at the locations of existing boreholes. The ensuing cooperative constrained synthetic inversion model had the closest match to the true simulated resistivity distribution. Bound constraints from field boreholes were then derived from a regression relationship among total sulfur content, alteration type, and resistivity measurements from rock samples and incorporated into the inversion.
The resulting cooperative constrained field inversion model clearly imaged the resistive silica zone, extended the area of interpreted alteration, and also highlighted conductive zones within the resistive region potentially linked to sulfide and gold mineralization.
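The role of the bound constraints can be illustrated with a toy projected-gradient inversion, in which the model is clipped to borehole-derived lower and upper bounds after every update. This is a minimal linear sketch, not the authors' 3D EM inversion; the operator `G`, the damping `beta`, and the fixed step size are illustrative assumptions.

```python
import numpy as np

def bounded_inversion(G, d, m0, lower, upper, beta=0.01, n_iter=300, step=0.2):
    """Toy bound-constrained inversion by projected gradient descent:
    minimize ||G m - d||^2 + beta ||m - m0||^2 subject to
    lower <= m <= upper, with the bounds standing in for borehole
    constraints. The step size must suit ||G|| (no line search here)."""
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        grad = 2.0 * G.T @ (G @ m - d) + 2.0 * beta * (m - m0)
        m = np.clip(m - step * grad, lower, upper)  # project onto the bounds
    return m
```

Wherever the unconstrained minimizer would violate a borehole bound, the projection pins the recovered model to that bound, which is the mechanism by which the constrained inversions above stay consistent with the rock-sample regression.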

Geophysics
2017
Vol 82 (3)
pp. S197-S205
Author(s):  
Zhaolun Liu ◽  
Abdullah AlTheyab ◽  
Sherif M. Hanafy ◽  
Gerard Schuster

We have developed a methodology for detecting the presence of near-surface heterogeneities by naturally migrating backscattered surface waves in controlled-source data. The near-surface heterogeneities must be located within a depth of approximately one-third the dominant wavelength [Formula: see text] of the strong surface-wave arrivals. This natural migration method does not require knowledge of the near-surface phase-velocity distribution because it uses the recorded data to approximate the Green’s functions for migration. Prior to migration, the backscattered data are separated from the original records, and the band-pass-filtered data are migrated to give an estimate of the migration image at a depth of approximately one-third [Formula: see text]. Each band-passed data set gives a migration image at a different depth. Results with synthetic data and field data recorded over known faults validate the effectiveness of this method. Migrating the surface waves in recorded 2D and 3D data sets accurately reveals the locations of known faults. The limitation of this method is that it requires a dense array of receivers with a geophone interval less than approximately one-half [Formula: see text].
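The quoted rules of thumb (imaging depth of about one-third the dominant wavelength; geophone interval under one-half of it) can be encoded directly. A minimal sketch, where the assumed phase velocity is only needed to illustrate the wavelength; the natural migration method itself requires no velocity model:

```python
def natural_migration_rules(freq_hz, phase_velocity_m_s):
    """Rules of thumb from the abstract, for one band-passed data set:
    imaging depth ~ lambda/3 and required geophone interval < lambda/2,
    with lambda = c / f the dominant surface-wave wavelength.
    (The phase velocity here is an illustrative assumption only.)"""
    wavelength = phase_velocity_m_s / freq_hz
    return {
        "wavelength_m": wavelength,
        "imaging_depth_m": wavelength / 3.0,
        "max_geophone_interval_m": wavelength / 2.0,
    }
```

Each band-passed data set has a different dominant frequency and therefore maps to a different imaging depth, which is how the method builds up depth information.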


Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a workflow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. The code in this chapter is more complicated and can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT, for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees); the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s−1); and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided in 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive.
Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
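The unit convention and the interpolation of the 6-hourly track positions described above can be sketched as follows. This is a minimal Python stand-in for the chapter's import/interpolate workflow, using plain linear interpolation; the function name and the choice of linear (rather than spline) interpolation are illustrative assumptions, not the book's own code.

```python
import numpy as np

KT_TO_MS = 0.5144  # 1 kt = 0.5144 m/s, as stated in the text

def interpolate_track(hours, lons, lats, step=1.0):
    """Interpolate 6-hourly best-track center positions to a finer time
    step (hours), as needed for regional hurricane analyses.
    Returns the new time axis and interpolated longitudes/latitudes."""
    t_new = np.arange(hours[0], hours[-1] + step, step)
    return t_new, np.interp(t_new, hours, lons), np.interp(t_new, hours, lats)
```

A 100 kt best-track intensity thus corresponds to 100 * KT_TO_MS = 51.44 m/s near-surface wind speed.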


2019
Vol 219 (3)
pp. 1773-1785
Author(s):  
Julien Guillemoteau ◽  
François-Xavier Simon ◽  
Guillaume Hulin ◽  
Bertrand Dousteyssier ◽  
Marion Dacko ◽  
...  

SUMMARY The in-phase response collected by portable loop–loop electromagnetic induction (EMI) sensors operating at low and moderate induction numbers (≤1) is typically used for sensing the magnetic permeability (or susceptibility) of the subsurface. This is because the in-phase response contains a small induction fraction and a preponderant induced magnetization fraction. The magnetization fraction follows the magnetostatic equations, similarly to the magnetic method but with an active magnetic source. The use of an active source makes it possible to collect data with several loop–loop configurations, which illuminate the subsurface with different sensitivity patterns. Such multiconfiguration soundings thereby allow the imaging of subsurface magnetic permeability/susceptibility variations through an inversion procedure. This method is not affected by remanent magnetization and theoretically overcomes the classical depth ambiguity generally encountered with passive geomagnetic data. To invert multiconfiguration in-phase data sets, we propose a novel methodology based on a full-grid 3-D multichannel deconvolution (MCD) procedure. This method allows us to invert large data sets (e.g. consisting of more than a hundred thousand data points) for a dense voxel-based 3-D model of magnetic susceptibility subject to smoothness constraints. In this study, we first present and discuss synthetic examples of our imaging procedure, which aim to simulate realistic conditions. We then demonstrate the applicability of our method to field data collected across an archaeological site in Auvergne (France) to image the foundations of a Gallo-Roman villa built with basalt rock material. Our synthetic and field data examples demonstrate the potential of the proposed inversion procedure, offering new and complementary ways to interpret data sets collected with modern EMI instruments.
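The core MCD idea, recovering a susceptibility-like model by deconvolving the sensor's sensitivity footprint from the gridded data, can be illustrated with a toy 1-D regularized deconvolution. The paper's method is a full-grid 3-D procedure with smoothness constraints; the kernel, the Tikhonov damping `eps`, and the 1-D setting below are all illustrative assumptions.

```python
import numpy as np

def regularized_deconvolution(data, kernel, eps=1e-2):
    """Frequency-domain deconvolution with Tikhonov damping: given
    data = kernel (*) model (circular convolution), estimate the model as
    conj(K) D / (|K|^2 + eps), a damped least-squares inverse filter."""
    D = np.fft.fft(data)
    K = np.fft.fft(kernel, n=len(data))
    M = np.conj(K) * D / (np.abs(K) ** 2 + eps)
    return np.real(np.fft.ifft(M))
```

The damping term plays the same stabilizing role as the smoothness constraint in the full 3-D MCD: it prevents noise amplification where the sensitivity spectrum is weak.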


2016
Vol 16 (11)
pp. 6977-6995
Author(s):  
Jean-Pierre Chaboureau ◽  
Cyrille Flamant ◽  
Thibaut Dauhut ◽  
Cécile Kocha ◽  
Jean-Philippe Lafore ◽  
...  

Abstract. In the framework of the Fennec international programme, a field campaign was conducted in June 2011 over the western Sahara. It led to the first observational data set ever obtained that documents the dynamics, thermodynamics and composition of the Saharan atmospheric boundary layer (SABL) under the influence of the heat low. In support of the aircraft operations, four dust forecasts were run daily at low and high resolutions with convection-parameterizing and convection-permitting models, respectively. The unique airborne and ground-based data sets allowed the first ever intercomparison of dust forecasts over the western Sahara. At the monthly scale, large aerosol optical depths (AODs) were forecast over the Sahara, a feature observed in satellite retrievals, though with different magnitudes. The AOD intensity was correctly predicted by the high-resolution models, while it was underestimated by the low-resolution models. This was partly because of the generation of strong near-surface winds associated with thunderstorm-related density currents, which could only be reproduced by models representing convection explicitly. In the high-resolution forecasts, such models yield emissions mainly in the afternoon, and these dominate the total emission over the western fringes of the Adrar des Iforas and the Aïr Mountains. Over the western Sahara, where the harmattan contributes up to 80 % of dust emission, all the models were successful in forecasting the deep well-mixed SABL. Some of them, however, missed the large near-surface dust concentration generated by density currents and low-level winds. This feature, observed repeatedly by the airborne lidar, was partly forecast by only one high-resolution model.


Geophysics
2020
pp. 1-41
Author(s):  
Jens Tronicke ◽  
Niklas Allroggen ◽  
Felix Biermann ◽  
Florian Fanselow ◽  
Julien Guillemoteau ◽  
...  

In near-surface geophysics, ground-based mapping surveys are routinely employed in a variety of applications including those from archaeology, civil engineering, hydrology, and soil science. The resulting geophysical anomaly maps of, for example, magnetic or electrical parameters are usually interpreted to laterally delineate subsurface structures such as those related to the remains of past human activities, subsurface utilities and other installations, hydrological properties, or different soil types. To ease the interpretation of such data sets, we propose a multi-scale processing, analysis, and visualization strategy. Our approach relies on a discrete redundant wavelet transform (RWT) implemented using cubic-spline filters and the à trous algorithm, which allows a multi-scale decomposition of 2D data to be computed efficiently using a series of 1D convolutions. The basic idea of the approach is presented using a synthetic test image, while our archaeo-geophysical case study from North-East Germany demonstrates its potential to analyze and process rather typical geophysical anomaly maps including magnetic and topographic data. Our vertical-gradient magnetic data show amplitude variations over several orders of magnitude, complex anomaly patterns at various spatial scales, and typical noise patterns, while our topographic data show a distinct hill structure superimposed by a microtopographic stripe pattern and random noise. Our results demonstrate that the RWT approach is capable of successfully separating these components and that selected wavelet planes can be scaled and combined so that the reconstructed images allow for a detailed, multi-scale structural interpretation, also using integrated visualizations of magnetic and topographic data.
Because our analysis approach is straightforward to implement without laborious parameter testing and tuning, computationally efficient, and easily adaptable to other geophysical data sets, we believe that it can help to rapidly analyze and interpret different geophysical mapping data collected to address a variety of near-surface applications from engineering practice and research.
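The à trous decomposition described above can be sketched in a few lines: at each scale the image is smoothed with a cubic B-spline kernel whose taps are spaced by 2^j zeros ("holes"), applied as separable 1-D convolutions, and the wavelet plane is the difference between successive smoothings. This is a minimal sketch of the general algorithm, not the authors' implementation; the boundary handling and scale count are illustrative choices.

```python
import numpy as np
from scipy.ndimage import convolve1d

def atrous_rwt_2d(image, n_scales=3):
    """Redundant (undecimated) wavelet transform via the a trous algorithm.
    Returns n_scales wavelet planes plus the final smooth residual; because
    no decimation occurs, the planes sum back exactly to the input."""
    h = np.array([1, 4, 6, 4, 1], float) / 16.0  # cubic B-spline filter
    planes, current = [], image.astype(float)
    for j in range(n_scales):
        hj = np.zeros((len(h) - 1) * 2**j + 1)
        hj[:: 2**j] = h  # insert 2^j - 1 zeros between the filter taps
        smooth = convolve1d(convolve1d(current, hj, axis=0, mode="reflect"),
                            hj, axis=1, mode="reflect")
        planes.append(current - smooth)  # wavelet plane at scale j
        current = smooth
    planes.append(current)  # coarsest smooth residual
    return planes
```

Because the transform is redundant, selected planes can be scaled and recombined freely, which is exactly what makes the integrated multi-scale visualizations described above straightforward.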


2015
Vol 8 (8)
pp. 2645-2653
Author(s):  
C. G. Nunalee ◽  
Á. Horváth ◽  
S. Basu

Abstract. Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower tropospheric flow over and downstream of mountainous islands using three terrain height data sets: the default global 30 s United States Geological Survey data set (GTOPO30), the Shuttle Radar Topography Mission (SRTM) data set, and the Global Multi-resolution Terrain Elevation Data set (GMTED2010). While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.


Geophysics
2016
Vol 81 (3)
pp. V213-V225
Author(s):  
Shaohuan Zu ◽  
Hui Zhou ◽  
Yangkang Chen ◽  
Shan Qu ◽  
Xiaofeng Zou ◽  
...  

We have designed a periodically varying code that avoids the problem of local coherency and makes the interference distribute uniformly over a given range; hence, it is better at suppressing incoherent interference (blending noise) and preserving coherent useful signals than a random dithering code. We have also devised a new form of the iterative method to remove interference generated from simultaneous-source acquisition. In each iteration, we estimated the interference using the blending operator following the proposed formula and then subtracted the interference from the pseudodeblended data. To further eliminate the incoherent interference and constrain the inversion, the data were then transformed to an auxiliary sparse domain, where a thresholding operator was applied. During the iterations, the threshold was decreased from its largest value to zero following an exponential function. The exponentially decreasing threshold aimed to gradually pass the deblended data into a more acceptable model subspace. Two numerically blended synthetic data sets and one numerically blended practical field data set from an ocean-bottom cable were used to demonstrate the usefulness of our proposed method and the better performance of the periodically varying code over the traditional random dithering code.
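The exponentially decreasing threshold schedule and the sparse-domain thresholding step can be sketched as below. The decay rate and the use of a real-valued soft threshold are illustrative assumptions; the paper's particular sparse domain and threshold operator are not specified here.

```python
import numpy as np

def threshold_schedule(t_max, n_iter, decay=5.0):
    """Exponentially decreasing threshold: starts at t_max and decays
    toward zero over the iterations (the final iteration may set it to
    zero explicitly), gradually admitting weaker coherent energy."""
    k = np.arange(n_iter)
    return t_max * np.exp(-decay * k / max(n_iter - 1, 1))

def soft_threshold(x, t):
    """Soft thresholding of (real-valued) sparse-domain coefficients:
    shrink toward zero by t, zeroing anything below the threshold."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Early iterations keep only the strongest, most coherent coefficients; as the threshold relaxes, progressively weaker signal is admitted while the incoherent blending noise remains suppressed.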


2020
Vol 39 (5)
pp. 324-331
Author(s):  
Gary Murphy ◽  
Vanessa Brown ◽  
Denes Vigh

As part of a wide-reaching full-waveform inversion (FWI) research program, FWI is applied to an onshore seismic data set collected in the Delaware Basin, west Texas. FWI is routinely applied to typical marine data sets with high signal-to-noise ratio (S/N), relatively good low-frequency content, and reasonably long offsets. Land seismic data sets, in comparison, present significant challenges for FWI due to low S/N, a dearth of low frequencies, and limited offsets. Recent advancements in FWI overcome the limitations due to poor S/N and low frequencies, making it feasible to use land FWI to update the shallow velocities. The chosen area has contrasting and variable near-surface conditions, providing an excellent test data set on which to demonstrate the workflow and its challenges. An acoustic FWI workflow is used to update the near-surface velocity model in order to improve the deeper image and simultaneously help highlight potential shallow drilling hazards.


Geophysics
2020
Vol 85 (4)
pp. D133-D143
Author(s):  
David Li ◽  
Xiao Tian ◽  
Hao Hu ◽  
Xiao-Ming Tang ◽  
Xinding Fang ◽  
...  

The ability to image near-wellbore fractures is critical for wellbore integrity monitoring as well as for energy production and waste disposal. Single-well imaging uses a sonic logging instrument consisting of a source and a receiver array to image geologic structures around a wellbore. We use cross-dipole sources because they can excite waves that image structures farther away from the wellbore than traditional monopole sources. However, the cross-dipole source also excites large-amplitude, slowly propagating dispersive waves along the surface of the borehole, which interfere with the formation reflection events. We have adopted a new fracture imaging procedure using sonic data. We first remove the strong-amplitude borehole waves using a new nonlinear signal comparison method. We then apply Gaussian beam migration to obtain high-resolution images of the fractures. To verify our method, we first test it on synthetic data sets modeled using a finite-difference approach. We then validate it on a field data set collected from a fractured natural gas production well. Gaussian beam migration yields higher-quality images of the fractures than Kirchhoff migration for both the synthetic and field data sets. We also found that a low-frequency source (around 1 kHz) is needed to obtain a sharp image of the fracture because high-frequency wavefields can interact strongly with the fluid-filled borehole.


Geophysics
2014
Vol 79 (6)
pp. B243-B252
Author(s):  
Peter Bergmann ◽  
Artem Kashubin ◽  
Monika Ivandic ◽  
Stefan Lüth ◽  
Christopher Juhlin

A method for static correction of time-lapse differences in reflection arrival times of time-lapse prestack seismic data is presented. These arrival-time differences are typically caused by changes in the near-surface velocities between the acquisitions and have a detrimental impact on time-lapse seismic imaging. Trace-to-trace time shifts of the data sets from different vintages are determined by crosscorrelations. The time shifts are decomposed in a surface-consistent manner, which yields static corrections that tie the repeat data to the baseline data. Hence, this approach makes new refraction static corrections for the repeat data sets unnecessary. The approach is demonstrated on a 4D seismic data set from the Ketzin [Formula: see text] pilot storage site, Germany, and is compared with the result of an initial processing that was based on separate refraction static corrections. It is shown that the time-lapse difference static correction approach reduces 4D noise more effectively than separate refraction static corrections and is significantly less labor intensive.
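The first step of the workflow above, estimating a trace-to-trace time shift between vintages by crosscorrelation, can be sketched as follows; the subsequent surface-consistent decomposition of these shifts into source and receiver statics is omitted from this sketch.

```python
import numpy as np

def crosscorr_shift(base, repeat):
    """Estimate the time shift (in samples) of a repeat trace relative to
    a baseline trace as the lag that maximizes their crosscorrelation.
    A positive value means the repeat arrival is delayed."""
    cc = np.correlate(repeat, base, mode="full")
    return int(np.argmax(cc)) - (len(base) - 1)
```

In practice the shift would be refined to sub-sample precision (e.g. by interpolating the correlation peak) before the surface-consistent decomposition, but the integer-lag estimate captures the principle.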

