Data error quantification in spectral induced polarization imaging

Geophysics ◽  
2012 ◽  
Vol 77 (3) ◽  
pp. E227-E237 ◽  
Author(s):  
Adrián Flores Orozco ◽  
Andreas Kemna ◽  
Egon Zimmermann

Induced polarization (IP) imaging is increasingly used in near-surface geophysical studies, particularly for hydrogeologic and environmental applications. However, the analysis of IP data error has received little attention, even though the importance of an adequate error parameterization has been demonstrated for electrical resistivity imaging. Based on the analysis of data sets measured in the frequency range from 1 Hz to 1 kHz, we proposed a model for the quantification of phase data errors in IP measurements. The analyzed data sets were collected on an experimental tank containing targets of different polarizability. Our study builds on the common practice of taking the discrepancy between measurements in normal and reciprocal configuration as a measure of data error. Statistical analysis of the normal-reciprocal discrepancies revealed that the phase error decreases with increasing resistance (i.e., signal strength). We therefore proposed an inverse power-law model that quantifies the phase error as a function of the measured resistance. We found that adequate implementation of the proposed error model in an inversion scheme leads to improved IP imaging results in laboratory experiments. Application to a data set collected at the field scale also demonstrated the superiority of the new model over previous assumptions.
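The inverse power-law form described here is simple to reproduce. Below is a minimal Python sketch that fits such a model to normal-reciprocal phase discrepancies; the data arrays are synthetic stand-ins, and the paper's binning of discrepancies by resistance range is omitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def phase_error(R, a, b):
    """Inverse power-law phase error model: s_phi = a * R**(-b)."""
    return a * R ** (-b)

# Synthetic stand-ins for |phi_normal - phi_reciprocal| (mrad) versus
# mean transfer resistance (ohm); real values would come from the data.
rng = np.random.default_rng(0)
R = 10.0 ** rng.uniform(-1.0, 2.0, 500)
s_phi = phase_error(R, 2.0, 0.5) * rng.lognormal(0.0, 0.3, R.size)

(a, b), _ = curve_fit(phase_error, R, s_phi, p0=(1.0, 0.4))
print(f"fitted error model: s_phi = {a:.2f} * R^(-{b:.2f}) mrad")
```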

Geophysics ◽  
2018 ◽  
Vol 83 (2) ◽  
pp. E75-E86 ◽  
Author(s):  
Adrian Flores Orozco ◽  
Jakob Gallistl ◽  
Matthias Bücker ◽  
Kenneth H. Williams

In recent years, the time-domain induced polarization (TDIP) imaging technique has emerged as a suitable method for the characterization and monitoring of hydrogeologic and biogeochemical processes. However, one of the major challenges is the resolution of the electrical images. Hence, various studies have stressed the importance of data processing, error characterization, and the deployment of adequate inversion schemes. A widely accepted method to assess data error in electrical imaging relies on the analysis of the discrepancy between normal and reciprocal measurements. Nevertheless, the collection of reciprocals doubles the acquisition time and is only viable for a limited subset of commonly used electrode configurations (e.g., dipole-dipole [DD]). To overcome these limitations, we have developed a new methodology to quantify the data error in TDIP imaging that is based entirely on the analysis of the recorded IP decay curve and does not require recollection of data (e.g., reciprocals). The first two steps of the methodology assess the general characteristics of the decay curves and the spatial consistency of the measurements for the detection and removal of outliers. In the third and fourth steps, we quantify the deviation of the measured decay curves from a smooth model to estimate the random error of the total chargeability and transfer resistance measurements. The error models and imaging results obtained from this methodology, referred to in the following as "decay curve analysis," are compared with those obtained from a conventional normal-reciprocal analysis, revealing consistent results. We demonstrate the applicability of our methodology with real field data collected at the floodplain scale (approximately 12 ha) using multiple-gradient and DD configurations.
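As a rough illustration of the third and fourth steps, the sketch below fits a smooth curve to a measured decay and takes the RMS deviation as the random-error estimate. A low-order polynomial in log time stands in for the smooth decay model; the paper's actual model and windowing may differ.

```python
import numpy as np

def decay_error(t, m, deg=3):
    """Fit a smooth curve (low-order polynomial in log time) to the
    windowed chargeability decay m(t) and return the RMS deviation
    as an estimate of the random error of the measurement."""
    coeff = np.polyfit(np.log(t), m, deg)
    m_smooth = np.polyval(coeff, np.log(t))
    return np.sqrt(np.mean((m - m_smooth) ** 2))

# Synthetic decay: 20 time windows with a power-law-like shape plus noise.
t = np.linspace(0.1, 2.0, 20)                       # window centers (s)
m = 50.0 * t ** -0.4 + np.random.default_rng(1).normal(0.0, 1.0, t.size)
print(f"estimated random error: {decay_error(t, m):.2f} mV/V")
```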


Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a work flow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. Code in this chapter is more complicated and can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT, for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees), the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s−1), and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided at 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive. Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
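The chapter's examples are written in R; purely as an illustration of the unit conversion and track interpolation described above, here is a Python sketch using hypothetical 6-hourly fixes for a single cyclone.

```python
import numpy as np
from scipy.interpolate import CubicSpline

KT_TO_MS = 0.5144  # 1 kt = 0.5144 m/s

# Hypothetical 6-hourly best-track fixes for one cyclone.
hours = np.arange(0.0, 48.0, 6.0)                  # time since first fix (h)
lon = np.array([-60.0, -61.2, -62.5, -64.0, -65.3, -66.8, -68.0, -69.5])
lat = np.array([15.0, 15.4, 15.9, 16.5, 17.2, 18.0, 18.9, 19.9])
wind_kt = np.array([35.0, 40.0, 45.0, 55.0, 65.0, 75.0, 80.0, 85.0])

# Interpolate the track to hourly resolution, mirroring the smoothing
# and interpolation step needed for regional hurricane analyses.
t_hourly = np.arange(0.0, hours[-1] + 1.0)
lon_h = CubicSpline(hours, lon)(t_hourly)
lat_h = CubicSpline(hours, lat)(t_hourly)
wind_ms = np.interp(t_hourly, hours, wind_kt) * KT_TO_MS
print(wind_ms[:4])  # hourly wind speeds in m/s
```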


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. Q27-Q37
Author(s):  
Yang Shen ◽  
Jie Zhang

Refraction methods are often applied to model and image near-surface velocity structures. However, near-surface imaging is very challenging, and no single method can resolve all of the land seismic problems encountered across the world. In addition, deep interfaces are difficult to image from land reflection data due to the associated low signal-to-noise ratio. Following previous research, we have developed a refraction wavefield migration method for imaging shallow and deep interfaces via interferometry. Our method includes two steps: converting refractions into virtual reflection gathers and then applying a prestack depth migration method to produce interface images from the virtual reflection gathers. With a regular recording offset of approximately 3 km, this approach produces an image of a shallow interface within the top 1 km. If the recording offset is very long, the refractions may follow a deep path, and the result may reveal a deep interface. Using synthetics, we examine several factors that affect the imaging results. We also apply the method to one data set with regular recording offsets and another with far offsets; both cases produce sharp images, which are further verified by conventional reflection imaging. The method is a promising imaging tool for practical cases in which reflections are excessively weak or missing but refractions are available.
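The first step, turning refractions into virtual reflections, relies on interferometric crosscorrelation and stacking over shots. The sketch below is schematic only: it assumes the refracted arrivals have already been isolated (e.g., by muting), and it omits the prestack depth migration of the virtual gathers.

```python
import numpy as np

def virtual_reflection_gather(shots, ref_idx):
    """Crosscorrelate every receiver with a reference receiver (the
    virtual source) and stack over shots -- the interferometric step
    that converts refracted arrivals into a virtual reflection gather.

    shots: array of shape (n_shots, n_receivers, n_samples)
    """
    n_shots, n_rec, n_t = shots.shape
    virtual = np.zeros((n_rec, 2 * n_t - 1))
    for s in range(n_shots):
        ref = shots[s, ref_idx]
        for r in range(n_rec):
            virtual[r] += np.correlate(shots[s, r], ref, mode="full")
    # The causal part (non-negative lags) is virtual[:, n_t - 1:].
    return virtual / n_shots

# Toy usage with random data standing in for muted refraction records.
data = np.random.default_rng(0).normal(size=(10, 24, 500))
gather = virtual_reflection_gather(data, ref_idx=0)
```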


2016 ◽  
Vol 16 (11) ◽  
pp. 6977-6995 ◽  
Author(s):  
Jean-Pierre Chaboureau ◽  
Cyrille Flamant ◽  
Thibaut Dauhut ◽  
Cécile Kocha ◽  
Jean-Philippe Lafore ◽  
...  

Abstract. In the framework of the Fennec international programme, a field campaign was conducted in June 2011 over the western Sahara. It led to the first observational data set ever obtained that documents the dynamics, thermodynamics, and composition of the Saharan atmospheric boundary layer (SABL) under the influence of the heat low. In support of the aircraft operations, four dust forecasts were run daily at low and high resolutions with convection-parameterizing and convection-permitting models, respectively. The unique airborne and ground-based data sets allowed the first ever intercomparison of dust forecasts over the western Sahara. At the monthly scale, large aerosol optical depths (AODs) were forecast over the Sahara, a feature observed by satellite retrievals but with different magnitudes. The AOD intensity was correctly predicted by the high-resolution models, while it was underestimated by the low-resolution models. This was partly because of the generation of strong near-surface winds associated with thunderstorm-related density currents, which could only be reproduced by models representing convection explicitly. Such models yield emissions mainly in the afternoon, and in the high-resolution forecasts these dominate the total emission over the western fringes of the Adrar des Iforas and the Aïr Mountains. Over the western Sahara, where the harmattan contributes up to 80 % of dust emission, all the models were successful in forecasting the deep well-mixed SABL. Some of them, however, missed the large near-surface dust concentrations generated by density currents and low-level winds. This feature, observed repeatedly by the airborne lidar, was partly forecast by only one of the high-resolution models.


Geophysics ◽  
2020 ◽  
pp. 1-41 ◽  
Author(s):  
Jens Tronicke ◽  
Niklas Allroggen ◽  
Felix Biermann ◽  
Florian Fanselow ◽  
Julien Guillemoteau ◽  
...  

In near-surface geophysics, ground-based mapping surveys are routinely employed in a variety of applications, including those from archaeology, civil engineering, hydrology, and soil science. The resulting geophysical anomaly maps of, for example, magnetic or electrical parameters are usually interpreted to laterally delineate subsurface structures such as those related to the remains of past human activities, subsurface utilities and other installations, hydrological properties, or different soil types. To ease the interpretation of such data sets, we propose a multi-scale processing, analysis, and visualization strategy. Our approach relies on a discrete redundant wavelet transform (RWT) implemented using cubic-spline filters and the à trous algorithm, which allows the multi-scale decomposition of 2D data to be computed efficiently using a series of 1D convolutions. The basic idea of the approach is presented using a synthetic test image, while our archaeo-geophysical case study from northeast Germany demonstrates its potential to analyze and process rather typical geophysical anomaly maps, including magnetic and topographic data. Our vertical-gradient magnetic data show amplitude variations over several orders of magnitude, complex anomaly patterns at various spatial scales, and typical noise patterns, while our topographic data show a distinct hill structure superimposed by a microtopographic stripe pattern and random noise. Our results demonstrate that the RWT approach is capable of successfully separating these components and that selected wavelet planes can be scaled and combined so that the reconstructed images allow for a detailed, multi-scale structural interpretation, also using integrated visualizations of magnetic and topographic data. Because our analysis approach is straightforward to implement without laborious parameter testing and tuning, computationally efficient, and easily adaptable to other geophysical data sets, we believe that it can help to rapidly analyze and interpret different geophysical mapping data collected to address a variety of near-surface applications in engineering practice and research.
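The decomposition itself is compact to implement. Below is a minimal sketch of the à trous RWT with the cubic B-spline scaling filter; boundary handling and the scaling of wavelet planes for visualization may differ from the authors' implementation.

```python
import numpy as np
from scipy.ndimage import convolve1d

# Cubic B3-spline scaling filter commonly used with the a trous algorithm.
H = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def atrous_rwt(img, n_scales):
    """Redundant (undecimated) wavelet transform of a 2D array via the
    a trous scheme: separable 1D convolutions with a filter whose taps
    are spaced 2**j samples apart at scale j. Returns the list of
    wavelet (detail) planes plus the final smooth approximation."""
    planes, smooth = [], img.astype(float)
    for j in range(n_scales):
        # Insert 2**j - 1 zeros between the filter taps ("holes").
        h = np.zeros(4 * 2**j + 1)
        h[:: 2**j] = H
        smoother = convolve1d(convolve1d(smooth, h, axis=0, mode="reflect"),
                              h, axis=1, mode="reflect")
        planes.append(smooth - smoother)  # detail plane at scale j
        smooth = smoother
    return planes, smooth

# The original image is recovered by summing all planes and the residual:
img = np.random.default_rng(0).normal(size=(64, 64))
planes, residual = atrous_rwt(img, 4)
assert np.allclose(img, sum(planes) + residual)
```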


2015 ◽  
Vol 8 (8) ◽  
pp. 2645-2653 ◽  
Author(s):  
C. G. Nunalee ◽  
Á. Horváth ◽  
S. Basu

Abstract. Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider the uncertainty associated with the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower tropospheric flow over and downstream of mountainous islands using three terrain height data sets: the default global 30 s United States Geological Survey GTOPO30 data set, the Shuttle Radar Topography Mission (SRTM) data set, and the Global Multi-resolution Terrain Elevation Data set (GMTED2010). While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.


2020 ◽  
Vol 39 (5) ◽  
pp. 324-331
Author(s):  
Gary Murphy ◽  
Vanessa Brown ◽  
Denes Vigh

As part of a wide-reaching full-waveform inversion (FWI) research program, FWI is applied to an onshore seismic data set collected in the Delaware Basin, west Texas. FWI is routinely applied to typical marine data sets with high signal-to-noise ratio (S/N), relatively good low-frequency content, and reasonably long offsets. Land seismic data sets, in comparison, present significant challenges for FWI due to low S/N, a dearth of low frequencies, and limited offsets. Recent advancements in FWI overcome the limitations due to poor S/N and weak low frequencies, making land FWI feasible for updating the shallow velocities. The chosen area has contrasting and variable near-surface conditions, providing an excellent test data set on which to demonstrate the workflow and its challenges. An acoustic FWI workflow is used to update the near-surface velocity model in order to improve the deeper image and simultaneously help highlight potential shallow drilling hazards.


Geophysics ◽  
2014 ◽  
Vol 79 (6) ◽  
pp. B243-B252 ◽  
Author(s):  
Peter Bergmann ◽  
Artem Kashubin ◽  
Monika Ivandic ◽  
Stefan Lüth ◽  
Christopher Juhlin

A method for static correction of time-lapse differences in reflection arrival times of time-lapse prestack seismic data is presented. These arrival-time differences are typically caused by changes in the near-surface velocities between the acquisitions and have a detrimental impact on time-lapse seismic imaging. Trace-to-trace time shifts between the data sets from different vintages are determined by crosscorrelation. The time shifts are decomposed in a surface-consistent manner, which yields static corrections that tie the repeat data to the baseline data. Hence, this approach implies that new refraction static corrections for the repeat data sets are unnecessary. The approach is demonstrated on a 4D seismic data set from the Ketzin CO₂ pilot storage site, Germany, and is compared with the result of an initial processing that was based on separate refraction static corrections. It is shown that the time-lapse difference static correction approach reduces 4D noise more effectively than separate refraction static corrections and is significantly less labor intensive.
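The surface-consistent decomposition of the crosscorrelation time shifts amounts to a linear least-squares problem. Here is a minimal sketch under assumed inputs (one shift per shot-receiver pair, shot and receiver terms only); the paper's decomposition may carry additional terms.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def decompose_statics(shot_ids, rec_ids, dt):
    """Model each baseline-to-repeat time shift dt_k as the sum of a
    shot term and a receiver term, and solve the overdetermined system
    in a least-squares sense. The split between shot and receiver terms
    has a free constant; lsqr returns a minimum-norm solution."""
    ns, nr = shot_ids.max() + 1, rec_ids.max() + 1
    A = lil_matrix((dt.size, ns + nr))
    for k, (s, r) in enumerate(zip(shot_ids, rec_ids)):
        A[k, s] = 1.0
        A[k, ns + r] = 1.0
    x = lsqr(A.tocsr(), dt)[0]
    return x[:ns], x[ns:]  # shot statics, receiver statics

# Toy usage with hypothetical crosscorrelation shifts in milliseconds.
shots = np.array([0, 0, 1, 1, 2, 2])
recs = np.array([0, 1, 0, 1, 0, 1])
dt = np.array([3.1, 1.9, 4.2, 2.8, 2.5, 1.4])
shot_stat, rec_stat = decompose_statics(shots, recs, dt)
```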


2021 ◽  
Author(s):  
Lukas Aigner ◽  
Timea Katona ◽  
Hadrien Michel ◽  
Arsalan Ahmed ◽  
Thomas Hermans ◽  
...  

Detailed information on the clay content of the subsurface and its spatial distribution plays a critical role in the interaction between surface water and groundwater. In this study, we investigate a new methodology to integrate data measured with electromagnetic and electrical geophysical methods, namely the transient electromagnetic (TEM) method and spectral induced polarization (SIP), to quantify subsurface clay content in an imaging framework. The methodology is tested on data sets collected at a quarry close to Vienna, where the subsurface consists of a ca. 10 m thick clay layer below a ca. 8 m thick overburden of sandy silts. Our data set includes SIP data collected along a 315 m long profile with an electrode separation of 5 m in a frequency range from 0.1 to 225 Hz. Along this profile, we measured 26 TEM soundings using a 12.5 m loop, recording 24 time windows between 4 and 140 μs. Ground-truth information comes from grain size analyses conducted on 25 soil samples collected at depths from 5 to 28 m. SIP inversion results at a single frequency provided structural a priori information to improve the inversion of the TEM data. The inverted TEM conductivity model nearest to the soil-sampling position was correlated with the grain size distribution, and the resulting positive exponential relationship was used to obtain vertical 1D variations of clay content with depth. All sounding positions were interpolated to obtain a 2D image of subsurface clay content. These clay content variations were then compared to images of the Cole-Cole parameters describing the frequency dependence of the SIP imaging results. To evaluate the uncertainty in our clay estimations, we applied Bayesian evidential learning 1D imaging (BEL1D). We obtained uncertainties of layer thickness, resistivity, and clay content by integrating the clay-conductivity relationship derived from the TEM data into the BEL1D framework.
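To illustrate the petrophysical step, the sketch below fits a positive exponential relationship between inverted TEM conductivity and laboratory clay content; the conductivity-clay pairs are hypothetical, and the exact functional form and fitting procedure used in the study may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def clay_from_sigma(sigma, a, b):
    """Positive exponential clay-conductivity relationship (assumed form)."""
    return a * np.exp(b * sigma)

# Hypothetical pairs of inverted TEM conductivity (S/m) and clay content
# (weight %) from grain size analysis of co-located soil samples.
sigma = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10])
clay = np.array([5.0, 8.0, 15.0, 26.0, 42.0, 65.0])

(a, b), _ = curve_fit(clay_from_sigma, sigma, clay, p0=(4.0, 25.0))
print(f"clay(%) ~= {a:.1f} * exp({b:.1f} * sigma)")
# The fitted curve can then map a 2D conductivity image to clay content.
```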


Geophysics ◽  
2016 ◽  
Vol 81 (4) ◽  
pp. U39-U49 ◽  
Author(s):  
Daniele Colombo ◽  
Federico Miorelli ◽  
Ernesto Sandoval ◽  
Kevin Erickson

Industry practices for near-surface analysis indicate difficulties in coping with the increased number of channels in seismic acquisition systems, and new approaches are needed to fully exploit the resolution embedded in modern seismic data sets. To achieve this goal, we have developed a novel surface-consistent refraction analysis method for low-relief geology to automatically derive near-surface corrections for seismic data processing. The method applies concepts from surface-consistent analysis to refracted arrivals. The key aspects of the method are the use of common midpoint (CMP)-offset-azimuth binning, evaluation of the mean traveltime and standard deviation for each bin, rejection of anomalous first-break (FB) picks, derivation of CMP-based traveltime-offset functions, conversion to velocity-depth functions, evaluation of long-wavelength statics, and calculation of surface-consistent residual statics through waveform crosscorrelation. Residual time lags are evaluated in multiple CMP-offset-azimuth bins by crosscorrelating a pilot trace with all the other traces in the gather, with the correlation window centered on the refracted arrival. The residuals are then used to build a system of linear equations that is simultaneously inverted for surface-consistent shot and receiver time-shift corrections plus a possible subsurface residual term. All the steps are completely automated and require a fraction of the time needed for conventional near-surface analysis. The methodology was successfully applied to a complex 3D land data set from Central Saudi Arabia, where it was benchmarked against a conventional tomographic work flow. The results indicate that the new surface-consistent refraction statics method enhances seismic imaging, especially in portions of the survey dominated by noise.
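As one concrete piece of this workflow, the sketch below rejects anomalous first-break picks by comparing each pick against the mean and standard deviation of its CMP-offset-azimuth bin; bin construction, the traveltime-offset functions, and the residual-statics inversion are omitted.

```python
import numpy as np

def reject_fb_outliers(bin_ids, t_fb, k=2.0):
    """Flag first-break picks deviating more than k standard deviations
    from the mean traveltime of their CMP-offset-azimuth bin."""
    t_fb = np.asarray(t_fb, dtype=float)
    keep = np.ones(t_fb.size, dtype=bool)
    for b in np.unique(bin_ids):
        idx = np.flatnonzero(bin_ids == b)
        mu, sd = t_fb[idx].mean(), t_fb[idx].std()
        if sd > 0.0:
            keep[idx] = np.abs(t_fb[idx] - mu) <= k * sd
    return keep

# Toy usage: eight picks (ms) in one bin, one of them anomalous.
bins = np.zeros(8, dtype=int)
picks = np.array([120.0, 122.0, 119.0, 121.0, 118.0, 123.0, 120.0, 200.0])
print(reject_fb_outliers(bins, picks))  # the 200 ms pick is rejected
```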

