Cooperative inversion of 2D geophysical data sets: A zonal approach based on fuzzy c-means cluster analysis

Geophysics ◽  
2007 ◽  
Vol 72 (3) ◽  
pp. A35-A39 ◽  
Author(s):  
Hendrik Paasche ◽  
Jens Tronicke

In many near-surface geophysical applications, it is now common practice to use multiple geophysical methods to explore subsurface structures and parameters. Such multimethod-based exploration strategies can significantly reduce uncertainties and ambiguities in geophysical data analysis and interpretation. We propose a novel 2D approach based on fuzzy c-means cluster analysis for the cooperative inversion of disparate data sets. We show that this approach results in a single zonal model of subsurface structures in which each zone is characterized by a set of different parameters. This implies that no further structural interpretation of the geophysical parameter fields is needed, which is a major advantage over conventional inversions relying on a single input data set and over other cooperative inversion approaches.
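To make the zonal idea concrete, here is a minimal sketch (not the authors' implementation) of fuzzy c-means applied to two hypothetical, co-registered 2D parameter fields: cells are clustered in the joint parameter space and then mapped back to a single zonal model.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=300, tol=1e-5, seed=0):
    """Basic fuzzy c-means; X is (n_samples, n_features)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                 # memberships sum to 1 per sample
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical co-registered 2D parameter fields (e.g., log-resistivity and velocity)
ny, nx = 60, 120
log_rho = np.random.default_rng(1).normal(2.0, 0.5, (ny, nx))
vel = np.random.default_rng(2).normal(1500.0, 300.0, (ny, nx))

# Standardize each parameter and cluster the joint samples
X = np.column_stack([log_rho.ravel(), vel.ravel()])
X = (X - X.mean(axis=0)) / X.std(axis=0)
centers, U = fuzzy_c_means(X, n_clusters=3)

# Zonal model: assign each cell to its highest-membership cluster
zones = U.argmax(axis=1).reshape(ny, nx)
```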

Geophysics ◽  
2020 ◽  
pp. 1-41 ◽  
Author(s):  
Jens Tronicke ◽  
Niklas Allroggen ◽  
Felix Biermann ◽  
Florian Fanselow ◽  
Julien Guillemoteau ◽  
...  

In near-surface geophysics, ground-based mapping surveys are routinely employed in a variety of applications including those from archaeology, civil engineering, hydrology, and soil science. The resulting geophysical anomaly maps of, for example, magnetic or electrical parameters are usually interpreted to laterally delineate subsurface structures such as those related to the remains of past human activities, subsurface utilities and other installations, hydrological properties, or different soil types. To ease the interpretation of such data sets, we propose a multi-scale processing, analysis, and visualization strategy. Our approach relies on a discrete redundant wavelet transform (RWT) implemented using cubic-spline filters and the à trous algorithm, which allows a multi-scale decomposition of 2D data to be computed efficiently using a series of 1D convolutions. The basic idea of the approach is presented using a synthetic test image, while our archaeo-geophysical case study from North-East Germany demonstrates its potential to analyze and process rather typical geophysical anomaly maps, including magnetic and topographic data. Our vertical-gradient magnetic data show amplitude variations over several orders of magnitude, complex anomaly patterns at various spatial scales, and typical noise patterns, while our topographic data show a distinct hill structure superimposed by a microtopographic stripe pattern and random noise. Our results demonstrate that the RWT approach successfully separates these components and that selected wavelet planes can be scaled and combined so that the reconstructed images allow for a detailed, multi-scale structural interpretation, including integrated visualizations of magnetic and topographic data. Because our analysis approach is straightforward to implement without laborious parameter testing and tuning, computationally efficient, and easily adaptable to other geophysical data sets, we believe that it can help to rapidly analyze and interpret geophysical mapping data collected for a variety of near-surface applications in engineering practice and research.
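The à trous scheme described above is straightforward to sketch. The following minimal Python example (an illustration using the usual cubic B-spline kernel, not the authors' code) decomposes a 2D array into redundant wavelet planes using only 1D convolutions; the scaling and recombination of selected planes applied in the paper is omitted.

```python
import numpy as np
from scipy.ndimage import convolve1d

def atrous_rwt(image, n_scales=4):
    """Redundant 'à trous' wavelet transform of a 2D array.

    Returns a list of wavelet planes (fine to coarse) plus the final smoothed
    approximation; summing all of them reconstructs the input exactly.
    """
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0    # cubic B-spline kernel
    c = image.astype(float)
    planes = []
    for j in range(n_scales):
        # Upsample the kernel by inserting 2**j - 1 zeros between taps
        hj = np.zeros((len(h) - 1) * 2 ** j + 1)
        hj[:: 2 ** j] = h
        # Separable smoothing: 1D convolutions along rows, then along columns
        c_next = convolve1d(c, hj, axis=0, mode="reflect")
        c_next = convolve1d(c_next, hj, axis=1, mode="reflect")
        planes.append(c - c_next)                      # wavelet plane at scale j
        c = c_next
    return planes, c

# Example: decompose a synthetic anomaly map and verify exact reconstruction
img = np.random.default_rng(0).normal(size=(128, 128))
planes, residual = atrous_rwt(img, n_scales=4)
assert np.allclose(sum(planes) + residual, img)
```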


2007 ◽  
Vol 56 (6) ◽  
pp. 75-83 ◽  
Author(s):  
X. Flores ◽  
J. Comas ◽  
I.R. Roda ◽  
L. Jiménez ◽  
K.V. Gernaey

The main objective of this paper is to present the application of selected multivariable statistical techniques to the analysis of plant-wide wastewater treatment plant (WWTP) control strategies. In this study, cluster analysis (CA), principal component analysis/factor analysis (PCA/FA), and discriminant analysis (DA) are applied to the evaluation matrix data set obtained by simulating several control strategies on the plant-wide IWA Benchmark Simulation Model No. 2 (BSM2). These techniques allow us to (i) determine natural groups or clusters of control strategies with similar behaviour, (ii) find and interpret hidden, complex, and causal relations in the data set, and (iii) identify important discriminant variables within the groups found by the cluster analysis. This study illustrates the usefulness of multivariable statistical techniques for both the analysis and the interpretation of complex multicriteria data sets, allowing an improved use of the available information for the effective evaluation of control strategies.
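As an illustration of how such an evaluation matrix might be analyzed, the sketch below applies hierarchical clustering, PCA, and linear discriminant analysis to a made-up strategies-by-criteria matrix using SciPy and scikit-learn; the actual BSM2 evaluation criteria and settings are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical evaluation matrix: rows = control strategies, columns = criteria
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 8))            # e.g., effluent quality, cost, violations, ...
Xs = StandardScaler().fit_transform(X)

# (i) Cluster analysis: group strategies with similar behaviour
Z = linkage(Xs, method="ward")
groups = fcluster(Z, t=3, criterion="maxclust")

# (ii) PCA: uncover the dominant relations among the criteria
pca = PCA(n_components=2).fit(Xs)
print("explained variance:", pca.explained_variance_ratio_)

# (iii) Discriminant analysis: which criteria separate the clusters found in (i)?
lda = LinearDiscriminantAnalysis().fit(Xs, groups)
print("discriminant weights per criterion:\n", lda.scalings_)
```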


Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a work flow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. The code in this chapter is more complicated and can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT, for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees), the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s⁻¹), and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided at 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive. Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
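The book's own workflow is written in R; the Python sketch below only illustrates the interpolation step on made-up 6-hourly track values, upsampling positions and intensities to hourly resolution as needed for regional analyses.

```python
import pandas as pd

# Illustrative 6-hourly best-track records for one cyclone (not real HURDAT values,
# positions already converted from tenths of degrees to degrees)
track = pd.DataFrame({
    "time": pd.date_range("2010-08-30 00:00", periods=5, freq="6h"),
    "lon": [-50.0, -51.2, -52.6, -54.1, -55.9],
    "lat": [15.0, 15.6, 16.3, 17.1, 18.0],
    "wind_kt": [45, 55, 65, 75, 80],
}).set_index("time")

# Upsample to hourly resolution and fill by time-weighted interpolation
# (the book instead fits splines to the 6-hourly values)
hourly = track.resample("1h").asfreq().interpolate(method="time")

# Convert intensities from knots to m/s (1 kt = 0.5144 m/s)
hourly["wind_ms"] = hourly["wind_kt"] * 0.5144
print(hourly.head())
```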


2016 ◽  
Vol 16 (11) ◽  
pp. 6977-6995 ◽  
Author(s):  
Jean-Pierre Chaboureau ◽  
Cyrille Flamant ◽  
Thibaut Dauhut ◽  
Cécile Kocha ◽  
Jean-Philippe Lafore ◽  
...  

Abstract. In the framework of the Fennec international programme, a field campaign was conducted in June 2011 over the western Sahara. It led to the first observational data set ever obtained that documents the dynamics, thermodynamics, and composition of the Saharan atmospheric boundary layer (SABL) under the influence of the heat low. In support of the aircraft operations, four dust forecasts were run daily at low and high resolutions with convection-parameterizing and convection-permitting models, respectively. The unique airborne and ground-based data sets allowed the first ever intercomparison of dust forecasts over the western Sahara. At the monthly scale, large aerosol optical depths (AODs) were forecast over the Sahara, a feature observed by satellite retrievals but with different magnitudes. The AOD intensity was correctly predicted by the high-resolution models, while it was underestimated by the low-resolution models. This was partly because of the generation of strong near-surface winds associated with thunderstorm-related density currents, which could only be reproduced by models representing convection explicitly. In the high-resolution forecasts, such models yield emissions mainly in the afternoon that dominate the total emission over the western fringes of the Adrar des Iforas and the Aïr Mountains. Over the western Sahara, where the harmattan contributes up to 80 % of dust emission, all the models were successful in forecasting the deep well-mixed SABL. Some of them, however, missed the large near-surface dust concentrations generated by density currents and low-level winds. This feature, observed repeatedly by the airborne lidar, was only partly forecast, and by a single high-resolution model.


2015 ◽  
Vol 8 (8) ◽  
pp. 2645-2653 ◽  
Author(s):  
C. G. Nunalee ◽  
Á. Horváth ◽  
S. Basu

Abstract. Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the choice of static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower-tropospheric flow over and downstream of mountainous islands using the default global 30-arcsecond United States Geological Survey terrain height data set (GTOPO30) as well as the Shuttle Radar Topography Mission (SRTM) and Global Multi-resolution Terrain Elevation Data 2010 (GMTED2010) terrain height data sets. While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.
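Before committing to a terrain data set, a quick pre-check of how much the candidate DEMs actually differ over the model domain can be informative. The sketch below uses synthetic elevation arrays standing in for resampled GTOPO30/SRTM/GMTED2010 tiles on a common grid and compares height and slope statistics; it is illustrative only.

```python
import numpy as np

# Hypothetical terrain height arrays (m) for the same domain, already on a common grid;
# in practice these would be resampled from the candidate terrain data sets.
rng = np.random.default_rng(0)
base = 800.0 * np.exp(-((np.indices((200, 200)) - 100) ** 2).sum(axis=0) / 2500.0)
dem_a = base + rng.normal(0.0, 5.0, base.shape)      # finer-detail data set
dem_b = base + rng.normal(0.0, 40.0, base.shape)     # coarser/noisier data set

dx = 900.0                                           # grid spacing in metres (~30 arcsec)

def max_slope_deg(dem, dx):
    gy, gx = np.gradient(dem, dx)
    return np.degrees(np.arctan(np.hypot(gx, gy))).max()

diff = dem_a - dem_b
print(f"height difference: rms={np.sqrt((diff ** 2).mean()):.1f} m, "
      f"max={np.abs(diff).max():.1f} m")
print(f"max slope A={max_slope_deg(dem_a, dx):.1f} deg, "
      f"B={max_slope_deg(dem_b, dx):.1f} deg")
```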


2010 ◽  
Vol 4 (2) ◽  
pp. 787-821 ◽  
Author(s):  
C. Hauck ◽  
M. Böttcher ◽  
H. Maurer

Abstract. Detailed knowledge of the material properties and internal structures of frozen ground is one of the prerequisites in many permafrost studies. In the absence of direct evidence, such as in-situ borehole measurements, geophysical methods are an increasingly interesting option for obtaining subsurface information on various spatial and temporal scales. The indirect nature of geophysical soundings requires a relation between the measured variables (e.g. electrical resistivity, seismic velocity) and the actual subsurface constituents (rock, water, air, ice). In this work we present a model which provides estimates of the volumetric fractions of these four phases from tomographic electrical and seismic images. The model is tested using geophysical data sets from two rock glaciers in the Swiss Alps, where ground truth information in the form of borehole data is available. First results confirm the applicability of the so-called 4-phase model, which allows the ice, water, and air contents within permafrost areas to be quantified, as well as the firm bedrock to be detected. Apart from a similarly thick active layer with enhanced air content at both rock glaciers, the two case studies revealed a heterogeneous distribution of ice and unfrozen water within rock glacier Muragl, where bedrock was detected at depths of 20–25 m, but a comparatively homogeneous ice body with only minor heterogeneities within rock glacier Murtèl.
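A simplified sketch of the petrophysical relations behind such a four-phase estimate is given below: Archie's law links resistivity to the water fraction, and a time-average equation links seismic velocity to all four fractions, for an assumed porosity. The parameter values are illustrative, not the calibrated values used in the study.

```python
import numpy as np

def four_phase_fractions(rho, v, phi=0.5,
                         rho_w=50.0, a=1.0, m=2.0, n=2.0,
                         v_w=1500.0, v_i=3500.0, v_a=330.0, v_r=5500.0):
    """Estimate water, ice, and air fractions from resistivity (ohm.m) and
    P-wave velocity (m/s) for an assumed porosity phi (rock fraction = 1 - phi).

    Archie's law:   rho = a * rho_w * phi**(-m) * S_w**(-n)
    Time average:   1/v = f_r/v_r + f_w/v_w + f_i/v_i + f_a/v_a
    Constraint:     f_w + f_i + f_a = phi
    """
    f_r = 1.0 - phi
    # Water fraction from Archie's law
    S_w = (a * rho_w * phi ** (-m) / rho) ** (1.0 / n)
    f_w = np.clip(phi * S_w, 0.0, phi)
    # Solve the time-average equation for the ice fraction; air fills the rest
    rhs = 1.0 / v - f_r / v_r - f_w / v_w - (phi - f_w) / v_a
    f_i = np.clip(rhs / (1.0 / v_i - 1.0 / v_a), 0.0, phi - f_w)
    f_a = phi - f_w - f_i
    return f_w, f_i, f_a

# Example: one model cell with high resistivity and high velocity (ice-rich)
print(four_phase_fractions(rho=50000.0, v=3000.0))
```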


2019 ◽  
Author(s):  
Silvia Salas-Romero ◽  
Alireza Malehmir ◽  
Ian Snowball ◽  
Benoît Dessirier

Abstract. Quick-clay landslides are common geohazards in Nordic countries and Canada. The presence of potential quick clays is confirmed using geotechnical investigations, but near-surface geophysical methods, such as seismic and resistivity surveys, can also help identify coarse-grained materials associated with the development of quick clays. We present the results of reflection seismic investigations on land and in part of the Göta River in Sweden, along which many quick-clay landslide scars exist. This is the first time that such a large-scale reflection seismic investigation has been carried out to study the subsurface structures associated with quick-clay landslides. The results also show a reasonable correlation with the radio-magnetotelluric and traveltime tomography models. The morphology of the river bottom and riverbanks, such as subaquatic landslide deposits, is shown by side-scan sonar and bathymetric data. Undulating bedrock, covered by subhorizontal glacial and postglacial sedimentary deposits, is clearly revealed. An extensive coarse-grained layer exists in the sedimentary sequence and is interpreted and modelled in a regional context. Individual fractures and fracture zones are identified within the bedrock and sediments. Hydrological modelling of the coarse-grained layer confirms its potential for transporting fresh water infiltrated through fractures and nearby outcrops. The groundwater flow in the coarse-grained layer promotes leaching of marine salts from the overlying clays by slow infiltration and/or diffusion, which helps in the formation of potential quick clays. Magnetic data show coarse-grained materials at the landslide scar located in the study area, which may have acted as a sliding surface together with quick clays.


2020 ◽  
Vol 224 (1) ◽  
pp. 40-68 ◽  
Author(s):  
Thibaut Astic ◽  
Lindsey J Heagy ◽  
Douglas W Oldenburg

SUMMARY In a previous paper, we introduced a framework for carrying out petrophysically and geologically guided geophysical inversions. In that framework, petrophysical and geological information is modelled with a Gaussian mixture model (GMM). In the inversion, the GMM serves as a prior for the geophysical model. The formulation and applications were confined to problems in which a single physical property model was sought, and a single geophysical data set was available. In this paper, we extend that framework to jointly invert multiple geophysical data sets that depend on multiple physical properties. The petrophysical and geological information is used to couple geophysical surveys that, otherwise, rely on independent physics. This requires advancements in two areas. First, an extension from a univariate to a multivariate analysis of the petrophysical data, and their inclusion within the inverse problem, is necessary. Second, we address the practical issues of simultaneously inverting data from multiple surveys and finding a solution that acceptably reproduces each one, along with the petrophysical and geological information. To illustrate the efficacy of our approach and the advantages of carrying out multi-physics inversions coupled with petrophysical and geological information, we invert synthetic gravity and magnetic data associated with a kimberlite deposit. The kimberlite pipe contains two distinct facies embedded in a host rock. Inverting the data sets individually, even with petrophysical information, leads to a binary geological model: background or undetermined kimberlite. A multi-physics inversion, with petrophysical information, differentiates between the two main kimberlite facies of the pipe. Through this example, we also highlight the capabilities of our framework to work with interpretive geological assumptions when minimal quantitative information is available. In those cases, the dynamic updates of the GMM allow us to perform multi-physics inversions by learning a petrophysical model.
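A minimal sketch of the multivariate petrophysical coupling idea follows, using scikit-learn's GaussianMixture on hypothetical density-contrast and susceptibility samples; the published framework additionally updates the GMM dynamically within the inversion, which is not shown here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Hypothetical petrophysical samples: (density contrast in g/cc, log10 susceptibility)
background = rng.multivariate_normal([0.0, -5.0], [[0.01, 0.0], [0.0, 0.05]], 200)
facies_1 = rng.multivariate_normal([-0.2, -3.0], [[0.01, 0.0], [0.0, 0.05]], 100)
facies_2 = rng.multivariate_normal([-0.4, -4.5], [[0.01, 0.0], [0.0, 0.05]], 100)
samples = np.vstack([background, facies_1, facies_2])

# Fit a 3-component multivariate GMM to the joint petrophysical distribution
gmm = GaussianMixture(n_components=3, covariance_type="full").fit(samples)

# Given the physical-property values recovered by the inversions (one pair per cell),
# the GMM assigns each cell its most probable unit and a log-probability that can
# serve as a coupling/regularization term across the surveys
recovered = np.column_stack([rng.normal(-0.1, 0.1, 1000), rng.normal(-4.0, 0.8, 1000)])
quasi_geology = gmm.predict(recovered)
log_prior = gmm.score_samples(recovered)
```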


2019 ◽  
Vol 491 (3) ◽  
pp. 3290-3317 ◽  
Author(s):  
Oliver H E Philcox ◽  
Daniel J Eisenstein ◽  
Ross O’Connell ◽  
Alexander Wiegand

ABSTRACT To make use of clustering statistics from large cosmological surveys, accurate and precise covariance matrices are needed. We present a new code to estimate large-scale galaxy two-point correlation function (2PCF) covariances in arbitrary survey geometries that, due to new sampling techniques, runs ∼10⁴ times faster than previous codes, computing finely binned covariance matrices with negligible noise in less than 100 CPU-hours. As in previous works, non-Gaussianity is approximated via a small rescaling of shot noise in the theoretical model, calibrated by comparing jackknife survey covariances to an associated jackknife model. The flexible code, RascalC, has been publicly released, and automatically takes care of all necessary pre- and post-processing, requiring only a single input data set (without a prior 2PCF model). Deviations between large-scale model covariances from a mock survey and those from a large suite of mocks are found to be indistinguishable from noise. In addition, the choice of input mock is shown to be irrelevant for desired noise levels below ∼10⁵ mocks. Coupled with its generalization to multitracer data sets, this shows the algorithm to be an excellent tool for analysis, reducing the need for large numbers of mock simulations to be computed.
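For readers unfamiliar with the jackknife calibration mentioned above, the short sketch below computes a standard delete-one jackknife covariance of a binned 2PCF from made-up per-region estimates; RascalC itself computes model covariances analytically and uses jackknives only to calibrate the shot-noise rescaling.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_bins = 60, 25

# Hypothetical 2PCF estimates: xi_jk[i] is the binned correlation function measured
# with jackknife region i removed (in practice these come from pair counts)
xi_full = np.linspace(1.0, 0.01, n_bins)
xi_jk = xi_full + rng.normal(0.0, 0.02, (n_regions, n_bins))

# Standard delete-one jackknife covariance of the binned 2PCF
d = xi_jk - xi_jk.mean(axis=0)
cov_jk = (n_regions - 1) / n_regions * d.T @ d

print(cov_jk.shape)   # (n_bins, n_bins)
```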


2020 ◽  
Vol 39 (5) ◽  
pp. 324-331
Author(s):  
Gary Murphy ◽  
Vanessa Brown ◽  
Denes Vigh

As part of a wide-reaching full-waveform inversion (FWI) research program, FWI is applied to an onshore seismic data set collected in the Delaware Basin, west Texas. FWI is routinely applied to typical marine data sets with high signal-to-noise ratio (S/N), relatively good low-frequency content, and reasonably long offsets. Land seismic data sets, in comparison, present significant challenges for FWI due to low S/N, a dearth of low frequencies, and limited offsets. Recent advancements in FWI overcome the limitations due to poor S/N and low frequencies, making land FWI feasible for updating the shallow velocities. The chosen area has contrasting and variable near-surface conditions, providing an excellent test data set on which to demonstrate the workflow and its challenges. An acoustic FWI workflow is used to update the near-surface velocity model in order to improve the deeper image and simultaneously help highlight potential shallow drilling hazards.

