A Bayesian approach to modeling 2D gravity data using polygons

Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. G1-G21 ◽  
Author(s):  
William J. Titus ◽  
Sarah J. Titus ◽  
Joshua R. Davis

We apply a Bayesian Markov chain Monte Carlo formalism to the gravity inversion of a single localized 2D subsurface object. The object is modeled as a polygon described by five parameters: the number of vertices, a density contrast, a shape-limiting factor, and the width and depth of an encompassing container. We first constrain these parameters with an interactive forward model and explicit geologic information. Then, we generate an approximate probability distribution of polygons for a given set of parameter values. From these, we determine statistical distributions such as the variance between the observed and model fields, the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the subsurface object). We introduce replica exchange to mitigate trapping in local optima and to compute model probabilities and their uncertainties. We apply our techniques to synthetic data sets and a natural data set collected across the Rio Grande Gorge Bridge in New Mexico. On the basis of our examples, we find that the occupancy probability is useful in visualizing the results, giving a “hazy” cross section of the object. We also find that the role of the container is important in making predictions about the subsurface object.
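A minimal sketch, not the authors' code, of how the occupancy probability described above can be estimated from an ensemble of sampled polygons: for each grid point, count the fraction of MCMC samples whose polygon contains it. Grid extents and array shapes are illustrative assumptions.

```python
import numpy as np
from matplotlib.path import Path  # point-in-polygon test

def occupancy_probability(polygon_samples, xgrid, zgrid):
    """polygon_samples: list of (n_i, 2) vertex arrays (x, z), one per MCMC sample."""
    xx, zz = np.meshgrid(xgrid, zgrid)
    points = np.column_stack([xx.ravel(), zz.ravel()])
    counts = np.zeros(points.shape[0])
    for verts in polygon_samples:
        # add 1 at every grid point lying inside this sampled polygon
        counts += Path(verts).contains_points(points)
    return (counts / len(polygon_samples)).reshape(xx.shape)
```

Plotting the returned array as an image gives the "hazy" cross section of the object mentioned in the abstract.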

Geophysics ◽  
2006 ◽  
Vol 71 (6) ◽  
pp. G301-G312 ◽  
Author(s):  
Ross Brodie ◽  
Malcolm Sambridge

We have developed a holistic method for simultaneously calibrating, processing, and inverting frequency-domain airborne electromagnetic data. A spline-based, 3D, layered conductivity model covering the complete survey area was recovered through inversion of the entire raw airborne data set and available independent conductivity and interface-depth data. The holistic inversion formulation includes a mathematical model to account for systematic calibration errors such as incorrect gain and zero-level drift. By taking these elements into account in the inversion, the need to preprocess the airborne data prior to inversion is eliminated. Conventional processing schemes involve the sequential application of a number of calibration corrections, with data from each frequency treated separately. This is followed by inversion of each multifrequency sample in isolation from other samples. By simultaneously considering all of the available information in a holistic inversion, we are able to exploit interfrequency and spatial-coherency characteristics of the data. The formulation ensures that the conductivity and calibration models are optimal with respect to the airborne data and prior information. Introduction of interfrequency inconsistency and multistage error propagation stemming from the sequential nature of conventional processing schemes is also avoided. We confirm that accurate conductivity and calibration parameter values are recovered from holistic inversion of synthetic data sets. We demonstrate that the results from holistic inversion of raw survey data are superior to the output of conventional 1D inversion of final processed data. In addition to the technical benefits, we expect that holistic inversion will reduce costs by avoiding the expensive calibration-processing-recalibration paradigm. Furthermore, savings may also be made because specific high-altitude zero-level observations, needed for conventional processing, may not be required.
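A minimal sketch, under an assumed simple parameterization, of the idea of folding systematic calibration errors into the forward model so that raw data can be inverted directly; `em_response` stands in for a layered-earth EM kernel, and `gain` and `drift_coeffs` are hypothetical calibration parameters, not the authors' actual formulation.

```python
import numpy as np

def predicted_data(conductivity_model, gain, drift_coeffs, times, em_response):
    """Predicted raw data at one frequency: calibration model applied to the physics."""
    physics = em_response(conductivity_model)   # ideal instrument response (placeholder kernel)
    drift = np.polyval(drift_coeffs, times)     # smooth zero-level drift as a polynomial in time
    return gain * physics + drift               # gain error and drift handled inside the inversion
```

In a holistic inversion of this kind, `gain` and `drift_coeffs` are estimated jointly with the conductivity model rather than removed in a separate preprocessing step.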


2014 ◽  
Vol 7 (3) ◽  
pp. 781-797 ◽  
Author(s):  
P. Paatero ◽  
S. Eberly ◽  
S. G. Brown ◽  
G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
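A loose, generic sketch of the classical bootstrap (BS) idea only: resample the rows of the data matrix, refit the factor model, and summarize the spread of the refitted factor profiles. The factor-matching step and the block resampling used in EPA PMF are omitted, and `fit_pmf` is a hypothetical placeholder, not part of EPA PMF or ME-2.

```python
import numpy as np

def bootstrap_profiles(X, fit_pmf, n_factors, n_boot=100, seed=None):
    """X: (n_samples, n_species) data matrix; fit_pmf returns an (n_factors, n_species) profile matrix."""
    rng = np.random.default_rng(seed)
    profiles = []
    for _ in range(n_boot):
        rows = rng.integers(0, X.shape[0], size=X.shape[0])  # resample samples with replacement
        profiles.append(fit_pmf(X[rows], n_factors))          # refit on the resampled data
    profiles = np.array(profiles)
    return profiles.mean(axis=0), profiles.std(axis=0)        # central estimate and spread per element
```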


Author(s):  
Tomas Grönstedt ◽  
Markus Wallin

Recent work on gas turbine diagnostics based on optimisation techniques advocates two different approaches: 1) stochastic optimisation, including Genetic Algorithm techniques, for its robustness when optimising objective functions with many local optima, and 2) gradient-based methods, mainly for their computational efficiency. For smooth, single-optimum functions, gradient methods are known to provide superior numerical performance. This paper addresses the key issue for method selection, i.e., whether multiple local optima may occur when the optimisation approach is applied to real engine testing. Two performance test data sets for the RM12 low bypass ratio turbofan engine, powering the Swedish fighter Gripen, have been analysed. One set of data was recorded during performance testing of a highly degraded engine. This engine had been subjected to Accelerated Mission Testing (AMT) cycles corresponding to more than 4000 hours of run time. The other data set was recorded for a development engine with less than 200 hours of operation. The search for multiple optima was performed starting from more than 100 extreme points. Not a single case of multi-modality was encountered, i.e., one unique solution was consistently obtained for each of the two data sets. The RM12 engine cycle is typical of a modern fighter engine, implying that the obtained results can be transferred to, at least, most low bypass ratio turbofan engines. The paper goes on to describe the numerical difficulties that had to be resolved to obtain efficient and robust performance from the gradient solvers. Ill-conditioning and noise may, as illustrated on a model problem, introduce local optima with no correspondence in the gas turbine physics. Special attention is given to numerical methods that exploit the special problem structure represented by a nonlinear least-squares formulation. Finally, a mixed norm allowing for both robustness and numerical efficiency is suggested.
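One common realization of a mixed norm of the kind suggested above is a Huber-style objective, quadratic (smooth and efficient for gradient solvers) for small residuals and linear (robust to outliers) for large ones; this is an illustrative sketch, not necessarily the exact norm proposed in the paper.

```python
import numpy as np

def mixed_norm(residuals, delta=1.0):
    """Huber-style objective: L2 near the optimum, L1-like for large residuals."""
    r = np.abs(residuals)
    quadratic = 0.5 * r ** 2                 # efficient, smooth region
    linear = delta * (r - 0.5 * delta)       # robust region for outliers
    return np.where(r <= delta, quadratic, linear).sum()
```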


Author(s):  
Danlei Xu ◽  
Lan Du ◽  
Hongwei Liu ◽  
Penghui Wang

A Bayesian classifier for sparsity-promoting feature selection is developed in this paper, where a set of nonlinear mappings of the original data is performed as a pre-processing step. The linear classification model with such mappings from the original input space to a nonlinear transformation space can not only construct a nonlinear classification boundary but also realize feature selection for the original data. A zero-mean Gaussian prior with Gamma precision and a finite approximation of the Beta process prior are used to promote sparsity in the utilization of features and nonlinear mappings in our model, respectively. We derive the variational Bayesian (VB) inference algorithm for the proposed linear classifier. Experimental results on a synthetic data set, a measured radar data set, a high-dimensional gene expression data set, and several benchmark data sets demonstrate the aggressive and robust feature selection capability and comparable classification accuracy of our method compared with other existing classifiers.
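A minimal sketch of the model structure described above, not the authors' VB algorithm: the inputs pass through a dictionary of nonlinear mappings (Gaussian RBFs centred on training points are an illustrative choice), and a linear classifier acts on the mapped features, with a binary indicator vector standing in for the Beta-process selection of mappings and a sparse weight vector standing in for the Gaussian-Gamma prior.

```python
import numpy as np

def rbf_mappings(X, centers, gamma=1.0):
    """Nonlinear transformation space: one Gaussian RBF feature per center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)                 # shape (n_samples, n_centers)

def decision_function(X, centers, w, b, z):
    """z: binary vector selecting active mappings; w: sparse weights on the mapped features."""
    Phi = rbf_mappings(X, centers) * z         # switch unused mappings off
    return Phi @ w + b                         # sign gives the predicted class
```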


Geophysics ◽  
2017 ◽  
Vol 82 (3) ◽  
pp. R199-R217 ◽  
Author(s):  
Xintao Chai ◽  
Shangxu Wang ◽  
Genyang Tang

Seismic data are nonstationary due to subsurface anelastic attenuation and dispersion effects. These effects, also referred to as the earth's [Formula: see text]-filtering effects, can diminish seismic resolution. We previously developed a method of nonstationary sparse reflectivity inversion (NSRI) for resolution enhancement, which avoids the intrinsic instability associated with inverse [Formula: see text] filtering and generates superior [Formula: see text] compensation results. Applying NSRI to data sets that contain multiples (addressing surface-related multiples only) requires a demultiple preprocessing step because NSRI cannot distinguish primaries from multiples and will treat them as interference convolved with incorrect [Formula: see text] values. However, multiples contain information about subsurface properties. To use the information carried by multiples, we adapt NSRI, with the feedback model and NSRI theory, to the context of nonstationary seismic data with surface-related multiples. Consequently, not only are the benefits of NSRI (e.g., circumventing the intrinsic instability associated with inverse [Formula: see text] filtering) extended, but multiples are also taken into account. Our method is limited to a 1D implementation. Theoretical and numerical analyses verify that, given a wavelet, the input [Formula: see text] values primarily affect the inverted reflectivities and exert little effect on the estimated multiples; i.e., multiple estimation need not consider [Formula: see text] filtering effects explicitly. However, there are benefits to having NSRI consider multiples: the periodicity and amplitude of the multiples imply the position of the reflectivities and the amplitude of the wavelet, and multiples assist in overcoming the scaling and shifting ambiguities of conventional problems in which multiples are not considered. Experiments using a 1D algorithm on a synthetic data set, the publicly available Pluto 1.5 data set, and a marine data set support these findings and reveal the stability, capabilities, and limitations of the proposed method.
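A minimal sketch of the nonstationary (attenuated) convolution model that this kind of inversion works with: each reflectivity sample is convolved with a wavelet whose spectrum has been attenuated according to its traveltime. Only amplitude attenuation is shown; velocity dispersion and the feedback model for surface-related multiples are omitted, and the circular shift is a simplification.

```python
import numpy as np

def nonstationary_trace(reflectivity, wavelet, dt, Q):
    """Synthesize a trace in which later reflections pass through a stronger attenuation filter."""
    n = len(reflectivity)
    freqs = np.fft.rfftfreq(n, dt)
    W = np.fft.rfft(wavelet, n)
    trace = np.zeros(n)
    for k, r in enumerate(reflectivity):
        if r == 0.0:
            continue
        t = k * dt
        atten = np.exp(-np.pi * freqs * t / Q)   # amplitude part of the attenuation filter
        pulse = np.fft.irfft(W * atten, n)
        trace += r * np.roll(pulse, k)           # place the attenuated pulse at its traveltime
    return trace
```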


Geophysics ◽  
2019 ◽  
Vol 84 (5) ◽  
pp. E293-E299
Author(s):  
Jorlivan L. Correa ◽  
Paulo T. L. Menezes

Synthetic data provided by geoelectric earth models are a powerful tool to evaluate, a priori, the effectiveness of a controlled-source electromagnetic (CSEM) workflow. Marlim R3D (MR3D) is an open-source, complex, and realistic geoelectric model for CSEM simulations of the postsalt turbiditic reservoirs at the Brazilian offshore margin. We have developed a 3D CSEM finite-difference time-domain forward study to generate the full-azimuth CSEM data set for the MR3D earth model. To that end, we designed a full-azimuth survey with 45 towlines striking the north–south and east–west directions over a total of 500 receivers evenly spaced at 1 km intervals along the rugged seafloor of the MR3D model. To correctly represent the thin, disconnected, and complex geometries of the studied reservoirs, we built a finely discretized mesh of [Formula: see text] cells, leading to a large mesh with a total of approximately 90 million cells. We computed the six electromagnetic field components (Ex, Ey, Ez, Hx, Hy, and Hz) at six frequencies in the range of 0.125–1.25 Hz. To mimic noise in real CSEM data, we added multiplicative noise with a 1% standard deviation to the data. Both CSEM data sets (noise free and noise added), with inline and broadside geometries, are distributed for research or commercial use, under the Creative Commons License, on the Zenodo platform.
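A minimal sketch of the noise model described above: 1% multiplicative Gaussian noise applied sample by sample to the noise-free fields. Array and parameter names are illustrative.

```python
import numpy as np

def add_multiplicative_noise(data, std=0.01, seed=None):
    """Return data * (1 + e), with e drawn from a zero-mean Gaussian of the given standard deviation."""
    rng = np.random.default_rng(seed)
    return data * (1.0 + std * rng.standard_normal(data.shape))
```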


Geophysics ◽  
2014 ◽  
Vol 79 (4) ◽  
pp. EN77-EN90 ◽  
Author(s):  
Paolo Bergamo ◽  
Laura Valentina Socco

Surface-wave (SW) techniques are mainly used to retrieve 1D velocity models and are therefore characterized by a 1D approach, which might prove unsatisfactory when relevant 2D effects are present in the investigated subsurface. In the case of sharp and sudden lateral heterogeneities in the subsurface, a strategy to tackle this limitation is to estimate the location of the discontinuities and to separately process seismic traces belonging to quasi-1D portions of the subsurface. We have directed our attention to methods aimed at locating discontinuities by identifying anomalies in SW propagation and attenuation. The considered methods are the autospectrum computation and the attenuation analysis of Rayleigh waves (AARW). These methods were developed for purposes and/or scales of analysis different from those of this work, which aims at detecting and characterizing sharp subvertical discontinuities in the shallow subsurface. We applied both methods to two data sets, synthetic data from a finite-element simulation and a field data set acquired over a fault system, both presenting an abrupt lateral variation that perpendicularly crosses the acquisition line. We also extended the AARW method to the detection of sharp discontinuities from large and multifold data sets, and we tested these novel procedures on the field case. The two methods proved effective for detecting the discontinuity by portraying propagation phenomena linked to the presence of the heterogeneity, such as the interference between incident and reflected wavetrains, as well as energy concentration and subsequent decay at the fault location. The procedures we developed for processing multifold seismic data sets proved to be reliable tools for locating and characterizing subvertical sharp heterogeneities.
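A loose sketch of the autospectrum-style analysis: compute a normalized power spectrum for each trace and inspect its variation along the line for localized energy concentration or decay near a discontinuity. Windowing and normalization choices here are assumptions, not the exact procedure of the paper.

```python
import numpy as np

def autospectrum_section(traces, dt):
    """traces: (n_receivers, n_samples) array sorted by receiver position along the line."""
    spectra = np.abs(np.fft.rfft(traces, axis=1)) ** 2    # per-trace power spectrum
    freqs = np.fft.rfftfreq(traces.shape[1], dt)
    spectra /= spectra.max(axis=1, keepdims=True)         # normalize each trace for comparison
    return freqs, spectra                                 # image spectra vs. receiver position
```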


Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. G129-G141
Author(s):  
Diego Takahashi ◽  
Vanderlei C. Oliveira Jr. ◽  
Valéria C. F. Barbosa

We have developed an efficient and very fast equivalent-layer technique for gravity data processing by modifying an iterative method grounded on an excess-mass constraint that does not require the solution of linear systems. Taking advantage of the symmetric block-Toeplitz Toeplitz-block (BTTB) structure of the sensitivity matrix, which arises when regular grids of observation points and equivalent sources (point masses) are used to set up a fictitious equivalent layer, we develop an algorithm that greatly reduces the computational complexity and RAM required to estimate a 2D mass distribution over the equivalent layer. The symmetric BTTB matrix is fully determined by the elements of the first column of the sensitivity matrix, which, in turn, can be embedded into a symmetric block-circulant with circulant-block (BCCB) matrix. Likewise, only the first column of the BCCB matrix is needed to reconstruct the full sensitivity matrix completely. From the first column of the BCCB matrix, its eigenvalues can be calculated using the 2D fast Fourier transform (2D FFT), which can then be used to readily compute the matrix-vector product of the forward modeling in the fast equivalent-layer technique. As a result, our method is efficient for processing very large data sets. Tests with synthetic data demonstrate the ability of our method to satisfactorily upward- and downward-continue gravity data. Our results show very small border effects and noise amplification compared with those produced by the classic approach in the Fourier domain. In addition, they show that, whereas the running time of our method is [Formula: see text] s for processing [Formula: see text] observations, the fast equivalent-layer technique used [Formula: see text] s with [Formula: see text]. A test with field data from the Carajás Province, Brazil, illustrates the low computational cost of our method to process a large data set composed of [Formula: see text] observations.
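A minimal sketch of the FFT trick described above, under the assumption that the kernel depends only on the absolute offsets |dx| and |dy| (as for point masses beneath a regular grid at constant height): the first column of the BTTB sensitivity matrix, reshaped to a 2D array, is mirrored into the first column of a BCCB matrix, whose 2D FFT gives the eigenvalues used for a fast matrix-vector product.

```python
import numpy as np

def bttb_matvec(t, v):
    """t: first column of the BTTB matrix reshaped to (N1, N2);
    v: vector of point-mass values reshaped to (N1, N2); returns (T @ v) reshaped to (N1, N2)."""
    N1, N2 = t.shape
    c = np.zeros((2 * N1, 2 * N2))           # first column of the embedding BCCB matrix
    c[:N1, :N2] = t
    c[:N1, N2 + 1:] = t[:, :0:-1]            # mirror columns
    c[N1 + 1:, :N2] = t[:0:-1, :]            # mirror rows
    c[N1 + 1:, N2 + 1:] = t[:0:-1, :0:-1]    # mirror both
    eig = np.fft.fft2(c)                     # BCCB eigenvalues via the 2D FFT
    v_pad = np.zeros((2 * N1, 2 * N2))
    v_pad[:N1, :N2] = v                      # zero-padded input vector
    y = np.fft.ifft2(eig * np.fft.fft2(v_pad)).real
    return y[:N1, :N2]                       # keep the block corresponding to the original system
```

The product costs O(N log N) operations and never forms the full sensitivity matrix, which is what makes the iterative equivalent-layer estimation tractable for very large data sets.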


2020 ◽  
Author(s):  
Kristel Izquierdo ◽  
Laurent Montesi ◽  
Vedran Lekic

The shape and location of density anomalies inside the Moon provide insights into the processes that produced them and their subsequent evolution. Gravity measurements provide the most complete data set to infer these anomalies on the Moon [1]. However, gravity inversions suffer from inherent non-uniqueness. To circumvent this issue, it is often assumed that the Bouguer gravity anomalies are produced by the relief of the crust-mantle or other internal interface [2]. This approach limits the recovery of 3D density anomalies or any anomaly at different depths. In this work, we develop an algorithm that provides a set of likely three-dimensional models consistent with the observed gravity data, with no need to constrain the depth of anomalies a priori.

The volume of a sphere is divided into 6480 tesseroids and n Voronoi regions. The algorithm first assigns a density value to each Voronoi region, which can encompass one or more tesseroids. At each iteration, it can add or delete a region, or change its location [2, 3]. The optimal density of each region is then obtained by linear inversion of the gravity field, and the likelihood of the solution is calculated using Bayes' theorem. After convergence, the algorithm outputs an ensemble of models with good fit to the observed data and high posterior probability. The ensemble might contain essentially similar interior density distribution models or many different ones, providing a view of the non-uniqueness of the inversion results.

We use the lunar radial gravity acceleration obtained by the GRAIL mission [4] up to spherical harmonic degree 400 as input data for the algorithm. The gravity acceleration data of the resulting models match the input gravity very well, missing only the gravity signature of smaller craters. A group of models shows a deep positive density anomaly in the general area of the Clavius basin. The anomaly is centered at approximately 50°S and 10°E, at about 800 km depth. Density anomalies in this group of models remain relatively small and could be explained by mineralogical differences in the mantle. Major variations in crustal structure, such as the nearside/farside dichotomy and the South Pole-Aitken basin, are also apparent, giving geological credence to these models. A different group of models points towards two high-density regions with a much higher mass than the one described by the other group of models; it may be regarded as an unrealistic model. Our method embraces the non-uniqueness of gravity inversions and does not impose a single view of the interior, although geological knowledge and geodynamic analyses are of course important to evaluate the realism of each solution.

References: [1] Wieczorek, M. A. (2006), Treatise on Geophysics, 153-193, doi: 10.1016/B978-0-444-53802-4.00169-X. [2] Izquierdo, K. et al. (2019), Geophys. J. Int. 220, 1687-1699, doi: 10.1093/gji/ggz544. [3] Izquierdo, K. et al. (2019), LPSC 50, abstr. 2157. [4] Lemoine, F. G., et al. (2013), J. Geophys. Res. 118, 1676-1698, doi: 10.1002/jgre.20118.
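A minimal sketch, with hypothetical helper names, of one iteration of the trans-dimensional sampling loop described above: propose a change to the Voronoi configuration, solve a linear least-squares problem for the region densities, and accept or reject with a Metropolis-style rule. The actual proposal ratios, priors, and tesseroid bookkeeping are omitted.

```python
import numpy as np

def iterate(seeds, gravity_kernel, d_obs, sigma, log_post_old, rng):
    """seeds: (n, 3) Voronoi seed coordinates; gravity_kernel(seeds) -> (n_data, n_regions) matrix."""
    move = rng.choice(["add", "delete", "relocate"])
    new_seeds = perturb(seeds, move, rng)               # hypothetical proposal step
    G = gravity_kernel(new_seeds)                       # forward operator for the proposed regions
    rho, *_ = np.linalg.lstsq(G, d_obs, rcond=None)     # optimal densities (linear inversion step)
    misfit = np.sum(((d_obs - G @ rho) / sigma) ** 2)
    log_post_new = -0.5 * misfit                        # likelihood term only; priors omitted here
    if np.log(rng.random()) < log_post_new - log_post_old:
        return new_seeds, log_post_new                  # accept the proposed configuration
    return seeds, log_post_old                          # reject and keep the current one
```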


2013 ◽  
Vol 411-414 ◽  
pp. 1884-1893
Author(s):  
Yong Chun Cao ◽  
Ya Bin Shao ◽  
Shuang Liang Tian ◽  
Zheng Qi Cai

Because many clustering algorithms based on genetic algorithms (GAs) suffer from degeneracy and easily fall into local optima, a novel dynamic genetic algorithm for clustering problems (DGA) is proposed. The algorithm adopts variable-length coding to represent individuals and performs the crossover operation in parallel within subpopulations of individuals of the same length, which allows DGA to explore the search space more effectively and to automatically obtain the proper number of clusters and the proper partition for a given data set. It also uses a dynamic crossover probability and an adaptive mutation probability, which prevent the clustering algorithm from getting stuck at a locally optimal solution. Experiments on three artificial data sets and two real-life data sets show that DGA achieves better performance and higher accuracy on clustering problems.
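A minimal, generic sketch of the variable-length encoding idea only (not the DGA algorithm itself): each chromosome is a list of cluster centres, so different individuals can encode different numbers of clusters, fitness is a simple clustering criterion, and crossover is applied only between individuals of equal length.

```python
import numpy as np

def fitness(chromosome, X):
    """Negative within-cluster sum of squares for a chromosome encoding k centres."""
    centres = np.asarray(chromosome)                          # shape (k, n_features)
    d = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)  # squared distance to every centre
    return -d.min(axis=1).sum()

def crossover(parent_a, parent_b, rng):
    """One-point crossover between two chromosomes of the same length (same k)."""
    assert len(parent_a) == len(parent_b)
    cut = int(rng.integers(1, len(parent_a)))
    return parent_a[:cut] + parent_b[cut:], parent_b[:cut] + parent_a[cut:]
```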

