Magnetization clustering inversion — Part 2: Assessing the uncertainty of recovered magnetization directions

Geophysics, 2019, Vol. 84 (4), pp. J17-J29. Author(s): Jiajia Sun, Yaoguo Li

The unknown magnetization directions in the presence of remanence have posed great challenges for interpreting magnetic data. Estimating magnetization directions from magnetic measurements has therefore been an active area of research within the applied geophysics community. Despite the availability of several methods for estimating magnetization directions, quantifying the uncertainty of such estimates has remained an open problem. We have investigated the use of the magnetization-clustering inversion (MCI) method for assessing the uncertainty of the recovered magnetization directions. Specifically, we leverage the fact that the number of clusters one expects to see among the magnetization directions recovered from MCI must be supplied by the user. We propose to run a sequence of MCIs with a series of different cluster numbers and, subsequently, to calculate the standard deviations of the recovered magnetization directions at each location in a model as a practical way of quantifying the uncertainty of the estimated directions. We have developed two different methods for calculating the standard deviations, and have also investigated the maximum number of clusters one needs to consider to reliably assess the uncertainty. After a proof-of-concept study on a synthetic data set, we applied our methods to a field data set from an iron-oxide-copper-gold deposit exploration in the Carajás Mineral Province, Brazil. The high-confidence (i.e., low-uncertainty) zones show a strong spatial correspondence with the mineralization zones inferred from drillholes and geology.
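As a rough illustration of the workflow above (a sequence of clusterings with different cluster numbers, followed by per-cell standard deviations), here is a minimal Python sketch. The `mci_run` stand-in, the use of k-means, and all names are illustrative assumptions, not the authors' MCI implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def mci_run(directions, n_clusters):
    """Stand-in for one magnetization-clustering inversion (MCI) run.

    Here each cell's (inclination, declination) is simply snapped to its
    cluster mean; a real MCI re-inverts the magnetic data under a
    clustering constraint with the given number of clusters.
    """
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(directions)
    return km.cluster_centers_[km.labels_]

def direction_uncertainty(directions, k_values):
    """Per-cell standard deviation of directions over a sequence of MCIs."""
    runs = np.stack([mci_run(directions, k) for k in k_values])  # (runs, cells, 2)
    return runs.std(axis=0).sum(axis=1)  # combined inc + dec spread per cell

# toy model: 100 cells with (inclination, declination) in degrees
rng = np.random.default_rng(1)
dirs = rng.normal(loc=[45.0, 30.0], scale=[5.0, 5.0], size=(100, 2))
sigma = direction_uncertainty(dirs, k_values=range(2, 8))
print("high-confidence (low-uncertainty) cells:", int((sigma < sigma.mean()).sum()))
```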

Geophysics, 2002, Vol. 67 (6), pp. 1753-1768. Author(s): Yuji Mitsuhata, Toshihiro Uchida, Hiroshi Amano

Interpretation of controlled-source electromagnetic (CSEM) data is usually based on 1-D inversions, whereas direct current (dc) resistivity and magnetotelluric (MT) measurements are commonly interpreted by 2-D inversions. We have developed an algorithm to invert frequency-domain vertical magnetic data generated by a grounded-wire source for a 2-D model of the earth, a so-called 2.5-D inversion. To stabilize the inversion, we adopt a smoothness constraint on the model parameters and adjust the regularization parameter objectively using a statistical criterion. A test using synthetic data from a realistic model reveals that data from a single source are insufficient to recover an acceptable result. In contrast, the joint use of data generated by a left-side source and a right-side source dramatically improves the inversion result. We applied our inversion algorithm to a field data set, which was transformed from long-offset transient electromagnetic (LOTEM) data acquired in a Japanese oil and gas field. As with the synthetic data set, the inversion of the joint data set converged automatically and provided a better model than that obtained from either source alone. In addition, our 2.5-D inversion accounted for the reversals in the LOTEM measurements, which is impossible with 1-D inversions. The shallow parts (above about 1 km depth) of the final model obtained by our 2.5-D inversion agree well with those of a 2-D inversion of MT data.
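The core update behind a smoothness-constrained inversion of this kind can be sketched as a single Gauss-Newton step. The function below is a generic illustration: `beta` is treated as given rather than selected by the statistical criterion the paper uses, and all names are assumptions.

```python
import numpy as np

def gauss_newton_step(J_left, J_right, r_left, r_right, m, beta, W):
    """One smoothness-regularized Gauss-Newton update using both sources.

    J_*  : Jacobians of predicted vertical magnetic data w.r.t. model m
    r_*  : residuals (observed - predicted) for the left/right sources
    beta : regularization parameter (the paper chooses it objectively via
           a statistical criterion; it is fixed here for simplicity)
    W    : first-difference roughening operator enforcing smoothness
    """
    J = np.vstack([J_left, J_right])          # joint use of both sources
    r = np.concatenate([r_left, r_right])
    A = J.T @ J + beta * (W.T @ W)
    dm = np.linalg.solve(A, J.T @ r - beta * (W.T @ W) @ m)
    return m + dm

# example roughening operator for a 20-parameter model
W = np.diff(np.eye(20), axis=0)
```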


Geophysics, 2016, Vol. 81 (3), pp. J47-J60. Author(s): Nathan Leon Foks, Yaoguo Li

Boundary extraction is a collective term that we use for the process of extracting the locations of faults, lineaments, and lateral boundaries between geologic units using geophysical observations, such as measurements of the magnetic field. The process typically begins with a preprocessing stage, where the data are transformed to enhance the visual clarity of pertinent features and hence improve the interpretability of the data. The majority of the existing methods are based on raster grid enhancement techniques, and the boundaries are extracted as a series of points or line segments. In contrast, we set out a methodology for boundary extraction from magnetic data, in which we represent the transformed data as a surface in 3D using a mesh of triangular facets. After initializing the mesh, we modify the node locations, such that the mesh smoothly represents the transformed data and that facet edges are aligned with features in the data that approximate the horizontal locations of subsurface boundaries. To illustrate our boundary extraction algorithm, we first apply it to a synthetic data set. We then apply it to identify boundaries in a magnetic data set from the McFaulds Lake area in Ontario, Canada. The extracted boundaries are in agreement with known boundaries and several of the regions that are completely enclosed by extracted boundaries coincide with regions of known mineralization.
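A toy version of the node-adjustment idea, moving mesh nodes uphill on the gradient magnitude of the transformed data so that facet edges settle onto boundaries, might look like the following sketch (mesh smoothness constraints and re-triangulation are omitted, and all names are illustrative):

```python
import numpy as np

def align_nodes(nodes, field, step=0.5, n_iter=50):
    """Nudge 2D mesh nodes toward high-gradient features of a gridded field.

    nodes: float array (n, 2) of (row, col) node positions; field: the
    transformed magnetic data on a grid. Real boundary extraction also
    keeps the mesh smooth and well-shaped; that machinery is omitted here.
    """
    g = np.hypot(*np.gradient(field))     # edge strength |grad(field)|
    gy, gx = np.gradient(g)               # ascent direction on edge strength
    for _ in range(n_iter):
        iy = np.clip(nodes[:, 0].round().astype(int), 0, field.shape[0] - 1)
        ix = np.clip(nodes[:, 1].round().astype(int), 0, field.shape[1] - 1)
        nodes[:, 0] += step * gy[iy, ix]  # edges drift onto data features
        nodes[:, 1] += step * gx[iy, ix]
    return nodes
```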


Geophysics, 2003, Vol. 68 (3), pp. 996-1007. Author(s): Fabio Caratori Tontini, Osvaldo Faggioni, Nicolò Beverini, Cosmo Carmisciano

We describe an inversion method for 3D geomagnetic data based on approximating the source distribution by positivity-constrained Gaussian functions. In this way, smoothness and positivity are automatically imposed on the source without any subjective input from the user apart from the choice of the number of functions. The algorithm has been tested with synthetic data to resolve sources at very different depths using data from a single measurement plane. The forward modeling is based on a prismatic-cell parameterization, but the algebraic nonuniqueness is reduced because a relationship among the cells, expressed by the Gaussian envelope, is assumed to describe the spatial variation of the source distribution. We assume that there is no remanent magnetization and that the magnetic data are produced by induced magnetization only, neglecting any demagnetization effects. The algorithm proceeds by minimizing a χ² misfit function between observed and predicted data using a nonlinear Levenberg-Marquardt iteration scheme, easily implemented on a desktop PC, without any additional regularization. We demonstrate the robustness and utility of the method on synthetic data corrupted by pseudorandom noise and on a real field data set.
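A minimal 1D analogue of this scheme, using SciPy's Levenberg-Marquardt solver and squared amplitudes to impose positivity; the real method fits 3D Gaussian envelopes over prismatic cells, so everything here is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import least_squares

def predicted(params, x, n_gauss):
    """Sum of Gaussians; squaring the amplitude enforces positivity."""
    p = params.reshape(n_gauss, 3)  # rows: (amplitude, center, width)
    return sum(a**2 * np.exp(-((x - c) / w) ** 2) for a, c, w in p)

# toy data: one Gaussian anomaly plus pseudorandom noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
d_obs = 2.0 * np.exp(-((x - 4.0) / 1.5) ** 2) + 0.05 * rng.normal(size=x.size)

n_gauss = 2                                        # chosen by the user
p0 = np.array([1.0, 3.0, 1.0, 1.0, 7.0, 1.0])      # initial guesses
fit = least_squares(lambda p: predicted(p, x, n_gauss) - d_obs, p0, method="lm")
print("chi-squared misfit:", np.sum(fit.fun ** 2))
```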


Author(s): Raul E. Avelar, Karen Dixon, Boniphace Kutela, Sam Klump, Beth Wemple, ...

The calibration of safety performance functions (SPFs) is a mechanism included in the Highway Safety Manual (HSM) to adjust the HSM's SPFs for use in intended jurisdictions. Critically, the quality of the calibration procedure must be assessed before using the calibrated SPFs. Multiple resources to aid practitioners in calibrating SPFs have been developed in the years following the publication of the HSM 1st edition. Similarly, the literature suggests multiple ways to assess the goodness-of-fit (GOF) of a calibrated SPF to a data set from a given jurisdiction. This paper uses the calibration results of multiple intersection SPFs against a large Mississippi safety database to examine the relationships among multiple GOF metrics. The goal is to develop a sensible single index that leverages the joint information from multiple GOF metrics to assess the overall quality of calibration. A factor analysis applied to the calibration results revealed three underlying factors explaining 76% of the variability in the data. From these results, the authors developed an index and performed a sensitivity analysis. The key metrics were found to be, in descending order of importance: the deviation of the cumulative residual (CURE) plot from the 95% confidence area, the mean absolute deviation, the modified R-squared, and the value of the calibration factor. This paper also presents comparisons between the index and alternative scoring strategies, as well as an effort to verify the results using synthetic data. The developed index is recommended to comprehensively assess the quality of calibrated intersection SPFs.
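A sketch of how the key metrics might be computed and combined into a single score. The weights below are placeholders for illustration; the paper derives its index and weights from the factor analysis:

```python
import numpy as np

def calibration_gof(observed, predicted):
    """A few of the GOF metrics discussed above for one calibrated SPF."""
    order = np.argsort(predicted)
    resid = (observed - predicted)[order]
    cure = np.cumsum(resid)                        # CURE plot ordinates
    limit = 1.96 * np.sqrt(np.cumsum(resid ** 2))  # ~95% confidence bounds
    return {
        "calibration_factor": observed.sum() / predicted.sum(),
        "mad": np.mean(np.abs(observed - predicted)),
        "cure_exceedance": np.clip(np.abs(cure) - limit, 0, None).max(),
    }

def quality_index(metrics, weights=(0.5, 0.3, 0.2)):
    """Illustrative weighted penalty (lower is better); placeholder weights."""
    w_cure, w_mad, w_cf = weights
    return (w_cure * metrics["cure_exceedance"]
            + w_mad * metrics["mad"]
            + w_cf * abs(metrics["calibration_factor"] - 1.0))
```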


Water, 2021, Vol. 13 (1), pp. 107. Author(s): Elahe Jamalinia, Faraz S. Tehrani, Susan C. Steele-Dunne, Philip J. Vardon

Climatic conditions and vegetation cover influence water flux in a dike, and potentially dike stability. A comprehensive numerical simulation is computationally too expensive for near real-time analysis of a dike network. Therefore, this study investigates a random forest (RF) regressor as a data-driven surrogate for a numerical model to forecast the temporal macro-stability of dikes. To that end, daily inputs and outputs of a ten-year coupled numerical simulation of an idealised dike (2009-2019) are used to create a synthetic data set, comprising features that can be observed from the dike surface, with the calculated factor of safety (FoS) as the target variable. The data before 2018 are split into training and testing sets to build and train the RF. The predicted FoS is strongly correlated with the numerical FoS on the test set (before 2018). However, the trained model performs worse on the evaluation set (after 2018) when further surface cracking occurs. This proof of concept shows that a data-driven surrogate can determine dike stability for conditions similar to the training data, and could be used to identify vulnerable locations in a dike network for further examination.
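A minimal scikit-learn sketch of this surrogate workflow. The feature names, coefficients, and synthetic data are invented stand-ins for the surface-observable features and the numerically computed FoS:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

# synthetic daily record standing in for the coupled simulation's inputs/outputs
rng = np.random.default_rng(0)
dates = pd.date_range("2009-01-01", "2019-12-31", freq="D")
X = pd.DataFrame({
    "rainfall": rng.gamma(2.0, 2.0, len(dates)),
    "soil_moisture": rng.uniform(0.2, 0.5, len(dates)),
    "leaf_area_index": rng.uniform(0.5, 4.0, len(dates)),
}, index=dates)
fos = (1.8 - 0.02 * X["rainfall"] - 0.5 * X["soil_moisture"]
       + 0.03 * X["leaf_area_index"] + rng.normal(0.0, 0.02, len(dates)))

train = X.index < "2018-01-01"   # mirrors the paper's pre-2018 training split
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[train], fos[train])
print("R^2 on post-2018 evaluation set:",
      r2_score(fos[~train], rf.predict(X[~train])))
```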


2013, Vol. 321-324, pp. 1947-1950. Author(s): Lei Gu, Xian Ling Lu

In the initialization of traditional k-harmonic means clustering, the initial centers are generated randomly and their number equals the number of clusters. Although k-harmonic means clustering is insensitive to the initial centers, this initialization method cannot improve clustering performance. In this paper, a novel k-harmonic means clustering based on multiple initial centers is proposed, in which the number of initial centers exceeds the number of clusters. The new method divides the whole data set into multiple groups and combines these groups into the final solution. Experiments show that the proposed algorithm achieves better clustering accuracy than the traditional k-means and k-harmonic means methods.
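The multiple-initial-centers idea can be sketched as a two-stage clustering. Note that plain k-means substitutes for the k-harmonic means objective below, so this is an illustration of the strategy rather than the authors' algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_with_multiple_centers(X, n_clusters, n_centers, seed=0):
    """Cluster with more initial centers than final clusters, then merge.

    The data are first partitioned into n_centers fine groups; the group
    centroids are then combined into the final n_clusters solution.
    """
    assert n_centers > n_clusters
    fine = KMeans(n_clusters=n_centers, n_init=10, random_state=seed).fit(X)
    coarse = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit(fine.cluster_centers_)
    return coarse.labels_[fine.labels_]   # final cluster label per point
```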


Geophysics, 2014, Vol. 79 (1), pp. IM1-IM9. Author(s): Nathan Leon Foks, Richard Krahenbuhl, Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain, and very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further the compression in the data domain, we have developed a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately without the addition of, or modification to, existing inversion software. Second, because most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology in which the reduction of computational cost is achieved simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; the method is also applicable to other data types. Our results show that the relevant model information is maintained after inversion despite using only 1%-5% of the data.
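A crude block-based stand-in for the adaptive sampling idea: full density where the local data range is anomalous, one sample per block in quiet regions. The block size, threshold rule, and names are assumptions for illustration:

```python
import numpy as np

def adaptive_downsample(data, coarse=8, threshold=None):
    """Return a boolean keep-mask over a 2D grid of potential-field values.

    Blocks whose local range exceeds `threshold` are kept at full
    resolution; quiet blocks are reduced to a single sample.
    """
    ny, nx = data.shape
    keep = np.zeros_like(data, dtype=bool)
    ranges = []
    for i in range(0, ny, coarse):
        for j in range(0, nx, coarse):
            block = data[i:i + coarse, j:j + coarse]
            ranges.append((i, j, block.max() - block.min()))
    if threshold is None:
        threshold = 2.0 * np.median([r for _, _, r in ranges])
    for i, j, r in ranges:
        if r > threshold:
            keep[i:i + coarse, j:j + coarse] = True  # anomaly: keep all samples
        else:
            keep[i, j] = True                        # quiet: one sample per block
    return keep
```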


Geophysics, 2006, Vol. 71 (5), pp. U67-U76. Author(s): Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because it requires computing the Hessian, so an efficient approximation is introduced, achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates the application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
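The flavor of the approximation can be shown on a small dense problem. This is a generic weighted, damped least-squares sketch with a banded Hessian, not the paper's extrapolation operators; the full Hessian is formed here only for clarity, since the entire point of the approximation is to avoid doing so:

```python
import numpy as np

def damped_ls_estimate(A, d, weights, eps=1e-2, n_diags=1):
    """Weighted, damped least squares with a few-diagonal Hessian approximation.

    A: forward (extrapolation) operator; d: recorded data. The Hessian
    A^H W A + eps I is replaced by its central n_diags diagonals before
    solving, mimicking the cost-saving idea described above.
    """
    W = np.diag(weights)
    H = A.conj().T @ W @ A + eps * np.eye(A.shape[1])  # reference full Hessian
    band = np.zeros_like(H)
    for k in range(-(n_diags - 1), n_diags):
        band += np.diag(np.diag(H, k), k)              # keep selected diagonals
    return np.linalg.solve(band, A.conj().T @ W @ d)
```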


2014, Vol. 7 (3), pp. 781-797. Author(s): P. Paatero, S. Eberly, S. G. Brown, G. A. Norris

Abstract. The EPA PMF (Environmental Protection Agency positive matrix factorization) version 5.0 and the underlying multilinear engine-executable ME-2 contain three methods for estimating uncertainty in factor analytic models: classical bootstrap (BS), displacement of factor elements (DISP), and bootstrap enhanced by displacement of factor elements (BS-DISP). The goal of these methods is to capture the uncertainty of PMF analyses due to random errors and rotational ambiguity. It is shown that the three methods complement each other: depending on characteristics of the data set, one method may provide better results than the other two. Results are presented using synthetic data sets, including interpretation of diagnostics, and recommendations are given for parameters to report when documenting uncertainty estimates from EPA PMF or ME-2 applications.
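A classical-bootstrap (BS) sketch using scikit-learn's NMF as a stand-in for PMF/ME-2; the input matrix must be nonnegative, and the factor matching across replicates is deliberately crude. DISP and BS-DISP additionally displace factor elements and are not sketched here:

```python
import numpy as np
from sklearn.decomposition import NMF

def bootstrap_factor_profiles(X, n_factors=3, n_boot=50, seed=0):
    """Bootstrap spread of factor profiles for a nonnegative factorization.

    Rows (samples) of the nonnegative matrix X are resampled with
    replacement, the model is refit, and the spread of the profiles
    across replicates approximates their uncertainty.
    """
    rng = np.random.default_rng(seed)
    profiles = []
    for _ in range(n_boot):
        idx = rng.integers(0, X.shape[0], X.shape[0])
        model = NMF(n_components=n_factors, init="nndsvda", max_iter=500).fit(X[idx])
        order = np.argsort(model.components_[:, 0])  # crude factor matching
        profiles.append(model.components_[order])
    profiles = np.asarray(profiles)
    return profiles.mean(axis=0), profiles.std(axis=0)
```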


Geophysics, 2006, Vol. 71 (5), pp. C81-C92. Author(s): Helene Hafslund Veire, Hilde Grude Borgos, Martin Landrø

Effects of pressure and fluid saturation can have the same degree of impact on seismic amplitudes and differential traveltimes in the reservoir interval; thus, they are often inseparable by analysis of a single stacked seismic data set. In such cases, time-lapse AVO analysis offers an opportunity to discriminate between the two effects. Quantifying the uncertainty in the estimates is important if the information about pressure- and saturation-related changes is to be used in reservoir modeling and simulation. One way of analyzing uncertainties is to formulate the problem in a Bayesian framework, in which the solution is represented by a probability density function (PDF), providing estimates of the uncertainties as well as of the properties themselves. A stochastic model for estimating pressure and saturation changes from time-lapse seismic AVO data is investigated within a Bayesian framework. Well-known rock-physics relationships are used to set up a prior stochastic model. PP reflection-coefficient differences are used to establish a likelihood model linking the reservoir variables and the time-lapse seismic data. The methodology incorporates correlation between the different model variables as well as spatial dependencies for each variable. In addition, bottlenecks causing large uncertainties in the estimates can be identified through a sensitivity analysis of the system. The method has been tested on 1D synthetic data and on field time-lapse seismic AVO data from the Gullfaks Field in the North Sea.
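For the linear-Gaussian case the Bayesian solution is available in closed form. The sketch below gives the posterior mean and covariance, with `G` standing in for a linearized rock-physics/AVO operator; it is a generic illustration, not the paper's full stochastic model:

```python
import numpy as np

def gaussian_posterior(G, d, m0, Cm, Cd):
    """Posterior mean/covariance for a linear-Gaussian model d = G m + e.

    m  : reservoir variables, e.g. pressure and saturation changes
    G  : linearized operator mapping changes in m to PP
         reflection-coefficient differences
    Cm : prior covariance (can encode correlation between variables and
         spatial dependence); Cd : data-error covariance
    """
    K = Cm @ G.T @ np.linalg.inv(G @ Cm @ G.T + Cd)  # gain matrix
    m_post = m0 + K @ (d - G @ m0)
    C_post = Cm - K @ G @ Cm
    return m_post, C_post  # diagonal of C_post gives the uncertainties
```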

