Uncertainty analysis of 3D potential-field deterministic inversion using mixed Lp norms

Geophysics ◽  
2021 ◽  
pp. 1-103
Author(s):  
Xiaolong Wei ◽  
Jiajia Sun

The non-uniqueness problem in geophysical inversion, especially potential-field inversion, is widely recognized. It is argued that uncertainty analysis of a recovered model should be as important as finding an optimal model. However, quantifying uncertainty remains challenging, especially for 3D inversions in both deterministic and Bayesian frameworks. Our objective is to develop an efficient method to empirically quantify the uncertainty of the physical property models recovered from 3D potential-field inversion. We worked in a deterministic framework in which an objective function consisting of a data misfit term and a regularization term is minimized. We performed inversions using a mixed Lp-norm formulation in which various combinations of Lp (0 ≤ p ≤ 2) norms can be implemented on different components of the regularization term. Specifically, we repeatedly sampled the p-norm values at random and generated a large and diverse sequence of physical property models that all reproduce the observed geophysical data equally well. This suite of models offers practical insight into the uncertainty of the recovered model features. We quantified the uncertainty by calculating standard deviations and interquartile ranges, and by visualizing the results in box plots and histograms. The numerical results for a realistic synthetic density model, created based on a ring-shaped igneous intrusive body, quantitatively illustrate the uncertainty reduction due to different amounts of prior information imposed on the inversions. We also applied the method to a field data set over the Decorah area in northeastern Iowa. We adopted an acceptance-rejection strategy to generate 31 equivalent models, based on which the uncertainties of the inverted models, as well as of the volume and mass estimates, are quantified.
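A minimal sketch, in Python, of the ensemble workflow described above: sample the p-norm values at random, run one inversion per sample, and summarize the resulting suite of equivalent models cell by cell. The inversion routine here is a hypothetical stand-in, not the authors' code, and the ensemble and mesh sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, n_cells = 31, 10_000        # ensemble size and mesh size (assumed)

def run_mixed_lp_inversion(p_small, p_smooth):
    """Hypothetical stand-in for a 3D mixed Lp-norm potential-field inversion
    accepting separate p values for the smallness and smoothness components;
    here it only returns a random model so the sketch runs end to end."""
    return rng.normal(size=n_cells)

models = np.empty((n_models, n_cells))
for i in range(n_models):
    p_s = rng.uniform(0.0, 2.0)       # p for the smallness term, 0 <= p <= 2
    p_x = rng.uniform(0.0, 2.0)       # p for the smoothness terms
    models[i] = run_mixed_lp_inversion(p_s, p_x)

# Cell-wise uncertainty measures over the equivalent-model suite.
std = models.std(axis=0)                           # standard deviation
q1, q3 = np.percentile(models, [25, 75], axis=0)
iqr = q3 - q1                                      # interquartile range
```

Cells with a large standard deviation or interquartile range are those whose recovered values depend strongly on the choice of p, and hence are the least constrained by the data.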

2021 ◽  
Vol 11 (14) ◽  
pp. 6485
Author(s):  
Tao Song ◽  
Xing Hu ◽  
Wei Du ◽  
Lianzheng Cheng ◽  
Tiaojie Xiao ◽  
...  

As a popular population-based heuristic evolutionary algorithm, differential evolution (DE) has been widely applied to various science and engineering problems. Like other global nonlinear algorithms, such as the genetic algorithm, simulated annealing, and particle swarm optimization, DE is mostly applied to the parametric inverse problem and has seen few applications in physical property inversion. To our knowledge, this is the first time DE has been applied to obtain the physical property distribution from gravity data due to causative sources embedded in the subsurface. In this work, the search direction of DE is guided by better vectors, which enhances the exploration efficiency of the mutation strategy. In addition, to reduce the excessive stochasticity of the DE algorithm, the perturbation directions in the mutation operations are smoothed using a weighted moving average, and an Lp-norm regularization term is implemented to sharpen the boundary of the density distribution. Meanwhile, during the DE search, the weight of the Lp-norm regularization term is controlled adaptively, so that the term retains an influence relative to the data misfit function throughout the inversion. In the synthetic anomaly case, both noise-free and noisy data sets are considered. For the field case, gravity anomalies originating from the Shihe iron ore deposit in China were inverted and interpreted. The reconstructed density distribution is in good agreement with that inferred from drill-hole information. Based on the tests in the present study, one can conclude that Lp-norm inversion using DE is a useful tool for recovering physical property distributions from gravity anomalies.
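A minimal sketch of the three DE ingredients described above (guided mutation, moving-average smoothing of the perturbation, and an adaptively weighted Lp term), under assumed simplifications: a linear gravity forward operator, greedy selection, no crossover, and illustrative parameter values. It is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pop, n_cells, n_data = 40, 50, 20
F_scale, p = 0.6, 1.0                             # DE scale factor, Lp exponent

G = rng.normal(size=(n_data, n_cells))            # assumed linear forward operator
m_true = (rng.random(n_cells) > 0.8).astype(float)
d_obs = G @ m_true                                # synthetic observed data

def misfit(m):
    r = G @ m - d_obs
    return r @ r

def objective(m, beta):
    return misfit(m) + beta * np.sum(np.abs(m) ** p)

def smooth(v, w=(0.25, 0.5, 0.25)):
    # Weighted moving average smoothing of the perturbation direction.
    return np.convolve(v, w, mode="same")

pop = rng.random((n_pop, n_cells))
fit = np.array([misfit(m) for m in pop])
for _ in range(300):
    # Adaptive weight: keep the Lp term comparable to the current misfit level.
    reg = np.mean(np.sum(np.abs(pop) ** p, axis=1))
    beta = 0.1 * np.median(fit) / (reg + 1e-12)
    for i in range(n_pop):
        better = pop[fit <= np.median(fit)]       # guide mutation by better vectors
        guide = better[rng.integers(len(better))]
        a, b = pop[rng.choice(n_pop, 2, replace=False)]
        step = smooth(F_scale * (guide - pop[i]) + F_scale * (a - b))
        trial = np.clip(pop[i] + step, 0.0, 1.0)
        if objective(trial, beta) < objective(pop[i], beta):
            pop[i], fit[i] = trial, misfit(trial)

best = pop[np.argmin(fit)]                        # recovered density distribution
```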


Geophysics ◽  
2014 ◽  
Vol 79 (1) ◽  
pp. IM1-IM9 ◽  
Author(s):  
Nathan Leon Foks ◽  
Richard Krahenbuhl ◽  
Yaoguo Li

Compressive inversion uses computational algorithms that decrease the time and storage needs of a traditional inverse problem. Most compression approaches focus on the model domain; very few, other than traditional downsampling, focus on the data domain for potential-field applications. To further compression in the data domain, a direct and practical approach to the adaptive downsampling of potential-field data for large inversion problems has been developed. The approach is formulated to significantly reduce the quantity of data in relatively smooth or quiet regions of the data set, while preserving the signal anomalies that contain the relevant target information. Two major benefits arise from this form of compressive inversion. First, because the approach compresses the problem in the data domain, it can be applied immediately, without the addition of, or modification to, existing inversion software. Second, because most industry software uses some form of model or sensitivity compression, the addition of this adaptive data sampling creates a complete compressive inversion methodology in which computational cost is reduced simultaneously in the model and data domains. We applied the method to a synthetic magnetic data set and two large field magnetic data sets; however, the method is also applicable to other data types. Our results showed that the relevant model information is maintained after inversion despite using only 1%–5% of the data.
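A minimal sketch of one way such adaptive downsampling could work, assuming gridded data and a simple gradient-based criterion for what counts as signal; the authors' actual selection rule is not reproduced here, and the thresholds are illustrative.

```python
import numpy as np

def adaptive_downsample(data, keep_frac_quiet=0.02, grad_percentile=90):
    """Return a boolean mask of retained points for a 2D gridded data array."""
    gy, gx = np.gradient(data)
    grad = np.hypot(gx, gy)                       # local anomaly-strength proxy
    keep = grad >= np.percentile(grad, grad_percentile)  # keep anomalous points
    quiet = ~keep
    rand = np.random.default_rng(0).random(data.shape)
    keep |= quiet & (rand < keep_frac_quiet)      # sparse sample of quiet areas
    return keep

# Example: a synthetic grid with one compact anomaly.
y, x = np.mgrid[-50:50, -50:50]
grid = 100.0 * np.exp(-(x**2 + y**2) / 200.0)
mask = adaptive_downsample(grid)
print(f"retained {mask.mean():.1%} of the data")
```

With a stricter percentile and a smaller quiet-region fraction, the retained share can approach the 1%–5% reported above.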


2021 ◽  
pp. 1-22
Author(s):  
Xu Guo ◽  
Zongliang Du ◽  
Chang Liu ◽  
Shan Tang

Abstract In the present paper, a new uncertainty-analysis-based framework for data-driven computational mechanics (DDCM) is established. Compared with its classical counterpart, the distinctive feature of this framework is that uncertainty analysis is introduced explicitly into the problem formulation. Instead of focusing on a single solution in phase space, a solution set is sought in order to account for the influence of the multi-source uncertainties associated with the data set on the data-driven solutions. An illustrative example shows that the proposed framework not only is conceptually new but also has the potential to circumvent the intrinsic numerical difficulties of the classical DDCM framework.
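For context, a minimal sketch of the classical DDCM fixed-point scheme that the proposed framework extends, for the simplest statically indeterminate system (two parallel bars of unit length and area sharing a load); the material data set, metric weight, and load are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
C = 1.0                                   # metric weight (pseudo-stiffness)
F = 1.5                                   # applied load shared by the two bars
eps = np.linspace(0.0, 2.0, 101)
sig = np.tanh(eps) + 0.02 * rng.normal(size=eps.size)   # noisy material data
data = np.column_stack([eps, sig])

def nearest(e, s):
    # Nearest data point in the energy-like metric C*de^2 + ds^2/C.
    d = C * (data[:, 0] - e) ** 2 + (data[:, 1] - s) ** 2 / C
    return data[np.argmin(d)]

states = data[rng.integers(len(data), size=2)]          # random initial picks
for _ in range(50):
    e_star, s_star = states[:, 0], states[:, 1]
    u = e_star.mean()                                   # compatibility: eps1 = eps2
    stress = s_star + (F - s_star.sum()) / 2.0          # equilibrium: sig1 + sig2 = F
    new = np.array([nearest(u, s) for s in stress])     # snap to the data set
    if np.allclose(new, states):                        # fixed point reached
        break
    states = new

print(f"strain={u:.3f}, stresses={stress.round(3)}, sum={stress.sum():.3f}")
```

The classical scheme returns this single fixed-point solution; the framework proposed in the paper instead characterizes a set of such solutions consistent with the uncertain data.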


2019 ◽  
Author(s):  
Matthew Gard ◽  
Derrick Hasterok ◽  
Jacqueline Halpin

Abstract. Dissemination and collation of geochemical data are critical to promote rapid, creative, and accurate research and to place new results in an appropriate global context. To this end, we have assembled a global whole-rock geochemical database, with other associated sample information and properties, sourced from various existing databases and supplemented with numerous individual publications and corrections. Currently the database stands at 1,023,490 samples with varying amounts of associated information, including major and trace element concentrations, isotopic ratios, and location data. The distribution is quite heterogeneous both spatially and temporally; however, temporal coverage is improved over some previous database compilations, particularly for ages older than ~1000 Ma. Also included are a wide range of computed geochemical indices, physical property estimates, and naming schemes based on a major-element-normalized version of the geochemical data for quick reference. This compilation will be useful for geochemical studies requiring extensive data sets, in particular those investigating secular temporal trends. The addition of physical properties, estimated from sample chemistry, represents a unique contribution relative to otherwise similar geochemical databases. The data are published in .csv format for simple distribution but are structured so that they can be loaded into database management systems (e.g., SQL). One can either manipulate the data using conventional analysis tools such as MATLAB®, Microsoft® Excel, or R, or upload them to a relational database management system for easy querying and management, as unique keys already exist. This data set will continue to grow, and we encourage readers to contact us, or the other database compilations drawn upon here, about any data yet to be included. The data files described in this paper are available at https://doi.org/10.5281/zenodo.2592823 (Gard et al., 2019).
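A minimal sketch of how such a compilation might be queried in practice; the file name and column names below are assumptions for illustration, not the database's actual schema.

```python
import pandas as pd

# Hypothetical file and column names; the actual schema is defined by the
# published data files at the DOI above.
df = pd.read_csv("geochem_database.csv")

# Example query: samples older than 2500 Ma with SiO2 between 45 and 55 wt%.
subset = df[(df["age"] > 2500) & df["sio2"].between(45, 55)]
print(len(subset), "matching samples")

# Because unique keys exist, the same file can also be loaded into an RDBMS:
#   CREATE TABLE samples (sample_id BIGINT PRIMARY KEY, ...);
#   \copy samples FROM 'geochem_database.csv' CSV HEADER   -- PostgreSQL
```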


Author(s):  
Leonid Gutkin ◽  
Suresh Datla ◽  
Christopher Manu

Canadian Nuclear Standard CSA N285.8, "Technical requirements for in-service evaluation of zirconium alloy pressure tubes in CANDU® reactors" (1), permits the use of probabilistic methods when assessments of the reactor core are performed. A non-mandatory annex has been proposed for inclusion in the CSA Standard N285.8 to provide guidelines for performing uncertainty analysis in probabilistic fitness-for-service evaluations within the scope of this Standard, such as the probabilistic evaluation of leak-before-break.

The proposed annex outlines the general approach to uncertainty analysis as comprising the following major activities: identification of influential variables, characterization of uncertainties in influential variables, and subsequent propagation of these uncertainties through the evaluation framework or code. The proposed methodology distinguishes between two types of non-deterministic variables by the method used to obtain their best estimate. Uncertainties are classified by their source, and different uncertainty components are considered when the best estimates for the variables of interest are obtained using calibrated parametric models or analyses and when these estimates are obtained using statistical models or analyses.

The application of the proposed guidelines for uncertainty analysis was exercised by performing a pilot study for one of the evaluations within the scope of the CSA Standard N285.8, the probabilistic evaluation of leak-before-break based on a postulated through-wall crack. The pilot study was performed for a representative CANDU reactor unit using the recently developed software code P-LBB, which complies with the requirements of Canadian Nuclear Standard CSA N286.7 for quality assurance of analytical, scientific, and design computer programs for nuclear power plants. This paper discusses the approaches used and the results obtained in the second stage of this pilot study, the uncertainty characterization of influential variables identified as discussed in the companion paper presented at the PVP 2018 Conference (PVP2018-85010).

In the proposed methodology, statistical assessment and expert judgment are recognized as two complementary approaches to uncertainty characterization. In this pilot study, the uncertainty characterization was limited to cases where statistical assessment could be used as the primary approach. Parametric uncertainty and uncertainty due to numerical solutions were considered as the uncertainty components for variables represented by parametric models. Residual uncertainty and uncertainty due to imbalances in the model-basis data set were considered as the uncertainty components for variables represented by statistical models. In general, the uncertainty due to numerical solutions was found to be substantially smaller than the parametric uncertainty for variables represented by parametric models, and the uncertainty due to imbalances in the model-basis data set was found to be substantially smaller than the residual uncertainty for variables represented by statistical models.
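A minimal sketch of the kind of forward uncertainty propagation the proposed annex describes: characterize influential variables with distributions and propagate them through the evaluation by Monte Carlo sampling. The variables, distributions, and exceedance criterion below are purely illustrative and are not those of CSA N285.8 or the P-LBB code.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000                                    # Monte Carlo sample size

# Variable from a calibrated parametric model: best estimate plus parametric
# uncertainty (all values assumed for illustration).
crack_length = rng.normal(loc=35.0, scale=3.0, size=n)      # mm
# Variable from a statistical model: best estimate plus residual uncertainty.
critical_length = rng.normal(loc=50.0, scale=5.0, size=n)   # mm
# Numerical-solution uncertainty, typically small per the study's findings.
numerical = rng.normal(loc=0.0, scale=0.2, size=n)

margin = critical_length - (crack_length + numerical)
p_exceed = np.mean(margin <= 0.0)
print(f"estimated exceedance probability: {p_exceed:.2e}")
```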

