Self-constrained inversion of potential fields through a 3D depth weighting

Geophysics ◽  
2020 ◽  
Vol 85 (6) ◽  
pp. G143-G156
Author(s):  
Andrea Vitale ◽  
Maurizio Fedi

A new method for the inversion of potential fields is developed using a depth-weighting function specifically designed for fields related to complex source distributions. The weighting function is determined from an analysis of the field that precedes the inversion itself. The algorithm is self-consistent, meaning that the weighting used in the inversion is deduced directly from the scaling properties of the field. Hence, the algorithm consists of two steps: (1) estimation of the local homogeneity degree of the field in a 3D domain of the harmonic region and (2) inversion of the data using a weighting function with a 3D variable exponent. A multiscale data set is first formed by upward continuation of the original data. Local homogeneity and a multihomogeneous model are then assumed, and a system built on the scaling function is solved at each point of the multiscale data set, yielding a multiscale set of local homogeneity degrees of the field. The estimated homogeneity degree is then assigned to the model weighting function in the source volume. Tests on synthetic data show that the generalization of the depth weighting to a 3D function and the proposed two-step algorithm have great potential to improve the quality of the solution. The gravity field of a polyhedron is inverted, yielding a realistic reconstruction of the whole body, including its bottom surface. Inversion of a real aeromagnetic data set from the Mt. Vulture area also yields a good and geologically consistent reconstruction of the complex source distribution.
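
As an illustration of the idea, the sketch below assumes the familiar (z + z0)^(-beta/2) depth-weighting form with the exponent beta promoted to a per-cell 3D field estimated beforehand, inside a generic regularized minimum-length solver; the function names and the simple Tikhonov solve are illustrative, not the authors' algorithm.

```python
import numpy as np

def depth_weights(z, beta, z0=1.0):
    """Depth weighting with a spatially variable exponent: z and beta
    are flat arrays over the model cells (beta estimated in step 1)."""
    return (z + z0) ** (-beta / 2.0)

def weighted_min_length(G, d, w, lam=1e-2):
    """Minimize ||G m - d||^2 + lam ||W m||^2 with W = diag(w):
    substitute m = W^{-1} m_w and take the minimum-length solution."""
    Gw = G * (1.0 / w)[None, :]                     # G W^{-1}
    m_w = Gw.T @ np.linalg.solve(Gw @ Gw.T + lam * np.eye(len(d)), d)
    return m_w / w                                  # map back to model space
```

A spatially uniform beta recovers conventional depth weighting; the 3D exponent lets the regularization push sources to different depths in different parts of the volume.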

Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. E335-E346
Author(s):  
Lutz Mütschard ◽  
Ketil Hokstad ◽  
Torgeir Wiik ◽  
Bjørn Ursin

The measured electromagnetic field in magnetotellurics (MT) is composed of the natural source field and its subsurface response. Commonly, the data are represented as impedances, the complex ratios between the horizontal electric and magnetic fields. This measure is independent of the source distribution because the impedance-tensor estimation contains a deconvolution operator. We use a Gauss-Newton-type 3D MT inversion scheme to compare impedance-data inversion with an inversion that uses the recorded electric field directly. Using the observed electric field benefits the inversion algorithm because it simplifies the estimation of the sensitivities. The direct-field approach permits the use of the observed data without processing, but it presumes knowledge of the source distribution. A method to estimate the time-variable strength and polarization of the incoming plane-wave source is presented and tested on synthetic and real-data examples. The direct-field inversion is successfully applied to a synthetic and a real data set in marine settings, and a comparison with conventional impedance inversion is conducted. In the synthetic example the results are very similar, with a slightly more accurate reconstruction of the model in the impedance case and a smoother result in the direct-field case. In the real-data example, the mapping of a resistive salt structure reveals deviations between the final conductivity models: the impedance inversion suggests a more deeply rooted resistive structure, whereas the direct-field inversion predicts a more compact structure limited to the overburden. We evaluate the advantages of the new approach, such as the simplified sensitivity calculation, as well as its limitations and disadvantages, such as the required knowledge of the source distribution.
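
For context, the "deconvolution" in the impedance estimation amounts, at each frequency, to a least-squares regression of the horizontal electric field on the horizontal magnetic field across many samples; a minimal single-station sketch (robust and remote-reference estimation, standard in practice, is omitted):

```python
import numpy as np

def estimate_impedance(Ex, Ey, Hx, Hy):
    """Least-squares estimate of the 2x2 complex MT impedance tensor Z
    from N simultaneous field samples at one frequency, using
    [Ex, Ey]^T = Z [Hx, Hy]^T. All inputs are complex arrays of length N."""
    H = np.column_stack([Hx, Hy])                    # N x 2 design matrix
    Zx, *_ = np.linalg.lstsq(H, Ex, rcond=None)      # row [Zxx, Zxy]
    Zy, *_ = np.linalg.lstsq(H, Ey, rcond=None)      # row [Zyx, Zyy]
    return np.vstack([Zx, Zy])
```

The direct-field approach skips this ratio and inverts Ex and Ey themselves, which is why it must estimate the source strength and polarization explicitly.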


Geophysics ◽  
2017 ◽  
Vol 82 (6) ◽  
pp. S403-S409 ◽  
Author(s):  
Farzad Moradpouri ◽  
Ali Moradzadeh ◽  
Reynam Pestana ◽  
Reza Ghaedrahmati ◽  
Mehrdad Soleimani Monfared

Reverse time migration (RTM), as a full-wave-equation method, can image steeply dipping structures using all wave types without dip limitation. However, it produces a set of low-frequency artifacts that start to appear at reflection angles larger than 60°; these artifacts are a major concern in the RTM method. We formulate, for the first time, a scheme called the leapfrog rapid-expansion method to extrapolate the wavefields and their first derivatives. We then evaluate a new imaging condition, based on Poynting vectors, to suppress the RTM artifacts. The Poynting-vector information is used to separate the wavefields into their downgoing and upgoing components, which form the first part of our imaging condition. It is also used to calculate the reflection angles, which serve as the basis for the weighting function forming the second part of the imaging condition. The weighting function is designed to retain the desired reflectivity information while suppressing the artifacts in the angle range of 61°–90°. This is achieved by dividing that range into three subranges, 61°–70°, 71°–80°, and 81°–90°, with respective weights of [Formula: see text], [Formula: see text], and [Formula: see text]. Besides suppressing the artifacts, the weighting function also preserves the crosscorrelation contributions from true reflection points in the 61°–90° range. Finally, we tested the new RTM procedure on the BP synthetic model and a real data set from the North Sea. The results indicate that the procedure efficiently suppresses RTM artifacts and produces high-quality, well-illuminated depth-migrated images that include all steeply dipping geologic structures.
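
A sketch of the two Poynting-vector ingredients, assuming the standard acoustic definition S = -(du/dt) grad(u); the weights w1-w3 below are hypothetical placeholders for the paper's values, which survive in the abstract only as "[Formula: see text]".

```python
import numpy as np

def poynting(u_t, u_x, u_z):
    """Acoustic Poynting vector S = -du/dt * grad(u) at one grid point."""
    return -u_t * np.array([u_x, u_z])

def reflection_angle(S_src, S_rcv):
    """Half the opening angle (degrees) between the source- and
    receiver-side Poynting vectors."""
    c = S_src @ S_rcv / (np.linalg.norm(S_src) * np.linalg.norm(S_rcv) + 1e-30)
    return 0.5 * np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

def angle_weight(theta, w1=0.75, w2=0.5, w3=0.25):
    """Piecewise weight on the crosscorrelation image; w1-w3 stand in
    for the paper's weights over 61-70, 71-80, and 81-90 degrees."""
    if theta <= 60.0:
        return 1.0
    return w1 if theta <= 70.0 else (w2 if theta <= 80.0 else w3)
```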


2020 ◽  
Vol 224 (3) ◽  
pp. 1505-1522
Author(s):  
Saeed Parnow ◽  
Behrooz Oskooi ◽  
Giovanni Florio

SUMMARY We define a two-step procedure to obtain reliable inverse models of the distribution of electrical conductivity at depth from apparent conductivities estimated by electromagnetic instruments such as the GEONICS EM38, EM31 or EM34-3. The first step of our procedure is the correction of the apparent conductivities to make them consistent with a low-induction-number condition, under which these data closely approximate the true conductivity. We then use a linear inversion approach to obtain a conductivity model. To improve the conductivity estimation at depth, we introduce a depth-weighting function in our regularized, weighted minimum-length solution algorithm. We test the whole procedure on two synthetic data sets generated with COMSOL Multiphysics for both the vertical magnetic dipole and horizontal magnetic dipole loop configurations. The technique is also tested on a real data set, and the inversion result is compared with one obtained using the dipole-dipole DC electrical resistivity (ER) method. Our model not only reproduces all the shallow conductive areas seen in the ER model but also succeeds in replicating its deeper conductivity structures. In contrast, inversion of the uncorrected data provides a biased model that underestimates the true conductivity.
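
The linear inversion is possible because, at low induction numbers, each apparent-conductivity reading is a fixed depth-weighted average of the layer conductivities. A sketch of that forward model, assuming McNeill's (1980) cumulative depth-response functions (the paper's correction step and exact solver are not reproduced here):

```python
import numpy as np

# McNeill (1980) cumulative depth responses at low induction number;
# z is depth normalized by the coil separation.
def R_vmd(z):  # vertical magnetic dipole (horizontal coplanar loops)
    return 1.0 / np.sqrt(4.0 * z**2 + 1.0)

def R_hmd(z):  # horizontal magnetic dipole (vertical coplanar loops)
    return np.sqrt(4.0 * z**2 + 1.0) - 2.0 * z

def sensitivity_matrix(layer_tops, separations, R=R_vmd):
    """Rows = readings (one per coil separation), cols = layers, so that
    sigma_apparent = G @ sigma_layers for a layered half-space."""
    edges = np.append(layer_tops, 1e6)        # effectively infinite bottom
    G = np.empty((len(separations), len(layer_tops)))
    for i, s in enumerate(separations):
        Rz = R(edges / s)
        G[i] = Rz[:-1] - Rz[1:]               # contribution of each layer
    return G
```

With G in hand, the depth-weighted minimum-length solution proceeds as in any regularized linear inversion.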


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model called the exponentiated half-logistic Lomax distribution is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters are estimated by the maximum likelihood criterion, and the behaviour of these estimates is examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.
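
As an illustration of the maximum-likelihood fitting and simulation workflow, the sketch below assumes one common construction of the exponentiated half-logistic-G family applied to a Lomax baseline, F(x) = [(1 - s)/(1 + s)]^lam with s = (1 + x/b)^(-a); the paper's exact parameterization may differ.

```python
import numpy as np
from scipy.optimize import minimize

def rvs(n, lam, a, b, rng):
    """Inverse-CDF sampling from the assumed EHL-Lomax form."""
    v = rng.uniform(size=n) ** (1.0 / lam)
    return b * (((1.0 + v) / (1.0 - v)) ** (1.0 / a) - 1.0)

def neg_loglik(theta, x):
    lam, a, b = np.exp(theta)           # log-parameterization keeps all > 0
    s = (1.0 + x / b) ** (-a)
    logf = (np.log(2.0 * lam * a / b) - (a + 1.0) * np.log1p(x / b)
            + (lam - 1.0) * (np.log1p(-s) - np.log1p(s))
            - 2.0 * np.log1p(s))
    return -logf.sum()

rng = np.random.default_rng(0)
x = rvs(500, lam=2.0, a=1.5, b=1.0, rng=rng)
fit = minimize(neg_loglik, np.zeros(3), args=(x,), method="Nelder-Mead")
print("MLEs (lam, a, b):", np.exp(fit.x))   # should land near (2.0, 1.5, 1.0)
```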


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, the estimation of its probability density function and cumulative distribution function is considered using five different estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared through numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others and that, when the sample size is large enough, the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, the results are illustrated by analyzing a real data set.
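
A sketch of the Monte Carlo MSE comparison for one estimator, assuming the usual parameterization F(x) = 1 - (1 - e^(-lmb/x))^alpha of the generalized inverted exponential distribution; only the ML plug-in density estimator is shown, and the UMVU, LS, WLS, and PC variants would slot into the same loop.

```python
import numpy as np
from scipy.optimize import minimize

def pdf(x, alpha, lmb):
    """GIE density for the assumed parameterization."""
    e = np.exp(-lmb / x)
    return alpha * lmb / x**2 * e * (1.0 - e) ** (alpha - 1.0)

def rvs(n, alpha, lmb, rng):
    u = rng.uniform(size=n)
    return -lmb / np.log(1.0 - (1.0 - u) ** (1.0 / alpha))   # inverse CDF

def ml_fit(x):
    nll = lambda t: -np.sum(np.log(pdf(x, np.exp(t[0]), np.exp(t[1]))))
    return np.exp(minimize(nll, [0.0, 0.0], method="Nelder-Mead").x)

# Monte Carlo MSE of the ML plug-in density estimate at a point x0
rng = np.random.default_rng(1)
alpha, lmb, x0 = 2.0, 1.0, 1.0
est = [pdf(x0, *ml_fit(rvs(100, alpha, lmb, rng))) for _ in range(200)]
print("ML plug-in MSE:", np.mean((np.array(est) - pdf(x0, alpha, lmb)) ** 2))
```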


2019 ◽  
Vol 14 (2) ◽  
pp. 148-156
Author(s):  
Nighat Noureen ◽  
Sahar Fazal ◽  
Muhammad Abdul Qadir ◽  
Muhammad Tanvir Afzal

Background: Specific combinations of Histone Modifications (HMs), contributing to the histone code hypothesis, lead to various biological functions. HM combinations have been utilized by various studies to divide the genome into different regions, classified as chromatin states. Mostly Hidden Markov Model (HMM) based techniques have been utilized for this purpose, applied to data from Next Generation Sequencing (NGS) platforms. Chromatin states based on histone modification combinatorics are annotated by mapping them to functional regions of the genome, and the number of states predicted by the HMM tools has so far been justified only biologically. Objective: The present study aimed at providing a computational scheme to identify the number of underlying hidden states in the data under consideration. Methods: We propose a computational scheme, HCVS, based on a hierarchical clustering and visualization strategy. Results: We tested the proposed scheme on a real data set of nine cell types comprising nine chromatin marks. The approach successfully identified the state numbers for various possibilities, and the results show quite good correlation with one of the existing models. Conclusion: The HCVS model not only helps in deciding the optimal state number for a particular data set but also justifies the results biologically, thereby linking the computational and biological aspects.
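
The details of HCVS are the paper's own; as a generic illustration of the underlying idea, the sketch below hierarchically clusters a toy regions-by-marks matrix and scores candidate state numbers, with Ward linkage and the silhouette score standing in for the paper's clustering and visualization strategy.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 9)).astype(float)  # toy regions x 9 marks

Z = linkage(X, method="ward")            # full hierarchical clustering tree
for k in range(2, 12):                   # candidate numbers of chromatin states
    labels = fcluster(Z, t=k, criterion="maxclust")
    print(k, round(silhouette_score(X, labels), 3))
# A peak or plateau in the score suggests the underlying state number.
```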


2021 ◽  
Vol 13 (9) ◽  
pp. 1703
Author(s):  
He Yan ◽  
Chao Chen ◽  
Guodong Jin ◽  
Jindong Zhang ◽  
Xudong Wang ◽  
...  

The traditional constant false-alarm rate detection method is based on an assumed statistical model of the echo. Against a background of sea clutter and other interference, its target recognition accuracy is low and its false-alarm rate is high. Therefore, computer vision techniques have been widely explored to improve detection performance. However, the majority of studies have focused on synthetic aperture radar because of its high resolution; for defense radar, detection performance is unsatisfactory because of the low resolution. To this end, we propose a novel target detection method for coastal defense radar based on the faster region-based convolutional neural network (Faster R-CNN). The main processing steps are as follows: (1) Faster R-CNN is selected as the sea-surface target detector because of its high target detection accuracy; (2) the network is modified for the sparsity and small target size that characterize the data set; and (3) soft non-maximum suppression is exploited to eliminate possibly overlapping detection boxes. Detailed comparative experiments on a real coastal defense radar data set show that the mean average precision of the proposed method is improved by 10.86% compared with the original Faster R-CNN.
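
Soft non-maximum suppression, step (3), is a self-contained published algorithm (Bodla et al., 2017); a minimal Gaussian-decay sketch, independent of the authors' implementation:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thr=1e-3):
    """Decay the scores of boxes overlapping the current best box
    instead of discarding them outright; returns kept indices."""
    scores = np.asarray(scores, dtype=float).copy()
    idx, keep = list(range(len(scores))), []
    while idx:
        i = max(idx, key=lambda j: scores[j])
        keep.append(i)
        idx.remove(i)
        for j in idx:
            scores[j] *= np.exp(-iou(boxes[i], boxes[j]) ** 2 / sigma)
        idx = [j for j in idx if scores[j] > score_thr]
    return keep
```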


2021 ◽  
Vol 1978 (1) ◽  
pp. 012047
Author(s):  
Xiaona Sheng ◽  
Yuqiu Ma ◽  
Jiabin Zhou ◽  
Jingjing Zhou

2021 ◽  
pp. 1-11
Author(s):  
Velichka Traneva ◽  
Stoyan Tranev

Analysis of variance (ANOVA), developed by Fisher, is an important method in data analysis. There are situations in which the data are imprecise. To analyze such data, this paper introduces, for the first time, an intuitionistic fuzzy two-factor ANOVA (2-D IFANOVA) without replication, extending both the classical ANOVA and the one-way IFANOVA to the case where the data are intuitionistic fuzzy rather than real numbers. The proposed approach employs the apparatus of intuitionistic fuzzy sets (IFSs) and index matrices (IMs). The paper also analyzes a unique data set of daily ticket sales over one year in a multiplex of Cinema City Bulgaria, part of the Cineworld PLC Group, applying both the two-factor ANOVA and the proposed 2-D IFANOVA to study the influence of the "season" and "ticket price" factors. A comparative analysis of the results obtained from ANOVA and 2-D IFANOVA on this real data set is also presented.
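
For reference, the crisp two-factor ANOVA without replication that 2-D IFANOVA extends partitions the total sum of squares of an r x c table into row, column, and residual parts; a compact sketch:

```python
import numpy as np
from scipy import stats

def two_way_anova_no_rep(X):
    """Two-factor ANOVA without replication on an r x c table X
    (rows = levels of factor A, columns = levels of factor B)."""
    r, c = X.shape
    grand = X.mean()
    ss_a = c * ((X.mean(axis=1) - grand) ** 2).sum()   # factor A (rows)
    ss_b = r * ((X.mean(axis=0) - grand) ** 2).sum()   # factor B (columns)
    ss_e = ((X - grand) ** 2).sum() - ss_a - ss_b      # residual
    df_a, df_b, df_e = r - 1, c - 1, (r - 1) * (c - 1)
    f_a = (ss_a / df_a) / (ss_e / df_e)
    f_b = (ss_b / df_b) / (ss_e / df_e)
    return (f_a, stats.f.sf(f_a, df_a, df_e)), (f_b, stats.f.sf(f_b, df_b, df_e))
```

The paper's 2-D IFANOVA, roughly speaking, replaces the real-valued cells of X with intuitionistic fuzzy membership/non-membership pairs and this arithmetic with index-matrix operations.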


Genetics ◽  
1998 ◽  
Vol 149 (3) ◽  
pp. 1547-1555 ◽  
Author(s):  
Wouter Coppieters ◽  
Alexandre Kvasz ◽  
Frédéric Farnir ◽  
Juan-Jose Arranz ◽  
Bernard Grisart ◽  
...  

Abstract We describe the development of a multipoint nonparametric quantitative trait loci mapping method based on the Wilcoxon rank-sum test, applicable to outbred half-sib pedigrees. The method has been evaluated on a simulated data set and its efficiency compared with regression-based interval mapping. The rank-based approach is slightly inferior to regression when the residual variance is homoscedastic normal; however, in three of the four other scenarios envisaged, i.e., residual variance heteroscedastic normal, homoscedastic skewed, and homoscedastic positively kurtosed, the rank-based approach outperforms regression. Both methods were applied to a real data set to analyze the effect of bovine chromosome 6 on milk yield and composition, using a 125-cM map comprising 15 microsatellites and a granddaughter design counting 1158 Holstein-Friesian sires.
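
A sketch of the scan idea for a single half-sib family: at each map position, progeny are split by the inherited paternal haplotype and the two phenotype groups are compared with a Wilcoxon rank-sum test (the multipoint inheritance probabilities used in the actual method are omitted):

```python
import numpy as np
from scipy.stats import ranksums

def ranksum_scan(phenotypes, paternal_haplotype):
    """Nonparametric QTL scan for one sire family.
    phenotypes: length-n array of progeny trait values.
    paternal_haplotype: n_positions x n array of 0/1 flags marking
    which paternal haplotype each progeny inherited at each position."""
    out = []
    for row in paternal_haplotype:
        g0, g1 = phenotypes[row == 0], phenotypes[row == 1]
        out.append(ranksums(g0, g1).statistic)   # large |Z| suggests a QTL
    return np.array(out)
```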

