Estimating the nature and the horizontal and vertical positions of 3D magnetic sources using Euler deconvolution

Geophysics ◽  
2013 ◽  
Vol 78 (6) ◽  
pp. J87-J98 ◽  
Author(s):  
Felipe F. Melo ◽  
Valeria C. F. Barbosa ◽  
Leonardo Uieda ◽  
Vanderlei C. Oliveira Jr. ◽  
João B. C. Silva

We have developed a new method that drastically reduces the number of source location estimates in Euler deconvolution to only one per anomaly. Our method expresses the analytical estimators of the base level and of the horizontal and vertical source positions in Euler deconvolution as functions of the x- and y-coordinates of the observations. By assuming any tentative structural index (defining the geometry of the sources), our method automatically locates plateaus on the maps of the horizontal coordinate estimates, indicating consistent estimates that are very close to the true corresponding coordinates. These plateaus lie in the neighborhood of the highest values of the anomaly and contrast with the estimates that form inclined planes at the anomaly borders. The plateaus are located automatically by fitting a first-degree polynomial to the horizontal coordinate estimates in a moving-window scheme spanning all estimates; the positions where the angular-coefficient estimates are closest to zero identify the plateaus. The sample means of the horizontal coordinate estimates over each plateau are the best horizontal location estimates. After mapping each plateau, our method takes as the best structural index the one that yields the minimum correlation between the total-field anomaly and the estimated base level over that plateau. Using the estimated structural index for each plateau, our approach extracts the vertical coordinate estimates over the corresponding plateau; the sample means of these estimates are the best depth location estimates. When applied to synthetic data, our method yielded good results for bodies producing weakly and moderately interfering anomalies. A test on real data over intrusions in the Goiás Alkaline Province, Brazil, retrieved sphere-like sources suggesting 3D bodies.
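
A minimal sketch of the plateau-detection step described above, assuming the horizontal-coordinate estimates have already been gridded; the window size, tolerance, and function name are illustrative, not the authors' code:

```python
import numpy as np

def find_plateaus(est_map, win=5, tol=1e-3):
    """Flag grid nodes where a first-degree polynomial (plane) fitted to the
    horizontal-coordinate estimates in a moving window has near-zero slope.

    est_map : 2D array of x0 (or y0) estimates from Euler deconvolution
    win     : moving-window size in grid nodes (odd)
    tol     : threshold on the angular (slope) coefficients
    """
    ny, nx = est_map.shape
    half = win // 2
    plateau = np.zeros_like(est_map, dtype=bool)
    # design matrix for a plane a + b*i + c*j over the window
    jj, ii = np.meshgrid(np.arange(win), np.arange(win))
    A = np.column_stack([np.ones(win * win), ii.ravel(), jj.ravel()])
    for i in range(half, ny - half):
        for j in range(half, nx - half):
            d = est_map[i - half:i + half + 1, j - half:j + half + 1].ravel()
            coef, *_ = np.linalg.lstsq(A, d, rcond=None)
            # plateau: both angular coefficients close to zero
            if abs(coef[1]) < tol and abs(coef[2]) < tol:
                plateau[i, j] = True
    return plateau

# the best horizontal location is then the sample mean of the estimates
# over each connected plateau region, e.g. est_map[plateau].mean()
```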

Geophysics ◽  
2018 ◽  
Vol 83 (6) ◽  
pp. J87-J98 ◽  
Author(s):  
Felipe F. Melo ◽  
Valéria C. F. Barbosa

In most applications, Euler deconvolution aims to define the nature (type) of the geologic source (i.e., the structural index [SI]) and its depth position. However, Euler deconvolution also estimates the horizontal positions of the sources and the base level of the magnetic anomaly. To determine the correct SI, most authors take advantage of the clustering of depth estimates. We have analyzed Euler's equation to show that random variables contaminating the magnetic observations and their gradients affect the base-level estimates if, and only if, the SI is not assumed correctly. Grounded in this theoretical analysis and assuming a set of tentative SIs, we have developed a new criterion for determining the correct SI by means of the minimum standard deviation of the base-level estimates. We performed synthetic tests simulating multiple magnetic sources with different SIs. To produce moderately and strongly interfering synthetic magnetic anomalies, we added constant and nonlinear backgrounds to the anomalies and moved the simulated sources closer together laterally. If the magnetic anomalies are weakly interfering, the minimum standard deviations of either the depth or the base-level estimates can be used to determine the correct SI. However, if the magnetic anomalies are strongly interfering, only the minimum standard deviation of the base-level estimates determines the SI correctly. These tests also show that Euler deconvolution does not require that the magnetic data be corrected for regional fields (e.g., the International Geomagnetic Reference Field [IGRF]). Tests on real data from part of the Goiás Alkaline Province, Brazil, confirm the potential of the minimum standard deviation of base-level estimates for determining the SIs of the sources by applying Euler deconvolution either to total-field measurements or to the total-field anomaly (corrected for IGRF). Our result suggests three plug intrusions giving rise to the Diorama anomaly and dipole-like sources yielding the Arenópolis and Montes Claros de Goiás anomalies.
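
To make the criterion concrete, here is a minimal sketch of classic moving-window Euler deconvolution followed by the minimum-standard-deviation test on the base-level estimates; the window bookkeeping and the list of tentative SIs are assumptions for illustration:

```python
import numpy as np

def euler_window(x, y, z, T, Tx, Ty, Tz, eta):
    """Solve Euler's equation by least squares in one data window for a
    tentative structural index eta; returns [x0, y0, z0, base_level]."""
    A = np.column_stack([Tx, Ty, Tz, eta * np.ones_like(T)])
    rhs = x * Tx + y * Ty + z * Tz + eta * T
    p, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return p

def best_si(windows, tentative_sis=(1.0, 2.0, 3.0)):
    """Choose the SI minimizing the standard deviation of the base-level
    estimates over all moving windows. Each window is a tuple of arrays
    (x, y, z, T, Tx, Ty, Tz) for the observations inside it."""
    stds = {}
    for eta in tentative_sis:
        b = [euler_window(*w, eta)[3] for w in windows]
        stds[eta] = np.std(b)
    return min(stds, key=stds.get), stds
```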


2017 ◽  
Author(s):  
Felipe Ferreira de Melo ◽  
Valeria Cristina Ferreira Barbosa

2021 ◽  
Vol 13 (22) ◽  
pp. 4713
Author(s):  
Jean-Emmanuel Deschaud ◽  
David Duque ◽  
Jean Pierre Richa ◽  
Santiago Velasco-Forero ◽  
Beatriz Marcotegui ◽  
...  

Paris-CARLA-3D is a dataset of several dense colored point clouds of outdoor environments built by a mobile LiDAR and camera system. The data comprise two sets: synthetic data produced with the open source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One advantage of this dataset is that the LiDAR and camera platform simulated in the open source CARLA simulator is the same as the one used to produce the real data. In addition, manual annotation of the classes using the semantic tags of CARLA was performed on the real data, allowing the testing of transfer methods from synthetic to real data. The objective of this dataset is to provide a challenging benchmark for evaluating and improving methods on difficult vision tasks for the 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as the experiments carried out to establish a baseline.


2020 ◽  
Vol 224 (3) ◽  
pp. 1505-1522
Author(s):  
Saeed Parnow ◽  
Behrooz Oskooi ◽  
Giovanni Florio

We define a two-step procedure to obtain reliable inverse models of the distribution of electrical conductivity at depth from apparent conductivities estimated by electromagnetic instruments such as the GEONICS EM38, EM31, or EM34-3. The first step of our procedure consists of correcting the apparent conductivities to make them consistent with the low-induction-number condition, under which these data closely approximate the true conductivity. We then use a linear inversion approach to obtain a conductivity model. To improve the conductivity estimation at depth, we introduce a depth-weighting function in our regularized weighted minimum-length solution algorithm. We test the whole procedure on two synthetic data sets generated with COMSOL Multiphysics for both the vertical magnetic dipole and horizontal magnetic dipole loop configurations. Our technique was also tested on a real data set, and the inversion result was compared with the one obtained using the dipole-dipole DC electrical resistivity (ER) method. Our model not only reproduces all the shallow conductive areas of the ER model but also succeeds in replicating its deeper conductivity structures. In contrast, inversion of the uncorrected data yields a biased model that underestimates the true conductivity.
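
A minimal numpy sketch of the second step, a regularized weighted minimum-length solution with depth weighting; the power-law form of the weighting function and the parameter values are assumptions for illustration, not necessarily the authors' exact choices:

```python
import numpy as np

def depth_weighted_min_length(G, d, z_cells, z0=1.0, beta=2.0, lam=1e-3):
    """Regularized weighted minimum-length solution with depth weighting.

    G       : (n_data, n_cells) linear forward operator (valid under the
              low-induction-number approximation, so data ~ G @ sigma)
    d       : corrected apparent-conductivity data
    z_cells : (n_cells,) depth of each model cell
    z0, beta: depth-weighting parameters (illustrative values)
    lam     : damping (regularization) parameter
    """
    # depth weighting: shallow cells are penalized more, so the inversion
    # is allowed to place conductivity at depth instead of concentrating
    # it near the surface where the sensitivities are largest
    w2 = (z_cells + z0) ** (-beta)     # diagonal of the weighting matrix W
    Winv = 1.0 / w2                    # diagonal of W^{-1}
    GW = G * Winv                      # equals G @ diag(W^{-1})
    A = GW @ G.T + lam * np.eye(len(d))
    # m = W^{-1} G^T (G W^{-1} G^T + lam I)^{-1} d
    return Winv * (G.T @ np.linalg.solve(A, d))
```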


Geophysics ◽  
1999 ◽  
Vol 64 (1) ◽  
pp. 48-60 ◽  
Author(s):  
Valéria C. F. Barbosa ◽  
João B. C. Silva ◽  
Walter E. Medeiros

Euler deconvolution has been widely used in automatic aeromagnetic interpretations because it requires no prior knowledge of the source magnetization direction and assumes no particular interpretation model, provided the structural index, which defines the anomaly falloff rate related to the nature of the magnetic source, is determined in advance. Estimating the correct structural index and choosing optimum criteria for selecting candidate solutions are two fundamental requirements for a successful application of this method. We present a new criterion for determining the structural index. This criterion is based on the correlation between the total-field anomaly and the estimates of an unknown base level, obtained for each position of a moving data window along the observed profile and for several tentative values of the structural index. The tentative value producing the smallest correlation is the best estimate of the correct structural index. We also propose a new criterion to select the best solutions from a set of previously computed candidate solutions, each associated with a particular position of the moving data window. A current criterion is to select only those candidates producing a standard deviation of the vertical source position smaller than a threshold value. We propose that, in addition to this criterion, only those candidates producing the best fit to the known quantities (combinations of the anomaly and its gradients) be selected. The proposed modifications to Euler deconvolution can be implemented easily in an automated algorithm for locating the source position. The above results are grounded in a theoretical uniqueness and stability analysis, also presented in this paper, for the joint estimation of the source position, the base level, and the structural index in Euler deconvolution. This analysis also reveals that the vertical position and the structural index of the source cannot be estimated simultaneously because they are linearly dependent; the horizontal position and the structural index, on the other hand, are linearly independent. For a known structural index, estimates of both the horizontal and vertical positions are unique and stable, regardless of its value. If this value is not too small, estimates of the base level for the total field are stable as well. The proposed modifications to Euler deconvolution were tested both on synthetic and real magnetic data. In the case of synthetic data, the proposed criterion always detected the correct structural index, and good estimates of the source position were obtained, suggesting that the present theoretical analysis may lead to a substantial enhancement in practical applications of Euler deconvolution. In the case of practical data (a vertical-component anomaly over an iron deposit in the Kursk district, Russia), the estimated structural index (corresponding to a vertical prism) was in accordance with the known geology of the deposit, and the estimates of the depth and horizontal position of the source compared favorably with results reported in the literature.
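
For reference, Euler's homogeneity equation, on which all of the above rests, relates a total-field anomaly T observed at (x, y, z) to the source position (x0, y0, z0), the base level b, and the structural index η; in its standard form:

    (x - x0) ∂T/∂x + (y - y0) ∂T/∂y + (z - z0) ∂T/∂z = η (b - T)

Within each moving data window this is one linear equation per observation in the unknowns x0, y0, z0, and b (for a fixed tentative η), which is why the position and base-level estimates discussed above can be obtained by simple least squares.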


Geophysics ◽  
2016 ◽  
Vol 81 (1) ◽  
pp. W1-W12 ◽  
Author(s):  
Renato R. S. Dantas ◽  
Walter E. Medeiros

The key aspect limiting resolution in crosswell traveltime tomography is illumination, a well-known but rarely exemplified result. We have revisited resolution in the 2D case using a simple geometric approach based on the angular aperture distribution and the properties of the Radon transform. We found analytically that if an isolated interface has dips contained within the angular aperture limits, it can be reconstructed using just one particular projection. By inversion of synthetic data, we found that a slowness field can be approximately reconstructed from a set of projections if the interfaces delimiting the slowness field have dips contained within the available angular apertures. On the one hand, isolated artifacts might be present when the dip is near the illumination limit. On the other hand, in the inverse sense, if an interface is interpretable from a tomogram, there is no guarantee that it corresponds to a true interface. Similarly, if a body is present in the interwell region, it is diffusely imaged, but its interfaces, particularly vertical edges, cannot be resolved, and additional artifacts might be present. Again, in the inverse sense, there is no guarantee that an isolated anomaly corresponds to a true anomalous body, because this anomaly could be an artifact. These results are typical of ill-posed inverse problems: there is no guarantee of correspondence to the true distribution. The limitations due to illumination may not be overcome by the use of constraints: crosswell tomograms derived with sparsity constraints, using the discrete cosine transform and Daubechies bases, essentially reproduce the same features seen in tomograms obtained with the smoothness constraint. Interpretation must take into consideration a priori information and the particular limitations due to illumination, as we demonstrate with a real-data case.
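
As a small illustration of the geometric argument, the straight-ray angular aperture for a simple crosswell layout can be computed directly; the geometry below is a made-up example, not the paper's configuration:

```python
import numpy as np

# crosswell geometry: sources in a well at x = 0, receivers at x = L;
# the angular aperture is the range of straight-ray dips that can be probed
L = 100.0                              # interwell distance (m)
z_src = np.linspace(0.0, 200.0, 21)    # source depths (m)
z_rec = np.linspace(0.0, 200.0, 21)    # receiver depths (m)

dz = z_rec[None, :] - z_src[:, None]   # vertical offset of every ray
dips = np.degrees(np.arctan2(dz, L))   # straight-ray dip angles

print(f"angular aperture: [{dips.min():.1f}, {dips.max():.1f}] degrees")
# interfaces with dips outside this interval are poorly illuminated and,
# per the analysis above, cannot be resolved whatever constraint is used
```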


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be rotated 180 degrees, so that the image must be turned before the text can be read. This type of text can be found on the covers of a variety of books, so when recognizing covers it is first necessary to determine the orientation of the text before recognizing the text itself. The article proposes a deep neural network for determining the text orientation in the context of book cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
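
A minimal sketch of such a binary orientation classifier, written here in PyTorch; the architecture is illustrative, since the article does not specify the exact network used:

```python
import torch
import torch.nn as nn

class OrientationNet(nn.Module):
    """Illustrative binary classifier: is the text upright or rotated 180°?"""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # one logit: upright vs. rotated

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# synthetic training pairs are cheap to generate: render text crops, then
# create the second class with torch.rot90(img, 2, dims=(-2, -1)) and train
# with nn.BCEWithLogitsLoss()
```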


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Background: Three-way data have gained popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed on real data without a known ground truth, which limits the assessment. In this context, we propose G-Tric, a synthetic data generator allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social domains, with the additional advantage of providing the ground truth (the triclustering solution) as output. Results: G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters. Conclusions: Triclustering evaluation using G-Tric makes it possible to combine intrinsic and extrinsic metrics to compare solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the evaluation of new triclustering approaches.
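
The core idea of planting a tricluster can be illustrated in a few lines of numpy; this is not G-Tric's API, only the underlying mechanism of generating a background tensor together with a known ground-truth subspace:

```python
import numpy as np

rng = np.random.default_rng(0)

# background: a 50 x 30 x 10 (observations x features x contexts) tensor
data = rng.normal(0.0, 1.0, size=(50, 30, 10))

# plant a constant-pattern tricluster on a random subspace of each dimension
obs = rng.choice(50, size=8, replace=False)
feat = rng.choice(30, size=5, replace=False)
ctx = rng.choice(10, size=3, replace=False)
data[np.ix_(obs, feat, ctx)] = 4.0 + rng.normal(0.0, 0.1, size=(8, 5, 3))

# the ground-truth triclustering solution is simply (obs, feat, ctx), which
# a generator like G-Tric returns alongside the dataset so that extrinsic
# (match-to-ground-truth) metrics can be computed
```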


2021 ◽  
Vol 40 (3) ◽  
pp. 1-12
Author(s):  
Hao Zhang ◽  
Yuxiao Zhou ◽  
Yifei Tian ◽  
Jun-Hai Yong ◽  
Feng Xu

Reconstructing hand-object interactions is a challenging task due to strong occlusions and complex motions. This article proposes a real-time system that uses a single depth stream to simultaneously reconstruct hand poses, object shape, and rigid/non-rigid motions. To achieve this, we first train a joint learning network to segment the hand and object in a depth image and to predict the 3D keypoints of the hand. With most layers shared by the two tasks, computation cost is reduced, preserving real-time performance. A hybrid dataset is constructed to train the network with real data (to learn real-world distributions) and synthetic data (to cover variations of objects, motions, and viewpoints). Next, the depths of the two targets and the keypoints are used in a uniform optimization to reconstruct the interacting motions. Benefiting from a novel tangential contact constraint, the system not only resolves the remaining ambiguities but also keeps real-time performance. Experiments show that our system handles different hand and object shapes, various interactive motions, and moving cameras.
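
A minimal sketch of one plausible form of a tangential contact term, penalizing slipping at contact points while leaving motion along the surface normal free; this illustrates the general idea only, not the paper's exact formulation:

```python
import numpy as np

def tangential_contact_energy(v_hand, v_obj, normals):
    """Penalize relative slipping at matched contact points: only the
    component of the relative velocity tangential to the object surface
    is penalized, so pressing along the normal remains unconstrained.

    v_hand, v_obj : (n, 3) velocities of matched contact points
    normals       : (n, 3) unit surface normals at the contacts
    """
    rel = v_hand - v_obj
    # remove the normal component: t = rel - (rel . n) n
    tangential = rel - np.sum(rel * normals, axis=1, keepdims=True) * normals
    return np.sum(tangential ** 2)
```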

