Gravity Data Inversion with Method of Local Corrections for Finite Elements Models

Geosciences ◽  
2018 ◽  
Vol 8 (10) ◽  
pp. 373 ◽  
Author(s):  
Petr Martyshko ◽  
Igor Ladovskii ◽  
Denis Byzov ◽  
Alexander Tsidaev

We present a new method of gravity data inversion for the linear problem (reconstruction of a density distribution from a given gravity field). It is an iterative algorithm based on the ideas of local minimization (also known as the local corrections method). Unlike gradient methods, it does not require nonlinear minimization, is easier to implement and has better stability. The algorithm is based on the finite element method: the medium (a part of the lithosphere) is represented as a set of equal rectangular prisms, each with constant density. We also suggest a time-efficient optimization that speeds up the inversion process. It is applied at the gravity field calculation stage, which is a part of every inversion iteration. The idea is to replace the repeated calculation of the gravity field of every finite element at every observation point with a pre-calculated set of fields, one for each distinct offset between a finite element and an observation point, which is possible for the current data set. The method is demonstrated on synthetic data and real-world cases. The case study area is located on the Timan-Pechora plate, one of the promising oil- and gas-producing regions of Russia. In this case we create a 3D density model using joint interpretation of seismic and gravity data.
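The optimization described above exploits translation invariance: on a regular grid of equal prisms, the field of a unit-density element depends only on the offset between element and observation point, so it can be tabulated once and reused. Below is a minimal 1-D sketch of that idea, using a hypothetical point-mass kernel in place of the exact rectangular-prism formula the paper relies on:

```python
import numpy as np

# Hypothetical point-mass kernel standing in for the prism response;
# the paper uses the exact rectangular-prism gravity formula instead.
def kernel(dx, dz=1.0):
    # vertical attraction of a unit source at horizontal offset dx, depth dz
    return dz / (dx**2 + dz**2) ** 1.5

nx = 64                                       # equal cells / observation points
rho = np.random.default_rng(0).random(nx)     # cell densities

# Naive approach: evaluate the kernel for every (station, cell) pair.
naive = np.array([sum(rho[j] * kernel(i - j) for j in range(nx))
                  for i in range(nx)])

# Optimized approach: the response depends only on the offset i - j,
# so tabulate it once for every possible offset and reuse the table.
offsets = np.arange(-(nx - 1), nx)
table = kernel(offsets)                       # pre-calculated set of fields
fast = np.array([np.dot(rho, table[i - np.arange(nx) + nx - 1])
                 for i in range(nx)])

assert np.allclose(naive, fast)
```

On a full 3-D grid the same table lookup turns the forward calculation into a correlation over the density grid, which removes the per-element kernel evaluations from every inversion iteration.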

2021 ◽  
Author(s):  
Mirko Scheinert ◽  
Philipp Zingerle ◽  
Theresa Schaller ◽  
Roland Pail ◽  
Martin Willberg

In the frame of the IAG Subcommission 2.4f “Gravity and Geoid in Antarctica” (AntGG), a first Antarctic-wide grid of ground-based gravity anomalies was released in 2016 (Scheinert et al. 2016). That data set was provided with a grid spacing of 10 km and covered about 73% of the Antarctic continent. Since then a considerable amount of new data has been made available, mainly collected by means of airborne gravimetry. Regions formerly void of any terrestrial gravity observations have now been surveyed, notably the polar data gap left by GOCE satellite gravimetry. Thus, it is timely to come up with an updated and enhanced regional gravity field solution for Antarctica. For this, we aim to improve several aspects in comparison to the AntGG 2016 solution: the grid spacing will be refined to 5 km; instead of covering only parts of Antarctica, the entire continent will be covered; and, in addition to the gravity anomaly, a regional geoid solution will be provided along with further desirable functionals (e.g. gravity anomaly vs. disturbance, different height levels).

We will discuss the expanded AntGG data base, which now includes terrestrial gravity data from Antarctic surveys conducted over the past 40 years. The methodology applied in the analysis is based on the remove-compute-restore technique. Here we utilize the newly developed combined spherical-harmonic gravity field model SATOP1 (Zingerle et al. 2019), which is based on the global satellite-only model GOCO05s and the high-resolution topographic model EARTH2014. We will demonstrate the feasibility of adequately reducing the original gravity data and, thus, of cross-validating and evaluating the accuracy of the data, especially where different data sets overlap. For the compute step the recently developed partition-enhanced least-squares collocation (PE-LSC) has been used (Zingerle et al. 2021, in review; cf. the contribution of Zingerle et al. in the same session). This method makes it possible to treat all data available in Antarctica in one single computation step in an efficient and fast way. Thus, it becomes feasible to iterate the computations within a short time once any input data or parameters are changed, and to easily predict the desired functionals in regions void of terrestrial measurements as well as at any height level (e.g. gravity anomalies at the surface or gravity disturbances at constant height).

We will discuss the results and give an outlook on the data products that shall finally be provided as the new regional gravity field solution for Antarctica. Furthermore, implications for further applications will be discussed, e.g. with respect to geophysical modelling of the Earth’s interior (cf. the contribution of Schaller et al. in session G4.3).
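The remove-compute-restore technique named above can be illustrated in a few lines. This is a toy 1-D sketch under loose assumptions: a synthetic sinusoid stands in for the SATOP1 reference synthesis, and a simple moving average stands in for the actual PE-LSC prediction step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D "survey": observed gravity = smooth reference model + local residual
x = np.linspace(0.0, 1.0, 50)
reference = 10.0 * np.sin(2 * np.pi * x)        # stands in for the global model
residual_true = 0.5 * np.cos(12 * np.pi * x)    # small local signal
observed = reference + residual_true + 0.01 * rng.standard_normal(x.size)

# 1) Remove: subtract the global reference model from the observations.
residual = observed - reference

# 2) Compute: grid/predict the small residual (a moving average stands in
#    for the least-squares collocation step used in the paper).
window = np.ones(5) / 5.0
residual_pred = np.convolve(residual, window, mode="same")

# 3) Restore: add the reference model back at the prediction points.
predicted = residual_pred + reference

rms = np.sqrt(np.mean((predicted - observed) ** 2))
print(f"RMS misfit after remove-compute-restore: {rms:.3f}")
```

The point of the decomposition is that only the small, smooth residual has to be predicted, which keeps the interpolation step well conditioned even across data gaps.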


2003 ◽  
Vol 40 (10) ◽  
pp. 1307-1320 ◽  
Author(s):  
B Nitescu ◽  
A R Cruden ◽  
R C Bailey

The Moho undulations beneath the western part of the Archean Superior Province have been investigated with a three-dimensional gravity inversion algorithm for a single interface of constant density contrast. Inversion of the complete gravity data set produces spurious effects in the solution because of the ambiguity in the possible sources of some crustal gravity anomalies; to avoid these effects, a censored gravity data set was used instead. The inversion results are consistent with reflection and refraction seismic data from the region and therefore provide a basis for the lateral correlation of the Moho topography between parallel seismic lines. The results indicate the existence of a major linear east–west-trending rise of the Moho below the metasedimentary English River subprovince, which is paralleled by crustal roots below the granite–greenstone Uchi and Wabigoon subprovinces. This correlation between the subprovincial structure at the surface and deep Moho undulations suggests that the topography of the crust–mantle boundary is related to the tectonic evolution of the Western Superior belts. Although certain features of the crust–mantle boundary are likely inherited from the accretionary and collisional stages of the Western Superior craton, gravity-driven processes triggered by subsequent magmatism and crustal softening may have played a role both in the preservation of those features and in the development of new ones.
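As a first-order illustration of what a single-interface, constant-density-contrast inversion recovers, the infinite-slab approximation relates a gravity anomaly directly to Moho relief; this is not the paper's 3-D algorithm, and the density contrast below is an assumed round number, not a value from the study:

```python
import math

# Infinite-slab approximation: a Moho undulation h produces an anomaly
# dg = 2 * pi * G * drho * h, so h can be estimated directly from dg.
# A full 3-D inversion refines this first-order estimate.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
drho = 400.0         # assumed crust-mantle density contrast, kg/m^3

def moho_shift(dg_mgal):
    dg = dg_mgal * 1e-5          # 1 mGal = 1e-5 m/s^2
    return dg / (2.0 * math.pi * G * drho)   # metres; negative = crustal root

# A -20 mGal low maps to roughly a 1.2 km crustal root in this approximation.
print(f"{moho_shift(-20.0) / 1000.0:.2f} km")
```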


Author(s):  
S Mahesh ◽  
Schiffel Marco ◽  
Ramesh S Sharma ◽  
MK Praveenkumar ◽  
Vishal Wadagavi ◽  
...  

Industries are always looking for effective and efficient ways to reduce simulation time because of the huge expense involved. From the basics of the Finite Element Method (FEM) it is known that linear finite elements consume less computation time but are less accurate than higher-order elements such as quadratic elements. An approach that combines the low computational cost of linear elements with the good accuracy of quadratic elements is therefore attractive. A methodology to obtain the accurate results of quadratic elements with the shorter simulation run time of linear elements is presented here. Machine Learning (ML) algorithms are effective at making predictions from a known data set. The present paper discusses a methodology that applies an ML model to predict results equivalent to those of quadratic elements from the solutions obtained with linear elements. An ML model developed in Python takes the stress results of a Finite Element (FE) model of linear tetrahedral elements as input and predicts the stress results of quadratic tetrahedral elements. Abaqus is used as the FEM tool to develop the FE models, and a Python script extracts the stresses and the corresponding node numbers. The results show that the developed ML model successfully predicts accurate stress results for the set of test data. The scatter plots show that the Z-score method is effective in removing the singularities. The proposed methodology effectively reduces the computation time for simulation.
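A minimal sketch of the screening-plus-prediction idea, with synthetic nodal stresses and an ordinary least-squares fit standing in for the paper's (unspecified) ML model; the numbers are illustrative, not from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical nodal stresses: the coarse linear-tetrahedron model tends to
# differ systematically from the quadratic model it should approximate.
stress_linear = rng.uniform(50.0, 200.0, 500)                    # MPa
stress_quad = 1.15 * stress_linear + 5.0 + rng.normal(0, 2.0, 500)

# Inject a few artificial stress singularities (e.g. re-entrant corners).
stress_linear[:5] = [900, 950, 1000, 880, 920]

# Z-score screening removes singular nodes before training, as in the paper.
z = (stress_linear - stress_linear.mean()) / stress_linear.std()
mask = np.abs(z) < 3.0
x, y = stress_linear[mask], stress_quad[mask]

# Minimal stand-in "ML model": a least-squares linear fit.
a, b = np.polyfit(x, y, 1)
pred = a * x + b
rel_err = np.abs(pred - y) / y
print(f"kept {mask.sum()} of {z.size} nodes, "
      f"mean relative error {rel_err.mean():.3%}")
```

A real pipeline would replace the linear fit with a trained regressor and feed it stresses exported from Abaqus, but the screening step works the same way.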


Mathematics ◽  
2021 ◽  
Vol 9 (22) ◽  
pp. 2966
Author(s):  
Petr Martyshko ◽  
Igor Ladovskii ◽  
Denis Byzov

The paper describes a method of gravity data inversion based on parallel algorithms. The choice of the initial-approximation density model and of the set on which the solution is sought guarantees the stability of the algorithms. We offer a new upward and downward continuation algorithm for separating the effects of shallow and deep sources. Using the separated fields of the layers, the density distribution is restored in the form of a 3D grid. We use iterative parallel algorithms for the downward continuation and for the restoration of the density values (by solving the linear inverse gravity problem). The algorithms are based on the ideas of local minimization; they do not require nonlinear minimization, are easier to implement and have better stability. We also suggest an optimization of the gravity field calculation, which speeds up the inversion. A practical example of interpretation is presented for the gravity data of the Urals region, Russia.
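The continuation step used to separate shallow and deep sources can be sketched in the 1-D wavenumber domain, where upward continuation damps each harmonic by exp(-kh) and downward continuation amplifies it by exp(+kh) and therefore needs regularization. The paper's iterative algorithm is more elaborate than the simple cutoff used here:

```python
import numpy as np

# Field continuation in the wavenumber domain. h > 0 continues upward,
# h < 0 downward; harmonics amplified beyond 1/eps are zeroed as a crude
# regularization of the unstable downward step.
def continue_field(g, dx, h, eps=1e-3):
    k = np.abs(np.fft.fftfreq(g.size, d=dx)) * 2.0 * np.pi
    filt = np.exp(-k * h)
    filt = np.where(filt > 1.0 / eps, 0.0, filt)
    return np.fft.ifft(np.fft.fft(g) * filt).real

dx = 1.0
x = np.arange(256) * dx
g0 = np.exp(-((x - 128.0) ** 2) / 50.0)    # smooth field at the surface

g_up = continue_field(g0, dx, h=5.0)       # upward: emphasizes deep sources
g_back = continue_field(g_up, dx, h=-5.0)  # downward: recovers the original

print("round-trip error:", np.max(np.abs(g_back - g0)))
```

For a smooth field the round trip is near-exact; for noisy data the cutoff (or a proper iterative scheme, as in the paper) controls the exponential noise amplification of the downward step.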


2019 ◽  
Vol 9 (1) ◽  
pp. 71-86 ◽  
Author(s):  
T. Gruber ◽  
M. Willberg

Abstract The signal content and error level of recent GOCE-based high-resolution gravity field models are assessed by means of signal degree variances and comparisons to independent GNSS-levelling geoid heights. The signal of the spherical harmonic series of these models is compared to the pre-GOCE EGM2008 model in order to identify the impact of GOCE data, of improved surface and altimetric gravity data, and of modelling approaches. The signal analysis shows that, in a global average, roughly 80% of the differences are due to the inclusion of GOCE satellite information, while the remaining 20% are contributed by improved surface data. Comparisons of the global models to GNSS-levelling derived geoid heights demonstrate that a 1 cm geoid from the global model is feasible where a high-quality terrestrial gravity data set is available. For areas with poorer coverage an accuracy of several centimetres to a decimetre is feasible, taking into account that GOCE now provides the geoid with centimetre accuracy at spatial scales of 80 to 100 km. Comparisons with GNSS-levelling geoid heights are also a good tool for investigating possible systematic errors in the global models, in the spirit levelling and in the GNSS height observations. By means of geoid height differences and geoid slope differences one can draw conclusions for each regional data set separately. These conclusions need to be considered in a refined analysis, e.g. to eliminate suspicious GNSS-levelling data, and to improve the global modelling by using full variance–covariance matrices and by consistently weighting the various data sources used for high-resolution gravity field models. The paper describes the applied procedures, shows these geoid height and geoid slope differences for some regional data sets, and draws conclusions about possible error sources and future work in this context.
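Signal degree variances, the first assessment tool mentioned above, are computed per spherical-harmonic degree as sigma_n^2 = sum_m (C_nm^2 + S_nm^2); differences between two models can be analyzed the same way, degree by degree. A sketch with random toy coefficients (not a real gravity field model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Degree variances of a spherical-harmonic coefficient set:
# sigma_n^2 = sum over orders m of (C_nm^2 + S_nm^2).
def degree_variances(C, S):
    return np.array([np.sum(C[n, : n + 1] ** 2 + S[n, : n + 1] ** 2)
                     for n in range(C.shape[0])])

n_max = 60
C = rng.standard_normal((n_max + 1, n_max + 1))
S = rng.standard_normal((n_max + 1, n_max + 1))
for n in range(1, n_max + 1):
    C[n] *= 1e-5 / n**2      # toy Kaula-like decay with degree
    S[n] *= 1e-5 / n**2
S[:, 0] = 0.0                # S_n0 is identically zero

sig = degree_variances(C, S)

# Difference degree variances against a slightly perturbed copy of the
# model show, per degree, where two models disagree.
diff = degree_variances(C - 0.999 * C, S - 0.999 * S)
print(sig[2], diff[2])
```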


2013 ◽  
Vol 5 (2) ◽  
pp. 1871-1899
Author(s):  
E. Gurria ◽  
C. López

Abstract. The GRACE satellite pair has been in operation since March 2002, providing monthly gravity potential solutions. This data set contains the variation of the gravity potential as a function of time; however, its use is limited by the presence of vertical striping noise which overwhelms the time-variable signal. Several sophisticated filters exist to extract the time-variable signal from the noise, but they are seldom used because they are complex and difficult to implement. Consequently, a large proportion of users of time-variable GRACE data use a conventional Spherical Gaussian Filter with a large smoothing radius of 600–1000 km, which greatly attenuates the vertical striping noise but also significantly attenuates the remaining signal. The difficulty in removing the noise is that the vertical striping is not band limited. We have studied the nature of the vertical striping noise and have found that it occurs over all harmonic degrees but is associated only with the high harmonic orders. We also find that it occurs only in the east–west and radial components of the gravity field, and that the noise is much greater than the signal in these two components. Further, we observe that these two components are very similar at all geographic latitudes and that, by performing a phase shift and subtracting one component from the other, one obtains a noise-free signal. We use this procedure to define a new filter, which we call the Sawtooth Filter, and find that it offers three interesting properties: (i) it subtracts the vertical striping noise from the time-variable signal; (ii) it amplifies the higher-degree harmonics, thus improving the spatial resolution; (iii) it is simpler to implement and use than the Spherical Gaussian Filter.
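For reference, the conventional Spherical Gaussian Filter that the abstract contrasts with the proposed Sawtooth Filter applies a degree-dependent weight to the harmonic coefficients; the weights can be computed with Jekeli's recursion. A sketch showing how strongly a 600 km radius attenuates the high degrees:

```python
import numpy as np

# Spherical Gaussian filter weights per harmonic degree (Jekeli recursion):
# w0 = 1, w1 = (1 + e^-2b)/(1 - e^-2b) - 1/b, w_{n+1} = -(2n+1)/b * w_n + w_{n-1},
# with b = ln 2 / (1 - cos(r / a)) for smoothing radius r and Earth radius a.
def gaussian_weights(radius_km, n_max, a_km=6371.0):
    b = np.log(2.0) / (1.0 - np.cos(radius_km / a_km))
    w = np.zeros(n_max + 1)
    w[0] = 1.0
    w[1] = (1.0 + np.exp(-2.0 * b)) / (1.0 - np.exp(-2.0 * b)) - 1.0 / b
    for n in range(1, n_max):
        w[n + 1] = -(2 * n + 1) / b * w[n] + w[n - 1]
    return w

w600 = gaussian_weights(600.0, 60)
# The weights fall off steeply with degree: this is the signal attenuation
# at short spatial scales that the abstract criticizes.
print(w600[0], w600[30], w600[60])
```

Note that the forward recursion becomes numerically unstable at degrees well beyond those used here; production code truncates or switches to a stable evaluation.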


Author(s):  
K. Komeza ◽  
S. Wiak

This paper deals with the field and leakage reactance calculations in a model leakage transformer. An approximate solution for the 3-D problem, obtained by composing 2-D solutions into a 3-D solution, is applied. Hermitian hierarchical finite elements have been successfully applied to the field and reactance computation of the transformer. The computational results are reported and compared with measurements, giving an error no greater than 10%.


Symmetry ◽  
2020 ◽  
Vol 13 (1) ◽  
pp. 39
Author(s):  
Łukasz Warguła ◽  
Dominik Wojtkowiak ◽  
Mateusz Kukla ◽  
Krzysztof Talaśka

This article presents the results of experimental research on the mechanical properties of pine wood (Pinus L. Sp. Pl. 1000. 1753). In the course of the research, stress-strain curves were determined for tensile, compression and shear tests of samples of standardized shapes. The collected data set was used to determine several material constants such as the modulus of elasticity, the shear modulus and the yield point. The aim of the research was to determine the material properties necessary to develop the model used in the finite element method (FEM) analysis, which demonstrates the symmetrical nature of the stress distribution in the sample. This model will be used to analyze the grinding of wood-based materials in terms of estimating the peak cutting force and determining the influence of tool geometry. The main purpose of the developed model is to determine the maximum stress value necessary to estimate the destructive force for the tested wood sample. The tests were carried out for timber of around 8.74% and 19.9% moisture content (MC). Significant differences were found between the mechanical properties of wood depending on the moisture content and on the direction of the applied force relative to the arrangement of the wood fibers. Unlike other studies in the literature, this one covers all three stress states (tensile, compression and shear) in all significant (anatomical) directions. To verify the usability of the determined mechanical parameters of wood, all three strength tests (tensile, compression and shear) were reproduced in the FEM analysis. The accuracy of the model in determining the maximum destructive force of the material averages 8% (14% for tensile, 2.5% for compression, 6.5% for shear), while the FEM characteristics cover the strength-test results in the range of elastic-plastic deformations, with the adopted ±15% error band, by about 77% on average. The analyses were performed in the ABAQUS/Standard 2020 program in the range of elastic-plastic deformations. Extending the numerical models with a damage model will enable the design of energy-saving and durable grinding machines.
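Extracting a material constant such as the modulus of elasticity from a stress-strain curve amounts to fitting the slope of the initial linear (elastic) region. A minimal sketch on a synthetic bilinear curve; the modulus value is an illustrative order of magnitude, not the paper's measurement:

```python
import numpy as np

# Synthetic bilinear stress-strain curve: elastic up to the yield strain,
# then a much lower hardening slope.
E_true = 9.0e9              # Pa, illustrative magnitude for pine along the grain
yield_strain = 0.008
strain = np.linspace(0.0, 0.02, 200)
stress = np.where(strain < yield_strain,
                  E_true * strain,
                  E_true * yield_strain + 0.1 * E_true * (strain - yield_strain))

# Fit the slope well inside the linear region to estimate the modulus,
# as done when extracting constants for the FEM material model.
elastic = strain < 0.5 * yield_strain
E_fit = np.polyfit(strain[elastic], stress[elastic], 1)[0]
print(f"fitted modulus: {E_fit / 1e9:.2f} GPa")
```

On measured data the same fit is applied after trimming the toe region and any post-yield points, and the shear modulus and yield point are read off the shear and offset-yield constructions analogously.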


2020 ◽  
Vol 20 (4) ◽  
pp. 799-813
Author(s):  
Joël Chaskalovic ◽  
Franck Assous

Abstract The aim of this paper is to provide a new perspective on finite element accuracy. Starting from a geometrical reading of the Bramble–Hilbert lemma, we recall the two probabilistic laws derived in previous works that estimate the relative accuracy, considered as a random variable, between two finite elements P_k and P_m (k < m). We then analyze the asymptotic relation between these two probabilistic laws as the difference m - k goes to infinity. New insights that qualify the relative accuracy in the case of high-order finite elements are also obtained.

