THE USE OF CRACOVIAN COMPUTATION IN ESTIMATING THE REGIONAL GRAVITY

Geophysics ◽  
1959 ◽  
Vol 24 (3) ◽  
pp. 465-478 ◽  
Author(s):  
Zbigniew Fajklewicz

The author uses the method of least squares in cracovian form, with second-order polynomials, to estimate the regional gravity field. Expressions yielding the regional field are obtained very rapidly by using the inverse cracovians of the coefficients given in the present paper, and no electronic digital computer is needed for the computation. The work that such a computer would perform in constructing the formula of the regional field takes no more than 20 minutes when carried out by this method. The method is illustrated by the treatment of two gravity anomalies from the territory of Poland. The author stresses that electronic computers adapted to the use of cracovians and characterized by very high versatility may also be applied in the method.
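The Cracovian calculus is essentially a hand-computation scheme for the normal equations; the numerical core of the approach, fitting a second-order polynomial surface to the observed anomalies by least squares and taking the residual as the local field, can be sketched as follows. This is an illustrative reimplementation, not the author's Cracovian tabulation; the coordinate and anomaly arrays are assumed inputs.

```python
# Minimal sketch: second-order polynomial regional field by ordinary least squares.
# Station coordinates x, y and observed anomalies g are assumed NumPy arrays.
import numpy as np

def regional_second_order(x, y, g):
    """Return regional-field coefficients and residual (local) anomalies."""
    # Design matrix for g = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, g, rcond=None)
    regional = A @ coeffs
    return coeffs, g - regional   # residual = observed minus regional
```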

2021 ◽  
Author(s):  
Mirko Scheinert ◽  
Philipp Zingerle ◽  
Theresa Schaller ◽  
Roland Pail ◽  
Martin Willberg

In the frame of the IAG Subcommission 2.4f “Gravity and Geoid in Antarctica” (AntGG), a first Antarctic-wide grid of ground-based gravity anomalies was released in 2016 (Scheinert et al. 2016). That data set was provided with a grid spacing of 10 km and covered about 73% of the Antarctic continent. Since then, a considerable amount of new data has been made available, mainly collected by means of airborne gravimetry. Regions that were formerly void of any terrestrial gravity observations and have now been surveyed include, in particular, the polar data gap originating from GOCE satellite gravimetry. It is therefore timely to come up with an updated and enhanced regional gravity field solution for Antarctica. For this, we aim to improve several aspects in comparison to the AntGG 2016 solution: the grid spacing will be refined to 5 km; instead of providing gravity anomalies only for parts of Antarctica, the entire continent will be covered; and, in addition to the gravity anomaly, a regional geoid solution will be provided along with further desirable functionals (e.g. gravity anomaly vs. disturbance, different height levels).

We will discuss the expanded AntGG database, which now includes terrestrial gravity data from Antarctic surveys conducted over the past 40 years. The methodology applied in the analysis is based on the remove-compute-restore technique (sketched schematically below). Here we utilize the newly developed combined spherical-harmonic gravity field model SATOP1 (Zingerle et al. 2019), which is based on the global satellite-only model GOCO05s and the high-resolution topographic model EARTH2014. We will demonstrate the feasibility of adequately reducing the original gravity data and, thus, of cross-validating and evaluating the accuracy of the data, especially where different data sets overlap. For the compute step, the recently developed partition-enhanced least-squares collocation (PE-LSC) has been used (Zingerle et al. 2021, in review; cf. the contribution of Zingerle et al. in the same session). This method allows all data available in Antarctica to be treated in a single computation step in an efficient and fast way. It thus becomes feasible to iterate the computations within a short time once any input data or parameters are changed, and to predict the desired functionals in regions void of terrestrial measurements as well as at any height level (e.g. gravity anomalies at the surface or gravity disturbances at constant height).

We will discuss the results and give an outlook on the data products that will finally be provided to present the new regional gravity field solution for Antarctica. Furthermore, implications for further applications will be discussed, e.g. with respect to geophysical modelling of the Earth’s interior (cf. the contribution of Schaller et al. in session G4.3).
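The remove-compute-restore logic referenced above can be outlined schematically as follows. This is only a control-flow sketch under assumed inputs; `satop1_anomaly` and `lsc_predict` are hypothetical placeholders standing in for the SATOP1 reference-model evaluation and the PE-LSC prediction step, neither of which is reproduced here.

```python
# Schematic remove-compute-restore workflow (placeholder callables, see lead-in).
import numpy as np

def remove_compute_restore(obs_lonlat, obs_dg, pred_lonlat,
                           satop1_anomaly, lsc_predict):
    # Remove: subtract the reference-model anomaly at the observation sites.
    residual_obs = obs_dg - satop1_anomaly(obs_lonlat)
    # Compute: predict the residual signal at the target points (e.g. by LSC).
    residual_pred = lsc_predict(obs_lonlat, residual_obs, pred_lonlat)
    # Restore: add the reference-model anomaly back at the prediction points.
    return residual_pred + satop1_anomaly(pred_lonlat)
```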


Geophysics ◽  
1962 ◽  
Vol 27 (5) ◽  
pp. 616-626 ◽  
Author(s):  
F. S. Grant ◽  
A. F. Elsaharty

The principle of density profiling as a means of determining Bouguer densities is studied with a view to extending it to include all of the data in a survey. It is regarded as an endeavor to minimize the correlation between local gravity anomalies and topography, and as such it can be handled mathematically by the method of least squares. In the general case this leads to a variable Bouguer density which can be mapped and contoured. In a worked example, the correspondence between this function and the known geology appears to be good, and indicates that Bouguer density variations due to changing surface conditions can be used routinely in the reduction of gravity data.
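For the constant-density case, the least-squares criterion amounts to choosing the slab density that decorrelates the Bouguer anomaly from elevation; a minimal sketch is given below. The paper's variable-density case extends this to a spatially mapped estimate, which is not reproduced here.

```python
# Minimal sketch: Bouguer density that minimizes the correlation between the
# simple-slab Bouguer anomaly and topography. Free-air anomalies g_fa (mGal)
# and elevations h (m) are assumed inputs.
import numpy as np

TWO_PI_G = 0.04193  # mGal per (g/cm^3 * m); 2*pi*G in these units

def bouguer_density(g_fa, h):
    """Density for which cov(g_fa - 2*pi*G*rho*h, h) = 0."""
    h0 = h - h.mean()
    rho = np.dot(g_fa - g_fa.mean(), h0) / (TWO_PI_G * np.dot(h0, h0))
    return rho
```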


2020 ◽  
Author(s):  
Philipp Zingerle ◽  
Roland Pail ◽  
Thomas Gruber

Within this contribution we present the new experimental combined global gravity field model XGM2020. A key feature of this model is the rigorous combination of the latest GOCO06s satellite-only model with global terrestrial gravity anomalies on the normal-equation level (illustrated schematically below), up to d/o 2159, using individual observation weights. To provide maximum resolution, the model is further extended to d/o 5400 by applying block-diagonal techniques.

To attain the high resolution, the incorporated terrestrial dataset is composed of three different data sources: over land, 15´ gravity anomalies (by courtesy of NGA) are augmented with topographic information, and over the oceans, gravity anomalies derived from altimetry are used. The corresponding normal equations are computed from these data sets either as full or as block-diagonal systems.

Special emphasis is given to the novel processing techniques needed for very high-resolution gravity field modelling. Spheroidal harmonics play a central role here, as does the stable calculation of associated Legendre functions up to very high d/o. A new technique for the optimal low-pass filtering of terrestrial gravity datasets is also presented.

On the computational side, solving dense normal-equation systems up to d/o 2159 means dealing with matrices of about 158 TB in size. Handling matrices of such a size is very demanding, even for today’s largest supercomputers. Thus, sophisticated parallelized algorithms with a focus on load balancing are crucial for a successful and efficient calculation.
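Combination on the normal-equation level means that each observation group contributes its own normal matrix and right-hand side, which are weighted and accumulated before a single common solve. A toy-sized sketch of that accumulation follows; the dimensions and relative weights are illustrative only, whereas the actual XGM2020 systems are dense up to d/o 2159.

```python
# Toy illustration of weighted combination of normal equations from several
# observation groups, followed by one common least-squares solve.
import numpy as np

def combine_normals(systems, weights):
    """systems: list of (A, P, y) per group; weights: relative weight per group."""
    n_par = systems[0][0].shape[1]
    N = np.zeros((n_par, n_par))
    b = np.zeros(n_par)
    for (A, P, y), w in zip(systems, weights):
        N += w * A.T @ P @ A      # accumulate normal matrix
        b += w * A.T @ P @ y      # accumulate right-hand side
    return np.linalg.solve(N, b)  # combined estimate
```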


2013 ◽  
Vol 756-759 ◽  
pp. 4349-4352 ◽
Author(s):  
Ai Long Fan

The method of least squares and a second-order prediction model are applied to compensate for the weighing error of the weighing sensors and the static nonlinear error of the transmitters and other hardware in a concrete mixing plant, so as to improve the measurement precision and expand the measurement range. The results indicate that the automatic drop compensation module improves the measurement precision of the weighing system.
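As a hedged illustration of the compensation idea (variable names and the calibration procedure are assumptions, not taken from the paper), a second-order correction curve can be fitted by least squares from raw sensor readings against reference weights and then applied to new readings.

```python
# Sketch: second-order static-error compensation fitted by least squares.
import numpy as np

def fit_second_order_compensation(raw, reference):
    """Return coefficients c such that reference ≈ c0 + c1*raw + c2*raw**2."""
    A = np.column_stack([np.ones_like(raw), raw, raw**2])
    c, *_ = np.linalg.lstsq(A, reference, rcond=None)
    return c

def compensate(raw, c):
    """Apply the fitted correction to new raw readings."""
    return c[0] + c[1] * raw + c[2] * raw**2
```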


Geophysics ◽  
1989 ◽  
Vol 54 (12) ◽  
pp. 1614-1621 ◽  
Author(s):  
E. M. Abdelrahman ◽  
A. I. Bayoumi ◽  
Y. E. Abdelhady ◽  
M. M. Gobashy ◽  
H. M. El‐Araby

The correlation factors between successive least‐squares residual (or regional) gravity anomalies from a buried sphere, a two‐dimensional (2‐D) horizontal cylinder, and a vertical cylinder and the first horizontal derivative of the gravity from a 2‐D thin faulted layer are computed. Correlation values are used to determine the depth to the center of the buried structure, and the radius of the sphere or the cylinder and the thickness of the fault are estimated. The method can be applied not only to residuals but also to the Bouguer‐anomaly profile consisting of the combined effect of a residual component due to a purely local structure and a regional component represented by a polynomial of any order. The method is easy to apply and may be automated if desired. It can also be applied to the derivative anomalies of the gravity field. The validity of the method is tested on two field examples from the United States and Denmark.
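One computational ingredient of the approach, the correlation factor between residual anomalies obtained from successive least-squares polynomial fits along a profile, can be sketched as follows. This is an illustrative reconstruction; relating the correlation values to source depth, radius or fault thickness relies on the paper's model-specific relations, which are not reproduced here.

```python
# Sketch: correlations between residual anomalies of successive polynomial orders.
import numpy as np

def successive_residual_correlations(x, g, max_order):
    """Residuals from least-squares regionals of order 1..max_order, then the
    correlation coefficient between each pair of successive residual profiles."""
    residuals = []
    for n in range(1, max_order + 1):
        coeffs = np.polyfit(x, g, n)               # regional of order n
        residuals.append(g - np.polyval(coeffs, x))
    return [np.corrcoef(residuals[i], residuals[i + 1])[0, 1]
            for i in range(len(residuals) - 1)]
```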


1964 ◽  
Vol 18 (5) ◽  
pp. 352-372 ◽  
Author(s):  
G. H. Schut

This paper gives first a general formulation of the block adjustment of coordinates by the method of least squares. The formulation is then simplified on the ground of practical considerations. Suitable transformation formulae are given for block adjustment of strips, of sections, and of models, and the most practical ways of solving the resulting normal equations are discussed. It is shown that the adjustments can be performed on electronic computers with a storage capacity of 4,000 and even of 2,000 words. Finally, some alternative methods are discussed.
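As a simplified illustration of the kind of transformation estimated in such adjustments, a 2-D linear-conformal transformation of strip or model coordinates onto control coordinates can be fitted by least squares. This shows a single unit only, whereas the paper's block adjustment links many strips, sections or models simultaneously through common points.

```python
# Sketch: least-squares estimate of a linear-conformal (similarity) transformation
#   X = a*x - b*y + c,   Y = b*x + a*y + d
import numpy as np

def fit_similarity(xy, XY):
    """xy: (n,2) strip/model coordinates; XY: (n,2) control coordinates."""
    x, y = xy[:, 0], xy[:, 1]
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    A = np.vstack([np.column_stack([x, -y, ones, zeros]),
                   np.column_stack([y,  x, zeros, ones])])
    obs = np.concatenate([XY[:, 0], XY[:, 1]])
    params, *_ = np.linalg.lstsq(A, obs, rcond=None)
    return params   # a, b, c, d
```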


2008 ◽  
Vol 30 (4) ◽  
Author(s):  
Pham Chi Vinh ◽  
Peter G. Malischewsky

In the present paper we derive improved approximations for the Rayleigh wave velocity in the interval \(\nu \in [-1, 0.5]\) using the method of least squares. In particular: (i) we create approximate polynomials of orders 4, 5, and 6, whose maximum percentage errors are 0.035 %, 0.015 %, and 0.0083 %, respectively; (ii) improved approximations in the form of the inverse of polynomials of orders 3 and 5 are also established, and they are approximations of very high accuracy; (iii) by using the best approximate second-order polynomial of the cubic power in the space \(C[0.474572, 0.912622]\), we derive an approximation that is the best obtained so far by approximating the secular equation.
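The fitting idea can be reproduced in sketch form by solving the Rayleigh secular equation numerically on a grid of Poisson's ratios and fitting a fourth-order polynomial by least squares. This is a discretized illustration under an assumed grid spacing, not the authors' closed-form coefficients.

```python
# Sketch: numerical Rayleigh ratio c_R/c_T versus Poisson's ratio, fitted by a
# fourth-order least-squares polynomial over nu in [-1, 0.5].
import numpy as np
from scipy.optimize import brentq

def rayleigh_ratio(nu):
    gamma = (1 - 2 * nu) / (2 - 2 * nu)              # (c_T/c_L)^2
    f = lambda xi: xi**3 - 8 * xi**2 + 8 * (3 - 2 * gamma) * xi - 16 * (1 - gamma)
    return np.sqrt(brentq(f, 1e-6, 1.0))             # c_R / c_T, xi = (c_R/c_T)^2

nu = np.linspace(-1.0, 0.5, 301)
ratio = np.array([rayleigh_ratio(v) for v in nu])
poly4 = np.polyfit(nu, ratio, 4)                     # least-squares polynomial, order 4
max_err_percent = 100 * np.max(np.abs(np.polyval(poly4, nu) - ratio) / ratio)
```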


Author(s):  
Hany Mahbuby ◽  
Yazdan Amerian ◽  
Amirhossein Nikoofard ◽  
Mehdi Eshagh

The gravity field is a signature of the mass distribution and interior structure of the Earth, in addition to all its geodetic applications, especially geoid determination and vertical datum unification. Determination of a regional gravity field model is an important subject that needs to be investigated and developed. Here, spherical radial basis functions (SBFs) are applied in two scenarios for this purpose: interpolating the gravity anomalies, and solving the fundamental equation of physical geodesy for geoid or disturbing potential determination, which can be verified by Global Navigation Satellite Systems (GNSS)/levelling data. Proper selection of the number of SBFs and of the optimal locations of the applied SBFs are important factors for increasing the accuracy of the estimation. In this study, the gravity anomaly interpolation based on the SBFs is performed by Gauss-Newton optimisation with truncated singular value decomposition, and a quasi-Newton method based on line search is developed to solve the minimisation problems within a small number of iterations. To solve the fundamental equation of physical geodesy by the SBFs, truncated Newton optimisation is applied, as the Hessian matrix of the objective function is not always positive definite. These two scenarios are applied to the terrestrial free-air gravity anomalies over the topographically rough area of Auvergne. The accuracy obtained for the interpolated gravity anomaly model is 1.7 mGal, with the number of point masses being about 30% of the number of observations, and 1.5 mGal in the second scenario, where the number of kernels used is also 30%. These accuracies are the root mean square errors (RMSE) of the differences between predicted and observed gravity anomalies at check points. Moreover, utilising the optimal model constructed in the second scenario, an RMSE of 9 cm is achieved for the differences between the gravimetric height anomalies derived from the model and the geometric height anomalies from GNSS/levelling points.
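A much-reduced sketch of the first scenario, interpolation of gravity anomalies with radial kernels and a truncated-SVD solve, is given below. The planar point-mass-type kernel, the kernel depth and the truncation level are illustrative assumptions standing in for the SBFs and the optimised choices of the paper.

```python
# Sketch: radial-kernel interpolation of gravity anomalies with a truncated-SVD solve.
import numpy as np

def tsvd_solve(A, y, k):
    """Least-squares solution keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

def fit_anomalies(obs_xy, obs_dg, centers_xy, z0=10e3, k=50):
    """Fit kernel coefficients; z0 is an assumed kernel depth (m)."""
    d = np.linalg.norm(obs_xy[:, None, :] - centers_xy[None, :, :], axis=2)
    A = 1.0 / np.sqrt(d**2 + z0**2)        # radial (point-mass-type) kernel matrix
    return tsvd_solve(A, obs_dg, k)

def predict(coeffs, pred_xy, centers_xy, z0=10e3):
    d = np.linalg.norm(pred_xy[:, None, :] - centers_xy[None, :, :], axis=2)
    return (1.0 / np.sqrt(d**2 + z0**2)) @ coeffs
```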

