Unbiased least-squares modification of Stokes’ formula

2020 ◽  
Vol 94 (9) ◽  
Author(s):  
Lars E. Sjöberg

Abstract As the KTH method for geoid determination, which combines Stokes integration of gravity data in a spherical cap around the computation point with a series of spherical harmonics, suffers from a bias due to truncation of the data sets, the method is based on minimizing the global mean square error (MSE) of the estimator. However, if the harmonic series is extended to a sufficiently high degree, the truncation error can be considered negligible, and optimization based on the local variance of the geoid estimator makes good sense. Such unbiased types of estimators, derived in this article, have the advantage over the MSE solutions of not relying on the imperfectly known gravity signal degree variances; only the local error covariance matrices of the observables come into play. Obviously, the geoid solution defined by the least local variance is generally superior to the solution based on the global MSE. It is also shown, at least theoretically, that the unbiased geoid solutions based on the KTH method and on the remove–compute–restore technique with modification of Stokes’ formula are the same.
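For context, the modified Stokes estimator underlying the KTH method is commonly written in a form like the following. This is a sketch in standard notation from the general literature, not quoted from the article; the cap σ₀, the modification parameters s_n and b_n, and the degrees L and M are assumptions of that notation.

```latex
% Generic KTH-style (modified Stokes) geoid estimator; notation assumed, not from this abstract.
\tilde{N} \;=\; \frac{c}{2\pi}\iint_{\sigma_0} S^{L}(\psi)\,\Delta g \,\mathrm{d}\sigma
\;+\; c\sum_{n=2}^{M} b_n\,\Delta g_n^{\mathrm{EGM}},
\qquad c=\frac{R}{2\gamma},
\qquad
S^{L}(\psi) \;=\; S(\psi)-\sum_{n=2}^{L}\frac{2n+1}{2}\,s_n\,P_n(\cos\psi).
```

The first term is the Stokes integration of surface gravity anomalies over the spherical cap around the computation point; the second is the spherical-harmonic contribution of the global geopotential model up to degree M, with R the mean Earth radius and γ normal gravity.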

2021 ◽  
Vol 95 (2) ◽  
Author(s):  
Mirjam Bilker-Koivula ◽  
Jaakko Mäkinen ◽  
Hannu Ruotsalainen ◽  
Jyri Näränen ◽  
Timo Saari

Abstract Postglacial rebound in Fennoscandia causes striking trends in gravity measurements of the area. We present time series of absolute gravity data collected between 1976 and 2019 at 12 stations in Finland with different types of instruments. First, we determine the trends at each station and analyse the effect of the instrument types. We estimate, for example, an offset of 6.8 µGal for the JILAg-5 instrument with respect to the FG5-type instruments. Applying the offsets in the trend analysis brings the trends into good agreement with the NKG2016LU_gdot model of gravity change. The trends of seven stations were found to be robust and were used to analyse the stabilization of the trends in time and to determine the relationship between gravity change rates and land uplift rates as measured with global navigation satellite systems (GNSS) as well as from the NKG2016LU_abs land uplift model. Trends calculated from combined and offset-corrected measurements of JILAg-5- and FG5-type instruments stabilized within 15 to 20 years, and at some stations even faster. The trends from FG5-type instrument data alone generally stabilized within 10 years. The ratio between gravity change rates and vertical rates from different data sets yields values between −0.206 ± 0.017 and −0.227 ± 0.024 µGal/mm and axis intercept values between 0.248 ± 0.089 and 0.335 ± 0.136 µGal/yr. These values are larger than previous estimates for Fennoscandia.
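A minimal sketch of the two estimation steps described above: a per-station trend with an instrument offset, and the regression of gravity rates on GNSS uplift rates. All numbers below are invented for illustration and are not the paper's data.

```python
import numpy as np

# --- per-station trend with an instrument-type offset (illustrative data) ---
t = np.array([1980.3, 1988.1, 1995.6, 2003.2, 2010.7, 2018.4])   # epochs [yr]
g = np.array([12.4, 10.9, 8.1, 6.3, 4.0, 2.2])                   # gravity minus a reference [µGal]
is_jilag = np.array([1, 1, 0, 0, 0, 0])                          # 1 = JILAg-5, 0 = FG5-type

# design matrix: constant, linear trend, JILAg-5 offset
A = np.column_stack([np.ones_like(t), t - t.mean(), is_jilag])
(intercept, rate, offset), *_ = np.linalg.lstsq(A, g, rcond=None)
print(f"gravity rate {rate:.3f} µGal/yr, JILAg-5 offset {offset:.2f} µGal")

# --- relation between gravity change rates and land-uplift rates (illustrative) ---
gdot = np.array([-0.95, -1.30, -1.75, -2.05])   # gravity rates at four stations [µGal/yr]
hdot = np.array([4.1, 6.2, 8.4, 9.9])           # uplift rates from GNSS [mm/yr]

B = np.column_stack([hdot, np.ones_like(hdot)])
(ratio, axis_intercept), *_ = np.linalg.lstsq(B, gdot, rcond=None)
print(f"ratio {ratio:.3f} µGal/mm, intercept {axis_intercept:.3f} µGal/yr")
```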


2020 ◽  
Vol 221 (3) ◽  
pp. 1542-1554 ◽  
Author(s):  
B C Root

SUMMARY Current seismic tomography models show a complex environment underneath the crust, corroborated by high-precision satellite gravity observations. Both data sets are used to independently explore the density structure of the upper mantle. However, combining these two data sets proves to be challenging. Gravity data have an inherent insensitivity in the radial direction, and seismic tomography suffers from heterogeneous data acquisition, resulting in smoothed tomography models that de-correlate from one another at mid-to-small wavelengths. Therefore, this study aims to assess and quantify the effect of regularization on a seismic tomography model by exploiting the high lateral sensitivity of gravity data. The seismic tomography models SL2013sv, SAVANI, SMEAN2 and S40RTS are compared to a gravity-based density model of the upper mantle. To obtain density solutions similar to the seismic-derived models, the gravity-based model needs to be smoothed with a Gaussian filter. Different smoothing characteristics are observed for the various seismic tomography models, related to the regularization approach used in the inversions. Various S40RTS models based on similar seismic data but different regularization settings show that the smoothing effect becomes stronger with increasing regularization. The type of regularization has a dominant effect on the final tomography solution. To reduce the effect of regularization on the tomography models, an enhancement procedure is proposed. This enhancement should be performed within the spectral domain of the actual resolution of the seismic tomography model. The enhanced seismic tomography models show improved spatial correlation with each other and with the gravity-based model. The density anomaly variations have similar peak-to-peak magnitudes and correlate clearly with geological structures. Resolving the spectral misalignment between tomographic models and gravity-based solutions is a first step towards improved multi-data inversion studies of the upper mantle that benefit from the advantages of both data sets.
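A minimal sketch of the Gaussian-filter comparison described above, using random stand-in grids rather than the real models: the gravity-based density grid is smoothed with increasing filter widths and correlated against a tomography-derived grid.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative only: grids below are random stand-ins for the density models.
rng = np.random.default_rng(0)
rho_gravity = rng.standard_normal((90, 180))                       # gravity-based density anomalies
rho_tomo = gaussian_filter(rho_gravity, sigma=3) + 0.1 * rng.standard_normal((90, 180))

for sigma in (1, 2, 3, 4, 5):                                      # filter width in grid cells (assumed unit)
    smoothed = gaussian_filter(rho_gravity, sigma=sigma)
    corr = np.corrcoef(smoothed.ravel(), rho_tomo.ravel())[0, 1]
    print(f"sigma = {sigma}: spatial correlation = {corr:.3f}")
```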


2012 ◽  
Vol 2 (1) ◽  
pp. 53-64 ◽  
Author(s):  
H. Yildiz ◽  
R. Forsberg ◽  
J. Ågren ◽  
C. Tscherning ◽  
L. Sjöberg

Comparison of remove-compute-restore and least squares modification of Stokes' formula techniques to quasi-geoid determination over the Auvergne test area

The remove-compute-restore (RCR) technique for regional geoid determination implies that both topography and low-degree global geopotential model signals are removed before computation and restored after Stokes' integration or Least Squares Collocation (LSC) solution. The Least Squares Modification of Stokes' Formula (LSMS) technique, which does not require gravity reductions, is implemented here with a Residual Terrain Modelling based interpolation of gravity data. The 2-D spherical Fast Fourier Transform (FFT) and LSC methods applying the RCR technique, and the LSMS method, are tested over the Auvergne test area. All methods showed a reasonable agreement with GPS-levelling data, on the order of 3-3.5 cm in the central region with relatively smooth topography, which is consistent with the accuracies of GPS and levelling. When a 1-parameter fit is used, the FFT method using kernel modification performs best, with a 3.0 cm r.m.s. difference with respect to GPS-levelling, while the LSMS method gives the best agreement with GPS-levelling, 2.4 cm r.m.s., after a 4-parameter fit is used. However, the quasi-geoid models derived using the two techniques differ from each other by up to 33 cm in the high mountains near the Alps. Comparison of the quasi-geoid models with EGM2008 showed that the LSMS method agreed best in terms of r.m.s.
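For concreteness, the 1- and 4-parameter fits mentioned above can be illustrated as follows. The classical 4-parameter model adds three datum-shift terms to a constant bias; the coordinates and residuals used here are invented, not the Auvergne data.

```python
import numpy as np

phi = np.radians(np.array([45.1, 45.6, 46.0, 46.4, 45.8]))   # benchmark latitudes (invented)
lam = np.radians(np.array([2.5, 2.9, 3.3, 3.0, 2.7]))        # benchmark longitudes (invented)
dz = np.array([0.031, 0.028, 0.035, 0.025, 0.030])           # zeta_model - zeta_GPS/lev [m] (invented)

# 1-parameter fit: remove a constant bias
bias = dz.mean()
rms_1p = np.sqrt(np.mean((dz - bias) ** 2))

# 4-parameter fit: bias plus cos(phi)cos(lam), cos(phi)sin(lam), sin(phi) terms
A = np.column_stack([np.ones_like(dz),
                     np.cos(phi) * np.cos(lam),
                     np.cos(phi) * np.sin(lam),
                     np.sin(phi)])
x, *_ = np.linalg.lstsq(A, dz, rcond=None)
rms_4p = np.sqrt(np.mean((dz - A @ x) ** 2))
print(f"r.m.s. after 1-parameter fit: {rms_1p*100:.1f} cm, after 4-parameter fit: {rms_4p*100:.1f} cm")
```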


2016 ◽  
Author(s):  
George Dimitriadis ◽  
Joana Neto ◽  
Adam R. Kampff

Abstract Electrophysiology is entering the era of ‘Big Data’. Multiple probes, each with hundreds to thousands of individual electrodes, are now capable of simultaneously recording from many brain regions. The major challenge confronting these new technologies is transforming the raw data into physiologically meaningful signals, i.e. single unit spikes. Sorting the spike events of individual neurons from a spatiotemporally dense sampling of the extracellular electric field is a problem that has attracted much attention [22, 23], but is still far from solved. Current methods still rely on human input and thus become unfeasible as the size of the data sets grows exponentially. Here we introduce the t-distributed stochastic neighbor embedding (t-SNE) dimensionality reduction method [27] as a visualization tool in the spike sorting process. t-SNE embeds the n-dimensional extracellular spikes (n = number of features by which each spike is decomposed) into a low (usually two) dimensional space. We show that such embeddings, even starting from different feature spaces, form obvious clusters of spikes that can be easily visualized and manually delineated with a high degree of precision. We propose that these clusters represent single units and test this assertion by applying our algorithm to labeled data sets from both hybrid [23] and paired juxtacellular/extracellular recordings [15]. We have released a graphical user interface (GUI) written in Python as a tool for the manual clustering of the t-SNE embedded spikes and as a tool for an informed overview and fast manual curation of results from other clustering algorithms. Furthermore, the generated visualizations offer evidence in favor of the use of probes with higher density and smaller electrodes. They also graphically demonstrate the diverse nature of the sorting problem when spikes are recorded with different methods and arise from regions with different background spiking statistics.
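A minimal sketch of the visualization pipeline described above, using PCA features and scikit-learn's t-SNE on a random stand-in waveform array; this is not the authors' released GUI, and the feature and perplexity choices are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Random stand-in for extracellular spike waveforms (one channel for simplicity).
rng = np.random.default_rng(1)
n_spikes, n_samples = 2000, 60
waveforms = rng.standard_normal((n_spikes, n_samples))

features = PCA(n_components=10).fit_transform(waveforms)   # n-dimensional feature space
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(features)   # 2-D map for manual delineation
print(embedding.shape)                                      # (2000, 2) points to plot
```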


2012 ◽  
Vol 19 (2) ◽  
pp. 291-296 ◽  
Author(s):  
M. Pilkington ◽  
P. Keating

Abstract. Most interpretive methods for potential field (magnetic and gravity) measurements require data in a gridded format. Many are also based on using fast Fourier transforms to improve their computational efficiency. As such, grids need to be full (no undefined values), rectangular and periodic. Since potential field surveys do not usually provide data sets in this form, grids must first be prepared to satisfy these three requirements before any interpretive method can be used. Here, we use a method for grid preparation based on a fractal model for predicting field values where necessary. Using fractal field values ensures that the statistical and spectral character of the measured data is preserved, and that unwanted discontinuities at survey boundaries are minimized. The fractal method compares well with standard extrapolation methods using gridding and maximum entropy filtering. The procedure is demonstrated on a portion of a recently flown aeromagnetic survey over a volcanic terrane in southern British Columbia, Canada.
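As a simplified illustration of the grid requirements described above (full, rectangular, periodic), the sketch below fills undefined values by nearest-neighbour interpolation and tapers the edges before the FFT; this is a standard shortcut, not the fractal field prediction used in the paper.

```python
import numpy as np
from scipy.interpolate import griddata

field = np.random.default_rng(2).standard_normal((128, 128))
field[40:60, 70:90] = np.nan                      # simulate undefined survey values

# 1. fill undefined values from the nearest measured values
yy, xx = np.mgrid[0:128, 0:128]
known = ~np.isnan(field)
filled = griddata((yy[known], xx[known]), field[known], (yy, xx), method="nearest")

# 2. taper the edges so the FFT sees a (quasi-)periodic, discontinuity-free grid
window = np.outer(np.hanning(128), np.hanning(128))
spectrum = np.fft.fft2(filled * window)
print(spectrum.shape)
```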


2016 ◽  
Author(s):  
Godfred Osukuku ◽  
Abiud Masinde ◽  
Bernard Adero ◽  
Edmond Wanjala ◽  
John Ego

Abstract This research work attempts to map out the stratigraphic sequence of the Kerio Valley Basin using magnetic, gravity and seismic data sets. Regional gravity data consisting of isostatic, free-air and Bouguer anomaly grids were obtained from the International Gravimetric Bureau (BGI). Magnetic data sets were sourced from the Earth Magnetic Anomaly grid (EMAG2). The seismic reflection data were acquired in 1989 using a vibrating source recorded on inline geophones. Isostatic gravity data show low gravity anomalies that indicate a deeper basement. Magnetic tilt and seismic profiles show a sediment thickness of 2.5-3.5 km above the basement. The western side of the Kerio Valley Basin is underlain by a deeper basement, which is overlain by a succession of sandstones/shales and volcanics. At the very top are the mid-Miocene phonolites (Uasin Gishu), underlain by mid-Miocene sandstones/shales (Tambach Formation). There are high gravity anomalies in the western and southern parts of the basin, with the sedimentation being constrained by two normal faults. The Kerio Valley Basin is bounded to the west by a north-south trending, easterly dipping fault system. Gravity data were of significant help in delineating the basement and in scanning the lithosphere and upper mantle according to the relative densities. The basement rocks as well as the upper cover of volcanics have distinctly higher densities than the infilled sedimentary sections within the basin. From the seismic profiles, the frequency of shaly rocks and compact sandstones increases with depth. The western side of the basin is characterized by the absence of reflections and relatively higher frequency content. The termination of reflectors and the westward dip of reflectors represent a fault (Elgeyo fault). The reflectors dip towards the west, marking the basin as an asymmetrical syncline and indicating that the extension was towards the east. The basin floor is characterized by a nearly vertical fault which runs parallel to the Elgeyo fault. The seismic reflectors show marked discontinuities which may be due to lava flows. The deepest reflector shows deep sedimentation in the basin and is in reasonable agreement with basement depths delineated from the potential-field methods (gravity and magnetic). Basement rocks are deeper at the top of the uplifted footwall of the Elgeyo Escarpment. The sediments likely have a thickness of about 800 m, consisting of interbedded sandstones and shales above the basement.
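As a back-of-envelope check of how a density contrast and sediment thickness translate into a gravity low, the Bouguer slab approximation can be applied; the density contrast used below is an assumed, illustrative value and is not reported in the abstract.

```python
import numpy as np

# Bouguer slab approximation: delta_g = 2 * pi * G * delta_rho * h (not the paper's method).
G = 6.674e-11                  # gravitational constant [m^3 kg^-1 s^-2]
delta_rho = -400.0             # sediments minus basement density [kg/m^3], assumed
for h in (2500.0, 3500.0):     # sediment thickness [m], as inferred in the abstract
    delta_g_mgal = 2 * np.pi * G * delta_rho * h * 1e5   # 1 mGal = 1e-5 m/s^2
    print(f"h = {h / 1000:.1f} km -> slab anomaly ~ {delta_g_mgal:.0f} mGal")
```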


2021 ◽  
Author(s):  
Yan Ming Wang ◽  
Xiaopeng Li ◽  
Kevin Ahlgren ◽  
Jordan Krcmaric ◽  
Ryan Hardy ◽  
...  

For the upcoming North American-Pacific Geopotential Datum of 2022, the National Geodetic Survey (NGS), the Canadian Geodetic Survey (CGS) and the National Institute of Statistics and Geography of Mexico (INEGI) computed the first joint experimental gravimetric geoid model (xGEOID) on 1’ x 1’ grids covering a region bounded by latitudes 0 to 85 degrees and longitudes 180 to 350 degrees east. The xGEOID20 models are computed using terrestrial gravity data, the latest satellite gravity model GOCO06S, altimetric gravity data DTU15, and an additional nine airborne gravity blocks of the GRAV-D project, for a total of 63 blocks. In addition, a digital elevation model on a 3” grid was produced by combining MERIT, TanDEM-X, and USGS-NED and used for the topographic/gravimetric reductions. The geoid models computed from the height anomalies (NGS) and from the Helmert-Stokes scheme (CGS) were combined using two different weighting schemes and then evaluated against independent GPS/leveling data sets. The models perform very similarly, and the geoid comparisons with the most accurate Geoid Slope Validation Surveys (GSVS) from 2011, 2014 and 2017 indicate that the relative geoid accuracy could be around 1-2 cm for baseline lengths up to 300 km along these GSVS lines in the United States. The xGEOID20 A/B models were selected from the combined models based on the validation results. The geoid accuracies were also estimated using forward modeling.
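A minimal sketch of the kind of weighted model combination and GPS/leveling evaluation described above; the grids, weights, benchmark locations and values are invented placeholders, and the actual xGEOID20 weighting schemes are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
geoid_ngs = rng.normal(30.0, 0.5, (60, 60))                   # height-anomaly-based model [m] (stand-in)
geoid_cgs = geoid_ngs + rng.normal(0.0, 0.01, (60, 60))       # Helmert-Stokes-based model [m] (stand-in)

w_ngs, w_cgs = 0.5, 0.5                                        # one possible weighting scheme (assumed equal weights)
geoid_combined = w_ngs * geoid_ngs + w_cgs * geoid_cgs

# GPS/leveling check: N_gps = h(ellipsoidal) - H(leveled) at benchmark points (synthetic here)
rows = np.array([5, 17, 33, 48])
cols = np.array([10, 22, 41, 55])
n_gps = geoid_combined[rows, cols] + rng.normal(0.0, 0.02, 4)
residuals = geoid_combined[rows, cols] - n_gps
print(f"std of geoid minus GPS/leveling: {residuals.std() * 100:.1f} cm")
```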


Geophysics ◽  
2017 ◽  
Vol 82 (1) ◽  
pp. G1-G21 ◽  
Author(s):  
William J. Titus ◽  
Sarah J. Titus ◽  
Joshua R. Davis

We apply a Bayesian Markov chain Monte Carlo formalism to the gravity inversion of a single localized 2D subsurface object. The object is modeled as a polygon described by five parameters: the number of vertices, a density contrast, a shape-limiting factor, and the width and depth of an encompassing container. We first constrain these parameters with an interactive forward model and explicit geologic information. Then, we generate an approximate probability distribution of polygons for a given set of parameter values. From these, we determine statistical distributions such as the variance between the observed and model fields, the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the subsurface object). We introduce replica exchange to mitigate trapping in local optima and to compute model probabilities and their uncertainties. We apply our techniques to synthetic data sets and a natural data set collected across the Rio Grande Gorge Bridge in New Mexico. On the basis of our examples, we find that the occupancy probability is useful in visualizing the results, giving a “hazy” cross section of the object. We also find that the role of the container is important in making predictions about the subsurface object.
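A toy sketch of the Bayesian MCMC workflow described above, using a buried horizontal cylinder with a closed-form gravity response in place of the paper's polygon model, and plain Metropolis sampling rather than replica exchange; the data, priors and step sizes are all invented for illustration.

```python
import numpy as np

G = 6.674e-11
rho = 500.0                                      # density contrast [kg/m^3], assumed known

def gz(x, depth, radius):
    """Vertical gravity of an infinite horizontal cylinder along a profile x."""
    return 2 * np.pi * G * rho * radius**2 * depth / (x**2 + depth**2)

rng = np.random.default_rng(0)
x_obs = np.linspace(-200.0, 200.0, 41)
data = gz(x_obs, 60.0, 25.0) + rng.normal(0, 2e-8, x_obs.size)   # synthetic "observations"
sigma = 2e-8                                                      # assumed data noise [m/s^2]

def log_like(depth, radius):
    if not (5.0 < depth < 200.0 and 1.0 < radius < depth):        # crude uniform prior bounds
        return -np.inf
    return -0.5 * np.sum(((data - gz(x_obs, depth, radius)) / sigma) ** 2)

# Metropolis sampling of (depth, radius)
samples, state = [], np.array([100.0, 10.0])
ll = log_like(*state)
for _ in range(20000):
    prop = state + rng.normal(0, [3.0, 1.0])
    ll_prop = log_like(*prop)
    if np.log(rng.random()) < ll_prop - ll:
        state, ll = prop, ll_prop
    samples.append(state.copy())
samples = np.array(samples[5000:])                                # discard burn-in

# occupancy probability at one subsurface point (x = 0, z = 70 m): fraction of
# sampled cylinders that contain it
z_query = 70.0
inside = (z_query - samples[:, 0]) ** 2 < samples[:, 1] ** 2
print(f"posterior means: depth {samples[:, 0].mean():.1f} m, radius {samples[:, 1].mean():.1f} m")
print(f"occupancy probability at (0, {z_query:.0f} m): {inside.mean():.2f}")
```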


Author(s):  
KASPAR RIESEN ◽  
HORST BUNKE

Graphs provide us with a powerful and flexible representation formalism for pattern classification. Many classification algorithms have been proposed in the literature. However, the vast majority of these algorithms rely on vectorial data descriptions and cannot directly be applied to graphs. Recently, there has been growing interest in graph kernel methods. Graph kernels aim at bridging the gap between the high representational power and flexibility of graphs and the large amount of algorithms available for object representations in terms of feature vectors. In the present paper, we propose an approach transforming graphs into n-dimensional real vectors by means of prototype selection and graph edit distance computation. This approach allows one to build graph kernels in a straightforward way. It is not only applicable to graphs, but also to other kinds of symbolic data in conjunction with any kind of dissimilarity measure. Thus it is characterized by a high degree of flexibility. With several experimental results, we prove the robustness and flexibility of our new method and show that our approach outperforms other graph classification methods on several graph data sets of diverse nature.
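A minimal sketch of the prototype-based embedding idea: each graph is mapped to the vector of its graph edit distances to a small set of prototype graphs, after which any vector-space classifier applies. The toy graphs, the naive "first graphs of each class" prototype selection and the 1-NN classifier are illustrative assumptions, not the paper's experimental setup.

```python
import networkx as nx
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def embed(graph, prototypes):
    """Embed a graph as its vector of graph edit distances to the prototypes."""
    return np.array([nx.graph_edit_distance(graph, p) for p in prototypes])

# tiny toy data set: cycles (class 0) vs. paths (class 1)
train_graphs = [nx.cycle_graph(n) for n in (3, 4, 5)] + [nx.path_graph(n) for n in (3, 4, 5)]
train_labels = [0, 0, 0, 1, 1, 1]

prototypes = [train_graphs[0], train_graphs[1], train_graphs[3], train_graphs[4]]  # naive selection
X_train = np.array([embed(g, prototypes) for g in train_graphs])

clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, train_labels)
test_graph = nx.cycle_graph(6)
print(clf.predict([embed(test_graph, prototypes)]))   # predicts class 0 (cycle) for an unseen cycle graph
```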

