GNSS Position Time Series
Recently Published Documents


TOTAL DOCUMENTS: 19 (five years: 12)

H-INDEX: 4 (five years: 2)

2021 ◽  
Vol 1 (2) ◽  
Author(s):  
Dinh Trong TRAN ◽  
Quoc Long NGUYEN ◽  
Dinh Huy NGUYEN

In processing position time series of crustal deformation monitoring stations observed by continuous GNSS, it is very important to determine the motion model in order to accurately estimate the displacement velocity and other movements in the time series. This paper proposes (1) a general geometric model for analyzing GNSS position time series, including the common phenomena of linear trend, seasonal terms, jumps, and post-seismic deformation; and (2) an approach for directly estimating the time decay of post-seismic deformation from the GNSS position time series itself, which is normally determined from seismic models or the physical process of seismicity. The model and approach are tested on synthetic position time series, for which the calculation results show that the estimated parameters match the given parameters. They were also applied to real data, namely the GNSS position time series of 4 CORS stations in Vietnam; the estimated velocities of these stations, DANA (n, e, u = -9.5, 31.5, 1.5 mm/year), HCMC (n, e, u = -9.5, 26.2, 1.9 mm/year), NADI (n, e, u = -10.6, 31.5, -13.4 mm/year), and NAVI (n, e, u = -13.9, 32.8, -1.1 mm/year), are similar to previous studies.
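The general geometric model described above can be sketched as follows; the function name, parameter names, and values are illustrative, not the paper's notation:

```python
import numpy as np

def position_model(t, v, a1, b1, jumps, eq_epoch, amp, tau):
    """Synthetic GNSS position (mm) at decimal-year epochs t:
    linear trend + annual seasonal term + step offsets +
    logarithmic post-seismic decay with time constant tau (yr)."""
    y = v * t                                                      # linear trend
    y += a1 * np.sin(2 * np.pi * t) + b1 * np.cos(2 * np.pi * t)   # annual term
    for t_j, d_j in jumps:                                         # offsets (jumps)
        y += d_j * (t >= t_j)
    after = t >= eq_epoch                                          # post-seismic part
    y[after] += amp * np.log(1.0 + (t[after] - eq_epoch) / tau)
    return y

t = np.arange(0.0, 5.0, 1.0 / 365.25)          # 5 years of daily epochs
y = position_model(t, v=31.5, a1=2.0, b1=1.0,
                   jumps=[(2.5, 10.0)], eq_epoch=2.5, amp=5.0, tau=0.1)
```

Fitting such a model to observed series by least squares then yields the velocity and offset estimates; estimating tau directly from the series itself is the paper's second contribution.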


2021 ◽  
Author(s):  
Shambo Bhattacharjee ◽  
Alvaro Santamaría-Gómez

<p>Long GNSS position time series typically contain offsets at rates between 1 and 3 per decade. Offsets can be classified according to whether their epoch is precisely known, from GNSS station log files or earthquake databases, or unknown. Very often, GNSS position time series contain offsets whose epochs are not known a priori, and an offset detection/removal operation therefore needs to be done in order to produce the continuous position time series needed for many applications in geodesy and geophysics. Offsets can be further classified into those having a physical origin related to an instantaneous displacement of the GNSS antenna phase center (from earthquakes, antenna changes, or even changes in the environment of the antenna) and spurious ones originating from the offset detection method being used (manual/supervised or automatic/unsupervised). Offsets due to changes of the antenna and its environment must be avoided by station operators as much as possible. Spurious offsets due to the detection method must be avoided by the time series analyst and are the focus of this work.</p><p>Even if manual offset detection by expert analysis is likely to perform better, automatic offset detection algorithms are extremely useful when processing massive sets of thousands of GNSS time series. Change point detection and cluster analysis algorithms can be used for detecting offsets in GNSS time series data, and R offers a number of libraries for both tasks. For example, the “bcp” package (Bayesian Analysis of Change Point Problems) detects change points in time series data, while the “dtwclust” package performs time series cluster analysis based on the Dynamic Time Warping algorithm. Our objective is to assess various open-source R libraries for automatic offset detection.</p>
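The R packages named above are the authors' actual tools; as a hedged illustration of what a change-point detector does, this Python sketch locates a single mean-shift offset by minimising the two-segment residual sum of squares (a toy stand-in, not the "bcp" algorithm):

```python
import numpy as np

def detect_offset(y):
    """Return the epoch index of the most likely single mean-shift
    (offset) by minimising the two-segment residual sum of squares."""
    best_k, best_cost = None, np.inf
    for k in range(2, len(y) - 1):             # candidate change points
        cost = ((y[:k] - y[:k].mean()) ** 2).sum() + \
               ((y[k:] - y[k:].mean()) ** 2).sum()
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 200),   # before the offset
                    rng.normal(5.0, 1.0, 200)])  # 5 mm step at epoch 200
k = detect_offset(y)
```

Real detectors must additionally handle multiple offsets, trends, seasonal signals, and coloured noise, which is exactly what makes the assessment above non-trivial.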


2021 ◽  
Author(s):  
Kevin Gobron ◽  
Paul Rebischung ◽  
Olivier de Viron ◽  
Michel Van Camp ◽  
Alain Demoulin

<p>Over the past two decades, numerous studies demonstrated that the stochastic variability in GNSS position time series – often referred to as noise – is both temporally and spatially correlated. The temporal correlation of this stochastic variability can be well approximated by a linear combination of white noise and power-law stochastic processes with different amplitudes. Although acknowledged in many geodetic studies, the presence of such power-law processes in GNSS position time series remains largely unexplained. Since these power-law processes are the primary source of uncertainty for velocity estimates, it is crucial to identify their origin(s) and to try to reduce their influence on position time series.</p><p>Using the Least-Squares Variance Component Estimation method, we analysed the influence of removing surface mass loading deformation on the stochastic properties of vertical land motion (VLM) time series. We used the position time series of over 10,000 globally distributed GNSS stations processed by the Nevada Geodetic Laboratory at the University of Nevada, Reno, and loading deformation time series computed by the Earth System Modelling (ESM) team at GFZ-Potsdam. Our results show that the stochastic parameters – namely the white noise amplitude, spectral index, and power-law noise amplitude – but also the spatial correlation, are systematically influenced by non-tidal atmospheric and oceanic loading deformation. The observed change in stochastic parameters often translates into a reduction of trend uncertainties, reaching up to 75% where the non-tidal atmospheric and oceanic loading deformation is largest.</p>
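The white-plus-power-law noise model described above can be simulated, for example, with Kasdin's fractional-differencing filter; this sketch only illustrates the noise model and is not the authors' estimation method:

```python
import numpy as np

def power_law_noise(n, alpha, rng):
    """Generate n samples of power-law noise with PSD proportional to
    1/f**alpha (alpha = 0 white, 1 flicker, 2 random walk), by applying
    Kasdin's fractional-differencing filter to white noise."""
    h = np.empty(n)
    h[0] = 1.0
    for j in range(1, n):                      # recursive filter coefficients
        h[j] = h[j - 1] * (j - 1 + alpha / 2.0) / j
    w = rng.standard_normal(n)
    return np.convolve(h, w)[:n]               # coloured noise series

rng = np.random.default_rng(1)
flicker = power_law_noise(2000, alpha=1.0, rng=rng)
white = rng.standard_normal(2000)
series = 0.5 * white + 1.0 * flicker           # white + power-law mixture
```

Variance component estimation then amounts to solving for the two amplitudes (0.5 and 1.0 here) and the spectral index alpha from an observed series.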


2020 ◽  
Vol 224 (1) ◽  
pp. 257-270
Author(s):  
S M Khazraei ◽  
A R Amiri-Simkooei

SUMMARY It is well known that unmodelled offsets in Global Navigation Satellite System (GNSS) position time-series can introduce biases into the station velocities. Although large offsets are usually reported or can be visually detected, automated offset detection algorithms require further investigation. The problem is still challenging because (small) geophysical offsets are usually buried in coloured noise and remain undetected. An offset detection algorithm has recently been proposed that can detect and estimate offsets in both univariate and multivariate analyses. Although efficient at truly detecting offsets, this method still suffers from a high rate of falsely detected offsets. To improve the offset detection performance, we attempt to stabilize the offset power spectrum so as to reduce the number of false detections. Spline function theory is adopted to smooth the power spectrum. The algorithm modified using the spline functions, referred to as As-mode, is compared with its original counterpart, called A-mode. GNSS position time-series consisting of a linear trend, seasonal signals, offsets, and white plus coloured noise are simulated for the numerical comparison. The overall performance of the algorithm is significantly improved by the As-mode algorithm. The multivariate analysis shows that the percentage of truly detected offsets (true positives, TP) increases from 52.9 per cent for A-mode to 61.1 per cent for As-mode, while the percentage of falsely detected offsets (false positives, FP) is reduced from 40.6 per cent to 29.8 per cent. The algorithm was also tested on the DOGEx data set. The results indicate that the proposed method outperforms the existing solutions, with TP, FP and false negative (FN) rates of 33.3 per cent, 32.3 per cent and 34.4 per cent, respectively. Also, for 90 per cent of the stations, the estimated velocities fall within 0.8 mm yr⁻¹ of the simulated values.
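The spectrum-smoothing idea can be illustrated as follows; the synthetic "offset power spectrum", the smoothing factor, and the injected peak are all assumptions, with scipy's UnivariateSpline standing in for the paper's spline functions (this is not the As-mode algorithm itself):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(2)
k = np.arange(1.0, 201.0)                      # candidate offset epochs
baseline = 0.5 + 0.2 * np.sin(k / 30.0)       # smooth spectrum floor
spectrum = baseline + 0.05 * rng.standard_normal(200)
spectrum[120] += 0.8                           # inject one genuine offset peak

smooth = UnivariateSpline(k, spectrum, s=2.0)(k)  # smoothing spline fit
residual = spectrum - smooth                   # peaks above the smoothed floor
```

Thresholding `residual` rather than the raw spectrum suppresses isolated noise wiggles, which is the mechanism by which the false-positive rate drops.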


2020 ◽  
Vol 12 (18) ◽  
pp. 2961 ◽  
Author(s):  
Riccardo Lanari ◽  
Manuela Bonano ◽  
Francesco Casu ◽  
Claudio De Luca ◽  
Michele Manunta ◽  
...  

We present in this work an advanced processing pipeline for continental-scale differential synthetic aperture radar interferometry (DInSAR) deformation time series generation, based on the parallel small baseline subset (P-SBAS) approach and on the joint exploitation of Sentinel-1 (S-1) interferometric wide swath (IWS) SAR data, continuous global navigation satellite system (GNSS) position time series, and cloud computing (CC) resources. We first briefly describe the basic rationale of the adopted P-SBAS processing approach, tailored to deal with S-1 IWS SAR data and to be implemented in a CC environment, highlighting the innovative solutions introduced in the presented processing chain. These mainly consist of a series of procedures that properly exploit the available GNSS time series with the aim of identifying and filtering out possible residual atmospheric artifacts that may affect the DInSAR measurements. Moreover, significant efforts have been made to improve the automation and robustness of the P-SBAS processing pipeline, which are crucial issues for interferometric continental-scale analysis. A massive experimental analysis is then presented, in which we exploit: (i) the whole archive of S-1 IWS SAR images acquired over a large portion of Europe from descending orbits, (ii) the continuous GNSS position time series provided by the Nevada Geodetic Laboratory at the University of Nevada, Reno, USA (UNR-NGL) for the investigated area, and (iii) the ONDA platform, one of the Copernicus Data and Information Access Services (DIAS). The achieved results demonstrate the capability of the proposed solution to successfully retrieve the DInSAR time series for such a huge area, opening new scenarios for the analysis and interpretation of these ground deformation measurements.
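One common way GNSS position time series can calibrate wide-area DInSAR measurements is to fit and remove a long-wavelength ramp from the InSAR-minus-GNSS differences at station pixels; this hedged sketch with synthetic data illustrates that general idea and is not necessarily the authors' filtering procedure:

```python
import numpy as np

rng = np.random.default_rng(6)
xy = rng.random((20, 2)) * 100.0               # station coordinates (km)
gnss = rng.standard_normal(20)                 # "true" displacement (mm)
ramp = 0.05 * xy[:, 0] - 0.02 * xy[:, 1] + 3.0 # long-wavelength artifact
insar = gnss + ramp + 0.1 * rng.standard_normal(20)

A = np.column_stack([np.ones(20), xy])         # design matrix [1, x, y]
coef, *_ = np.linalg.lstsq(A, insar - gnss, rcond=None)
corrected = insar - A @ coef                   # ramp-corrected InSAR values
```

The fitted plane, evaluated over the whole grid, would then be subtracted from every pixel of the deformation map, not only at the station pixels.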


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2298 ◽  
Author(s):  
Wudong Li ◽  
Weiping Jiang ◽  
Zhao Li ◽  
Hua Chen ◽  
Qusen Chen ◽  
...  

Removal of the common mode error (CME) is very important for the investigation of Global Navigation Satellite System (GNSS) errors and for the estimation of an accurate GNSS velocity field for geodynamic applications. The commonly used spatiotemporal filtering methods normally process evenly spaced time series without missing data. In this article, we present variational Bayesian principal component analysis (VBPCA) to estimate and extract the CME from incomplete GNSS position time series. The VBPCA method naturally handles missing data in the Bayesian framework and utilizes a variational expectation-maximization iterative algorithm to search each principal subspace. Moreover, it automatically selects the optimal number of principal components for data reconstruction and avoids overfitting. To evaluate the performance of the VBPCA algorithm for extracting the CME, 44 continuous GNSS stations located in Southern California were selected. Compared to previous approaches, VBPCA achieves better performance, with lower CME relative errors when more missing data exist. Since the first principal component (PC) extracted by VBPCA is remarkably larger than the other components, and its corresponding spatial response is nearly uniformly distributed, we use only the first PC and its eigenvector to reconstruct the CME for each station. After filtering out the CME, the interstation correlation coefficients are significantly reduced from 0.43, 0.46, and 0.38 to 0.11, 0.10, and 0.08 for the north, east, and up (NEU) components, respectively. The root mean square (RMS) values of the residual time series and the colored noise amplitudes for the NEU components are also greatly suppressed, with average reductions of 27.11%, 28.15%, and 23.28% for the former, and 49.90%, 54.56%, and 49.75% for the latter. Moreover, the velocity estimates are more reliable and precise after removing the CME, with average uncertainty reductions of 51.95%, 57.31%, and 49.92% for the NEU components, respectively. All these results indicate that the VBPCA method is an alternative and efficient way to extract the CME from regional GNSS position time series in the presence of missing data. Further work is still required to consider the effect of formal errors on the CME extraction during the VBPCA implementation.
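The first-PC reconstruction step described above can be sketched as follows, on synthetic gap-free data and with plain SVD in place of the variational Bayesian machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
n_epochs, n_stations = 1000, 8
cme = rng.standard_normal(n_epochs)                  # common-mode signal (mm)
noise = 0.3 * rng.standard_normal((n_epochs, n_stations))
X = cme[:, None] + noise                             # synthetic residual series

Xc = X - X.mean(axis=0)                              # remove station means
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
cme_est = np.outer(U[:, 0] * s[0], Vt[0])            # rank-1 (first PC) part
filtered = Xc - cme_est                              # series after CME removal
```

Because the first PC dominates and its spatial eigenvector is nearly uniform, the rank-1 reconstruction removes most of the interstation correlation, mirroring the reductions reported above.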


2020 ◽  
Author(s):  
Wudong Li ◽  
Weiping Jiang

<p>Removal of the Common Mode Error (CME) is very important for the investigation of Global Navigation Satellite System (GNSS) technique errors and for the estimation of an accurate GNSS velocity field for geodynamic applications. The commonly used spatiotemporal filtering methods cannot accommodate missing data, or have high computational complexity when dealing with incomplete data. This research presents Expectation-Maximization Principal Component Analysis (EMPCA) to estimate and extract the CME from incomplete GNSS position time series. The EMPCA method utilizes an Expectation-Maximization iterative algorithm to search each principal subspace, which allows extracting a few eigenvectors and eigenvalues without computing the covariance matrix or its eigenvalue decomposition. Moreover, it straightforwardly handles missing data by Maximum Likelihood Estimation (MLE) at each iteration. To evaluate the performance of the EMPCA algorithm for extracting the CME, 44 continuous GNSS stations located in Southern California were selected. Compared to previous approaches, EMPCA achieves better performance using less computational time and exhibits slightly lower CME relative errors when more missing data exist. Since the first Principal Component (PC) extracted by EMPCA is remarkably larger than the other components, and its corresponding spatial response is nearly uniformly distributed, we use only the first PC and its eigenvector to reconstruct the CME for each station. After filtering out the CME, the interstation correlation coefficients are significantly reduced from 0.46, 0.49, and 0.42 to 0.18, 0.17, and 0.13 for the North, East, and Up (NEU) components, respectively. The Root Mean Square (RMS) values of the residual time series and the colored noise amplitudes for the NEU components are also greatly suppressed, with average reductions of 25.9%, 27.4%, and 23.3% for the former, and 49.7%, 53.9%, and 48.9% for the latter. Moreover, the velocity estimates are more reliable and precise after removing the CME, with average uncertainty reductions of 52.3%, 57.5%, and 50.8% for the NEU components, respectively. All these results indicate that the EMPCA method is an alternative and more efficient way to extract the CME from regional GNSS position time series in the presence of missing data. Further work is still required to consider the effect of formal errors on the CME extraction during the EMPCA implementation.</p>
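The core EM idea, alternating least-squares updates that use only the observed entries, can be sketched as follows; this is a minimal one-component illustration on synthetic data, not the paper's full EMPCA algorithm:

```python
import numpy as np

def empca_first_pc(X, n_iter=50):
    """First principal component of gappy data (NaNs) by alternating
    least squares over observed entries only: the core EM idea, with no
    covariance matrix or eigenvalue decomposition computed."""
    M = ~np.isnan(X)
    Xf = np.where(M, X, 0.0)                   # zero-fill gaps for the sums
    v = np.ones(X.shape[1]) / np.sqrt(X.shape[1])
    for _ in range(n_iter):
        # E-step: score of each epoch from its observed stations
        a = (Xf * v).sum(axis=1) / (M * v**2).sum(axis=1)
        # M-step: eigenvector from all epochs observing each station
        v = (Xf * a[:, None]).sum(axis=0) / (M * a[:, None]**2).sum(axis=0)
        v /= np.linalg.norm(v)
    return a, v

rng = np.random.default_rng(4)
n, p = 500, 6
cme = rng.standard_normal(n)
X = cme[:, None] + 0.3 * rng.standard_normal((n, p))
gaps = rng.random((n, p)) < 0.2                # 20% missing at random
gaps[gaps.all(axis=1)] = False                 # keep >= 1 obs per epoch
X[gaps] = np.nan
scores, evec = empca_first_pc(X)
```

The CME at each station is then reconstructed as `scores * evec[j]`, exactly the first-PC reconstruction described above, but computed without ever forming a complete data matrix.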


2020 ◽  
Vol 12 (6) ◽  
pp. 992 ◽  
Author(s):  
Kunpu Ji ◽  
Yunzhong Shen ◽  
Fengwei Wang

Daily position time series derived from Global Navigation Satellite System (GNSS) observations contain nonlinear signals that are suitably extracted using wavelet analysis. Considering that formal errors are also provided in daily GNSS solutions, a weighted wavelet analysis is proposed in this contribution, in which the weight factors are constructed from the formal errors. The proposed approach is applied to the position time series of 27 permanent stations from the Crustal Movement Observation Network of China (CMONOC) and compared to traditional wavelet analysis. The results show that the proposed approach extracts signals more accurately than traditional wavelet analysis, with average error reductions of 13.24%, 13.53%, and 9.35% in the north, east, and up coordinate components, respectively. The results of 500 simulations indicate that the signals extracted by the proposed approach are closer to the true signals than those of traditional wavelet analysis.
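The weighting idea, with weight factors built from the daily formal errors, can be illustrated on synthetic data; here a simple weighted running mean stands in for the paper's wavelet machinery, so this only shows why down-weighting noisy epochs helps:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(1000) / 365.25
sigma = np.where(rng.random(1000) < 0.1, 2.0, 0.5)   # daily formal errors (mm)
true = 3.0 * np.sin(2 * np.pi * t)                   # annual nonlinear signal
y = true + sigma * rng.standard_normal(1000)

w = 1.0 / sigma**2                                   # weights from formal errors
half = 15                                            # 31-day smoothing window

def running_mean(values, weights=None):
    out = np.empty(len(values))
    for i in range(len(values)):
        lo, hi = max(0, i - half), i + half + 1
        out[i] = np.average(values[lo:hi],
                            weights=None if weights is None else weights[lo:hi])
    return out

weighted = running_mean(y, w)       # formal-error-weighted extraction
unweighted = running_mean(y)        # plain (equal-weight) extraction
```

Epochs with large formal errors contribute little to the weighted estimate, so the extracted signal tracks the true one more closely, which is the same mechanism behind the error reductions reported above.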

