Analysis of the error distribution density convergence with its orthogonal decomposition in navigation measurements

2021 ◽  
Vol 2090 (1) ◽  
pp. 012126
Author(s):  
Igor Vorokhobin ◽  
Iryna Zhuravska ◽  
Igor Burmaka ◽  
Inessa Kulakovska

Abstract Modern trends towards the expansion of online services lead to the need to determine the location of customers, who may also be on a moving object (a vessel, aircraft, or other vehicle – hereinafter the “Vehicle”). This task is of particular relevance in the fields of medicine – when organizing video conferencing for diagnosis and/or remote rehabilitation, e.g., for post-infarction and post-stroke patients using wireless devices – and in education – when organizing distance learning and administering exams online. For the analysis of statistical material on the accuracy of determining the location of a moving object, the Gaussian normal distribution is usually used. However, if the histogram of the sample has “heavier tails”, describing the latitude and longitude errors with a Gaussian function is not correct, and an alternative approach is required. To describe the random errors of navigation measurements, mixed probability distribution laws of two types can be used: the first type is the generalized Cauchy distribution, the second is the Pearson type VII distribution. This paper shows that the decomposition of the error distribution density can be obtained using orthogonal Hermite polynomials, without knowing its analytical expression. Our numerical results show that approximating the distribution function with a Gram-Charlier series of type A makes it possible to apply the orthogonal decomposition to describe the density of errors in navigation measurements. To compare the curves of the density and its orthogonal decomposition, the density values were calculated. The results show that the normalized density and its orthogonal decomposition practically coincide.
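To make the Gram-Charlier type A construction concrete, here is a minimal Python sketch (not the authors' code): it approximates a density from the sample mean, standard deviation, skewness and excess kurtosis using the probabilists' Hermite polynomials He3 and He4. The heavy-tailed Student-t sample stands in for navigation errors and is purely illustrative.

```python
import numpy as np
from scipy.stats import norm, skew, kurtosis

def gram_charlier_a(x, mean, std, skewness, excess_kurt):
    """Gram-Charlier type A approximation of a density from its first four
    moments, using probabilists' Hermite polynomials He3 and He4."""
    z = (x - mean) / std
    he3 = z**3 - 3 * z                 # He3(z)
    he4 = z**4 - 6 * z**2 + 3          # He4(z)
    correction = 1.0 + (skewness / 6.0) * he3 + (excess_kurt / 24.0) * he4
    return norm.pdf(z) * correction / std

# Illustrative heavy-tailed "error" sample (Student-t), not real navigation data
rng = np.random.default_rng(0)
errors = rng.standard_t(df=6, size=5000)
xs = np.linspace(-5, 5, 201)
approx_density = gram_charlier_a(xs, errors.mean(), errors.std(ddof=1),
                                 skew(errors), kurtosis(errors))
```

For strongly heavy-tailed samples the truncated expansion can dip below zero in the tails, which is one reason the mixed Cauchy and Pearson type VII laws mentioned above are attractive alternatives.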

1999 ◽  
Vol 32 (4) ◽  
pp. 730-735 ◽  
Author(s):  
F. Sánchez-Bajo ◽  
F. L. Cumbrera

In recent years, several profile-shape functions have been successfully used in X-ray powder diffraction studies. Here, a new profile function for approximating X-ray diffraction peaks is proposed. This model, based on a Gaussian function multiplied by a correction factor in the form of a series expansion in Hermite polynomials, can be employed in cases where peak asymmetries are present. The function has been tested on samples of α-Al2O3 and 9-YSZ (yttria-stabilized zirconia), yielding generally satisfactory results.
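A short sketch may help picture the proposed profile shape. The following Python function is a hypothetical implementation truncated at He3 and He4: it multiplies a Gaussian by a Hermite-polynomial correction factor. The parameter names, truncation order and synthetic peak are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def hermite_corrected_gaussian(x, x0, w, area, c3=0.0, c4=0.0):
    """Peak profile: a Gaussian multiplied by a Hermite-series correction
    factor (truncated here at He3 and He4) to allow for peak asymmetry."""
    z = (x - x0) / w
    gauss = np.exp(-0.5 * z**2) / (w * np.sqrt(2.0 * np.pi))
    he3 = z**3 - 3 * z
    he4 = z**4 - 6 * z**2 + 3
    return area * gauss * (1.0 + c3 * he3 + c4 * he4)

# Fit the profile to a (synthetic) slightly asymmetric diffraction peak
two_theta = np.linspace(34.0, 36.0, 200)                       # degrees
peak = hermite_corrected_gaussian(two_theta, 35.0, 0.15, 1.0, c3=0.05)
popt, _ = curve_fit(hermite_corrected_gaussian, two_theta, peak,
                    p0=(35.0, 0.2, 1.0, 0.0, 0.0))
```

The odd-order term (He3) is what captures the asymmetry; with c3 = c4 = 0 the profile reduces to a plain Gaussian.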


2010 ◽  
Vol 439-440 ◽  
pp. 1153-1158
Author(s):  
Pan Xiong ◽  
Shuan Li Yuan ◽  
Shao Jie Cheng

The distribution of observation errors is conventionally determined from their magnitudes by distribution-fitting tests or graphical methods, taking into account the results, the sample size, the interval density, etc. It is therefore difficult to obtain the specific type of error distribution of observations by conventional methods. Analyzing the actual behaviour of observation errors through their statistical properties, this paper proposes the use of an unsymmetrical distribution to express the true distribution of the observation errors. The P-norm distribution is a generalized form of a family of error distributions, and from the statistical properties of random errors we can arrive at an unsymmetrical P-norm distribution that matches the practical occurrence of random errors. The common P-norm distribution is a special case of this distribution. This paper derives the density function of the unsymmetrical P-norm distribution and obtains the statistical properties of the distribution function and the precision evaluation index. By choosing an appropriate value of p, we can come closer to the distribution function of the true error distribution.
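As a rough illustration of the idea (the paper's exact parameterization may well differ), the sketch below defines an asymmetric p-norm style density with a separate scale parameter on each side of the mode, normalised so that the two halves integrate to one; with equal scales and p = 2 it has a Gaussian shape, and with p = 1 a Laplace shape.

```python
import numpy as np
from scipy.special import gamma

def asymmetric_pnorm_pdf(x, mu, sigma_left, sigma_right, p):
    """Asymmetric p-norm (generalized Gaussian) density: exp(-|z|^p) with a
    different scale on each side of the mode mu, normalised to unit area."""
    x = np.asarray(x, dtype=float)
    c = p / ((sigma_left + sigma_right) * gamma(1.0 / p))
    z = np.where(x < mu, (mu - x) / sigma_left, (x - mu) / sigma_right)
    return c * np.exp(-z**p)

# A right-skewed error density; sigma_left < sigma_right, p between 1 and 2
xs = np.linspace(-5.0, 5.0, 401)
pdf = asymmetric_pnorm_pdf(xs, mu=0.0, sigma_left=0.8, sigma_right=1.4, p=1.5)
```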


2018 ◽  
Vol 22 (10) ◽  
pp. 5243-5257 ◽  
Author(s):  
Simon Etter ◽  
Barbara Strobl ◽  
Jan Seibert ◽  
H. J. Ilja van Meerveld

Abstract. Previous studies have shown that hydrological models can be parameterised using a limited number of streamflow measurements. Citizen science projects can collect such data for otherwise ungauged catchments but an important question is whether these observations are informative given that these streamflow estimates will be uncertain. We assess the value of inaccurate streamflow estimates for calibration of a simple bucket-type runoff model for six Swiss catchments. We pretended that only a few observations were available and that these were affected by different levels of inaccuracy. The level of inaccuracy was based on a log-normal error distribution that was fitted to streamflow estimates of 136 citizens for medium-sized streams. Two additional levels of inaccuracy, for which the standard deviation of the error distribution was divided by 2 and 4, were used as well. Based on these error distributions, random errors were added to the measured hourly streamflow data. New time series with different temporal resolutions were created from these synthetic streamflow time series. These included scenarios with one observation each week or month, as well as scenarios that are more realistic for crowdsourced data that generally have an irregular distribution of data points throughout the year, or focus on a particular season. The model was then calibrated for the six catchments using the synthetic time series for a dry, an average and a wet year. The performance of the calibrated models was evaluated based on the measured hourly streamflow time series. The results indicate that streamflow estimates from untrained citizens are not informative for model calibration. However, if the errors can be reduced, the estimates are informative and useful for model calibration. As expected, the model performance increased when the number of observations used for calibration increased. The model performance was also better when the observations were more evenly distributed throughout the year. This study indicates that uncertain streamflow estimates can be useful for model calibration but that the estimates by citizen scientists need to be improved by training or more advanced data filtering before they are useful for model calibration.
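The error-perturbation and thinning of the measured hydrograph described above can be sketched roughly as follows; the sigma values, the synthetic hydrograph and the weekly sampling are placeholders, not the error level actually fitted to the 136 citizen estimates.

```python
import numpy as np
import pandas as pd

def corrupt_and_thin(q_hourly, sigma_log, n_obs, seed=None):
    """Make a synthetic 'citizen' series: multiply measured hourly flows by
    log-normal errors, then keep only n_obs randomly chosen observations."""
    rng = np.random.default_rng(seed)
    noisy = q_hourly * rng.lognormal(mean=0.0, sigma=sigma_log, size=len(q_hourly))
    keep = np.sort(rng.choice(len(q_hourly), size=n_obs, replace=False))
    return noisy.iloc[keep]

# Placeholder year of hourly flow; the study used measured streamflow instead
hours = pd.date_range("2015-01-01", periods=24 * 365, freq="h")
q = pd.Series(1.0 + 0.5 * np.sin(np.arange(len(hours)) / 500.0), index=hours)

# Full, halved and quartered error levels (illustrative sigmas), ~weekly data
for sigma in (0.6, 0.3, 0.15):
    weekly_citizen_obs = corrupt_and_thin(q, sigma, n_obs=52, seed=1)
```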


2004 ◽  
Vol 10 (6) ◽  
pp. 894-898 ◽  
Author(s):  
ANTON D. HINTON-BAYRE

It is important to preface this piece by advising the reader that the author is not writing from the point of view of a statistician, but rather that of a user of reliable change. The author was invited to comment following the publication of an original inquiry concerning Reliable Change Index (RCI) formulae (Hinton-Bayre, 2000) and after acting as a reviewer for the current Maassen paper (this issue, pp. 888–893). Having been a bystander in the development of various RCI methods, this comment serves to represent the struggle of a non-statistician to understand the relevant statistical issues and apply them to clinical decisions. When I first stumbled across the ‘classical’ RCI attributed to Jacobson and Truax (1991) (Maassen, this issue, Equation 4), I was quite excited and immediately applied the formula to my own data (Hinton-Bayre et al., 1999). Later, upon reading the Temkin et al. (1999) paper I commented on what seemed to be an inconsistency in their calculation of the error term (Hinton-Bayre, 2000). My “confusion” as Maassen suggests was derived from the fact that I noted the error term used was based on the standard deviation of the difference scores (Maassen, Expression 5*) rather than the Jacobson and Truax formula (Maassen, Expression 4). This apparent anomaly was subsequently addressed when Temkin et al. (2000) explained they had employed the error term proposed by Christensen and Mendoza (1986) (Maassen, Expression 5). My concern with the Maassen manuscript was that it initially appeared two separate values could be derived through using expressions 5 and 5* using the Temkin et al. (1999) data. This suggested there might be four (expressions 4, 5, 5*, and 6), rather than three, ways to calculate the reliable change error term based on a null hypothesis model. Once again I was confused. Only very recently did I discover that expressions 5 and 5* yield identical results when applied to the same data set (N.R. Temkin, personal communication) and when estimated variances are used (G. Maassen, personal communication). The reason for expressions 5 and 5* yielding slightly different error term values using the Temkin et al. (1999) data was due to use of nonidentical samples for parameter estimation. The use of non-identical samples came to light in the review process of the present Maassen paper—which Maassen now indicates in an author's note. Thus there were indeed only three approaches to consider (Expressions 4, 5, & 6). Nonetheless, Maassen maintains (personal communication) that Expression 5, as elaborated by Christensen and Mendoza (1986), represents random errors comprising the error distribution of a given person, whereas Expression 5* refers to the error distribution of a given sample. While it seems clear on the surface that the expressions represent separate statistical entities, it remains unclear to the present author how these expressions can then yield identical values when applied to test–retest data derived from a single normative group. Unfortunately however, my confusion does not stop there.
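For orientation, the two error terms at the centre of this exchange can be written out side by side. The sketch below shows the classical term built from the standard error of measurement and the term built from the standard deviation of difference scores; the correspondence with Maassen's numbered expressions is only suggestive here, and the test scores are invented. With equal test and retest variances the two error terms coincide, which is consistent with the identical results noted above.

```python
import math

def rci_sem_based(x1, x2, sd1, r_xx):
    """Classical RCI (Jacobson & Truax style): the error term is the standard
    error of a difference built from the SEM of a single assessment."""
    sem = sd1 * math.sqrt(1.0 - r_xx)          # standard error of measurement
    s_diff = math.sqrt(2.0 * sem**2)           # SE of the difference score
    return (x2 - x1) / s_diff

def rci_difference_sd(x1, x2, sd1, sd2, r12):
    """RCI with the error term taken from the SD of difference scores."""
    s_diff = math.sqrt(sd1**2 + sd2**2 - 2.0 * r12 * sd1 * sd2)
    return (x2 - x1) / s_diff

# With sd1 == sd2 the two error terms are algebraically identical
print(rci_sem_based(50, 42, sd1=10, r_xx=0.8))             # about -1.26
print(rci_difference_sd(50, 42, sd1=10, sd2=10, r12=0.8))  # about -1.26
```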


Author(s):  
BO YANG ◽  
JAN FLUSSER ◽  
TOMÁŠ SUK

Steerability is a useful and important property of "kernel" functions. It enables certain complicated operations involving orientation manipulation on images to be executed with high efficiency. We therefore focus our attention on the steerability of Hermite polynomials and of their versions modulated by the Gaussian function with different powers, defined as the Hermite kernel. Certain special cases of this kernel, namely Hermite polynomials, Hermite functions and Gaussian derivatives, are discussed in detail. These cases demonstrate that the Hermite kernel is a powerful and effective tool for image processing. Furthermore, the steerability of the Hermite kernel is proved with the help of a property of Hermite polynomials that describes the product of two Hermite polynomials after coordinate rotation. Consequently, the Hermite kernel of any order inherits steerability. Moreover, sets of explicit interpolation functions and basis functions can be obtained directly. We provide some examples to verify the steerability of the Hermite kernel. Experimental results show the effectiveness of steerability and its potential applications in the fields of image processing and computer vision.
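Steerability in its simplest instance, the first-order Gaussian derivative (one special case of the Hermite kernel), can be shown in a few lines; the sketch below is illustrative and does not reproduce the general construction or the interpolation functions derived in the paper.

```python
import numpy as np

def gaussian_derivative_basis(size=15, sigma=2.0):
    """Basis filters G_x and G_y: first-order derivatives of a 2-D Gaussian.
    Every rotated copy is a linear combination of these two filters."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return -xx / sigma**2 * g, -yy / sigma**2 * g

def steer_first_order(gx, gy, theta):
    """Synthesise the derivative along direction theta from the basis:
    G_theta = cos(theta) * G_x + sin(theta) * G_y."""
    return np.cos(theta) * gx + np.sin(theta) * gy

gx, gy = gaussian_derivative_basis()
g45 = steer_first_order(gx, gy, np.pi / 4.0)   # kernel oriented at 45 degrees
```

Higher-order Hermite kernels require more basis filters, but the same linear-combination principle applies.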


2018 ◽  
Author(s):  
Simon Etter ◽  
Barbara Strobl ◽  
Jan Seibert ◽  
Ilja van Meerveld

Abstract. Previous studies have shown that a hydrological model can be parameterized using only a limited number of streamflow measurements for otherwise ungauged basins. Citizen science projects can collect such data, but an important question is whether these observations are informative given that these streamflow estimates will be uncertain. We address the value of inaccurate streamflow estimates for the calibration of a simple bucket-type runoff model for six Swiss catchments. We pretended that only a few observations were available and that these were affected by different levels of inaccuracy. The initial inaccuracy level was based on a log-normal error distribution that was fitted to streamflow estimates of 136 citizens for medium-sized streams. Two additional levels of inaccuracy, for which the standard deviation of the error distribution was divided by two and four, were used as well. Based on these error distributions, random errors were added to the measured hourly streamflow data. New time series with different temporal resolutions were created from these synthetic time series. These included scenarios with one observation each week or month and scenarios that are more realistic for crowdsourced datasets, with irregular distributions throughout the year or a focus on spring or summer. The model was then calibrated for the six catchments using the synthetic time series for a dry, an average and a wet year. The performance of the calibrated models was evaluated based on the measured hourly streamflow time series. The results indicate that streamflow estimates from untrained citizens are not informative for model calibration. However, if the errors can be reduced, the estimates are informative and useful for model parameterization. As expected, the model performance increased when the number of observations used for calibration increased. The model performance was also better when the observations were more evenly distributed throughout the year. This study indicates that uncertain streamflow estimates can be useful for model calibration but that the estimates by citizen scientists need to be improved by training or more advanced data filtering before they are useful for model calibration.


Author(s):  
T. G. Aslanov ◽  
U. A. Musaeva

Objectives. The purpose of the study is to obtain an expression for determining the coordinates of the earthquake focus using the ellipsoid method, as well as to test the possibility of using second-order figures (ellipsoids) for the initial determination of the coordinates of the earthquake hypocenter. Method. A comparative analysis of the probability density of errors in the hypocentral zone of the earth's surface is carried out for the method of spheres, the combined method of spheres, hyperboloid and ellipsoid, and the method of ellipsoids. Result. An expression for determining the coordinates of the earthquake focus by the method of ellipsoids is obtained, together with the density of the distribution of error probabilities in the determination of the earthquake hypocenter when calculated by the method of spheres, by the combined method of spheres, hyperboloid and ellipsoid, and by the method of ellipsoids. Conclusion. The ellipsoid-based methods for determining the hypocenter coordinates have larger errors than the method of spheres. This can be explained by the fact that in the sphere method three errors in the differences of seismic-wave travel times enter the determination of the hypocenter coordinates, while the ellipsoid method and the combined sphere, ellipsoid and hyperboloid method involve four such errors, which introduces additional errors into the resulting distribution. All the obtained error distributions have a form close to the Cauchy distribution.
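To make the comparison of location methods concrete, a generic least-squares location step under a homogeneous velocity model can be sketched as follows. This is an assumption-laden illustration of a sphere-style formulation (distance = velocity times travel time), not the authors' ellipsoid construction; the station geometry and velocity are invented.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_hypocenter(stations, t_arrival, v_p, start=(0.0, 0.0, 10.0, 0.0)):
    """Least-squares hypocenter (x, y, z) and origin time t0 from P-wave
    arrival times, assuming a homogeneous medium:
    |station_i - hypocenter| = v_p * (t_i - t0)."""
    stations = np.asarray(stations, dtype=float)
    t_arrival = np.asarray(t_arrival, dtype=float)

    def residuals(params):
        x, y, z, t0 = params
        dist = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
        return dist - v_p * (t_arrival - t0)

    return least_squares(residuals, x0=start).x

# Four surface stations (km), synthetic arrivals for an event at 12 km depth
sta = [(0, 0, 0), (30, 0, 0), (0, 30, 0), (30, 30, 0)]
true_hypo = np.array([12.0, 9.0, 12.0])
vp = 6.0                                                    # km/s
t = np.linalg.norm(np.asarray(sta) - true_hypo, axis=1) / vp + 5.0  # t0 = 5 s
print(locate_hypocenter(sta, t, vp))                        # ~[12, 9, 12, 5]
```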


Author(s):  
G. K. Aslanov ◽  
T. G. Aslanov

Objective. The study is aimed at determining how the average error in calculating the epicenter coordinates of an earthquake depends on errors in measuring the velocities of seismic waves for various methods of seismic event localization, and at investigating the error distribution of a method that determines the earthquake hypocenter coordinates using the Cassinian oval. Methods. The problem was solved using statistical methods: frequency and regression analyses, the means comparison method, and the uniform search method. Results. A relationship between the accuracy of seismic-wave velocity measurements and the accuracy of the computed epicenter coordinates was established for four different methods of calculating the earthquake hypocenter coordinates. A method for determining the earthquake hypocenter coordinates using a fourth-order figure, the Cassinian oval, was proposed. The error distribution density of the Cassinian oval method was compared with those of the other methods. Conclusion. The results obtained make it possible to choose one or another method for calculating the hypocenter coordinates depending on the specific area in which a seismic event occurred and the locations of seismic sensors.


1978 ◽  
Vol 48 ◽  
pp. 7-29
Author(s):  
T. E. Lutz

This review paper deals with the use of statistical methods to evaluate systematic and random errors associated with trigonometric parallaxes. First, systematic errors which arise when using trigonometric parallaxes to calibrate luminosity systems are discussed. Next, the determination of the external errors of parallax measurement is reviewed. Observatory corrections are discussed. Schilt’s point, that as the causes of these systematic differences between observatories are not known the computed corrections cannot be applied appropriately, is emphasized. However, modern parallax work is sufficiently accurate that it is necessary to determine observatory corrections if full use is to be made of the potential precision of the data. To this end, it is suggested that an experimental design established in advance is required. Past experience has shown that accidental overlap of observing programs will not suffice to determine meaningful observatory corrections.

