Formation of the Convergence Functions of Errors of Input Data of Measurement Systems Computing Components on the Basis of the Finite Automata Theory

2019 ◽  
pp. 37-40
Author(s):  
O. Krychevets

This paper presents the results of an investigation into the behavior of the functions that transform input data errors for different types of measurement systems' computing components, using generalized models of these components developed on the basis of the finite automata theory. It is shown that, depending on the kind and value of the input data error transformation function (the metrological condition of a computing component), the errors of measurement results obtained with the systems' measuring channels change in a determinate manner in both the static and the dynamic regimes of the computing components. The basic dependences of the measurement result errors on the input data errors and on the types of the input data transformation functions are determined, and the results of their calculation are given. The investigation results demonstrate a linear dependence of the measurement result errors on the input data errors ΔX(tn). In addition, calculating the transformation function f = ΔY(tn)/ΔX(tn) gives its steady-state value f = 1.0, i.e. the computing component neither transforms the input data error nor reverses its sign. For iterative procedures, the input data errors affect neither the final measurement result nor its accuracy: the measurement error values Δyn depend on the iteration number and decrease as it grows. Of particular interest is the behavior of the input data error transformation function: first, its values depend on the number of iterations; second, f < 1, which shows that the input data errors decrease with the increasing number of iterations; and third, the occurrence of values f = 0 indicates that the transformation function is able to "swallow up" the input data error by the end of the computational procedure. For linear-chain structures, the data show a predominantly linear dependence of the measurement error Δs on the input data error Δx, and no dependence of the chain's transformation function f on the input data errors Δx. For computing components with a cyclic structure, the dependence of the measurement errors Δt on the input data errors and the behavior of the transformation function ft/x are the same as for the above-mentioned components realizing iterative procedures; the difference is that components with a cyclic structure realize a (sub)space iteration as opposed to the time iteration of the components considered earlier. Computing components with a complicated structure (e.g. serial-cyclic, serial-parallel, etc.) demonstrate the dependence of the measurement errors on the input data errors that is specific to the linear link, which in such a structure is determinative for evaluating the measurement error; the input data error transformation function behaves similarly.
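The iterative case lends itself to a toy demonstration. Below is a minimal sketch in Python (an assumption of this example, not the paper's model: the input datum is taken to be the starting value of a convergent fixed-point iteration y(n+1) = cos(y(n))), showing the transformation function f = Δy(n)/Δx staying below 1 and decaying toward 0, i.e. the iteration eventually "swallows up" the input data error.

```python
# Minimal sketch: a convergent fixed-point iteration attenuates an error
# in its input datum (here, the starting value). Hypothetical example,
# not the paper's computing-component model.
import math

dx = 0.05                          # input data error Δx
y_clean, y_noisy = 0.7, 0.7 + dx   # clean vs. perturbed input datum

for n in range(1, 21):
    y_clean = math.cos(y_clean)
    y_noisy = math.cos(y_noisy)
    dy = abs(y_noisy - y_clean)    # measurement error Δy_n at iteration n
    f = dy / dx                    # error transformation function
    print(f"n={n:2d}  Δy={dy:.2e}  f={f:.2e}")
# f < 1 at every step and f -> 0: the input data error is progressively
# absorbed by the iteration, matching the behavior described above.
```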

2012 ◽  
Vol 241-244 ◽  
pp. 149-155
Author(s):  
Chuan Xing ◽  
Hai Zhang

A dodecahedron-based non-orthogonal redundant IMU configuration was selected as the model. To improve fusion accuracy, we propose an effective method for calculating measurement errors based on the correlation between measurement errors and fusion errors. The method considers the difference between the projection of the traditional data-fusion vector and the measurement results, and then converts the projection error into a measurement error. Combined with the optimal weighted least squares method, the measurement errors are used to generate an optimal weighting matrix that minimizes the data fusion errors. Simulations show that the fusion result of this method is more accurate than that of the traditional method.
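As a rough illustration of the fusion step, here is a sketch of optimal weighted least squares applied to a redundant six-axis layout. The geometry matrix H (a dodecahedron-style skewed arrangement with an assumed half-angle) and the per-axis noise levels are assumptions for the demonstration, not the paper's exact configuration or error model.

```python
# Hedged sketch: fuse redundant IMU axes by weighted least squares,
# omega_hat = (H^T W H)^-1 H^T W z, with W the inverse error covariance.
import numpy as np

alpha = np.deg2rad(31.72)          # assumed skew half-angle
s, c = np.sin(alpha), np.cos(alpha)
H = np.array([[ s, 0, c], [-s, 0, c],
              [ c, s, 0], [ c,-s, 0],
              [ 0, c, s], [ 0, c,-s]])        # six unit sensor axes

rng = np.random.default_rng(0)
omega_true = np.array([0.1, -0.2, 0.05])      # body rate, rad/s
sigma = np.array([1, 1, 4, 4, 1, 1]) * 1e-3   # per-axis noise std (assumed)
z = H @ omega_true + rng.normal(0, sigma)     # redundant measurements

W = np.diag(1.0 / sigma**2)                   # optimal weights = R^{-1}
omega_wls = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)
omega_ols = np.linalg.lstsq(H, z, rcond=None)[0]  # unweighted baseline
print("WLS error:", np.linalg.norm(omega_wls - omega_true))
print("OLS error:", np.linalg.norm(omega_ols - omega_true))
```

With unequal per-axis noise, the weighted solution consistently beats the unweighted one, which is the point of building the weighting matrix from the measurement errors.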


2005 ◽  
Vol 52 (6) ◽  
pp. 167-175 ◽  
Author(s):  
K. Beven

A consideration of model structural error leads to some particularly interesting tensions in the model calibration/conditioning process. In applying models we can usually only assess the total error on some output variable for which we have observations. This total error may arise due to input and boundary condition errors, model structural errors and error in the output observation itself (not only measurement error but also as a result of differences in meaning between what is modelled and what is measured). Statistical approaches to model uncertainty generally assume that the errors can be treated as an additive term on the (possibly transformed) model output. This allows for compensation of all the sources of error, as if the model predictions are correct and the total error can be treated as “measurement error.” Model structural error is not easily evaluated within this framework. An alternative approach that puts more emphasis on model evaluation and rejection is suggested. It is recognised that model success or failure within this framework will depend heavily on an assessment of both input data errors (the “perfect” model will not produce acceptable results if driven with poor input data) and effective observation error (including a consideration of the meaning of observed variables relative to those predicted by a model).
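The evaluation/rejection alternative can be sketched numerically. The following toy example (synthetic data and limits; an illustration of the idea, not Beven's published procedure) keeps only parameter sets whose simulated output stays within effective observation error bounds at every observation point.

```python
# Toy "limits of acceptability" evaluation: a candidate model is retained
# only if its prediction lies inside effective observation error bounds
# everywhere. Data, model form, and bounds are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 25)
obs = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)
half_width = 0.15 + 0.05 * obs          # effective observation error bounds

def model(t, a, k):                     # hypothetical 2-parameter model
    return a * np.exp(-k * t)

# Monte Carlo sample of parameter sets; reject any set whose output
# ever leaves the acceptability limits.
samples = rng.uniform([1.0, 0.1], [3.0, 0.6], size=(5000, 2))
behavioural = [p for p in samples
               if np.all(np.abs(model(t, *p) - obs) <= half_width)]
print(f"{len(behavioural)} of {len(samples)} parameter sets accepted")
```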


2020 ◽  
pp. 3-8
Author(s):  
L.F. Vitushkin ◽  
F.F. Karpeshin ◽  
E.P. Krivtsov ◽  
P.P. Krolitsky ◽  
V.V. Nalivaev ◽  
...  

The State special primary acceleration measurement standard for gravimetry (GET 190-2019), its composition, principle of operation and basic metrological characteristics are presented. This standard is at the upper level of reference for free-fall acceleration measurements. Its accuracy and reliability were improved by optimising the adjustment procedures for the measurement systems and by integrating upgraded systems, units and modern hardware components. Special attention was given to adjusting the corrections applied to measurement results with respect to procedural, physical and technical limitations. The investigation methods used made it possible to confirm the measurement range of GET 190-2019 and to determine the contributions of the main error sources and their total value. The metrological characteristics of GET 190-2019 were confirmed by measurements of the absolute value of the free-fall acceleration at the gravimetric site “Lomonosov-1” and by their comparison with data of different dates obtained with high-precision foreign and domestic gravimeters. The topicality of such measurements stems from the applied problems that require data on the parameters of the Earth's gravitational field. Geophysics and navigation are the main fields of application for such high-precision measurements.


2020 ◽  
pp. 66-72
Author(s):  
Irina A. Piterskikh ◽  
Svetlana V. Vikhrova ◽  
Nina G. Kovaleva ◽  
Tatyana O. Barynskaya

Certified reference materials (CRMs) composed of solutions of propyl (11383-2019) and isopropyl (11384-2019) alcohols were created for the validation of measurement procedures and for controlling the errors of measurement results for the mass concentration of toxic substances (alcohols) in biological objects (urine, blood) and water. Two ways of establishing the value of the certified characteristic, the mass concentration of propanol-1 or propanol-2, have been studied. The results obtained by the preparation procedure and by comparison with a standard agree within the margin of error.
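The consistency check between the two value-assignment routes can be illustrated with the normalized-error criterion commonly used for such comparisons; all numbers below are invented for the example.

```python
# Hedged sketch: do two independent assignments of a CRM's certified
# value (gravimetric preparation vs. comparison with a standard) agree
# within their expanded uncertainties? Criterion: |En| <= 1.
import math

prep_value, prep_u = 0.950, 0.008   # mg/mL, expanded uncertainty (k=2)
comp_value, comp_u = 0.944, 0.010   # mg/mL, expanded uncertainty (k=2)

en = (prep_value - comp_value) / math.hypot(prep_u, comp_u)
print(f"En = {en:.2f} -> {'consistent' if abs(en) <= 1 else 'discrepant'}")
```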


2017 ◽  
Vol 928 (10) ◽  
pp. 58-63 ◽  
Author(s):  
V.I. Salnikov

The initial subject of study is the sums of measurement errors. The errors are assumed to follow the normal law, but limited by the marginal error Δpred = 2m. It is known that for each confidence interval there is a number of terms ni at which the value of the sum equals zero. The paradox is that the probability of such an event is zero; therefore, the value ni at which the sum becomes zero cannot be determined. The article proposes to consider instead the event that a sum of errors remains within the ±2m limits with a confidence level of 0.954. Within this group, all the sums have a limit error. These tolerances are proposed for use as discrepancy limits in geodesy instead of 2m√(ni). The concept of “the law of the truncated normal distribution with Δpred = 2m” is suggested to be introduced.
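A quick Monte Carlo sketch of the idea (with m = 1 and illustrative sample sizes) compares the empirical 0.954-confidence bound of a sum of truncated normal errors with the classical 2m√n tolerance.

```python
# Hedged numerical sketch: sums of n normal errors truncated at
# Delta_pred = 2m, and the empirical 0.954-confidence bound of |sum|
# versus the classical 2m*sqrt(n) tolerance. Purely illustrative.
import numpy as np
from scipy.stats import truncnorm

m = 1.0                                    # std of a single error
trunc = truncnorm(-2, 2, loc=0, scale=m)   # normal law cut at +/- 2m
rng = np.random.default_rng(42)

for n in (4, 9, 16, 25):
    sums = trunc.rvs((100_000, n), random_state=rng).sum(axis=1)
    bound = np.quantile(np.abs(sums), 0.954)   # empirical tolerance
    print(f"n={n:2d}  empirical 0.954 bound = {bound:5.2f}"
          f"   classical 2m*sqrt(n) = {2 * m * np.sqrt(n):5.2f}")
# Truncation shrinks the error variance, so the empirical bound sits
# below 2m*sqrt(n), which is the motivation for the tighter tolerances.
```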


2021 ◽  
pp. 1-22
Author(s):  
Daisuke Kurisu ◽  
Taisuke Otsu

This paper studies the uniform convergence rates of Li and Vuong’s (1998, Journal of Multivariate Analysis 65, 139–165; hereafter LV) nonparametric deconvolution estimator and its regularized version by Comte and Kappus (2015, Journal of Multivariate Analysis 140, 31–46) for the classical measurement error model, where repeated noisy measurements on the error-free variable of interest are available. In contrast to LV, our assumptions allow unbounded supports for the error-free variable and measurement errors. Compared to Bonhomme and Robin (2010, Review of Economic Studies 77, 491–533) specialized to the measurement error model, our assumptions do not require existence of the moment generating functions of the square and product of repeated measurements. Furthermore, by utilizing a maximal inequality for the multivariate normalized empirical characteristic function process, we derive uniform convergence rates that are faster than the ones derived in these papers under such weaker conditions.
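A simplified sketch of deconvolution from repeated measurements conveys the mechanics: identify |φ_e|² from the empirical characteristic function (ECF) of Y1 − Y2, divide, regularize, invert. It assumes i.i.d. symmetric errors and uses a crude spectral cutoff; it is not LV's exact estimator nor the Comte-Kappus regularization.

```python
# Hedged sketch of characteristic-function deconvolution with repeated
# measurements Y1 = X + e1, Y2 = X + e2 (i.i.d. symmetric errors).
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
x = rng.normal(0.0, 1.0, n)             # unbounded error-free variable
y1 = x + rng.laplace(0.0, 0.3, n)       # two noisy repeated measurements
y2 = x + rng.laplace(0.0, 0.3, n)

t = np.linspace(-8, 8, 321)             # frequency grid
ecf = lambda v: np.exp(1j * np.outer(t, v)).mean(axis=1)

phi_y = ecf(y1)                         # ECF of one measurement
phi_e_sq = ecf(y1 - y2).real            # = |phi_e(t)|^2 for iid errors
phi_e = np.sqrt(np.clip(phi_e_sq, 0.0, None))   # uses error symmetry

ok = phi_e > n ** -0.25                 # crude spectral cutoff (assumed)
phi_x = np.where(ok, phi_y / np.where(ok, phi_e, 1.0), 0.0)

xs = np.linspace(-4, 4, 161)            # Fourier-invert to a density
dt = t[1] - t[0]
f_hat = (np.exp(-1j * np.outer(xs, t)) @ phi_x).real * dt / (2 * np.pi)
f_true = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)
print("max deviation from the true N(0,1) density:",
      np.abs(f_hat - f_true).max())
```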


2000 ◽  
Vol 30 (2) ◽  
pp. 306-310 ◽  
Author(s):  
M S Williams ◽  
H T Schreuder

Assuming volume equations with multiplicative errors, we derive simple conditions for determining when measurement error in total height is large enough that only using tree diameter, rather than both diameter and height, is more reliable for predicting tree volumes. Based on data for different tree species of excurrent form, we conclude that measurement errors up to ±40% of the true height can be tolerated before inclusion of estimated height in volume prediction is no longer warranted.
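The trade-off can be reproduced in a small simulation: calibrate volume equations with and without height on accurately measured trees, then predict for trees whose height carries measurement error. The model forms and coefficients below are assumptions, so the crossover point will differ from the paper's ±40% figure.

```python
# Hedged sketch: when does height measurement error make the D-only
# volume equation more reliable than the D-and-H equation?
import numpy as np

rng = np.random.default_rng(3)

def make_trees(n):
    D = rng.uniform(15, 60, n)                              # diameter, cm
    H = 4.5 * D**0.6 * rng.lognormal(0.0, 0.15, n)          # height, m
    V = 4e-5 * D**1.8 * H**1.1 * rng.lognormal(0.0, 0.08, n)  # volume, m^3
    return D, H, V

# Calibrate both log-linear volume equations on accurately measured trees.
D, H, V = make_trees(5000)
X = np.column_stack([np.ones_like(D), np.log(D), np.log(H)])
b_dh, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)
b_d, *_ = np.linalg.lstsq(X[:, :2], np.log(V), rcond=None)

# Apply them to new trees whose height is measured with relative error.
D, H, V = make_trees(20000)
for rel in (0.0, 0.2, 0.4):
    H_obs = H * (1.0 + rng.uniform(-rel, rel, H.size))
    pred_dh = b_dh[0] + b_dh[1] * np.log(D) + b_dh[2] * np.log(H_obs)
    pred_d = b_d[0] + b_d[1] * np.log(D)
    for name, pred in (("D and H", pred_dh), ("D only ", pred_d)):
        rmse = np.sqrt(np.mean((pred - np.log(V))**2))
        print(f"±{rel:.0%} height error, {name}: log-RMSE = {rmse:.3f}")
```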


2002 ◽  
pp. 323-332 ◽  
Author(s):  
A Sartorio ◽  
G De Nicolao ◽  
D Liberati

OBJECTIVE: The quantitative assessment of gland responsiveness to exogenous stimuli is typically carried out using the peak value of the hormone concentrations in plasma, the area under its curve (AUC), or through deconvolution analysis. However, none of these methods is satisfactory, due to either sensitivity to measurement errors or various sources of bias. The objective was to introduce and validate an easy-to-compute responsiveness index, robust in the face of measurement errors and interindividual variability of kinetics parameters. DESIGN: The new method has been tested on responsiveness tests for the six pituitary hormones (using GH-releasing hormone, thyrotrophin-releasing hormone, gonadotrophin-releasing hormone and corticotrophin-releasing hormone as secretagogues), for a total of 174 tests. Hormone concentrations were assayed in six to eight samples between -30 min and 120 min from the stimulus. METHODS: An easy-to-compute direct formula has been worked out to assess the 'stimulated AUC', that is the part of the AUC of the response curve depending on the stimulus, as opposed to pre- and post-stimulus spontaneous secretion. The weights of the formula have been reported for the six pituitary hormones and some popular sampling protocols. RESULTS AND CONCLUSIONS: The new index is less sensitive to measurement error than the peak value. Moreover, it provides results that cannot be obtained from a simple scaling of either the peak value or the standard AUC. Future studies are needed to show whether the reduced sensitivity to measurement error and the proportionality to the amount of released hormone render the stimulated AUC indeed a valid alternative to the peak value for the diagnosis of the different pathophysiological states, such as, for instance, GH deficits.
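The flavour of such an index can be shown with a short sketch: trapezoidal weights over the abstract's sampling grid, with the spontaneous-secretion contribution subtracted. The baseline handling and the concentration values are illustrative choices, not the paper's published weights.

```python
# Hedged sketch of a "stimulated AUC": trapezoidal AUC of the
# post-stimulus response minus the AUC that the pre-stimulus
# (spontaneous) level would have produced over the same window.
import numpy as np

t = np.array([-30, -15, 0, 15, 30, 45, 60, 90, 120], dtype=float)  # min
conc = np.array([2.1, 2.0, 2.2, 9.5, 14.0, 11.0, 7.5, 4.0, 2.8])   # ng/mL

def trapezoid_weights(t):
    # Weights w such that w @ y equals the trapezoidal integral of y.
    w = np.empty_like(t)
    w[0] = (t[1] - t[0]) / 2
    w[-1] = (t[-1] - t[-2]) / 2
    w[1:-1] = (t[2:] - t[:-2]) / 2
    return w

baseline = conc[t <= 0].mean()          # spontaneous pre-stimulus level
post = t >= 0
w = trapezoid_weights(t[post])
stimulated_auc = w @ conc[post] - baseline * (t[-1] - 0.0)
print(f"stimulated AUC ≈ {stimulated_auc:.0f} ng·min/mL")
```

Because the index is a fixed linear combination of the samples minus a baseline term, it inherits the averaging of the trapezoidal weights, which is one intuition for its reduced sensitivity to measurement error compared with the single peak value.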


2014 ◽  
Vol 912-914 ◽  
pp. 1172-1176 ◽  
Author(s):  
Yong Han ◽  
Mu Qiao ◽  
Xue Wan ◽  
Qing Chang

In order to guarantee the transport safety of out-of-gauge freight, it is necessary to study digital measurement to improve the measuring accuracy for out-of-gauge goods whose outline dimensions are difficult to determine. The process of edge detection and the Canny operator edge detection algorithm are introduced in this paper. Taking a transformer as an example, contour extraction was carried out on its end face, and the transverse width and longitudinal height were measured with the mathematical software MATLAB. Finally, the measurement results were validated against known data. The results show that the Canny operator has high calculation precision, and the closer a part is to the region calibrated with the pixel calibration, the smaller the measurement errors are.
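The measurement chain the abstract describes can be sketched in Python with OpenCV (rather than the paper's MATLAB); the file name and the pixel calibration factor below are placeholders.

```python
# Hedged sketch: Canny edge detection, contour extent in pixels, then
# conversion to millimetres via a calibration factor obtained from a
# reference object of known size. Inputs are placeholders.
import cv2
import numpy as np

img = cv2.imread("transformer_end.png", cv2.IMREAD_GRAYSCALE)  # placeholder
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)    # suppress noise first
edges = cv2.Canny(blurred, 50, 150)             # hysteresis thresholds

# Bounding box of all edge pixels = outline dimensions in pixels.
ys, xs = np.nonzero(edges)
width_px = xs.max() - xs.min()
height_px = ys.max() - ys.min()

mm_per_px = 2.35   # assumed, from a calibration target near the object
print(f"transverse width    ≈ {width_px * mm_per_px:.0f} mm")
print(f"longitudinal height ≈ {height_px * mm_per_px:.0f} mm")
```

The calibration remark in the abstract corresponds to mm_per_px being most accurate near the calibration target; measurements taken farther away pick up perspective and lens-distortion error.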


1999 ◽  
Vol 56 (7) ◽  
pp. 1234-1240
Author(s):  
W R Gould ◽  
L A Stefanski ◽  
K H Pollock

All catch-effort estimation methods implicitly assume catch and effort are known quantities, whereas in many cases they have been estimated and are subject to error. We evaluate the application of a simulation-based estimation procedure for measurement error models (J.R. Cook and L.A. Stefanski. 1994. J. Am. Stat. Assoc. 89: 1314-1328) in catch-effort studies. The technique involves a simulation component and an extrapolation step, hence the name SIMEX estimation. We describe SIMEX estimation in general terms and illustrate its use with applications to real and simulated catch and effort data. Correcting for measurement error with SIMEX estimation resulted in population size and catchability coefficient estimates that were, in some cases, substantially smaller than the naive estimates that ignore measurement errors. In a simulation of the procedure, we compared estimators from SIMEX with "naive" estimators that ignore measurement errors in catch and effort to determine the ability of SIMEX to produce bias-corrected estimates. The SIMEX estimators were less biased than the naive estimators but in some cases were also more variable. Despite the bias reduction, the SIMEX estimator had a larger mean squared error than the naive estimator for one of two artificial populations studied. However, our results suggest the SIMEX estimator may outperform the naive estimator in terms of bias and precision for larger populations.
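The SIMEX mechanism itself is easy to demonstrate on a plain errors-in-variables regression (the catch-effort models in the paper are more involved): add extra simulated error at increasing levels λ, then extrapolate the estimate back to λ = −1.

```python
# Hedged sketch of SIMEX (simulation + extrapolation) on a simple
# errors-in-variables regression; illustrates the mechanism only.
import numpy as np

rng = np.random.default_rng(11)
n, beta = 2000, 1.5
x = rng.normal(0, 1, n)                   # true covariate (e.g. effort)
sigma_u = 0.6                             # measurement error std (known)
w = x + rng.normal(0, sigma_u, n)         # observed error-prone covariate
y = beta * x + rng.normal(0, 0.5, n)

def slope(w, y):
    return np.cov(w, y)[0, 1] / np.var(w, ddof=1)   # naive OLS slope

# Simulation step: add extra error at levels lambda, average many draws.
lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
means = [np.mean([slope(w + np.sqrt(lam) * rng.normal(0, sigma_u, n), y)
                  for _ in range(200)]) for lam in lambdas]

# Extrapolation step: fit a quadratic in lambda, evaluate at lambda = -1.
coef = np.polyfit(lambdas, means, 2)
print(f"naive slope: {means[0]:.3f}")
print(f"SIMEX slope: {np.polyval(coef, -1.0):.3f}   (true: {beta})")
```

The naive slope is attenuated toward zero by the measurement error; the extrapolated SIMEX estimate recovers most of the bias, at the cost of extra variance, mirroring the bias/precision trade-off reported above.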

