Weights Allocation Based on Optimal Uncertainty Evaluation

2021 ◽  
Author(s):  
Ye Xiaoming ◽  
Ding Shijun ◽  
Liu Haibo

Abstract In the traditional measurement theory, precision is defined as the dispersion of the measured value and is used as the basis for weight calculation in the adjustment of measurement data of different qualities, which leads to the problem that trueness is completely ignored in the weight allocation. In this paper, following the pure concepts of probability theory, the measured value (observed value) is regarded as a constant, the error as a random variable, and the variance as the dispersion of all possible values of the unknown error. On this basis, a rigorous formula for weight calculation and variance propagation is derived, which solves the theoretical problem of determining the weights in the adjustment of multi-channel observation data of different qualities. The results show that the optimal weights are determined not only by the covariance matrix of the observation errors but also by the adjustment model.
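
To make the weighting rule concrete, here is a minimal generalized least-squares sketch in Python (NumPy), assuming the conventional choice of the weight matrix as the inverse of the error covariance matrix; the model, numbers, and variable names (A, L, Sigma) are illustrative placeholders, not the paper's derivation or data.

```python
import numpy as np

# Illustrative adjustment model: L = A x + e, with error covariance Sigma.
# These numbers are placeholders, not data from the paper.
A = np.array([[1.0], [1.0], [1.0]])           # three observations of one unknown
L = np.array([10.02, 9.98, 10.05])            # observed values (treated as constants)
Sigma = np.diag([0.01**2, 0.02**2, 0.03**2])  # covariance matrix of the errors

# Conventional weighting: P is the inverse of the error covariance matrix.
P = np.linalg.inv(Sigma)

# Weighted least-squares estimate and its variance propagation.
N = A.T @ P @ A                      # normal matrix
x_hat = np.linalg.solve(N, A.T @ P @ L)
Q_x = np.linalg.inv(N)               # covariance matrix of the estimate

print("estimate:", x_hat, "variance:", Q_x)
```

With uncorrelated errors this reduces to weights proportional to 1/σ_i² (the weighted mean); the abstract's point is that, in general, the optimal weights depend not only on the error covariance matrix but also on the adjustment model itself.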

Author(s):  
J. F. C. Kingman

1. A type of problem which frequently occurs in probability theory and statistics can be formulated in the following way. We are given real-valued functions f(x), gi(x) (i = 1, 2, …, k) on a space (typically a finite-dimensional Euclidean space). The problem is then to set bounds for Ef(X), where X is a random variable taking values in that space, about which all we know is the values of Egi(X). For example, we might wish to set bounds for P(X > a), where X is a real random variable with some of its moments given.
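
A familiar special case of the kind of bound described here, stated for illustration: if f is the indicator of the event {X ≥ a} and only the first moment EX of a non-negative random variable is known, Markov's inequality gives an upper bound of this type,

```latex
P(X \ge a) \;=\; E\,\mathbf{1}_{\{X \ge a\}} \;\le\; \frac{EX}{a},
\qquad a > 0,\ X \ge 0 .
```

Sharper bounds of the kind discussed in the article use more of the given moments.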


2018 ◽  
Vol 47 (2) ◽  
pp. 53-67 ◽  
Author(s):  
Jalal Chachi

In this paper, a new notion of fuzzy random variables is first introduced. Then, using classical techniques of probability theory, some aspects and results associated with a random variable (including expectation, variance, covariance, correlation coefficient, etc.) are extended to this new environment. Furthermore, within this framework, the tools of general probability theory can be used to define the fuzzy cumulative distribution function of a fuzzy random variable.
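
As a rough illustration of how such classical quantities can be computed level-wise, the Python sketch below averages the α-cuts of triangular fuzzy observations to obtain an interval-valued expectation. This is a common textbook construction under the stated assumptions, not necessarily the new notion introduced in the paper, and all numbers are made up.

```python
import numpy as np

# Assumption for illustration: each fuzzy observation is a triangular
# fuzzy number given by (left endpoint, mode, right endpoint).
sample = np.array([
    [1.8, 2.0, 2.3],
    [2.9, 3.1, 3.2],
    [0.9, 1.0, 1.4],
])

def alpha_cut(tri, alpha):
    """Interval [lower, upper] of a triangular fuzzy number at level alpha."""
    l, m, r = tri
    return np.array([l + alpha * (m - l), r - alpha * (r - m)])

def fuzzy_expectation(sample, alphas=np.linspace(0.0, 1.0, 5)):
    """Level-wise (interval) expectation: average the alpha-cuts over the sample."""
    return {round(a, 2): np.mean([alpha_cut(x, a) for x in sample], axis=0)
            for a in alphas}

for a, interval in fuzzy_expectation(sample).items():
    print(f"alpha={a}: E in [{interval[0]:.3f}, {interval[1]:.3f}]")
```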


Agromet ◽  
2011 ◽  
Vol 25 (1) ◽  
pp. 24
Author(s):  
Satyanto Krido Saptomo

An artificial neural network (ANN) approach was used to model the dissipation of energy into sensible heat and latent heat (evapotranspiration) fluxes. The ANN model has five inputs: leaf temperature T_l, air temperature T_a, net radiation R_n, wind speed u_c, and actual vapor pressure e_a. The ANN was adjusted by back-propagation, using measured values of its input and output parameters. The estimation results show that the adjusted ANN can reproduce the heat dissipation process, giving sensible and latent heat flux outputs close to the respective measured values when the measured inputs are supplied. The ANN structure presented in this paper is suitable for modeling similar processes over vegetated surfaces, but the adjusted parameters are unique, so an observation data set and a separate adjustment of the ANN are required for each vegetation type.
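
A minimal sketch of a network of the kind described above, assuming scikit-learn is available; the hidden-layer size, the synthetic input/output relation, and the variable names are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the five measured inputs: leaf temperature,
# air temperature, net radiation, wind speed, actual vapor pressure.
X = rng.uniform([15, 15, 0, 0, 0.5], [40, 40, 800, 5, 3.5], size=(200, 5))

# Synthetic stand-ins for the two outputs (sensible and latent heat fluxes);
# the relation is arbitrary, used only so the example runs end to end.
H  = 5.0 * (X[:, 0] - X[:, 1]) + 0.1 * X[:, 2]
LE = 0.6 * X[:, 2] - 20.0 * X[:, 4] + 10.0 * X[:, 3]
y = np.column_stack([H, LE])

# Back-propagation training of a small multilayer perceptron.
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
ann.fit(X, y)

print("training R^2:", ann.score(X, y))
```

In the paper, the two outputs are the measured sensible and latent heat fluxes, and the adjustment is performed against field measurements rather than a synthetic relation.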


2021 ◽  
Vol 11 (2) ◽  
pp. 300-314
Author(s):  
Tetiana Malovichko

The paper studies the changes that the probability theory course has undergone from the end of the 19th century to our time, based on an analysis of The Theory of Probabilities textbook by Vasyl P. Ermakov, published in 1878. To show the competence of the author of this textbook, the biography and scientific development of V. P. Ermakov, a famous mathematician and Corresponding Member of the St. Petersburg Academy of Sciences, are briefly reviewed. He worked at the Department of Pure Mathematics at Kyiv University, where he received the title of Honored Professor, headed the Department of Higher Mathematics at the Kyiv Polytechnic Institute, published the Journal of Elementary Mathematics, and was one of the founders of the Kyiv Physics and Mathematics Society. The paper contains a comparative analysis of The Theory of Probabilities textbook and modern educational literature. V. P. Ermakov's textbook uses only the classical definition of probability; it does not contain such concepts as a random variable or a distribution function, although it does use mathematical expectation. V. P. Ermakov insists on excluding from probability theory the concept of moral expectation, accepted in the science of that time. The textbook consists of a preface, five chapters, a synopsis containing the statements of the main results, and a collection of problems with solutions and instructions. The first chapter deals with combinatorics, whose presentation does not differ much from the modern one. The second chapter introduces the concepts of event and probability. Although operations on events are not considered at all, the probabilities of intersecting and combining events are discussed; however, the stated rule for calculating the probability of combining events is, in general, incorrect for compatible events (see the formula below). The third chapter is devoted to events in repeated trials and to mathematical expectation, and contains Bernoulli's theorem, from which the law of large numbers follows. The next chapter discusses conditional probabilities, the simplest version of the conditional mathematical expectation, the total probability formula, and the Bayes formula (in modern terminology). The last chapter is devoted to the Jordan method and its applications; this method is not found in modern educational literature. From the above, one can conclude that probability theory has made significant progress since the end of the 19th century: basic concepts are formulated more rigorously, research methods have developed significantly, and new sections have appeared.
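
For reference, the modern rule that does hold for compatible events is the inclusion-exclusion formula,

```latex
P(A \cup B) \;=\; P(A) + P(B) - P(A \cap B),
```

which reduces to P(A) + P(B) only when P(A ∩ B) = 0, i.e. for incompatible events.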


2016 ◽  
Vol 24 (1) ◽  
pp. 29-41 ◽  
Author(s):  
Roman Frič ◽  
Martin Papčo

Abstract The influence of "Grundbegriffe" by A. N. Kolmogorov (published in 1933) on education in the area of probability and its impact on research in stochastics cannot be overestimated. We would like to point out three aspects of the classical probability theory "calling for" an upgrade: (i) classical random events are black-and-white (Boolean); (ii) classical random variables do not model quantum phenomena; (iii) the basic maps (probability measures and observables, the dual maps to random variables) have a very different "mathematical nature". Accordingly, we propose an upgraded probability theory based on Łukasiewicz operations (multivalued logic) on events and elementary category theory, covering the classical probability theory as a special case. The upgrade can be compared to replacing calculations with integers by calculations with rational (and real) numbers. Namely, to avoid the three objections, we embed the classical (Boolean) random events (represented by the {0, 1}-valued indicator functions of sets) into upgraded random events (represented by measurable [0, 1]-valued functions), the minimal domain of probability containing "fractions" of classical random events, and we upgrade the notions of probability measure and random variable.
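
One standard way to make the embedding concrete (stated here for illustration, not necessarily the authors' exact formulation): a classical event A is identified with its indicator function 1_A, an upgraded event is any measurable function f : Ω → [0, 1], and a probability measure p extends to upgraded events by integration,

```latex
\bar{p}(f) \;=\; \int_{\Omega} f \, \mathrm{d}p,
\qquad \bar{p}(\mathbf{1}_A) = p(A),
```

so that "fractions" of classical events, such as f = ½·1_A, receive intermediate probabilities.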

