On Gauss's proof of the normal law of errors

1933 ◽  
Vol 29 (2) ◽  
pp. 231-234 ◽  
Author(s):  
Harold Jeffreys

Gauss gave a well-known proof that under certain conditions the postulate that the arithmetic mean of a number of measures is the most probable estimate of the true value, given the observations, implies the normal law of error. I found recently that in an important practical case the mean is the most probable value, although the normal law does not hold. I suggested an explanation of the apparent discrepancy, but it does not seem to be the true one in the case under consideration.
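
For orientation, the classical argument referred to here can be compressed as follows (a sketch of the standard textbook derivation, not a quotation from Jeffreys's paper):

```latex
% Sketch of the classical argument (compressed; not Jeffreys's wording).
% Let f be the error density, x_1,\dots,x_n observations of the true value \theta,
% and take a uniform prior, so the posterior is proportional to \prod_i f(x_i-\theta).
% The postulate is that this is maximised at \theta = \bar{x} for every sample:
\[
  \left.\frac{\partial}{\partial\theta}\sum_{i=1}^{n}\log f(x_i-\theta)\right|_{\theta=\bar{x}}=0
  \quad\Longrightarrow\quad
  \sum_{i=1}^{n} g(x_i-\bar{x})=0, \qquad g=f'/f .
\]
% Requiring this identity for all samples forces g to be linear, g(z) = -hz with h > 0,
% and integrating f'/f = -hz gives the normal law
\[
  f(z) = C\,e^{-hz^{2}/2}.
\]
```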

1. It is widely felt that any method of rejecting observations with large deviations from the mean is open to some suspicion. Suppose that by some criterion, such as Peirce's and Chauvenet's, we decide to reject observations with deviations greater than 4σ, where σ is the standard error, computed from the standard deviation by the usual rule; then we reject an observation deviating by 4·5σ, and thereby alter the mean by about 4·5σ/n, where n is the number of observations, and at the same time we reduce the computed standard error. This may lead to the rejection of another observation deviating from the original mean by less than 4σ, and if the process is repeated the mean may be shifted so much as to lead to doubt as to whether it is really sufficiently representative of the observations. In many cases, where we suspect that some abnormal cause has affected a fraction of the observations, there is a legitimate doubt as to whether it has affected a particular observation. Suppose that we have 50 observations. Then there is an even chance, according to the normal law, of a deviation exceeding 2·33σ. But a deviation of 3σ or more is not impossible, and if we make a mistake in rejecting it the mean of the remainder is not the most probable value. On the other hand, an observation deviating by only 2σ may be affected by an abnormal cause of error, and then we should err in retaining it, even though no existing rule will instruct us to reject such an observation. It seems clear that the probability that a given observation has been affected by an abnormal cause of error is a continuous function of the deviation; it is never certain or impossible that it has been so affected, and a process that completely rejects certain observations, while retaining with full weight others with comparable deviations, possibly in the opposite direction, is unsatisfactory in principle.
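
A minimal numerical sketch of the arithmetic behind this objection (mine, not Jeffreys's; the 4σ threshold and the sample of 50 follow the text, while the data and the helper reject_iteratively are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def reject_iteratively(x, k=4.0, max_rounds=10):
    """Repeatedly drop observations deviating from the current mean by more
    than k*sigma, recomputing the mean and sigma after each round."""
    x = np.asarray(x, dtype=float)
    for _ in range(max_rounds):
        mean, sigma = x.mean(), x.std(ddof=1)
        keep = np.abs(x - mean) <= k * sigma
        if keep.all():
            break
        x = x[keep]
    return x

# 50 normally distributed observations, one of them planted at about 4.5 sigma.
n, sigma_true = 50, 1.0
obs = rng.normal(0.0, sigma_true, n)
obs[0] = 4.5 * sigma_true

kept = reject_iteratively(obs)
print("mean before rejection:", obs.mean())
print("mean after rejection: ", kept.mean())   # shifted by roughly 4.5*sigma/n
print("observations retained:", kept.size, "of", n)
```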


Author(s):  
Oksana Lozovenko ◽  
Yevgeny Sokolov

The authors continue to report results obtained in the process of creating a special introductory one-semester laboratory physics course, «Search for Physics Laws». Teaching experience and the results of performed tests show that most students do not acquire the basic skills for conducting experimental research. The course was built around an algorithm for the systematic construction of students' skills in carrying out an experimental investigation. The authors used Galperin's stepwise teaching procedure, which was developed on the assumption that learning any kind of knowledge involves different kinds of actions. The authors analysed different ways of expounding the basic ideas of data analysis and showed their connection with the point, syncretic, and training-interval paradigms. Action diagrams are provided for each way of expounding. As an example of using the training-interval paradigm for teaching first-year students of a technical university, a specially designed lab session is presented in the article: the topic of the session is “The concept of a confidence interval”, Laboratory Work 1, “The Buffon-de Morgan Experiment”. This lab session meets several important requirements: a) the number of computations is minimised; b) a directly measurable quantity is considered; c) students are provided with a “fulcrum” in the form of an a priori known true value of the quantity. A general view of measuring physical quantities is summarised in four “unpleasant axioms” that are quite unexpected for students: 1) none of the measured values coincides with the true value of the quantity; 2) the mean of the measured values does not coincide with the true value of the quantity; 3) even if, by a lucky chance, one of the measured values or the mean coincided with the true value, we would never know it; 4) a confidence interval catches the true value of a measured quantity in only 68% of cases. The authors claim that the presented lab lesson demonstrates the validity of these “axioms” clearly and vividly, and that laboratory sessions organised in the new way are significantly more successful than traditional ones in improving students' basic skills of error analysis.
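
As a rough illustration of “axiom” 4 — not the authors' lab materials — a simulation along the following lines shows a one-standard-error confidence interval catching an assumed true value in roughly 68% of repeated measurement series; the true value, spread, and sample sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 10.0          # the a priori known "true value" (assumed for illustration)
sigma = 0.5                # spread of individual measurements
n_per_series, n_series = 25, 10_000

caught = 0
for _ in range(n_series):
    sample = rng.normal(true_value, sigma, n_per_series)
    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(n_per_series)   # standard error of the mean
    # a one-standard-error ("68%") confidence interval
    if mean - sem <= true_value <= mean + sem:
        caught += 1

print("fraction of intervals that catch the true value:", caught / n_series)  # about 0.68
```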


Author(s):  
S. Yu. Maksyukov ◽  
Nadezhda D. Pilipenko ◽  
K. D. Pilipenko

The relevance of studying deep incisal overlap among dentofacial anomalies is due to the high prevalence of this pathology. Among modern methods of orthodontic treatment of the pathology, the use of bracket systems and aligners stands out. The effectiveness of these techniques can be compared by determining the morphometric characteristics obtained by teleradiography (TRG). Material and methods. The study involved 118 people with deep incisal overlap, with an average age of 38.7 ± 8.5 years (64 women; 54 men). The first group consisted of 49 patients whose anomaly was corrected with aligners; the second, of 69 patients treated with bracket systems. To assess the effectiveness of treatment, TRG was performed. To present the results, for quantitative characteristics the arithmetic mean of the sample (X) and the error of the mean (m) were calculated; for qualitative signs, the frequency of the sign (%) and its standard error (m%) were calculated. Results. The values of the mandibular angle (G, ArGoMe) and the angles AB/ANS (AB/SpP) and APg/ANS (MM), as well as the vertical dimensions of the jaws, reached values characteristic of an orthognathic bite. The SNB and NSL/ML angles increased; the ANB angle decreased. Conclusion. Elimination of a deep bite is possible both with the use of bracket systems and with aligners.


1981 ◽  
Vol 11 (4) ◽  
pp. 833-834 ◽  
Author(s):  
David O. Yandle ◽  
Harry V. Wiant Jr.

Estimation of the parameters in the allometric equation by fitting a simple linear regression to the logarithmically transformed variables results in biased estimates of the arithmetic mean. This bias, expressed as a percent of the mean, approaches the limit −(1 − e^(−σ²/2)) × 100 as n increases. An adjusted estimator developed by Finney rather than the one given by Baskerville should be used when s² is large and n is small. A change of measurement scale of the x or y variables presents no difficulty, but problems arise if the variables are transformed to logarithms other than base e.
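
A small simulation (not from the paper) that roughly reproduces the limiting bias and applies a Baskerville-style correction factor e^(s²/2); the parameter values and the use of a multiplicative lognormal error are assumptions chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Allometric data y = a * x^b with multiplicative lognormal error (assumed model).
a_true, b_true, sigma = 2.0, 0.75, 0.4
x = rng.uniform(1.0, 100.0, 5000)
y = a_true * x**b_true * rng.lognormal(0.0, sigma, x.size)

# Simple linear regression on the log-transformed variables: ln y = ln a + b ln x.
b_hat, ln_a_hat = np.polyfit(np.log(x), np.log(y), 1)
resid = np.log(y) - (ln_a_hat + b_hat * np.log(x))
s2 = resid.var(ddof=2)                        # residual variance of the log fit

y_naive = np.exp(ln_a_hat) * x**b_hat         # naive back-transformed estimate
y_adjusted = y_naive * np.exp(s2 / 2.0)       # Baskerville-style correction

true_mean = a_true * x**b_true * np.exp(sigma**2 / 2.0)   # E[y | x] for this model
print("limiting bias, percent:", -(1.0 - np.exp(-sigma**2 / 2.0)) * 100.0)
print("naive bias,    percent:", (np.mean(y_naive / true_mean) - 1.0) * 100.0)
print("adjusted bias, percent:", (np.mean(y_adjusted / true_mean) - 1.0) * 100.0)
```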


1996 ◽  
Vol 59 (6) ◽  
pp. 666-669 ◽  
Author(s):  
C. O. GILL ◽  
M. BADONI ◽  
T. JONES

Swab samples were obtained from the surfaces of randomly selected beef carcasses passing through a high-speed dressing process. A single sample was obtained from a randomly selected site on the surface of each selected carcass. Fifty such samples were collected at each of four stages in the process. The aerobic bacteria, coliforms, and Escherichia coli recovered from each sample were enumerated. Values for the mean log units and standard deviations of each set of 50 log values were calculated on the assumption that the log values were normally distributed. The log of the arithmetic mean was estimated from the mean log and standard deviation values for each set. The results show that the average numbers of E. coli, coliforms, and aerobic bacteria which are deposited on carcasses during skinning and evisceration are not reduced by trimming, and that washing approximately halves the average numbers of those bacteria on carcasses. It is concluded that commercial trimming and washing operations are not effective means of decontaminating beef carcasses.
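
The abstract does not give the estimator in full, but under the stated assumption that the log counts are normally distributed, a standard way to obtain the log of the arithmetic mean from the mean log and the standard deviation is the lognormal relation sketched below; the counts here are simulated, not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical log10 counts (e.g. log CFU/cm^2) for one set of 50 swab samples.
log_counts = rng.normal(2.0, 0.8, 50)

mu = log_counts.mean()                  # mean log
sd = log_counts.std(ddof=1)             # standard deviation of the logs

# If the log10 counts are normally distributed, the arithmetic mean of the counts
# satisfies log10(arithmetic mean) = mu + (ln 10 / 2) * sd**2.
log_arith_mean = mu + (np.log(10.0) / 2.0) * sd**2
print("mean log:                  ", mu)
print("log of the arithmetic mean:", log_arith_mean)
```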


2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Siyu Ji ◽  
Chenglin Wen

A neural network is a data-driven algorithm: establishing the network model requires a large amount of training data, so a significant amount of time is spent training the model parameters. However, the system mode changes from time to time, and prediction with the original model parameters then causes the model output to deviate greatly from the true value. Traditional methods such as gradient descent and least squares are centralized, making it difficult to adaptively update model parameters in response to system changes. Firstly, in order to update the network parameters adaptively, this paper introduces an evaluation function and gives a new method for evaluating its parameters. The new method updates some parameters of the model in real time, without changing the others, to maintain the accuracy of the model. Then, based on the evaluation function, the Mean Impact Value (MIV) algorithm is used to calculate the weight of each feature, and the weighted data are fed into the established fault diagnosis model for fault diagnosis. Finally, the validity of the algorithm is verified on the UCI Combined Cycle Power Plant (CCPP) standard data set.
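
A minimal sketch of the Mean Impact Value idea as it is commonly described — perturb each feature up and down by a fixed fraction and average the change in the model output — not the paper's exact evaluation function; the stand-in linear "network", the 10% perturbation, and the normalised weights are assumptions:

```python
import numpy as np

def mean_impact_values(predict, X, delta=0.10):
    """Illustrative MIV computation: perturb each feature up and down by the
    fraction `delta` and average the resulting change in the model output."""
    X = np.asarray(X, dtype=float)
    miv = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        X_up, X_down = X.copy(), X.copy()
        X_up[:, j] *= 1.0 + delta
        X_down[:, j] *= 1.0 - delta
        miv[j] = np.mean(predict(X_up) - predict(X_down))
    return miv

# Stand-in "network": a fixed linear map with arbitrary weights.
w = np.array([0.5, -2.0, 0.1])
predict = lambda X: X @ w

X = np.random.default_rng(4).normal(1.0, 0.2, (200, 3))
miv = mean_impact_values(predict, X)
weights = np.abs(miv) / np.abs(miv).sum()     # feature weights derived from the MIVs
print("mean impact values:", miv)
print("feature weights:   ", weights)
```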


1971 ◽  
Vol 8 (3) ◽  
pp. 626-629
Author(s):  
Michael Skalsky

An important problem, arising in connection with the estimation of the mathematical expectation of a homogeneous random field X(x₁, ···, xₙ) in Rⁿ by means of the arithmetic mean of observed values, is to determine the number of observations for which the variance of the estimate attains its minimum. Vilenkin [2] has shown that in the case of a stationary random process X(x) such a finite number exists, provided that the covariance function satisfies certain conditions.
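
An illustrative computation (not from Skalsky or Vilenkin) of the variance of the arithmetic mean, Var = (1/n²) Σᵢ Σⱼ C(xᵢ − xⱼ), for a stationary covariance function; the damped-cosine covariance and the fixed observation interval are assumptions chosen only to show how the variance can be tabulated against n and searched for a minimum:

```python
import numpy as np

def variance_of_mean(cov, points):
    """Variance of the arithmetic mean of observations at `points` for a
    stationary covariance function cov(h): (1/n^2) * sum_{i,j} cov(x_i - x_j)."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[:, None] - pts[None, :]
    return cov(diffs).sum() / pts.size**2

# Assumed covariance with negative values at some lags: a damped cosine.
cov = lambda h: np.exp(-np.abs(h) / 3.0) * np.cos(2.0 * h)

# Equally spaced observations over a fixed interval [0, 10]; tabulate against n.
for n in (2, 5, 10, 20, 50, 100):
    pts = np.linspace(0.0, 10.0, n)
    print(n, variance_of_mean(cov, pts))
```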


1998 ◽  
Vol 61 (3) ◽  
pp. 329-333 ◽  
Author(s):  
C.O. GILL ◽  
L.P. BAKER

Swab samples were obtained from the surfaces of randomly selected carcasses passing through a sheep carcass-dressing process. A single sample was obtained from a randomly selected site on the surface of each carcass. Twenty-five such samples were collected at each of four stages in the process. The aerobic bacteria, coliforms, and Escherichia coli recovered from each sample were enumerated. Values for the mean log and standard deviation of each set of 25 log10 values were calculated on the assumption that the log values were normally distributed. The log of the arithmetic mean was estimated from the mean log and standard deviation values for each set. The results showed that bacteria, including coliforms that were largely E. coli, were deposited in high numbers during skinning operations, mainly on the butts and shoulders of carcasses. The mean numbers of coliforms and E. coli on carcasses were little affected by eviscerating and trimming operations, although they were redistributed from the sites they occupied after skinning. Total counts were redistributed and augmented by eviscerating and trimming operations. Washing reduced the log numbers of all of the bacteria by approximately 0.5. The general hygienic characteristics of the sheep carcass-dressing process were similar to those of a previously examined beef carcass-dressing process.


2009 ◽  
Vol 33 (2) ◽  
pp. 87-90 ◽  
Author(s):  
Douglas Curran-Everett

Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This third installment of Explorations in Statistics investigates confidence intervals. A confidence interval is a range that we expect, with some level of confidence, to include the true value of a population parameter such as the mean. A confidence interval provides the same statistical information as the P value from a hypothesis test, but it circumvents the drawbacks of that hypothesis test. Even more important, a confidence interval focuses our attention on the scientific importance of some experimental result.
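
As a sketch of the correspondence mentioned here — assuming SciPy and arbitrary illustrative numbers — a 95% confidence interval and the two-sided one-sample t test convey the same information: the interval excludes the hypothesised value exactly when P < 0.05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
sample = rng.normal(101.5, 4.0, 20)      # hypothetical measurements
null_value = 100.0                       # hypothesised population mean

mean = sample.mean()
sem = stats.sem(sample)
ci_low, ci_high = stats.t.interval(0.95, df=sample.size - 1, loc=mean, scale=sem)
t_stat, p_value = stats.ttest_1samp(sample, null_value)

print(f"sample mean: {mean:.2f}")
print(f"95% CI:      ({ci_low:.2f}, {ci_high:.2f})")
print(f"P value:     {p_value:.4f}")
# The 95% CI excludes the null value exactly when the two-sided P value is below 0.05.
```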

