Statistical analysis of the Neyman-Pearson detection criterion for random quantities obeying a Poisson distribution

2015 ◽  
Vol 58 (2) ◽  
pp. 221-228
Author(s):  
M. V. Kosov ◽  
A. V. Chertkov
2005 ◽  
Vol 176 (1) ◽  
pp. 107-120 ◽  
Author(s):  
Guy Caniaux

Abstract The dates of the last eruptive events on 13 active volcanic complexes of the Azores are presented. Assuming that these events follow a Poisson distribution, we estimate their recurrence periods as well as the eruption probabilities for the next 300 years. Pico Mountain, the Região dos Picos (São Miguel Island), the Sete Cidades stratovolcano (São Miguel Island), and the linear volcanic complexes of São Roque – Piedade (Pico Island) and Capelo (Faial Island) must be considered the most active volcanoes of the archipelago. The likely styles of the next eruptions are also specified.
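Under the Poisson assumption used in this abstract, the probability of at least one eruption within a horizon T is 1 - exp(-T/τ), where τ is the mean recurrence interval. The following is a minimal sketch of that calculation; the recurrence intervals shown are illustrative placeholders, not the paper's estimates.

```python
import math

def eruption_probability(mean_recurrence_years: float, horizon_years: float = 300.0) -> float:
    """P(at least one eruption within horizon_years) for a Poisson process
    with mean recurrence interval mean_recurrence_years."""
    rate = 1.0 / mean_recurrence_years            # eruptions per year
    return 1.0 - math.exp(-rate * horizon_years)

# Illustrative recurrence intervals only; NOT the values estimated in the paper.
for name, tau in [("complex A", 400.0), ("complex B", 1500.0)]:
    print(f"{name}: P(eruption within 300 yr) = {eruption_probability(tau):.2f}")
```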


2008 ◽  
Vol 10 (4) ◽  
pp. 043029 ◽  
Author(s):  
Masahito Hayashi ◽  
Akihisa Tomita ◽  
Keiji Matsumoto

Genetics ◽  
1986 ◽  
Vol 113 (4) ◽  
pp. 869-895
Author(s):  
George Lefevre ◽  
William Watkins

ABSTRACT A statistical analysis has been carried out on the distribution and allelism of nearly 500 sex-linked, X-ray-induced, cytologically normal and rearranged lethal mutations in Drosophila melanogaster that were obtained by G. Lefevre. The mutations were induced in four different regions of the X chromosome: (1) 1A1-3E8, (2) 6D1-8A5, (3) 9E1-11A7 and (4) 19A1-20F4, which together comprise more than one-third of the entire chromosome.

The analysis shows that the number of alleles found at different loci does not fit a Poisson distribution, even when the proper procedures are taken to accommodate the truncated nature of the data. However, the allele distribution fits a truncated negative binomial distribution quite well, with cytologically normal mutations fitting better than rearrangement mutations. This indicates that genes are not equimutable, as required for the data to fit a Poisson distribution.

Using the negative binomial parameters to estimate the number of genes that did not produce a detectable lethal mutation in our experiment (n0) gave a larger number than that derived from the use of the Poisson parameter. Unfortunately, we cannot estimate the total numbers of nonvital loci, loci with undetectable phenotypes and loci having extremely low mutabilities. In any event, our estimate of the total vital gene number was far short of the total number of bands in the analyzed regions; yet, in several short intervals, we have found more vital genes than bands; in other intervals, fewer. We conclude that the one-band, one-gene hypothesis, in its literal sense, is not true; furthermore, it is difficult to support, even approximately.

The question of the total gene number in Drosophila will, no doubt, eventually be solved by molecular analyses, not by statistical analysis of mutation data or saturation studies.
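As a rough illustration of the truncation issue described above, the sketch below fits a zero-truncated Poisson to made-up allele counts by maximum likelihood and uses the fitted zero-class probability to estimate the number of unseen loci n0. The counts are hypothetical, and the paper's preferred model was the truncated negative binomial, not the Poisson.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

# Hypothetical alleles-per-locus counts; loci with zero recovered alleles are unobservable.
counts = np.array([1, 1, 1, 2, 1, 3, 1, 2, 5, 1, 1, 2, 4, 1, 2])

def neg_loglik(lam):
    # Zero-truncated Poisson: P(k | k > 0) = Poisson(k; lam) / (1 - exp(-lam))
    return -np.sum(poisson.logpmf(counts, lam) - np.log(1.0 - np.exp(-lam)))

lam_hat = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0), method="bounded").x
p0 = np.exp(-lam_hat)                   # fitted probability of recovering no allele at a locus
n0_hat = len(counts) * p0 / (1.0 - p0)  # estimated number of loci that produced no mutation
print(f"lambda = {lam_hat:.3f}, estimated unseen loci n0 = {n0_hat:.1f}")
```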


1974 ◽  
Vol 7 (3) ◽  
pp. 192-202 ◽  
Author(s):  
Karl Borch

This paper contains little which can be considered new. It gives a survey of results which have been presented over the last 10-15 years. At one time these results seemed very promising, but in retrospect it is doubtful whether they have fulfilled the expectations they raised. In this situation it may be useful to retrace one's steps and see if problems can be reformulated or if new approaches can be found.

Mathematical models have been used in insurance for a long time. One of the first was the Gompertz mortality law; a more recent model, which has been intensively studied, is the Compound Poisson Distribution in Lundberg's risk theory.

When a model is introduced, one usually proceeds by stages. The first step is to see if the model appears acceptable on a priori grounds. If it does, the second step is to examine the implications of the model, to see if any of these are in obvious contradiction with observations. If the result of this examination is satisfactory, the third step is usually a statistical analysis to find out how well the model approximates the real-life situation one wants to analyse. If the model passes this second examination, the next and final step may be to estimate the parameters of the model and use it in practice, i.e. to make decisions in the real world.

The advantage of working with a model is that it gives an overall purpose to the collection and analysis of data. A good model should tell us which data we need, and why.
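For readers unfamiliar with the compound Poisson model mentioned above, here is a minimal simulation sketch: the number of claims per period is Poisson-distributed, and the total claim amount is the sum of that many individual claim sizes. The rate, the claim-size distribution and the parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def aggregate_claims(lam: float, mean_claim: float, n_periods: int) -> np.ndarray:
    """Total claims S = X1 + ... + XN per period, with N ~ Poisson(lam) and
    exponentially distributed claim sizes (an illustrative choice)."""
    n_claims = rng.poisson(lam, size=n_periods)
    return np.array([rng.exponential(mean_claim, size=n).sum() for n in n_claims])

totals = aggregate_claims(lam=3.0, mean_claim=1000.0, n_periods=10_000)
print(f"mean total per period: {totals.mean():.0f}, "
      f"99th percentile: {np.percentile(totals, 99):.0f}")
```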


2021 ◽  
Vol 11 (40) ◽  
pp. 126-127
Author(s):  
Maurizio Brizzi ◽  
Daniele Nani ◽  
Lucietta Betti

One of the major criticisms directed at basic research on high-dilution effects is the lack of a consistent statistical approach; it therefore seems crucial to fix some milestones in the statistical analysis of this kind of experimentation. Since plant research in homeopathy has recently been developed and one of the most commonly used models is based on in vitro seed germination, here we propose a statistical approach focused on the Poisson distribution, which satisfactorily fits the number of non-germinated seeds.

The Poisson distribution is a discrete-valued model often used in statistics to represent the number X of specific events (telephone calls, industrial machine failures, genetic mutations, etc.) that occur in a fixed period of time, supposing that the instantaneous probability of occurrence of such events is constant. If we denote by λ the average number of events that occur within the fixed period, the probability of observing exactly k events is P(k) = e^(-λ) λ^k / k!, k = 0, 1, 2, … This distribution is commonly used when dealing with rare events, in the sense that it must be almost impossible for two events to occur at the same time. The Poisson distribution is the basic model of the so-called Poisson process, which is a counting process N(t), where t is a time parameter, having these properties:
- The process starts at zero: N(0) = 0;
- The increments are independent;
- The number of events that occur in a period of length dt follows a Poisson distribution with parameter proportional to dt;
- The waiting time, i.e. the time between one event and the next, follows an exponential distribution.

In a series of experiments performed by our research group ([1], [2], [3], [4]) we applied this distribution to the number X of non-germinated seeds out of a fixed number N* of seeds in a Petri dish (usually N* = 33 or N* = 36). The goodness-of-fit was checked by different tests (Kolmogorov distance and chi-squared), as well as with the Poissonness plot proposed by Hoaglin [5]. The good fit of the Poisson distribution allows the use of specific tests, such as the global Poisson test (based on a chi-squared statistic) and the comparison of two Poisson parameters, based on the statistic z = (X1 - X2) / (X1 + X2)^(1/2), which for large samples (at least 20 observations) is approximately standard normally distributed, as illustrated in the sketch after the reference list. A very clear review of these Poisson-based tests is given in [6]. This good fit of the Poisson distribution suggests that the whole process of germination of wheat seeds may be considered a non-homogeneous Poisson process, where the germination rate is not constant but changes over time.

Keywords: Poisson process, counting variable, goodness-of-fit, wheat germination

References
[1] L. Betti, M. Brizzi, D. Nani, M. Peruzzi. A pilot statistical study with homeopathic potencies of Arsenicum Album in wheat germination as a simple model. British Homeopathic Journal; 83: 195-201.
[2] M. Brizzi, L. Betti (1999). Using statistics for evaluating the effectiveness of homeopathy. Analysis of a large collection of data from simple plant models. III Congresso Nazionale della SIB (Società Italiana di Biometria) di Roma, Abstract Book, 74-76.
[3] M. Brizzi, D. Nani, L. Betti, M. Peruzzi (2000). Statistical analysis of the effect of high dilutions of arsenic in a large dataset from a wheat germination model. British Homeopathic Journal, 89, 63-67.
[4] M. Brizzi, L. Betti (2010). Statistical tools for alternative research in plant experiments. Metodološki Zvezki – Advances in Methodology and Statistics, 7, 59-71.
[5] D. C. Hoaglin (1980). A Poissonness plot. The American Statistician, 34, 146-149.
[6] L. Sachs (1984). Applied Statistics: A Handbook of Techniques. Springer-Verlag, 186-189.
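A minimal numerical sketch of the two-parameter comparison described above, using hypothetical counts of non-germinated seeds; the z statistic and its normal approximation follow the formula given in the abstract.

```python
import math
from statistics import NormalDist

def poisson_z_test(x1: int, x2: int):
    """Compare two Poisson counts via z = (x1 - x2) / sqrt(x1 + x2);
    approximately standard normal for large counts (two-sided p-value)."""
    z = (x1 - x2) / math.sqrt(x1 + x2)
    p = 2.0 * (1.0 - NormalDist().cdf(abs(z)))
    return z, p

# Hypothetical totals of non-germinated seeds in two treatment groups.
z, p = poisson_z_test(41, 62)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```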


2001 ◽  
Vol 01 (03) ◽  
pp. L147-L153 ◽  
Author(s):  
Janusz Smulko ◽  
Claes-Göran Granqvist ◽  
Laszlo B. Kish

Resistance noise data from a single gas sensor can be utilized to identify gas mixtures. We calculated the power spectral density, higher-order probability densities and the bispectrum function of the recorded noise samples; these functions are sensitive to different natural vapors and can be employed to select a proper detection criterion for gas composites and odors.
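Of the statistics listed above, the power spectral density is the simplest to reproduce; the sketch below estimates it with Welch's averaged-periodogram method on a synthetic, strongly coloured noise record. The sampling rate, segment length and signal are assumptions for illustration, not the sensor data of the paper.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
fs = 10_000.0                                     # assumed sampling rate, Hz

# Synthetic stand-in for a recorded resistance-noise sample (Brownian-like noise).
noise = np.cumsum(rng.standard_normal(200_000))
noise -= noise.mean()

f, psd = welch(noise, fs=fs, nperseg=4096)        # Welch power spectral density estimate
slope = np.polyfit(np.log10(f[5:50]), np.log10(psd[5:50]), 1)[0]
print(f"low-frequency spectral slope: {slope:.2f}  (about -2 for Brownian noise)")
```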


1966 ◽  
Vol 24 ◽  
pp. 188-189
Author(s):  
T. J. Deeming

If we make a set of measurements, such as narrow-band or multicolour photo-electric measurements, which are designed to improve a scheme of classification, and in particular if they are designed to extend the number of dimensions of classification, i.e. the number of classification parameters, then some important problems of analytical procedure arise. First, it is important not to reproduce the errors of the classification scheme which we are trying to improve. Second, when trying to extend the number of dimensions of classification we have little or nothing with which to test the validity of the new parameters.

Problems similar to these have occurred in other areas of scientific research (notably psychology and education) and the branch of Statistics called Multivariate Analysis has been developed to deal with them. The techniques of this subject are largely unknown to astronomers, but, if carefully applied, they should at the very least ensure that the astronomer gets the maximum amount of information out of his data and does not waste his time looking for information which is not there. More optimistically, these techniques are potentially capable of indicating the number of classification parameters necessary and giving specific formulas for computing them, as well as pinpointing those particular measurements which are most crucial for determining the classification parameters.
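The abstract does not name a particular method, but principal component analysis is one standard multivariate technique for judging how many independent classification parameters a set of photometric indices actually supports, and its eigenvectors provide explicit linear formulas for computing them. A minimal sketch on synthetic measurements follows; all data and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for five narrow-band photometric indices of 500 stars,
# generated from two underlying physical parameters plus measurement noise.
params = rng.standard_normal((500, 2))
mixing = rng.standard_normal((2, 5))
indices = params @ mixing + 0.1 * rng.standard_normal((500, 5))

# Principal component analysis via eigendecomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(np.cov(indices, rowvar=False))
explained = eigvals[::-1] / eigvals.sum()   # variance fraction, largest component first
print("variance explained per component:", np.round(explained, 3))
# Components carrying non-negligible variance indicate how many classification
# parameters the data support; their eigenvectors give the formulas for computing them.
```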


Author(s):  
Gianluigi Botton ◽  
Gilles L'espérance

As interest in parallel EELS spectrum imaging grows in laboratories equipped with commercial spectrometers, different approaches have been used in recent years by a few research groups in the development of the technique of spectrum imaging, as reported in the literature. Spectrum images can now be obtained either by controlling both the microscope and the spectrometer with a personal computer, or by using more powerful workstations interfaced to conventional multichannel analysers with commercially available programs to control the microscope and the spectrometer. Work on the limits of the technique, in terms of its quantitative performance, was reported by the present author, where a systematic study was carried out of artifacts, detection limits, and statistical errors as a function of the desired spatial resolution and of the range of chemical elements to be mapped. The aim of the present paper is to show an application of quantitative parallel EELS spectrum imaging in which statistical analysis is performed at each pixel, interpretation is carried out using criteria established from that analysis, and variations in composition are analyzed with the help of information retrieved from t/λ maps so that artifacts are avoided.

