Composite Tests under Corrupted Data

Entropy ◽  
2019 ◽  
Vol 21 (1) ◽  
pp. 63 ◽  
Author(s):  
Michel Broniatowski ◽  
Jana Jurečková ◽  
Ashok Moses ◽  
Emilie Miranda

This paper focuses on test procedures under corrupted data. We assume that the observations Z_i are mismeasured, due to the presence of measurement errors. Thus, instead of Z_i for i = 1, …, n, we observe X_i = Z_i + δ V_i, with an unknown parameter δ and an unobservable random variable V_i. It is assumed that the random variables Z_i are i.i.d., as are the X_i and the V_i. The test procedure aims at deciding between two simple hypotheses pertaining to the density of the variable Z_i, namely f_0 and g_0. In this setting, the density of the V_i is supposed to be known. The procedure which we propose aggregates likelihood ratios for a collection of values of δ. A new definition of least-favorable hypotheses for the aggregate family of tests is presented, and a relation with the Kullback-Leibler divergence between the families (f_δ)_δ and (g_δ)_δ is established. Finite-sample lower bounds for the power of these tests are obtained, both through analytical inequalities and through simulation under the least-favorable hypotheses. Since no optimality holds for the aggregation of likelihood ratio tests, a similar procedure is proposed, replacing the individual likelihood ratios by divergence-based test statistics. It is shown and discussed that the resulting aggregated test may perform better than the aggregated likelihood ratio procedure.
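The aggregation idea can be illustrated with a short simulation. The sketch below is hypothetical, not the authors' code: it forms, for each δ on a grid, the likelihood ratio of the convolved densities f_δ and g_δ and rejects when the maximum over the grid exceeds a Monte Carlo threshold. The Gaussian hypotheses, the grid, and the aggregation-by-maximum rule are all illustrative assumptions.

```python
# A minimal sketch of an aggregated likelihood ratio test, assuming
# Gaussian hypotheses f0 = N(0,1), g0 = N(1,1) and Gaussian noise V ~ N(0,1),
# so that the contaminated densities are f_delta = N(0, 1 + delta^2) and
# g_delta = N(1, 1 + delta^2).  Grid and threshold are illustrative choices.
import numpy as np
from scipy.stats import norm

def aggregated_lr_statistic(x, deltas):
    """Max over delta of the log-likelihood ratio of g_delta vs f_delta."""
    stats = []
    for d in deltas:
        sd = np.sqrt(1.0 + d**2)
        llr = norm.logpdf(x, loc=1.0, scale=sd) - norm.logpdf(x, loc=0.0, scale=sd)
        stats.append(llr.sum())
    return max(stats)

rng = np.random.default_rng(0)
deltas = np.linspace(0.0, 2.0, 9)

# Calibrate the rejection threshold under the contaminated null (level ~ 5%).
null_stats = [aggregated_lr_statistic(rng.normal(0, np.sqrt(1 + 0.5**2), 100), deltas)
              for _ in range(2000)]
threshold = np.quantile(null_stats, 0.95)

# Test a contaminated sample drawn under g_0 with delta = 0.5.
x = rng.normal(1.0, np.sqrt(1 + 0.5**2), 100)
print("reject H0:", aggregated_lr_statistic(x, deltas) > threshold)
```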

2003 ◽  
Vol 15 (10) ◽  
pp. 2307-2337 ◽  
Author(s):  
Zhiyi Chi ◽  
Peter L. Rauske ◽  
Daniel Margoliash

The detection of patterned spiking activity is important in the study of neural coding. A pattern filtering approach is developed for pattern detection under the framework of point processes, which offers flexibility in combining temporal details and firing rates. The detection combines multiple steps of filtering in a coarse-to-fine manner. Under some conditional Poisson assumptions on the spiking activity, each filtering step is equivalent to classifying by likelihood ratios all the data segments as targets or as background sequences. Unlike previous studies, where global surrogate data were used to evaluate the statistical significance of the detected patterns, a localized p-test procedure is developed, which better accounts for firing modulation and nonstationarity in spiking activity. Common temporal structures of patterned activity are learned using an entropy-based alignment procedure, without relying on metrics or pair-wise alignment. Applications of pattern filtering to single, presumptive interneurons recorded in the nucleus HVc of the zebra finch are illustrated. These demonstrate a match between the auditory-evoked response to playback of the individual bird's own song and spontaneous activity during sleep. Small temporal compression or expansion, or both, is required for optimal matching of spontaneous patterns to stimulus-evoked activity.
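Under the conditional Poisson assumption, classifying a candidate segment by likelihood ratio reduces to comparing the Poisson log-likelihoods of its bin counts under a target rate profile versus a background rate. The sketch below is a hypothetical illustration of a single such filtering step; the rate profiles, bin width, and threshold are invented for the example.

```python
# A minimal sketch of one likelihood-ratio filtering step for spike trains,
# assuming bin counts are conditionally Poisson.  Rate profiles (in spikes
# per bin) and the zero decision threshold are illustrative assumptions.
import numpy as np
from scipy.stats import poisson

def segment_log_lr(counts, target_rates, background_rate):
    """Log-likelihood ratio of a binned segment: target pattern vs background."""
    ll_target = poisson.logpmf(counts, target_rates).sum()
    ll_background = poisson.logpmf(counts, background_rate).sum()
    return ll_target - ll_background

# Example: a 10-bin segment scored against a bursting target profile.
counts = np.array([0, 1, 4, 6, 5, 2, 0, 1, 0, 0])
target_rates = np.array([0.5, 1, 4, 5, 5, 2, 0.5, 0.5, 0.5, 0.5])
background_rate = 1.5

print("classified as target:",
      segment_log_lr(counts, target_rates, background_rate) > 0)
```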


1994 ◽  
Vol 31 (03) ◽  
pp. 595-605 ◽  
Author(s):  
Paul Joyce

The stationary distribution for the population frequencies under an infinite alleles model is described as a random sequence (x_1, x_2, …) such that Σ x_i = 1. Likelihood ratio theory is developed for random samples drawn from such populations. As a result of the theory, it is shown that any parameter distinguishing an infinite alleles model with selection from the neutral infinite alleles model cannot be consistently estimated based on gene frequencies at a single locus. Furthermore, the likelihood ratio (neutral versus selection) converges to a non-trivial random variable under both hypotheses. This shows that if one wishes to test a completely specified infinite alleles model with selection against neutrality, the test will not obtain power 1 in the limit.
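For the neutral model, the likelihood of a sample configuration is given by the Ewens sampling formula; the sketch below (purely illustrative, not the paper's code) computes that likelihood, where theta is the scaled mutation rate and allele_counts[j-1] is the number of allele types observed exactly j times in the sample. The value of theta is an assumption for the example.

```python
# A minimal sketch of the neutral-model sample likelihood via the Ewens
# sampling formula: P(a) = n! / (theta)_n * prod_j (theta/j)^{a_j} / a_j!,
# where (theta)_n is the rising factorial theta * (theta+1) * ... * (theta+n-1).
from math import lgamma, log, exp

def ewens_log_likelihood(allele_counts, theta):
    """allele_counts[j-1] = number of allele types seen exactly j times."""
    n = sum((j + 1) * a for j, a in enumerate(allele_counts))
    # log( n! / rising factorial ), using (theta)_n = Gamma(theta+n)/Gamma(theta)
    log_p = lgamma(n + 1) - (lgamma(theta + n) - lgamma(theta))
    for j, a in enumerate(allele_counts, start=1):
        log_p += a * (log(theta) - log(j)) - lgamma(a + 1)
    return log_p

# Sample of n = 10 genes: three singleton alleles, two alleles seen twice,
# one allele seen three times (n = 3*1 + 2*2 + 1*3 = 10).
print(exp(ewens_log_likelihood([3, 2, 1], theta=1.0)))
```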


2016 ◽  
Author(s):  
Nick H Barton ◽  
Alison M Etheridge ◽  
Amandine Véber

Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. We first review the long history of the infinitesimal model in quantitative genetics. Then we provide a definition of the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, ... We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit, as the number M of underlying loci tends to infinity, of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. In each case, by conditioning on the pedigree relating individuals in the population, we incorporate arbitrary selection and population structure. We suppose that we can observe the pedigree up to the present generation, together with all the ancestral traits, and we show, in particular, that the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order M^{-1/2}. Simulations suggest that in particular cases the convergence may be as fast as 1/M.
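The phenotypic-level definition lends itself to a very small simulation: each offspring's genetic component is drawn from a normal distribution centred at the mid-parent genetic value, with a segregation variance that does not depend on the parental traits. The sketch below makes the standard simplifying assumptions of random mating, no inbreeding, and a fixed segregation variance V0/2; all parameter values are illustrative.

```python
# A minimal sketch of one generation under the infinitesimal model:
# offspring genetic value ~ Normal(mid-parent value, V0/2), plus independent
# environmental noise.  No inbreeding is tracked, so the segregation
# variance stays at V0/2; population size and variances are assumptions.
import numpy as np

rng = np.random.default_rng(1)
N, V0, VE = 1000, 1.0, 0.5   # population size, genetic and environmental variances

genetic = rng.normal(0.0, np.sqrt(V0), N)

def next_generation(genetic):
    mothers = rng.choice(genetic, N)   # random mating
    fathers = rng.choice(genetic, N)
    mid_parent = (mothers + fathers) / 2.0
    return rng.normal(mid_parent, np.sqrt(V0 / 2.0))  # segregation variance V0/2

for _ in range(10):
    genetic = next_generation(genetic)

traits = genetic + rng.normal(0.0, np.sqrt(VE), N)
print("genetic variance after 10 generations:", genetic.var())
```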


Author(s):  
Demin Nalic ◽  
Aleksa Pandurevic ◽  
Arno Eichberger ◽  
Branko Rogic

The increasingly common approach of combining different simulation software for the testing of automated driving systems (ADS) increases the need for suitable and convenient software designs. Recently developed co-simulation platforms (CSP) make it possible to cover the high demand for testing kilometres for ADS by combining vehicle simulation with traffic flow simulation software (TFSS) environments. Having chosen a suitable CSP raises the question of how the test procedures should be defined and constructed, and which test scenarios are relevant. Parameters of the ADS in the vehicle simulation, traffic parameters in the TFSS, and combinations of all of these can be used to define test scenarios. Thus, to automate a process consisting of vehicle and traffic parameters and a suitable CSP, a test procedure for ADS should be well designed and implemented. This paper presents the design and implementation of a complex co-simulation framework for virtual ADS testing combining IPG CarMaker and PTV Vissim.
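A co-simulation loop of this kind typically advances the vehicle simulator and the traffic simulator in lock-step, exchanging object states at each time step. The skeleton below is a generic, hypothetical sketch of such a loop; the adapter classes and all of their method names are invented for illustration and do not correspond to the actual CarMaker or Vissim APIs.

```python
# A generic, hypothetical co-simulation loop for ADS testing.  The two
# adapter classes are stand-ins for real vehicle- and traffic-simulator
# interfaces (e.g. wrappers around IPG CarMaker and PTV Vissim); none of
# these method names come from the real tools.
class VehicleSimAdapter:
    def step(self, dt, surrounding_traffic):
        """Advance the ego vehicle by dt and return its new state."""
        return {"x": 0.0, "y": 0.0, "v": 30.0}     # placeholder state

class TrafficSimAdapter:
    def step(self, dt, ego_state):
        """Advance the traffic flow by dt around the ego vehicle."""
        return [{"x": 10.0, "y": 0.0, "v": 28.0}]  # placeholder vehicle list

def run_scenario(vehicle_sim, traffic_sim, duration=60.0, dt=0.01):
    """Lock-step co-simulation: exchange states once per time step."""
    ego, traffic = None, []
    t = 0.0
    while t < duration:
        ego = vehicle_sim.step(dt, traffic)
        traffic = traffic_sim.step(dt, ego)
        t += dt
    return ego, traffic

run_scenario(VehicleSimAdapter(), TrafficSimAdapter())
```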


Author(s):  
Manfred Schaaf ◽  
Friedrich Schoeckle ◽  
Jaroslav Bartonicek

In recent years there has been a great effort in research and development on gasket testing for bolted joints in Europe and in North America. In Europe, a new standard for the calculation of flanged joints (EN 1591) was developed by the Technical Committee TC 74 of the European Committee for Standardization (CEN). This standard requires gasket factors which must be determined in accordance with the testing standard EN 13555. In North America, the ASTM Committee F03 on Gaskets was established to implement the gasket test procedures developed by the PVRC in the code. Since many companies operate worldwide, there is an interest in “harmonized” gasket testing procedures to minimize costs and to raise the effectiveness of the tests performed. Several information-exchange meetings on gasket constants and gasket testing have been held, and there have been many discussions concerning the differences between the European and the American test procedures. Up to now, only the test procedure for leakage tests has been “harmonized”. Although there are still some differences in detail, both the European gasket constants and the PVRC parameters can, in principle, be determined with the new definition of the test procedure. In this paper some tests are evaluated in both ways; the results show some mismatch. More tests (with several gasket materials) are necessary to prove the reliability of this procedure.


Entropy ◽  
2018 ◽  
Vol 20 (12) ◽  
pp. 966 ◽  
Author(s):  
Michel Broniatowski ◽  
Jana Jurečková ◽  
Jan Kalina

We consider the likelihood ratio test of a simple null hypothesis (with density f_0) against a simple alternative hypothesis (with density g_0) in the situation that observations X_i are mismeasured due to the presence of measurement errors. Thus, instead of X_i for i = 1, …, n, we observe Z_i = X_i + δ V_i with unobservable parameter δ and unobservable random variable V_i. When we ignore the presence of measurement errors and perform the original test, the probability of type I error becomes different from the nominal value, but the test is still the most powerful among all tests at the modified level. Further, we derive the minimax test of some families of misspecified hypotheses and alternatives. The test exploits the concept of pseudo-capacities elaborated by Huber and Strassen (1973) and Buja (1986). A numerical experiment illustrates the principles and performance of the novel test.
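The effect described here, a shifted type I error when measurement errors are ignored, is easy to see in simulation. The sketch below is hypothetical, not the paper's experiment: it runs the Gaussian likelihood ratio test designed for f_0 = N(0,1) versus g_0 = N(1,1) on contaminated data Z_i = X_i + δ V_i and reports the realized level; all distributions and parameter values are illustrative.

```python
# A minimal sketch showing the type I error shift when the likelihood ratio
# test designed for clean data is applied to contaminated observations
# Z_i = X_i + delta * V_i.  f0 = N(0,1), g0 = N(1,1), V ~ N(0,1); all
# parameter values are illustrative assumptions.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n, delta, n_rep = 50, 0.7, 5000

def lr_statistic(x):
    # log LR for N(1,1) vs N(0,1) reduces to sum_i (x_i - 1/2)
    return (x - 0.5).sum()

# Threshold at nominal level 5% under the *clean* null N(0,1):
# there the statistic is exactly Normal(-n/2, n), so use its quantile.
threshold = norm.ppf(0.95, loc=-n / 2.0, scale=np.sqrt(n))

# Realized level under the *contaminated* null X + delta * V.
rejections = 0
for _ in range(n_rep):
    z = rng.normal(0.0, 1.0, n) + delta * rng.normal(0.0, 1.0, n)
    rejections += lr_statistic(z) > threshold
print("nominal level 0.05, realized level:", rejections / n_rep)
```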


Author(s):  
Oren Fivel ◽  
Moshe Klein ◽  
Oded Maimon

In this paper we develop the foundation of a new theory for decision trees, based on a new modeling of phenomena with soft numbers. Soft numbers represent the theory of soft logic, which addresses the need to combine real processes and cognitive ones in the same framework. At the same time, soft logic develops a new concept of modeling and dealing with uncertainty: the uncertainty of time and space. It is a language that can talk in two reference frames, and it also suggests a way to combine them. In classical probability theory, for continuous random variables there is no distinction between probabilities involving strict and non-strict inequalities. Moreover, any probability involving equality collapses to zero, without distinguishing among the values that we would like the random variable to take for comparison. This work presents Soft Probability, obtained by incorporating Soft Numbers into probability theory. Soft Numbers are a set of new numbers that are linear combinations of multiples of "ones" and multiples of "zeros". In this work, we develop a probability involving equality as a "soft zero" multiple of a probability density function (PDF). We also extend this notion of soft probabilities to the classical definitions of complements, unions, intersections and conditional probabilities, and to the expectation, variance and entropy of a continuous random variable conditioned on being in a union of disjoint intervals and a discrete set of numbers. This extension provides information about a continuous random variable lying within a discrete set of numbers, such that its probability does not collapse completely to zero. In developing the notion of soft entropy, we found potentially another soft axis, multiples of 0log(0), which motivates exploring the properties of those new numbers and their applications. We extend the notion of soft entropy to the definitions of cross entropy and Kullback-Leibler divergence (KLD), and we find that a soft KLD is a soft number that does not have a multiple of 0log(0). Based on the soft KLD, we define a soft mutual information, which can be used as a splitting criterion in decision trees for data sets of continuous random variables consisting of single samples and intervals.
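As a rough illustration of the arithmetic the abstract describes, a soft number can be represented as a pair of coefficients: one multiplying the "zero" axis and one multiplying the ordinary "one" axis. The tiny class below is an interpretive sketch based only on the abstract's description of soft numbers as such linear combinations; it is not the authors' formalism.

```python
# An interpretive sketch of soft-number arithmetic, assuming (per the
# abstract) that a soft number is a linear combination a*"zero" + b*"one".
# Addition and scaling act coordinate-wise; this is not the authors' formalism.
from dataclasses import dataclass

@dataclass(frozen=True)
class SoftNumber:
    zero_part: float   # coefficient of the "soft zero" axis
    one_part: float    # coefficient of the ordinary ("one") axis

    def __add__(self, other):
        return SoftNumber(self.zero_part + other.zero_part,
                          self.one_part + other.one_part)

    def scale(self, c):
        return SoftNumber(c * self.zero_part, c * self.one_part)

# A probability involving equality kept as a "soft zero" multiple of the PDF:
# P(X = x) ~ pdf(x) * soft_zero instead of collapsing entirely to 0.
soft_zero = SoftNumber(1.0, 0.0)
pdf_at_x = 0.35                      # illustrative density value
print(soft_zero.scale(pdf_at_x))     # SoftNumber(zero_part=0.35, one_part=0.0)
```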


1974 ◽  
Vol 32 (02/03) ◽  
pp. 483-491
Author(s):  
E. A Loeliger ◽  
M. J Boekhout-Mussert ◽  
L. P van Halem-Visser ◽  
J. D. E Habbema ◽  
H de Jonge

Summary
The present study concerned the reproducibility of the so-called prothrombin time as assessed with a series of the more commonly used modifications of Quick's one-stage assay procedure, i.e. the British comparative reagent, home-made human brain thromboplastin, Simplastin, Simplastin A, and Thrombotest. All five procedures were tested manually on pooled lyophilized normal and patients' plasmas. In addition, Simplastin A and Thrombotest were investigated semi-automatically on individual freshly prepared patients' plasmas. From the results obtained, the following conclusions may be drawn:
The reproducibility of results obtained with manual reading on lyophilized plasmas is satisfactory for all five test procedures. For Simplastin, the reproducibility of values in the range of insufficient anticoagulation is relatively low, due to the low discrimination power of the test procedure in the near-normal range (the so-called low sensitivity of rabbit brain thromboplastins). The reproducibility of Thrombotest excels as a consequence of its particularly easily discerned coagulation endpoint.
The reproducibility of Thrombotest, when tested on freshly prepared plasmas using Schnitger's semi-automatic coagulometer (a fibrinometer-like apparatus), is no longer superior to that of Simplastin A.
The constant of proportionality between the coagulation times obtained with Simplastin A and Thrombotest was estimated at 0.64.
Reconstituted Thrombotest is stable for 24 hours when stored at 4° C, whereas reconstituted Simplastin A is not.
The Simplastin A method and Thrombotest seem to be equally sensitive to "activation" of blood coagulation upon storage.


2013 ◽  
Vol 35 (2) ◽  
pp. 165-187
Author(s):  
E. S. Burt

Why does writing of the death penalty demand the first-person treatment that it also excludes? The article investigates the role played by the autobiographical subject in Derrida's The Death Penalty, Volume I, where the confessing ‘I’ doubly supplements the philosophical investigation into what Derrida sees as a trend toward the worldwide abolition of the death penalty: first, to bring out the harmonies or discrepancies between the individual subject's beliefs, anxieties, desires and interests with respect to the death penalty and the state's exercise of its sovereignty in applying it; and second, to provide a new definition of the subject as haunted, as one that has been, but is no longer, subject to the death penalty, in the light of the worldwide abolition currently underway.

