uniform probability
Recently Published Documents

TOTAL DOCUMENTS: 61 (five years: 10)
H-INDEX: 10 (five years: 0)

Author(s):  
Lan K. Nguyen ◽  
Duy H. N. Nguyen ◽  
Nghi H. Tran ◽  
Clayton Bosler ◽  
David Brunnenmeyer

Synthese ◽  
2021 ◽  
Author(s):  
Rush T. Stewart

Abstract: Epistemic states of uncertainty play important roles in ethical and political theorizing. Theories that appeal to a “veil of ignorance,” for example, analyze fairness or impartiality in terms of certain states of ignorance. It is important, then, to scrutinize proposed conceptions of ignorance and explore promising alternatives in such contexts. Here, I study Lerner’s probabilistic egalitarian theorem in the setting of imprecise probabilities. Lerner’s theorem assumes that a social planner tasked with distributing income to individuals in a population is “completely ignorant” about which utility functions belong to which individuals. Lerner models this ignorance with a certain uniform probability distribution, and shows that, under certain further assumptions, income should be equally distributed. Much of the criticism of the relevance of Lerner’s result centers on the representation of ignorance involved. Imprecise probabilities provide a general framework for reasoning about various forms of uncertainty including, in particular, ignorance. To what extent can Lerner’s conclusion be maintained in this setting?
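To make the structure of Lerner's argument concrete, here is a minimal two-person sketch in the spirit of the standard textbook presentation; the strictly concave utilities and the uniform distribution over assignments are assumptions of this sketch, not a quotation of the paper's formalism.

```latex
% Two-person sketch: utilities u_1, u_2 are strictly concave and increasing,
% a fixed total income Y is split as y_1 + y_2 = Y, and each of the two
% assignments of utilities to persons has probability 1/2 (the uniform
% distribution modelling complete ignorance). Expected welfare is
\begin{align*}
W(y_1, y_2)
  &= \tfrac{1}{2}\bigl[u_1(y_1) + u_2(y_2)\bigr]
   + \tfrac{1}{2}\bigl[u_1(y_2) + u_2(y_1)\bigr] \\
  &= \tfrac{1}{2}\bigl[v(y_1) + v(y_2)\bigr],
  \qquad v := u_1 + u_2 .
\end{align*}
% Since v is strictly concave, v(y_1) + v(y_2) \le 2\, v\!\bigl(\tfrac{y_1+y_2}{2}\bigr),
% so expected welfare is maximized by the equal split y_1 = y_2 = Y/2.
```

With more individuals the same concavity argument applies to the sum of the utility functions, which is why modeling complete ignorance by a uniform distribution pushes toward equal distribution.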


Author(s):  
Andrea Berdondini

ABSTRACT: This article describes an optimization method for entropy encoding applicable to a source of independent and identically distributed random variables. The algorithm can be explained with the following example: take a source of i.i.d. random variables X with a uniform probability distribution over an alphabet of cardinality 10. With this source, we generate messages of length 1000, which will be encoded in base 10. We call XG the set containing all messages that can be generated by the source. According to Shannon's first theorem, if the average entropy of X, calculated over the set XG, is H(X) ≈ 0.9980, the average length of the encoded messages will be 1000·H(X) ≈ 998.0. Now, we increase the message length by one and calculate the average entropy over the 10% of length-1001 sequences with the lowest entropy; we call this set XG10. The average entropy of X10, calculated over XG10, is H(X10) ≈ 0.9964, so the average length of the encoded messages will be 1001·H(X10) ≈ 997.4. The difference between the average encoded lengths for the two sets (XG and XG10) is 998.0 − 997.4 = 0.6. Therefore, if we use XG10, we reduce the average length of the encoded message by 0.6 digits in base ten, and the average information per symbol becomes 997.4/1000 = 0.9974, which is less than the average entropy of X, H(X) ≈ 0.998. We can use XG10 in place of XG because we can construct a one-to-one correspondence between all possible sequences generated by our source and the ten percent of length-1001 sequences with the lowest entropy. In this article, we show that this transformation can be performed by applying random variations to the sequences generated by the source.
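The arithmetic in this abstract can be checked with a short Monte Carlo sketch (Python with NumPy assumed; the plug-in entropy estimator and the trial count are choices of the sketch, not of the article): sample i.i.d. uniform sequences, compute their empirical entropy in base 10, and compare the average encoded length over length-1000 sequences with that over the lowest-entropy 10% of length-1001 sequences.

```python
import numpy as np

def empirical_entropy_base10(seq, alphabet_size=10):
    """Plug-in (empirical) entropy of a sequence, in base-10 units."""
    counts = np.bincount(seq, minlength=alphabet_size)
    p = counts[counts > 0] / len(seq)
    return float(-(p * np.log10(p)).sum())

rng = np.random.default_rng(0)
n_trials = 20000

# Average empirical entropy of i.i.d. uniform sequences of length 1000.
h_1000 = np.array([
    empirical_entropy_base10(rng.integers(0, 10, size=1000))
    for _ in range(n_trials)
])

# Empirical entropies of length-1001 sequences; keep the 10% with lowest entropy.
h_1001 = np.sort(np.array([
    empirical_entropy_base10(rng.integers(0, 10, size=1001))
    for _ in range(n_trials)
]))
h_1001_low10 = h_1001[: n_trials // 10]

print("avg encoded length, all length-1000 sequences:       ", 1000 * h_1000.mean())
print("avg encoded length, lowest-entropy 10% of length-1001:", 1001 * h_1001_low10.mean())
```

With these settings the two printed averages should come out close to the 998.0 and 997.4 figures quoted above.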


2021 ◽  
Vol 19 (3) ◽  
Author(s):  
PER NILSSON

This study examines informal hypothesis testing in the context of drawing inferences about underlying probability distributions. Through a small-scale teaching experiment of three lessons, the study explores how fifth-grade students distinguish a non-uniform probability distribution from uniform probability distributions in a data-rich learning environment, and what role processes of data production play in their investigations. The study outlines aspects of students’ informal understanding of hypothesis testing. It shows how students without formal training in hypothesis testing can follow the logic that a small difference between samples can be the effect of randomness, while a large difference implies a real difference in the underlying process. The students distinguished the mode and the size of differences in frequencies as signals in the data and used these signals to give data-based reasons in processes of informal hypothesis testing. The study also highlights the role of data production and points to a need for further research on its role in an informal approach to the teaching and learning of statistical inference. First published December 2020 at Statistics Education Research Journal: Archives
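The logic the students followed, that small differences between samples are compatible with a uniform process while persistently large differences are not, can be illustrated with a small simulation (the spinners, sample sizes, and spread measure below are illustrative choices, not materials from the study):

```python
import numpy as np

rng = np.random.default_rng(1)

uniform_p = [0.25, 0.25, 0.25, 0.25]   # a fair four-colour spinner
skewed_p  = [0.40, 0.30, 0.20, 0.10]   # a spinner with unequal sectors

def frequency_spread(p, n):
    """Largest minus smallest relative frequency in a sample of size n."""
    counts = rng.multinomial(n, p)
    freqs = counts / n
    return freqs.max() - freqs.min()

for n in (20, 2000):
    u = np.mean([frequency_spread(uniform_p, n) for _ in range(1000)])
    s = np.mean([frequency_spread(skewed_p, n) for _ in range(1000)])
    print(f"n={n:5d}  typical spread, uniform: {u:.2f}   non-uniform: {s:.2f}")
```

At small sample sizes both spinners can show sizable spreads (randomness), while at large sample sizes only the non-uniform spinner keeps a large spread (a real difference in the underlying process).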


Author(s):  
Hamish McManus ◽  
Denton Callander ◽  
Jason Asselin ◽  
James McMahon ◽  
Jennifer F Hoy ◽  
...  

Abstract: Ambitious World Health Organization targets for disease elimination require monitoring of epidemics using routine health data in settings of decreasing and low incidence. We evaluated two methods commonly applied to routine testing results to estimate incidence rates; both assume a uniform probability of infection between consecutive negative and positive tests and place the infection at (1) the midpoint of this interval or (2) a randomly selected point in this interval. We compared these with an approximation to the Poisson-binomial distribution, which assigns partial incidence to time periods based on the uniform probability of occurrence within these intervals. We assessed bias, variance and convergence of estimates using simulations of Weibull-distributed failure times with systematically varied baseline incidence and trend. We considered results for quarterly, half-yearly and yearly incidence estimation frequencies. We applied the methods to assess human immunodeficiency virus (HIV) incidence in HIV-negative patients from the Treatment with Antiretrovirals and their Impact on Positive And Negative men study between 2012 and 2018. The Poisson-binomial method had reduced bias and variance at low levels of incidence and at higher estimation frequencies, with more consistent estimation. Applying the methods to real-world assessment of HIV incidence showed decreased variance in the Poisson-binomial model estimates, with observed incidence declining to levels at which the simulation results had indicated bias in the midpoint and random-point methods.
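The allocation idea behind the Poisson-binomial-style method can be sketched in a few lines (hypothetical test dates, Python standard library only; this is not the authors' implementation, and the analogous allocation of person-time is omitted): each interval between a last negative and a first positive test contributes a fraction of one event to every calendar period it overlaps, proportional to the overlap, whereas the midpoint method assigns the whole event to a single period.

```python
from datetime import date

# Hypothetical seroconversion intervals: (last negative test, first positive test).
intervals = [
    (date(2015, 10, 1), date(2016, 3, 1)),   # spans the 2015/2016 boundary
    (date(2016, 2, 1),  date(2016, 6, 1)),   # falls entirely within 2016
    (date(2016, 12, 1), date(2017, 1, 20)),  # spans the 2016/2017 boundary
]

# Calendar years for which period-specific event counts are wanted.
periods = [(date(y, 1, 1), date(y + 1, 1, 1)) for y in range(2015, 2018)]

def overlap_days(a_start, a_end, b_start, b_end):
    """Number of days in the overlap of [a_start, a_end) and [b_start, b_end)."""
    start, end = max(a_start, b_start), min(a_end, b_end)
    return max((end - start).days, 0)

# Uniform allocation: each interval contributes a fraction of one event to each
# period, proportional to how much of the interval falls inside that period.
uniform_events = []
for p_start, p_end in periods:
    total = 0.0
    for last_neg, first_pos in intervals:
        length = (first_pos - last_neg).days
        total += overlap_days(last_neg, first_pos, p_start, p_end) / length
    uniform_events.append(total)

# Midpoint allocation for comparison: the whole event goes to the period
# containing the interval midpoint.
midpoint_events = [0.0 for _ in periods]
for last_neg, first_pos in intervals:
    mid = last_neg + (first_pos - last_neg) / 2
    for i, (p_start, p_end) in enumerate(periods):
        if p_start <= mid < p_end:
            midpoint_events[i] += 1.0

print("uniform-allocation events per year:", uniform_events)
print("midpoint events per year:          ", midpoint_events)
```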


2020 ◽  
Vol 67 (12) ◽  
pp. 3372-3376
Author(s):  
Holger Mandry ◽  
Andreas Herkle ◽  
Sven Muelich ◽  
Joachim Becker ◽  
Robert F. H. Fischer ◽  
...  

2020 ◽  
Author(s):  
Yanze Huang ◽  
Limei Lin ◽  
Li Xu

Abstract: As the size of a multiprocessor system grows, the probability that faults occur in the system increases. One measure of the reliability of a multiprocessor system is the probability that a fault-free subsystem of a certain size still exists in the presence of individual faults. In this paper, we use the probabilistic fault model to establish the subgraph reliability for $AG_n$, the $n$-dimensional alternating group graph. More precisely, we first analyze the probability $R_n^{n-1}(p)$ that at least one subgraph of dimension $n-1$ is fault-free in $AG_n$ when each vertex is fault-free with a uniform probability $p$. Since the subgraphs of $AG_n$ intersect in rather complicated ways, we resort to the principle of inclusion–exclusion, considering intersections of up to five subgraphs, and obtain an upper bound on the probability. We then consider the probabilistic fault model in which the probability of a single vertex being fault-free is nonuniform. We show that the upper bounds under these two models are very close to the lower bound obtained in a previous result and are better than the upper bound deduced from that of the arrangement graph, which means that the upper bound we obtain is very tight.
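What $R_n^{n-1}(p)$ measures can be checked by brute force for small $n$ with a Monte Carlo toy (this is not the paper's inclusion–exclusion analysis, and the family of $(n-1)$-dimensional subgraphs used here, obtained by fixing one symbol in one of positions $3,\dots,n$, is an assumption of the sketch):

```python
import itertools
import random

def even_permutations(n):
    """All even permutations of (1, ..., n): the vertex set of AG_n."""
    def is_even(perm):
        inversions = sum(
            1
            for i in range(len(perm))
            for j in range(i + 1, len(perm))
            if perm[i] > perm[j]
        )
        return inversions % 2 == 0
    return [p for p in itertools.permutations(range(1, n + 1)) if is_even(p)]

def subgraph_vertex_sets(n):
    """Vertex sets of the assumed (n-1)-dimensional subgraphs: all even
    permutations with symbol a fixed at position j, for j = 3..n (0-indexed 2..n-1)."""
    vertices = even_permutations(n)
    subsets = [
        [v for v in vertices if v[j] == a]
        for j in range(2, n)
        for a in range(1, n + 1)
    ]
    return vertices, subsets

def estimate_reliability(n, p, trials=20000, seed=0):
    """Monte Carlo estimate of the probability that at least one of the assumed
    (n-1)-dimensional subgraphs is completely fault-free, when every vertex is
    fault-free independently with probability p."""
    rng = random.Random(seed)
    vertices, subsets = subgraph_vertex_sets(n)
    hits = 0
    for _ in range(trials):
        fault_free = {v for v in vertices if rng.random() < p}
        if any(all(v in fault_free for v in s) for s in subsets):
            hits += 1
    return hits / trials

if __name__ == "__main__":
    for p in (0.90, 0.95, 0.99):
        print(f"n=4, p={p:.2f}:  R ≈ {estimate_reliability(4, p):.3f}")
```

Because the fixed-position subgraphs share vertices, the events "subgraph $i$ is fault-free" are not independent, which is exactly why the analytical treatment needs inclusion–exclusion rather than a simple product formula.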

