Cramér-Rao-type Bound and Stam's Inequality for Discrete Random Variables

2019 ◽  
Author(s):  
Tomohiro Nishiyama

The variance and the entropy power of a continuous random variable are bounded from below by the reciprocal of its Fisher information, through the Cramér-Rao bound and Stam's inequality respectively. In this note, we introduce a Fisher information for discrete random variables and derive the discrete Cramér-Rao-type bound and the discrete Stam's inequality.
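The abstract's bound can be illustrated numerically. The sketch below uses one candidate finite-difference analogue of Fisher information, J(X) = E[(p(X−1)/p(X) − 1)²]; this definition is an assumption made here for illustration and may differ from the note's exact definition:

```python
import math

def poisson_pmf(x, lam):
    """P(X = x) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**x / math.factorial(x)

def discrete_fisher_information(pmf, support):
    """One candidate finite-difference analogue of Fisher information:
    J(X) = E[(p(X-1)/p(X) - 1)^2], with p(-1) taken to be 0.
    Illustrative definition; the note's exact definition may differ."""
    total = 0.0
    prev = 0.0  # p(x - 1), starting from p(-1) = 0
    for x in support:
        px = pmf(x)
        total += px * (prev / px - 1.0) ** 2
        prev = px
    return total

lam = 2.0
support = range(0, 60)  # truncated support; the Poisson tail beyond 60 is negligible
variance = sum((x - lam) ** 2 * poisson_pmf(x, lam) for x in support)
J = discrete_fisher_information(lambda x: poisson_pmf(x, lam), support)
# With this candidate definition, Poisson(lam) gives J = 1/lam, so the
# Cramer-Rao-type product variance * J comes out equal to 1 here.
```

For Poisson(λ) this candidate J equals 1/λ, so the product V[X]·J(X) is exactly 1, i.e. a Cramér-Rao-type bound V[X] ≥ 1/J(X) would be met with equality in this example.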

1975 ◽  
Vol 7 (4) ◽  
pp. 830-844 ◽  
Author(s):  
Lajos Takács

A sequence of random variables η0, η1, …, ηn, … is defined by the recurrence formula ηn = max (ηn–1 + ξn, 0) where η0 is a discrete random variable taking on non-negative integers only and ξ1, ξ2, … ξn, … is a semi-Markov sequence of discrete random variables taking on integers only. Define Δ as the smallest n = 1, 2, … for which ηn = 0. The random variable ηn can be interpreted as the content of a dam at time t = n(n = 0, 1, 2, …) and Δ as the time of first emptiness. This paper deals with the determination of the distributions of ηn and Δ by using the method of matrix factorisation.
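The recurrence and the first-emptiness time can be simulated directly; a minimal sketch that takes the input sequence ξ as given (so it sidesteps the semi-Markov structure the paper actually analyzes, and the function name is mine):

```python
def dam_content(eta0, shocks):
    """Iterate eta_n = max(eta_{n-1} + xi_n, 0) over the inputs xi_n.

    Returns the trajectory [eta_0, eta_1, ...] and Delta, the smallest
    n >= 1 with eta_n = 0 (None if the dam never empties in the sample).
    """
    etas = [eta0]
    delta = None
    for n, xi in enumerate(shocks, start=1):
        etas.append(max(etas[-1] + xi, 0))
        if delta is None and etas[-1] == 0:
            delta = n
    return etas, delta
```

For example, starting from η₀ = 1 with inputs ξ = (2, −1, −3, 1), the content runs 1, 3, 2, 0, 1, so the time of first emptiness is Δ = 3.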


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

This chapter focuses on probability mass functions. One of the primary uses of Bayesian inference is to estimate parameters. To do so, it is necessary to first build a good understanding of probability distributions. This chapter introduces the idea of a random variable and presents general concepts associated with probability distributions for discrete random variables. It starts off by discussing the concept of a function and goes on to describe how a random variable is a type of function. The binomial distribution and the Bernoulli distribution are then used as examples of probability mass functions (pmfs). Pmfs can be used to specify prior distributions, likelihoods, likelihood profiles, and/or posterior distributions in Bayesian inference.
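The two example pmfs named in the abstract can be written down directly; a minimal sketch (the function names are mine):

```python
from math import comb

def bernoulli_pmf(y, p):
    """P(Y = y) for a single trial with success probability p, y in {0, 1}."""
    return p if y == 1 else 1 - p

def binomial_pmf(y, n, p):
    """P(Y = y) for Y ~ Binomial(n, p): the number of ways to place y
    successes among n trials, times the probability of each arrangement."""
    return comb(n, y) * p**y * (1 - p) ** (n - y)
```

For instance, the probability of exactly 2 successes in 4 fair trials is C(4,2)·0.5⁴ = 0.375, and the pmf sums to 1 over y = 0…n as required.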


2020 ◽  
pp. 168-173
Author(s):  
Аалиева Бурул

Abstract: Determine the form of a continuous random variable, find the probability that a random variable falls in a given interval from a given distribution function, and be able to find the distribution density and the uniform distribution. The properties of the distribution function are taught and proved. The probability that the possible values of the random variable X lie in the interval (a, b) equals the increment of the distribution function over that interval; X takes values less than x1 with probability P(X < x1), and satisfies the inequality x1 ≤ X < x2 with probability P(x1 ≤ X < x2). A further way of characterizing continuous random variables is to introduce the density function of the probability distribution, found as the first derivative of the distribution function; the distribution function is thus an antiderivative of the density, in contrast to the way the probability distribution of a discrete random variable is characterized. The distribution function is non-decreasing, and the density satisfies ∫_{−∞}^{∞} f(x) dx = 1. In particular, if the possible values of the random variable lie in the interval (a, b), then ∫_a^b f(x) dx = 1. Keywords: distribution function, probability of a continuous random variable, discrete random variable, integral (cumulative) distribution function, antiderivative. DOI: 10.35254/bhu.2019.50.1. Vestnik of Bishkek State University, No. 4(50), 2019.
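The interval-probability property (probability as an increment of the distribution function) can be checked concretely; a minimal sketch for the Uniform(a, b) case, which is an illustrative choice rather than an example from the abstract:

```python
def uniform_cdf(x, a, b):
    """F(x) for X ~ Uniform(a, b). The density f(x) = 1/(b - a) on (a, b)
    is its derivative, so F is an antiderivative of f and f integrates to 1."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def interval_probability(x1, x2, a, b):
    """P(x1 <= X < x2): the increment of the distribution function."""
    return uniform_cdf(x2, a, b) - uniform_cdf(x1, a, b)
```

For Uniform(0, 1), P(0.2 ≤ X < 0.5) = F(0.5) − F(0.2) = 0.3, and an interval covering all of (a, b) has probability 1.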


2021 ◽  
pp. 109-124
Author(s):  
Timothy E. Essington

The chapter “Random Variables and Probability” serves as both a review and a reference on probability. The random variable is the core concept in understanding probability, parameter estimation, and model selection. This chapter reviews the basic idea of a random variable and discusses the distinction between the two main kinds, discrete random variables and continuous random variables, and outlines the most common probability mass or density functions used in ecology. Advanced sections cover distributions such as the gamma distribution, Student’s t-distribution, the beta distribution, the beta-binomial distribution, and zero-inflated models.
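Zero-inflated models, the last family listed, mix a point mass at zero with a count distribution; a minimal sketch of the zero-inflated Poisson pmf under one common parameterization (the names pi and lam are mine, not the chapter's):

```python
import math

def zip_pmf(y, pi, lam):
    """Zero-inflated Poisson: an excess zero with probability pi,
    otherwise a draw from Poisson(lam)."""
    poisson = math.exp(-lam) * lam**y / math.factorial(y)
    if y == 0:
        return pi + (1 - pi) * poisson  # structural zeros plus Poisson zeros
    return (1 - pi) * poisson
```

The zero class thus has probability π + (1 − π)e^{−λ}, larger than a plain Poisson would give, which is the point of zero inflation in ecological count data.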


2020 ◽  
Author(s):  
Ahmad Sudi Pratikno

Probability describes someone's chance of obtaining or winning an event. A discrete random variable is typically associated with repeated experiments, which form a pattern. The probability distribution of a discrete random variable can be computed by assigning to each possible value the probability with which it occurs.


Stats ◽  
2019 ◽  
Vol 2 (3) ◽  
pp. 371-387
Author(s):  
Peter Zörnig

The popular concept of slash distribution is generalized by considering the quotient Z = X/Y of independent random variables X and Y, where X is any continuous random variable and Y has a general beta distribution. The density of Z can usually be expressed by means of generalized hypergeometric functions. We study the distribution of Z for various parent distributions of X and indicate a possible application in finance.
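The construction is easy to sample from; a minimal Monte Carlo sketch, where the standard normal parent X and the Beta(2, 3) parameters are illustrative choices rather than cases from the paper:

```python
import random

random.seed(0)  # reproducible sampling

# Z = X / Y with X ~ N(0, 1) independent of Y ~ Beta(2, 3).
N = 10_000
zs = [random.gauss(0.0, 1.0) / random.betavariate(2.0, 3.0) for _ in range(N)]

# Dividing by a beta variate concentrated below 1 stretches the tails of X,
# which is the hallmark of slash-type distributions.
```

Comparing the empirical tails of `zs` with those of the parent normal sample shows the heavier tails that generalized slash distributions are used to model.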


Author(s):  
Abraham Nitzan

This chapter reviews some subjects in mathematics and physics that are used in different contexts throughout this book. The selection of subjects and the level of their coverage reflect the author’s perception of what potential users of this text were exposed to in their earlier studies. Therefore, only a brief overview is given of some subjects, while a somewhat more comprehensive discussion is given of others. In neither case can the coverage provided substitute for the actual learning of these subjects, which are covered in detail by many textbooks. A random variable is an observable whose repeated determination yields a series of numerical values (“realizations” of the random variable) that vary from trial to trial in a way characteristic of the observable. The outcomes of tossing a coin or throwing a die are familiar examples of discrete random variables. The position of a dust particle in air and the lifetime of a light bulb are continuous random variables. Discrete random variables are characterized by probability distributions; Pn denotes the probability that a realization of the given random variable is n. Continuous random variables are associated with probability density functions P(x): P(x1)dx denotes the probability that the realization of the variable x will be in the interval x1 … x1 + dx.


1992 ◽  
Vol 42 (1-2) ◽  
pp. 125-128 ◽  
Author(s):  
Sudhakar Kunte ◽  
R.N. Rattihalli

Two methods of generating a random variable following a uniform distribution over [0, 1] on the basis of sequences of i.i.d. discrete random variables are discussed.
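The abstract does not spell out the two methods, but one standard construction of this kind maps i.i.d. fair bits to the binary expansion of a uniform variate; a sketch of that idea (not necessarily either of the paper's methods):

```python
def uniform_from_bits(bits):
    """Map bits b1, b2, ... (ideally i.i.d. fair coin flips) to the
    truncated binary expansion U = sum_i b_i * 2**(-i) in [0, 1]."""
    return sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))
```

With infinitely many fair bits, U is exactly Uniform[0, 1]; with n bits the result is uniform on a grid of spacing 2⁻ⁿ. For example, the bit string 101 maps to 1/2 + 1/8 = 0.625.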


Author(s):  
Oren Fivel ◽  
Moshe Klein ◽  
Oded Maimon

In this paper we develop the foundation of a new theory for decision trees based on new modeling of phenomena with soft numbers. Soft numbers represent the theory of soft logic, which addresses the need to combine real processes and cognitive ones in the same framework. At the same time, soft logic develops a new concept of modeling and dealing with uncertainty: the uncertainty of time and space. It is a language that can talk in two reference frames, and it also suggests a way to combine them. In classical probability, for continuous random variables there is no distinction between probabilities involving strict and non-strict inequality. Moreover, a probability involving equality collapses to zero, without distinguishing among the values that we would like the random variable to take for comparison. This work presents Soft Probability, by incorporating Soft Numbers into probability theory. Soft Numbers are a set of new numbers that are linear combinations of multiples of "ones" and multiples of "zeros". In this work, we develop a probability involving equality as a "soft zero" multiple of a probability density function (PDF). We also extend this notion of soft probabilities to the classical definitions of complements, unions, intersections, and conditional probabilities, and also to the expectation, variance, and entropy of a continuous random variable conditioned on being in a union of disjoint intervals and a discrete set of numbers. This extension provides information about a continuous random variable lying within a discrete set of numbers, such that its probability does not collapse completely to zero. In developing the notion of soft entropy, we found potentially another soft axis, multiples of 0log(0), which motivates exploring the properties of these new numbers and their applications.
We extend the notion of soft entropy to the definitions of cross entropy and Kullback-Leibler divergence (KLD), and we find that a soft KLD is a soft number that does not have a multiple of 0log(0). Based on the soft KLD, we define a soft mutual information that can be used as a splitting criterion in decision trees for data sets of continuous random variables consisting of single samples and intervals.
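For reference, the classical discrete KL divergence that the soft version generalizes is a one-liner; the 0·log(0) issue the authors highlight surfaces as the p_i = 0 case, conventionally taken to contribute zero:

```python
import math

def kl_divergence(p, q):
    """Classical discrete KL divergence D(p || q) = sum_i p_i log(p_i / q_i).
    Terms with p_i = 0 contribute 0, by the usual 0 * log(0) = 0 convention
    (the very convention soft probability revisits)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

D(p‖q) is zero exactly when p = q, and for p = (1, 0) against q = (1/2, 1/2) it equals log 2.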

