Introduction to Probability and Random Variables

Author(s):  
M. Vidyasagar

This chapter provides an introduction to probability and random variables. Probability theory is an attempt to formalize the notion of uncertainty in the outcome of an experiment. For instance, suppose an urn contains four balls, colored red, blue, white, and green respectively. Suppose we dip our hand in the urn and pull out one of the balls “at random.” What is the likelihood that the ball we pull out will be red? The chapter first defines probability and random variables before discussing functions of a random variable and the expected value. It then considers total variation distance, joint and marginal probability distributions, independence and conditional probability distributions, Bayes' rule, and maximum likelihood estimates. Finally, it describes random variables assuming infinitely many values, focusing on the Markov and Chebyshev inequalities, Hoeffding's inequality, Monte Carlo simulation, and Cramér's theorem.
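For the urn example, the uniform model assigns each of the four balls probability 1/4, so P(red) = 1/4. A minimal Monte Carlo sketch (the names and trial count below are our own illustrative choices, in the spirit of the chapter's later section on Monte Carlo simulation) confirms this empirically:

```python
import random

# Draw one ball uniformly at random from the urn and estimate P(red)
# by relative frequency over many independent trials.
URN = ["red", "blue", "white", "green"]

def estimate_p_red(trials: int = 100_000) -> float:
    hits = sum(1 for _ in range(trials) if random.choice(URN) == "red")
    return hits / trials

print(estimate_p_red())  # close to 1/4 = 0.25
```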


2018 ◽  
Vol 47 (2) ◽  
pp. 53-67 ◽  
Author(s):  
Jalal Chachi

In this paper, first a new notion of fuzzy random variables is introduced. Then, using classical techniques in Probability Theory, some aspects and results associated with a random variable (including expectation, variance, covariance, correlation coefficient, etc.) are extended to this new environment. Furthermore, within this framework, we can use the tools of general Probability Theory to define the fuzzy cumulative distribution function of a fuzzy random variable.


2016 ◽  
Vol 24 (1) ◽  
pp. 29-41 ◽  
Author(s):  
Roman Frič ◽  
Martin Papčo

Abstract The influence of “Grundbegriffe” by A. N. Kolmogorov (published in 1933) on education in the area of probability and its impact on research in stochastics cannot be overestimated. We would like to point out three aspects of the classical probability theory “calling for” an upgrade: (i) classical random events are black-and-white (Boolean); (ii) classical random variables do not model quantum phenomena; (iii) basic maps (probability measures and observables, the dual maps to random variables) have very different “mathematical nature”. Accordingly, we propose an upgraded probability theory based on Łukasiewicz operations (multivalued logic) on events and elementary category theory, covering the classical probability theory as a special case. The upgrade can be compared to replacing calculations with integers by calculations with rational (and real) numbers. Namely, to avoid the three objections, we embed the classical (Boolean) random events (represented by the {0, 1}-valued indicator functions of sets) into upgraded random events (represented by measurable [0, 1]-valued functions), the minimal domain of probability containing “fractions” of classical random events, and we upgrade the notions of probability measure and random variable.
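For readers unfamiliar with the multivalued connectives, here is a minimal sketch (function names are our own) of the standard Łukasiewicz operations on [0, 1]-valued events, which reduce to Boolean OR, AND, and NOT on classical {0, 1}-valued indicators:

```python
# Łukasiewicz connectives on [0, 1]: truncated sum for disjunction,
# truncated difference for conjunction, and 1 - x for negation.

def luk_or(a: float, b: float) -> float:
    return min(1.0, a + b)

def luk_and(a: float, b: float) -> float:
    return max(0.0, a + b - 1.0)

def luk_not(a: float) -> float:
    return 1.0 - a

# On crisp {0, 1} events the operations are Boolean ...
assert luk_or(0.0, 1.0) == 1.0 and luk_and(1.0, 1.0) == 1.0
# ... while "fractions" of events are handled by the same formulas.
print(luk_and(0.6, 0.7))  # 0.3 (up to floating-point rounding)
```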


Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or more precisely, the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, uniqueness of the entropy function, and the Kullback–Leibler divergence.
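To make the definitions concrete, here is a minimal sketch (function names and the natural-log base are our own choices; the chapter may work in base 2) of entropy and the Kullback–Leibler divergence for finite distributions:

```python
import math

def entropy(p: list[float]) -> float:
    """Shannon entropy H(p) = -sum_i p_i log p_i, with 0 log 0 = 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p: list[float], q: list[float]) -> float:
    """D(p || q) = sum_i p_i log(p_i / q_i); needs q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25, 0.25, 0.25, 0.25]
skewed = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))                # log 4 ≈ 1.386, maximal on 4 outcomes
print(entropy(skewed))                 # strictly smaller: less uncertainty
print(kl_divergence(skewed, uniform))  # >= 0, zero iff the distributions match
```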


2002 ◽  
Vol 34 (3) ◽  
pp. 609-625 ◽  
Author(s):  
N. Papadatos ◽  
V. Papathanasiou

The random variables X_1, X_2, …, X_n are said to be totally negatively dependent (TND) if and only if the random variables X_i and ∑_{j≠i} X_j are negatively quadrant dependent for all i. Our main result provides, for TND 0-1 indicators X_1, X_2, …, X_n with P[X_i = 1] = p_i = 1 − P[X_i = 0], an upper bound for the total variation distance between ∑_{i=1}^{n} X_i and a Poisson random variable with mean λ ≥ ∑_{i=1}^{n} p_i. An application to a generalized birthday problem is considered and, moreover, some related results concerning the existence of monotone couplings are discussed.
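To make the bounded quantity concrete, the following sketch computes the total variation distance exactly for the baseline case of independent 0-1 indicators (the dependent TND case is what the paper treats; the helper names here are our own):

```python
import math

def poisson_binomial_pmf(ps: list[float]) -> list[float]:
    """Exact pmf of S = X_1 + ... + X_n for independent Bernoulli(p_i),
    built up one indicator at a time by dynamic programming."""
    pmf = [1.0]
    for p in ps:
        pmf = [(pmf[k] if k < len(pmf) else 0.0) * (1 - p)
               + (pmf[k - 1] * p if k > 0 else 0.0)
               for k in range(len(pmf) + 1)]
    return pmf

def poisson_pmf(lam: float, k: int) -> float:
    return math.exp(-lam) * lam ** k / math.factorial(k)

def tv_to_poisson(ps: list[float], lam: float) -> float:
    """Total variation distance between S and Poisson(lam). S lives on
    {0, ..., n}, so the Poisson mass beyond n is pure discrepancy."""
    pb = poisson_binomial_pmf(ps)
    n = len(ps)
    diff = sum(abs(pb[k] - poisson_pmf(lam, k)) for k in range(n + 1))
    tail = 1.0 - sum(poisson_pmf(lam, k) for k in range(n + 1))
    return 0.5 * (diff + tail)

ps = [0.05, 0.10, 0.02, 0.08]
print(tv_to_poisson(ps, sum(ps)))  # small: Poisson(sum p_i) is a good fit
```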


1983 ◽  
Vol 15 (3) ◽  
pp. 585-600 ◽  
Author(s):  
A. D. Barbour ◽  
G. K. Eagleson

Stein's (1970) method of proving limit theorems for sums of dependent random variables is used to derive Poisson approximations for a class of statistics constructed from finitely exchangeable random variables. Let ξ_1, …, ξ_n be exchangeable random elements of a space and, for I a k-subset of {1, …, n}, let X_I be a 0–1 function. The statistics studied here are of the form W = ∑_{I∈N} X_I, where N is some collection of k-subsets of {1, …, n}. An estimate of the total variation distance between the distributions of W and an appropriate Poisson random variable is derived and is used to give conditions sufficient for W to be asymptotically Poisson. Two applications of these results are presented.
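For intuition about statistics of this form, take k = 2 and let X_{ij} indicate that persons i and j share a birthday; then W counts birthday coincidences and is approximately Poisson with mean C(n, 2)/365. A quick Monte Carlo check (a hypothetical illustration, not necessarily one of the paper's two applications):

```python
import math
import random
from itertools import combinations

# W = sum over 2-subsets {i, j} of the indicator that i and j share a
# birthday; E[W] = C(n, 2) / days, and W is approximately Poisson(E[W]).
def simulate_w(n: int = 30, days: int = 365) -> int:
    b = [random.randrange(days) for _ in range(n)]
    return sum(1 for i, j in combinations(range(n), 2) if b[i] == b[j])

n, days, trials = 30, 365, 20_000
lam = math.comb(n, 2) / days
counts = [simulate_w(n, days) for _ in range(trials)]
for k in range(4):
    empirical = sum(1 for w in counts if w == k) / trials
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    print(k, round(empirical, 4), round(poisson, 4))  # the columns agree closely
```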


2014 ◽  
Vol 15 (3) ◽  
pp. 195-203 ◽  
Author(s):  
Qing Xiao

Abstract This paper employs the generalized lambda distribution (GLD) to model random variables with various probability distributions in power systems. In the context of probability weighted moments (PWMs), an optimization-free method is developed to estimate the parameters of the GLD. By equating the first four PWMs of the GLD with those of the target random variable, a polynomial equation in one unknown is derived to solve for the parameters of the GLD. When employing the GLD to model correlated multivariate random variables, a method of accommodating the dependency is put forward. Finally, three examples are worked out to demonstrate the proposed method.
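For background on the moment-matching step, the probability weighted moments β_r = E[X F(X)^r] can be estimated from a sample via order statistics. A minimal sketch using the standard unbiased estimator (function names are our own):

```python
import random

def sample_pwms(xs: list[float], max_r: int = 3) -> list[float]:
    """Unbiased sample estimates of the probability weighted moments
    b_r = E[X F(X)^r], r = 0, ..., max_r, computed from order statistics."""
    x = sorted(xs)
    n = len(x)
    pwms = []
    for r in range(max_r + 1):
        total = 0.0
        for i in range(r + 1, n + 1):   # 1-based rank of the order statistic
            w = 1.0
            for j in range(1, r + 1):
                w *= (i - j) / (n - j)  # weight (i-1)...(i-r) / (n-1)...(n-r)
            total += w * x[i - 1]
        pwms.append(total / n)
    return pwms

# Sanity check against Uniform(0, 1), where E[X F(X)^r] = 1/(r + 2):
random.seed(0)
sample = [random.random() for _ in range(100_000)]
print([round(b, 4) for b in sample_pwms(sample)])  # ≈ [0.5, 0.3333, 0.25, 0.2]
```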


Author(s):  
Robert H. Swendsen

The chapter presents an overview of various interpretations of probability. It introduces a ‘model probability,’ which assumes that all microscopic states that are essentially alike have the same probability in equilibrium. A justification for this fundamental assumption is provided. The basic definitions used in discrete probability theory are introduced, along with examples of their application. One such example, which illustrates how a random variable is derived from other random variables, demonstrates the use of the Kronecker delta function. The chapter further derives the binomial and multinomial distributions, which will be important in the following chapter on the configurational entropy, along with the useful approximation developed by Stirling and its variations. The Gaussian distribution is presented in detail, as it will be very important throughout the book.
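Since the binomial and multinomial derivations lean on Stirling's approximation, a quick numerical check (a sketch with illustrative values of n) shows how accurate the form ln n! ≈ n ln n − n + ½ ln(2πn) is even for modest n:

```python
import math

def ln_factorial_exact(n: int) -> float:
    return math.lgamma(n + 1)  # lgamma(n + 1) = ln n!

def ln_factorial_stirling(n: int) -> float:
    return n * math.log(n) - n + 0.5 * math.log(2 * math.pi * n)

for n in (10, 100, 1000):
    exact = ln_factorial_exact(n)
    approx = ln_factorial_stirling(n)
    print(n, round(exact, 4), round(approx, 4), round(exact - approx, 6))
# The error is about 1/(12 n): already tiny at n = 10, negligible at n = 1000.
```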


Author(s):  
Therese M. Donovan ◽  
Ruth M. Mickey

This chapter focuses on probability mass functions. One of the primary uses of Bayesian inference is to estimate parameters. To do so, it is necessary to first build a good understanding of probability distributions. This chapter introduces the idea of a random variable and presents general concepts associated with probability distributions for discrete random variables. It starts off by discussing the concept of a function and goes on to describe how a random variable is a type of function. The binomial distribution and the Bernoulli distribution are then used as examples of probability mass functions (pmfs). These pmfs can be used to specify prior distributions, likelihoods, likelihood profiles, and/or posterior distributions in Bayesian inference.
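As a concrete instance (a minimal sketch; the parameter names are our own), the binomial pmf is P(Y = k) = C(n, k) p^k (1 − p)^(n−k), with the Bernoulli pmf as the n = 1 special case:

```python
import math

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(Y = k) for Y ~ Binomial(n, p); n = 1 gives the Bernoulli pmf."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# A pmf assigns a probability to each value of a discrete random variable,
# and those probabilities sum to one:
n, p = 10, 0.3
pmf = [binomial_pmf(k, n, p) for k in range(n + 1)]
print(round(sum(pmf), 10))  # 1.0
print(round(pmf[3], 4))     # ≈ 0.2668, the most likely count when n*p = 3
```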

