Modeling Uncertainties in Power System by Generalized Lambda Distribution

2014
Vol 15 (3)
pp. 195-203
Author(s):  
Qing Xiao

This paper employs the generalized lambda distribution (GLD) to model random variables with various probability distributions in power systems. In the context of probability weighted moments (PWMs), an optimization-free method is developed to estimate the parameters of the GLD. By equating the first four PWMs of the GLD with those of the target random variable, a polynomial equation in one unknown is derived to solve for the GLD parameters. When employing the GLD to model correlated multivariate random variables, a method of accommodating the dependency is put forward. Finally, three examples are worked out to demonstrate the proposed method.
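
As a rough illustration of the moment-matching idea, the sketch below fits the four parameters of the Ramberg-Schmeiser GLD, whose quantile function is Q(u) = l1 + (u^l3 - (1-u)^l4)/l2, by equating its first four PWMs with sample PWMs. A generic root-finder stands in for the paper's optimization-free polynomial reduction, and the starting point and test data are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.special import beta

def gld_pwm(lam, r):
    # Theoretical PWM beta_r = E[X F(X)^r] for the RS-parameterized GLD,
    # Q(u) = l1 + (u**l3 - (1-u)**l4) / l2, valid for l3, l4 > -1.
    l1, l2, l3, l4 = lam
    return l1 / (r + 1) + (1.0 / (r + 1 + l3) - beta(r + 1, l4 + 1)) / l2

def sample_pwm(x, r):
    # Unbiased estimator of beta_r from the ordered sample.
    x = np.sort(x)
    n = x.size
    i = np.arange(1, n + 1)
    w = np.ones(n)
    for k in range(1, r + 1):
        w *= (i - k) / (n - k)
    return np.mean(w * x)

def fit_gld(x, start=(0.0, 0.2, 0.1, 0.1)):
    # Solve the four matching equations beta_r(GLD) = b_r numerically;
    # convergence depends on the starting point.
    b = [sample_pwm(x, r) for r in range(4)]
    return fsolve(lambda lam: [gld_pwm(lam, r) - b[r] for r in range(4)], start)

rng = np.random.default_rng(0)
print(fit_gld(rng.normal(size=5000)))  # roughly (0, 0.20, 0.13, 0.13) for N(0, 1)
```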

Author(s):  
M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or more precisely, the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, uniqueness of the entropy function, and the Kullback–Leibler divergence.
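
A minimal numerical sketch of the three quantities the chapter covers, computed in bits for small discrete distributions (the example pmfs are made up for illustration):

```python
import numpy as np

def entropy(p):
    # Shannon entropy H(p) = -sum_i p_i log2 p_i, with 0 log 0 taken as 0.
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def kl_divergence(p, q):
    # Relative entropy D(p || q); requires q_i > 0 wherever p_i > 0.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

def conditional_entropy(pxy):
    # H(Y | X) = H(X, Y) - H(X), from a joint pmf with rows indexed by X.
    return entropy(pxy.ravel()) - entropy(pxy.sum(axis=1))

pxy = np.array([[0.25, 0.25],
                [0.00, 0.50]])                # joint pmf of (X, Y)
print(entropy([0.5, 0.5]))                    # 1.0 bit
print(conditional_entropy(pxy))               # 0.5 bits
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # "disparity" between two pmfs
```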


Author(s):  
Therese M. Donovan
Ruth M. Mickey

This chapter focuses on probability mass functions. One of the primary uses of Bayesian inference is to estimate parameters, and to do so it is necessary first to build a good understanding of probability distributions. This chapter introduces the idea of a random variable and presents general concepts associated with probability distributions for discrete random variables. It starts by discussing the concept of a function and goes on to describe how a random variable is a type of function. The binomial and Bernoulli distributions are then used as examples of probability mass functions (pmfs). The pmfs can be used to specify prior distributions, likelihoods, likelihood profiles, and/or posterior distributions in Bayesian inference.
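
As one way to make the last point concrete, the snippet below uses the binomial pmf as a likelihood over a grid of parameter values and combines it with a uniform prior to obtain a posterior by grid approximation; the data (7 successes in 10 trials) and the grid are illustrative choices, not taken from the chapter.

```python
import numpy as np
from scipy.stats import binom

# Grid of candidate values for the success probability p.
p_grid = np.linspace(0, 1, 101)
prior = np.ones_like(p_grid) / p_grid.size       # discrete uniform prior

# Binomial pmf evaluated as a likelihood: 7 successes in 10 trials.
likelihood = binom.pmf(k=7, n=10, p=p_grid)

posterior = prior * likelihood
posterior /= posterior.sum()                     # normalize to a pmf
print(p_grid[np.argmax(posterior)])              # posterior mode near 0.7
```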


Author(s):  
M. Vidyasagar

This chapter provides an introduction to probability and random variables. Probability theory is an attempt to formalize the notion of uncertainty in the outcome of an experiment. For instance, suppose an urn contains four balls, colored red, blue, white, and green. Suppose we dip our hand in the urn and pull out one of the balls "at random." What is the likelihood that the ball we pull out will be red? The chapter first defines a random variable and probability before discussing functions of a random variable and expected value. It then considers total variation distance, joint and marginal probability distributions, independence and conditional probability distributions, Bayes' rule, and maximum likelihood estimates. Finally, it describes random variables assuming infinitely many values, focusing on the Markov and Chebyshev inequalities, Hoeffding's inequality, Monte Carlo simulation, and Cramér's theorem.
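
The urn question has the obvious answer 1/4, and it also makes a tiny Monte Carlo illustration of the kind the chapter later discusses (the simulation below is a sketch, not the book's code):

```python
import numpy as np

rng = np.random.default_rng(42)
balls = ["red", "blue", "white", "green"]

# Draw one ball "at random" many times and estimate P(red).
draws = rng.choice(balls, size=100_000)
print(np.mean(draws == "red"))  # close to 1/4
```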


Author(s):  
Munteanu Bogdan Gheorghe

Based on the Weibull-G power probability distribution family, we propose a new family of probability distributions, which we name the Max Weibull-G power series distributions, applicable to certain reliability problems. Specifically, the Max Weibull-G power series distribution is the distribution of the random variable max(X1, X2, ..., XN), where X1, X2, ... are independent Weibull-G distributed random variables and N is a natural-number-valued random variable whose distribution belongs to the power series family. The main characteristics and properties of this distribution are analyzed.
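
A simulation sketch of the max-compounding construction under illustrative assumptions: N is drawn from a geometric distribution (one member of the power series family) and a plain Weibull stands in for the Weibull-G component; all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(1)

def max_weibull_ps(size, shape=1.5, scale=1.0, p=0.4):
    # N >= 1 from a geometric distribution, a power series family member.
    n = rng.geometric(p, size=size)
    # Maximum of N i.i.d. Weibull(shape) variates, rescaled.
    return np.array([scale * rng.weibull(shape, k).max() for k in n])

sample = max_weibull_ps(10_000)
print(sample.mean(), sample.std())
```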


2020
pp. 49-54
Author(s):  
Marcin Lawnik
Arkadiusz Banasik
Adrian Kapczyński

Values of random variables are commonly used in the field of artificial intelligence. The literature offers many methods for generating them, for example the inverse cumulative distribution function method, and some of these approaches are based on chaotic maps. Existing chaotic methods of generating random variables deal mainly with continuous random variables. This article presents a method of generating values from discrete probability distributions using a properly constructed piecewise linear chaotic map, based on a discrete dynamical system with chaotic behavior. Successive probability values partition the unit interval, and the corresponding values of the random variable are assigned to the resulting subintervals. A piecewise linear map is then constructed on these subintervals, and in the course of its iteration, consecutive values from the given discrete distribution are obtained. The method is illustrated with the Bernoulli distribution; an analysis of this example shows that the presented method is the fastest of all the methods analyzed.
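
A minimal sketch of the construction for the Bernoulli(p) case: the unit interval is split into subintervals of lengths p and 1 - p, each mapped linearly back onto [0, 1]. The map preserves the uniform density, so the orbit visits the first subinterval with asymptotic frequency p. This follows the idea described above, not the authors' exact construction, and floating-point rounding makes it a rough illustration:

```python
import numpy as np

def bernoulli_chaotic(p, n, x0=0.2718281828459045):
    # Piecewise linear chaotic map on [0, 1]: the branch over [0, p) maps
    # linearly onto [0, 1) and emits 0; the branch over [p, 1] maps linearly
    # onto [0, 1] and emits 1. The uniform density is invariant, so emitted
    # symbols are Bernoulli(p) in the long run (rounding errors aside).
    x, out = x0, np.empty(n, dtype=int)
    for i in range(n):
        if x < p:
            out[i] = 0
            x = x / p
        else:
            out[i] = 1
            x = (x - p) / (1.0 - p)
    return out

draws = bernoulli_chaotic(0.3, 100_000)
print(draws.mean())  # close to 0.3
```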


1965
Vol 8 (6)
pp. 819-824
Author(s):  
V. Seshadri

The motivation for this paper lies in the following remarkable property of certain probability distributions. The distribution law of the r.v. (random variable) X is exactly the same as that of 1/X, and in the case of a r.v. with p.d.f. (probability density function) f(x; a, b), where a, b are parameters, the p.d.f. of 1/X is f(x; b, a). In the latter case the p.d.f. of the reciprocal is obtained from the p.d.f. of X by merely switching the parameters. The existence of random variables with this property is perhaps familiar to statisticians, as is evidenced by the use of the classical 'F' distribution. The Cauchy law is yet another example which illustrates this property. It seems, therefore, reasonable to characterize this class of random variables by means of this rather interesting property.
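
The F-distribution case is easy to check empirically: if X ~ F(a, b) then 1/X ~ F(b, a). A quick Kolmogorov-Smirnov sanity check (degrees of freedom chosen arbitrarily):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a, b = 5, 8

# If X ~ F(a, b), the reciprocal 1/X should follow F(b, a).
x = stats.f.rvs(a, b, size=50_000, random_state=rng)
print(stats.kstest(1.0 / x, stats.f(b, a).cdf))  # large p-value expected
```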


1965
Vol 8 (1)
pp. 93-103
Author(s):  
Miklós Csörgő

Let F(x) be the continuous distribution function of a random variable X and Fn(x) be the empirical distribution function determined by a random sample X1, …, Xn taken on X. Using the method of Birnbaum and Tingey [1] we derive the exact distributions of four random variables of the form sup (Fn(x) − F(x)), where the supremum is taken over all x such that −∞ < x < xb and over all x such that xa ≤ x < +∞, with F(xb) = b and F(xa) = a, in the first two cases, and over all x such that Fn(x) ≤ b and over all x such that a ≤ Fn(x) in the last two cases.
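
A Monte Carlo illustration, not the exact derivation, of one such truncated statistic: the supremum of Fn(x) − F(x) over {x : −∞ < x < xb}, taking F to be the Uniform(0, 1) distribution so that xb = b (the values of n and b below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, b, reps = 50, 0.6, 20_000

def truncated_sup(u, b):
    # For uniform F, F_n(x) - x decreases between jumps, so the sup over
    # x < b is attained either at an order statistic u_(i) <= b or at 0.
    u = np.sort(u)
    i = np.arange(1, u.size + 1)
    mask = u <= b
    return np.max(i[mask] / u.size - u[mask], initial=0.0)

draws = np.array([truncated_sup(rng.uniform(size=n), b) for _ in range(reps)])
print(np.quantile(draws, 0.95))  # empirical 95th percentile of the statistic
```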


Author(s):  
Abraham Nitzan

This chapter reviews some subjects in mathematics and physics that are used in different contexts throughout this book. The selection of subjects and the level of their coverage reflect the author's perception of what potential users of this text were exposed to in their earlier studies. Therefore, only a brief overview is given of some subjects, while a somewhat more comprehensive discussion is given of others. In neither case can the coverage provided substitute for the actual learning of these subjects, which are covered in detail by many textbooks. A random variable is an observable whose repeated determination yields a series of numerical values ("realizations" of the random variable) that vary from trial to trial in a way characteristic of the observable. The outcomes of tossing a coin or throwing a die are familiar examples of discrete random variables. The position of a dust particle in air and the lifetime of a light bulb are continuous random variables. Discrete random variables are characterized by probability distributions: Pn denotes the probability that a realization of the given random variable is n. Continuous random variables are associated with probability density functions P(x): P(x1)dx denotes the probability that a realization of the variable x falls in the interval x1 … x1 + dx.
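
A small numerical companion to the last two definitions, contrasting a discrete pmf (a fair die) with a continuous pdf (an exponential "light bulb lifetime"; the rate parameter is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(3)

# Discrete: the pmf P_n of a fair die, estimated from repeated determinations.
die = rng.integers(1, 7, size=60_000)
print([round(np.mean(die == n), 3) for n in range(1, 7)])  # each P_n ~ 1/6

# Continuous: P(x1)dx for an exponential lifetime with rate 0.5, compared
# with the fraction of realizations falling in [x1, x1 + dx).
rate, x1, dx = 0.5, 1.0, 0.05
t = rng.exponential(1 / rate, size=60_000)
print(np.mean((t >= x1) & (t < x1 + dx)), rate * np.exp(-rate * x1) * dx)
```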


Mathematics
2021
Vol 9 (9)
pp. 981
Author(s):  
Patricia Ortega-Jiménez
Miguel A. Sordo
Alfonso Suárez-Llorens

The aim of this paper is twofold. First, we show that the expectation of the absolute value of the difference between two copies, not necessarily independent, of a random variable is a measure of its variability in the sense of Bickel and Lehmann (1979). Moreover, if the two copies are negatively dependent through stochastic ordering, this measure is subadditive. The second purpose of this paper is to provide sufficient conditions for comparing several distances between pairs of random variables (with possibly different distribution functions) in terms of various stochastic orderings. Applications in actuarial and financial risk management are given.
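
For independent copies, the measure in question reduces to the classical Gini mean difference E|X − X′|, which is easy to estimate from a sample; the sketch below checks the estimate against the known value 2/√π for a standard normal (dependent copies would require specifying a joint law, which is omitted here):

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=2000)

# U-statistic estimate of the Gini mean difference E|X - X'| from all
# distinct pairs of the sample.
diff = np.abs(x[:, None] - x[None, :])
gmd = diff[np.triu_indices_from(diff, k=1)].mean()
print(gmd, 2 / np.sqrt(np.pi))  # estimate vs. exact value for N(0, 1)
```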

