Normal Approximation to a Binomial Random Variable

2005 ◽  
Vol 2005 (5) ◽  
pp. 717-728 ◽  
Author(s):  
K. Neammanee

Let $X_1, X_2, \ldots, X_n$ be independent Bernoulli random variables with $P(X_j = 1) = 1 - P(X_j = 0) = p_j$ and let $S_n := X_1 + X_2 + \cdots + X_n$. $S_n$ is called a Poisson binomial random variable, and it is well known that its distribution can be approximated by the standard normal distribution. In this paper, we use Taylor's formula to improve the approximation by adding some correction terms. Our result improves on earlier bounds and is of order $1/n$ in the case $p_1 = p_2 = \cdots = p_n$.
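
For orientation, here is a minimal sketch of the plain normal approximation that such correction terms refine: with $\mu = \sum_j p_j$ and $\sigma^2 = \sum_j p_j(1 - p_j)$, one uses $P(S_n \le k) \approx \Phi((k + 0.5 - \mu)/\sigma)$ (continuity correction included). The probabilities `p` below are an arbitrary illustrative choice.

```python
# Normal approximation (with continuity correction) to a Poisson binomial
# S_n = X_1 + ... + X_n, X_j ~ Bernoulli(p_j), versus the exact CDF
# computed by convolving the Bernoulli terms.
import math

def exact_cdf(p, k):
    """P(S_n <= k), exact, by dynamic programming over the summands."""
    dist = [1.0]                      # dist[s] = P(partial sum = s)
    for pj in p:
        new = [0.0] * (len(dist) + 1)
        for s, q in enumerate(dist):
            new[s] += q * (1 - pj)
            new[s + 1] += q * pj
        dist = new
    return sum(dist[:k + 1])

def normal_cdf(p, k):
    """Normal approximation with continuity correction."""
    mu = sum(p)
    sigma = math.sqrt(sum(pj * (1 - pj) for pj in p))
    z = (k + 0.5 - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

p = [0.1 + 0.8 * j / 49 for j in range(50)]   # illustrative p_j's
for k in (20, 25, 30):
    print(k, round(exact_cdf(p, k), 4), round(normal_cdf(p, k), 4))
```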


2021 ◽  
Vol 73 (1) ◽  
pp. 62-67
Author(s):  
Ibrahim A. Ahmad ◽  
A. R. Mugdadi

For a sequence of independent, identically distributed random variables (iid rv's) [Formula: see text] and a sequence of integer-valued random variables [Formula: see text], define the random quantiles as [Formula: see text], where [Formula: see text] denotes the largest integer less than or equal to [Formula: see text], and [Formula: see text] denotes the [Formula: see text]th order statistic in a sample [Formula: see text] and [Formula: see text]. In this note, the limiting distribution and its exact order of approximation are obtained for [Formula: see text]. The limiting distribution result we obtain extends the work of several authors, including Wretman [Formula: see text]. The exact order of normal approximation generalizes the fixed-sample-size results of Reiss [Formula: see text]. AMS 2000 subject classification: 60F12; 60F05; 62G30.
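
The flavor of such results is easy to see by simulation. Everything concrete below (quantile level $p = 1/2$, random sample sizes $N_n \sim \mathrm{Poisson}(n)$, standard normal data) is an assumption for illustration, not the paper's setting: for the median of $N(0,1)$ data the classical fixed-$n$ limit is $N(0, p(1-p)/f(x_p)^2) = N(0, \pi/2)$, and since $N_n/n \to 1$ the random sample size leaves that limit unchanged.

```python
# Sample median over a random sample size N_n ~ Poisson(n): the
# standardized statistic sqrt(n) * (X_{[Np]:N} - x_p) still matches
# the fixed-n normal limit N(0, pi/2).  Illustrative choices only.
import numpy as np

rng = np.random.default_rng(0)
n, reps, p = 400, 2000, 0.5
stats = []
for _ in range(reps):
    N = rng.poisson(n)                 # random sample size
    xs = np.sort(rng.standard_normal(N))
    q = xs[int(N * p) - 1]             # [Np]-th order statistic
    stats.append(np.sqrt(n) * q)       # true median x_p = 0
print("empirical variance:", np.var(stats))
print("limit variance pi/2:", np.pi / 2)
```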


2020 ◽  
Vol 32 (5) ◽  
pp. 1018-1032 ◽  
Author(s):  
Noah Frazier-Logue ◽  
Stephen José Hanson

Multilayer neural networks have led to remarkable performance on many kinds of benchmark tasks in text, speech, and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting and misspecification. One approach to these estimation and related problems (e.g., saddle points, collinearity, feature discovery) is called Dropout. The Dropout algorithm removes hidden units according to a binomial random variable with probability [Formula: see text] prior to each update, creating random “shocks” to the network that are averaged over updates (thus creating weight sharing). In this letter, we reestablish an older parameter search method and show that Dropout is a special case of this more general model, stochastic delta rule (SDR), published originally in 1990. Unlike Dropout, SDR redefines each weight in the network as a random variable with mean [Formula: see text] and standard deviation [Formula: see text]. Each weight random variable is sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights (accumulated in the mean values). Both parameters are updated according to prediction error, thus resulting in weight noise injections that reflect a local history of prediction error and local model averaging. SDR therefore implements a more sensitive local gradient-dependent simulated annealing per weight converging in the limit to a Bayes optimal network. We run tests on standard benchmarks (CIFAR and ImageNet) using a modified version of DenseNet and show that SDR outperforms standard Dropout in top-5 validation error by approximately 13% with DenseNet-BC 121 on ImageNet and find various validation error improvements in smaller networks. We also show that SDR reaches the same accuracy that Dropout attains in 100 epochs in as few as 40 epochs, as well as improvements in training error by as much as 80%.
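
To make the contrast concrete, here is a minimal, hypothetical numpy sketch of the SDR idea for a single weight matrix: every weight is a Gaussian random variable $(\mu, \sigma)$, a realization is sampled on each forward pass, the mean follows the usual gradient step, and the standard deviation grows with the magnitude of the local gradient and then decays. The update constants and the exact schedule here are placeholders, not the paper's.

```python
# Stochastic delta rule (SDR), illustrative single-layer sketch.
# Each weight is a random variable with its own mean and standard
# deviation; a concrete weight matrix is sampled per forward pass.
import numpy as np

rng = np.random.default_rng(0)

class SDRLayer:
    def __init__(self, n_in, n_out, lr=0.1, beta=0.05, zeta=0.99):
        self.mu = 0.1 * rng.standard_normal((n_in, n_out))  # weight means
        self.sigma = np.full((n_in, n_out), 0.1)            # weight std devs
        self.lr, self.beta, self.zeta = lr, beta, zeta      # hypothetical constants

    def forward(self, x):
        # Sample one realization of every weight on each forward pass.
        self.w = self.mu + self.sigma * rng.standard_normal(self.mu.shape)
        self.x = x
        return x @ self.w

    def backward(self, grad_out):
        g = self.x.T @ grad_out              # dE/dW at the sampled weights
        self.mu -= self.lr * g               # mean follows the gradient
        self.sigma += self.beta * np.abs(g)  # noise reflects local prediction error
        self.sigma *= self.zeta              # ...and decays over updates
        return grad_out @ self.w.T

layer = SDRLayer(4, 3)
y = layer.forward(rng.standard_normal((8, 4)))
layer.backward(np.ones_like(y) / y.size)     # dummy upstream gradient
```

As $\sigma \to 0$ the layer reduces to a deterministic network, matching the annealing interpretation above; per the abstract, standard Dropout corresponds to a special binomial case of this per-weight noise.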


2019 ◽  
Vol 2019 ◽  
pp. 1-10
Author(s):  
Nahathai Rerkruthairat

The Berry-Esseen bound for the random variable based on the sum of squared sample correlation coefficients, used to test complete independence in high dimensions, is shown by Stein's method. Although the Berry-Esseen bound can be applied to all real numbers $z$ in $\mathbb{R}$, a nonuniform bound at a real number $z$ usually provides a sharper bound if $z$ is fixed. In this paper, we present the first version of a nonuniform bound on a normal approximation for this random variable, with an optimal rate of $\frac{1}{0.5 + |z|} \cdot O(1/m)$, by using Stein's method.
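
For orientation, here is the classical contrast between the two kinds of bound, in their standard textbook forms for a standardized sum $W_n$ of iid summands with finite third moment (illustrative constants $C$; the paper's statistic and constants differ):

```latex
% Uniform Berry-Esseen bound (left) versus the classical nonuniform
% bound (right); the (1+|z|)^3 factor is what makes the nonuniform
% bound sharper at any fixed z.
\[
  \sup_{z \in \mathbb{R}} \bigl| P(W_n \le z) - \Phi(z) \bigr|
    \le \frac{C\, \mathbb{E}|X_1|^3}{\sigma^3 \sqrt{n}},
  \qquad
  \bigl| P(W_n \le z) - \Phi(z) \bigr|
    \le \frac{C\, \mathbb{E}|X_1|^3}{(1 + |z|)^3\, \sigma^3 \sqrt{n}}.
\]
```

The paper's bound plays the same role for the correlation-based statistic, with the decay factor $1/(0.5 + |z|)$ and the rate $O(1/m)$ in place of the classical ones.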


2010 ◽  
Vol 51 ◽  
Author(s):  
Aurelija Kasparavičiūtė ◽  
Leonas Saulis

In this paper, we present the rate of convergence of the normal approximation and a theorem on large deviations for the compound process $Z_t = \sum_{i=1}^{N_t} a_i X_i$, where $Z_0 = 0$ and $a_i > 0$, of weighted independent identically distributed random variables $X_i$, $i = 1, 2, \ldots$, with mean $EX_i = \mu$ and variance $DX_i = \sigma^2 > 0$. It is assumed that $N_t$ is a non-negative integer-valued random variable, which depends on $t > 0$ and is independent of $X_i$, $i = 1, 2, \ldots$.
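
As a quick illustration of the normal approximation being quantified here (not of the paper's rates), the sketch below simulates $Z_t$ under assumed illustrative choices: $N_t \sim \mathrm{Poisson}(t)$, weights $a_i = 1 + 1/i$, and $X_i \sim \mathrm{Exp}(1)$, then compares the empirically standardized $Z_t$ with the standard normal CDF.

```python
# Monte Carlo look at approximate normality of a compound sum
# Z_t = sum_{i=1}^{N_t} a_i X_i.  All distributional choices are
# illustrative assumptions, not the paper's setting.
import math
import numpy as np

rng = np.random.default_rng(1)
t, reps = 200, 5000

def draw_Z():
    N = rng.poisson(t)
    if N == 0:
        return 0.0                            # Z_t = Z_0 = 0 if N_t = 0
    a = 1.0 + 1.0 / np.arange(1, N + 1)       # positive weights a_i > 0
    x = rng.exponential(1.0, size=N)          # mu = 1, sigma^2 = 1
    return float(a @ x)

z = np.array([draw_Z() for _ in range(reps)])
w = (z - z.mean()) / z.std()                  # empirically standardized Z_t

def Phi(v):                                   # standard normal CDF
    return 0.5 * (1 + math.erf(v / math.sqrt(2)))

for v in (-2, -1, 0, 1, 2):
    print(v, round(float((w <= v).mean()), 4), round(Phi(v), 4))
```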


2021 ◽  
Vol 47 ◽  
Author(s):  
Leonas Saulis ◽  
Dovilė Deltuvienė

The normal approximation of the sum $Z_t = \sum_{i=1}^{N_t} X_i$ of i.i.d. random variables (r.v.) $X_i$, $i = 1, 2, \ldots$, with mean $EX_i = \mu$ and variance $DX_i = \sigma^2 > 0$ is analyzed, taking large deviations into consideration. Here $N_t$ is a non-negative integer-valued random variable which depends on $t$ but does not depend on $X_i$, $i = 1, 2, \ldots$.


1998 ◽  
Vol 35 (3) ◽  
pp. 589-599
Author(s):  
William L. Cooper

Given a sequence of random variables (rewards), the Haviv–Puterman differential equation relates the expected infinite-horizon λ-discounted reward and the expected total reward up to a random time that is determined by an independent negative binomial random variable with parameters 2 and λ. This paper provides an interpretation of this proven, but previously unexplained, result. Furthermore, the interpretation is formalized into a new proof, which then yields new results for the general case where the rewards are accumulated up to a time determined by an independent negative binomial random variable with parameters k and λ.
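
The flavor of this random-horizon correspondence is easiest to see in the classical geometric case (the $k = 1$ analogue of the negative binomial times in the paper): if $T$ is independent of the rewards with $P(T = t) = (1 - \lambda)\lambda^{t-1}$, $t = 1, 2, \ldots$, then $E\bigl[\sum_{t=1}^{T} R_t\bigr] = \sum_{t \ge 1} \lambda^{t-1} E[R_t]$, the expected $\lambda$-discounted reward. The sketch below checks this numerically for an arbitrary deterministic reward stream; it illustrates the general idea only, not the paper's parameters-$(2, \lambda)$ identity.

```python
# Numerical check of the geometric (k = 1) random-horizon identity:
#   E[ sum_{t=1}^T r_t ] = sum_{t>=1} lambda^(t-1) * r_t,
# with T ~ Geometric(1 - lambda) independent of the reward stream.
import numpy as np

rng = np.random.default_rng(2)
lam = 0.9

def r(t):                                   # bounded illustrative rewards
    return np.sin(t) + 2.0

T = rng.geometric(1 - lam, size=200_000)    # P(T = t) = (1-lam) lam^(t-1)
cum = np.concatenate([[0.0], np.cumsum(r(np.arange(1, T.max() + 1)))])
mc = cum[T].mean()                          # Monte Carlo E[sum_{t<=T} r_t]

ts = np.arange(1, 500)                      # truncated discounted sum
exact = float(np.sum(lam ** (ts - 1) * r(ts)))
print(mc, exact)                            # the two should nearly agree
```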

