On Negative Binomial Approximation to k-Runs

2008 ◽  
Vol 45 (02) ◽  
pp. 456-471 ◽  
Author(s):  
Xiaoxin Wang ◽  
Aihua Xia

The distributions of the run occurrences for a sequence of independent and identically distributed (i.i.d.) experiments are usually obtained by combinatorial methods (see Balakrishnan and Koutras (2002, Chapter 5)) and the resulting formulae are often very tedious, while the distributions for non-i.i.d. experiments are generally intractable. It is therefore of practical interest to find a suitable approximate model with reasonable approximation accuracy. In this paper we demonstrate that the negative binomial distribution is the most suitable approximate model for the number of k-runs: it outperforms the Poisson approximation, the general compound Poisson approximation as observed in Eichelsbacher and Roos (1999), and the translated Poisson approximation in Röllin (2005). In particular, its accuracy of approximation in terms of the total variation distance improves when the number of experiments increases, in the same way as the normal approximation improves in the Berry-Esseen theorem.
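A quick way to see this claim numerically is to simulate the overlapping k-runs count W = ξ_1⋯ξ_k + ξ_2⋯ξ_{k+1} + … for i.i.d. Bernoulli(p) trials and compare its empirical distribution, in total variation, with a negative binomial distribution. The Python sketch below fits the negative binomial by matching mean and variance, which is an illustrative choice rather than the paper's construction; the trial parameters n, k and p are likewise arbitrary.

    import numpy as np
    from scipy.stats import nbinom

    rng = np.random.default_rng(0)

    def k_run_count(xi, k):
        # overlapping k-runs: positions i with xi[i] = ... = xi[i + k - 1] = 1
        runs = xi[: len(xi) - k + 1].copy()
        for j in range(1, k):
            runs &= xi[j : len(xi) - k + 1 + j]
        return int(runs.sum())

    def nb_by_moments(mean, var):
        # negative binomial (r, q) in SciPy's parameterisation with the given mean
        # and variance; needs var > mean, which holds here because k-runs clump
        q = mean / var
        r = mean * q / (1.0 - q)
        return r, q

    n, k, p, reps = 200, 3, 0.4, 10000          # illustrative choices
    counts = np.array([k_run_count(rng.random(n) < p, k) for _ in range(reps)])

    r, q = nb_by_moments(counts.mean(), counts.var())
    support = np.arange(counts.max() + 1)
    empirical = np.bincount(counts, minlength=len(support)) / reps
    tv = 0.5 * (np.abs(empirical - nbinom.pmf(support, r, q)).sum()
                + nbinom.sf(counts.max(), r, q))
    print(f"estimated total variation distance: {tv:.4f}")

Rerunning the sketch with larger n should show the reported distance shrinking, consistent with the Berry-Esseen-type behaviour described in the abstract.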


1998 ◽  
Vol 30 (02) ◽  
pp. 449-475 ◽  
Author(s):  
A. D. Barbour ◽  
Sergey Utev

The accuracy of compound Poisson approximation can be estimated using Stein's method in terms of quantities similar to those which must be calculated for Poisson approximation. However, the solutions of the relevant Stein equation may, in general, grow exponentially fast with the mean number of ‘clumps’, leading to many applications in which the bounds are of little use. In this paper, we introduce a method for circumventing this difficulty. We establish good bounds for those solutions of the Stein equation which are needed to measure the accuracy of approximation with respect to Kolmogorov distance, but only in a restricted range of the argument. The restriction on the range is then compensated by a truncation argument. Examples are given to show that the method clearly outperforms its competitors, as soon as the mean number of clumps is even moderately large.
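To make the role of 'clumps' concrete, the following sketch compares the number of overlapping 2-runs of successes in n Bernoulli(p) trials with a compound Poisson variable built from a Poisson number of clumps and geometric clump sizes, measuring the discrepancy in Kolmogorov distance. The declumping parameters are a standard heuristic chosen purely for illustration, not the construction analysed in the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p, reps = 500, 0.3, 10000                 # illustrative choices

    def two_runs(xi):
        # number of overlapping 2-runs of successes in a 0/1 sequence
        return int(np.sum(xi[:-1] & xi[1:]))

    W = np.array([two_runs(rng.random(n) < p) for _ in range(reps)])

    # Declumping heuristic (an assumption of this sketch): maximal success runs of
    # length >= 2 arrive roughly as Poisson with mean n * (1 - p) * p**2, and a run
    # of length L contributes L - 1 two-runs, with L - 1 geometric(1 - p) on {1, 2, ...}.
    lam = n * (1 - p) * p ** 2
    clumps = rng.poisson(lam, size=reps)
    CP = np.array([rng.geometric(1 - p, size=m).sum() for m in clumps])

    # Kolmogorov (sup-norm) distance between the two empirical distributions
    grid = np.arange(max(W.max(), CP.max()) + 1)
    def ecdf(sample):
        return np.searchsorted(np.sort(sample), grid, side="right") / len(sample)
    print(f"empirical Kolmogorov distance: {np.abs(ecdf(W) - ecdf(CP)).max():.4f}")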


2000 ◽  
Vol 9 (6) ◽  
pp. 529-548 ◽  
Author(s):  
Marianne Månsson

Consider sequences {X_i}_{i=1}^{m} and {Y_j}_{j=1}^{n} of independent random variables, taking values in a finite alphabet, and assume that the variables X_1, X_2, … and Y_1, Y_2, … follow the distributions μ and ν, respectively. Two variables X_i and Y_j are said to match if X_i = Y_j. Let the number of matching subsequences of length k between the two sequences, when r, 0 ≤ r < k, mismatches are allowed, be denoted by W. In this paper we use Stein's method to bound the total variation distance between the distribution of W and a suitably chosen compound Poisson distribution. To derive rates of convergence, the case where E[W] stays bounded away from infinity, and the case where E[W] → ∞ as m, n → ∞, have to be treated separately. Under the assumption that ln n / ln(mn) → ρ ∈ (0, 1), we give conditions on the rate at which k → ∞, and on the distributions μ and ν, for which the variation distance tends to zero.
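The statistic W can be computed directly by brute force for small sequences, as in the sketch below; the alphabet size, sequence lengths and the particular distributions μ and ν are illustrative choices only.

    import numpy as np

    rng = np.random.default_rng(2)

    def matching_count(x, y, k, r):
        # W: number of pairs (i, j) whose length-k windows x[i:i+k] and y[j:j+k]
        # disagree in at most r positions
        W = 0
        for i in range(len(x) - k + 1):
            for j in range(len(y) - k + 1):
                if np.sum(x[i:i + k] != y[j:j + k]) <= r:
                    W += 1
        return W

    # two sequences over a 4-letter alphabet, drawn from distributions mu and nu
    mu = [0.4, 0.3, 0.2, 0.1]
    nu = [0.25, 0.25, 0.25, 0.25]
    x = rng.choice(4, size=300, p=mu)
    y = rng.choice(4, size=200, p=nu)
    print(matching_count(x, y, k=6, r=1))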


2013 ◽  
Vol 427-429 ◽  
pp. 2549-2553 ◽  
Author(s):  
Dong Ping Hu ◽  
Yong Quan Cui ◽  
Ai Hua Yin

This paper gives an improved negative binomial approximation to the negative hypergeometric probability. Numerical examples are presented to illustrate that, in most practical cases, our approximation is almost uniformly more accurate than the ordinary negative binomial approximation.
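For orientation, the sketch below compares the exact negative hypergeometric probabilities (via SciPy's nhypergeom, under one common parameterisation) with the ordinary negative binomial approximation that the paper improves upon; the paper's improved correction itself is not reproduced here, and the population parameters are arbitrary.

    import numpy as np
    from scipy.stats import nhypergeom, nbinom

    # Negative hypergeometric: sample without replacement from M items, n of which
    # are "successes", until r failures have been drawn; k counts the successes drawn.
    M, n, r = 100, 30, 5
    k = np.arange(0, n + 1)
    exact = nhypergeom.pmf(k, M, n, r)

    # Baseline negative binomial approximation: treat the draws as if they were with
    # replacement, so k is the number of successes before the r-th failure and the
    # per-draw failure probability is (M - n) / M.
    approx = nbinom.pmf(k, r, (M - n) / M)
    tail = nbinom.sf(n, r, (M - n) / M)          # NB mass outside the exact support

    print(f"max absolute pmf error:   {np.abs(exact - approx).max():.5f}")
    print(f"total variation distance: {0.5 * (np.abs(exact - approx).sum() + tail):.5f}")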

