A note on negative λ-binomial distribution

2020 ◽  
Vol 2020 (1) ◽  
Author(s):  
Yuankui Ma ◽  
Taekyun Kim

Abstract In this paper, we introduce a discrete random variable, namely the negative λ-binomial random variable. We derive the expectation of the negative λ-binomial random variable, as well as its variance and an explicit expression for its moments.
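For orientation, here is a minimal simulation sketch of the classical negative binomial random variable (not the λ-deformation studied in the paper), checking its first two moments under the "failures before the r-th success" convention:

```python
import random

def neg_binomial_sample(r, p, rng):
    """Number of failures observed before the r-th success in Bernoulli(p) trials."""
    failures, successes = 0, 0
    while successes < r:
        if rng.random() < p:
            successes += 1
        else:
            failures += 1
    return failures

rng = random.Random(42)
r, p, n = 5, 0.4, 200_000
samples = [neg_binomial_sample(r, p, rng) for _ in range(n)]
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
# Classical moments: E[X] = r(1-p)/p = 7.5, Var[X] = r(1-p)/p^2 = 18.75
```

The λ-analogue replaces ordinary powers by degenerate falling factorials, so its moments deform these classical values.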

Filomat ◽  
2020 ◽  
Vol 34 (2) ◽  
pp. 543-549
Author(s):  
Buket Simsek

The aim of the present paper is to establish and study a generating function associated with a characteristic function for the Bernstein polynomials. Using this function, we derive many identities, relations, and formulas relevant to the moments of a discrete random variable for the Bernstein polynomials (binomial distribution), Bernoulli numbers of negative order, Euler numbers of negative order, and the Stirling numbers.
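The binomial connection rests on a familiar fact: the Bernstein basis evaluated at a fixed x is exactly the Binomial(n, x) pmf, so it sums to one and its first two moments are nx and nx(1−x). A quick numerical sketch (not the paper's generating-function machinery):

```python
from math import comb

def bernstein(k, n, x):
    """Bernstein basis polynomial B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k)."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

n, x = 10, 0.3
# B_{k,n}(x) over k = 0..n is the pmf of Binomial(n, x)
total = sum(bernstein(k, n, x) for k in range(n + 1))                   # partition of unity: 1
mean = sum(k * bernstein(k, n, x) for k in range(n + 1))                # n*x = 3.0
var = sum((k - mean) ** 2 * bernstein(k, n, x) for k in range(n + 1))   # n*x*(1-x) = 2.1
```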


2020 ◽  
Vol 1 (1) ◽  
pp. 79-95
Author(s):  
Indra Malakar

This paper investigates theoretical knowledge of probability distributions and the application of the binomial, Poisson, and normal distributions. The binomial distribution is a widely used discrete distribution that applies when trials are repeated under identical conditions a fixed number of times and each trial has only two possible outcomes. The Poisson distribution applies to a discrete random variable for which the probability of occurrence of an event is small and the total number of possible cases is very large. The normal distribution is a limiting form of the binomial distribution, used when the number of cases is infinitely large and the probabilities of success and failure are almost equal.
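The "small probability, many cases" regime can be checked numerically: for large n and small p, the Binomial(n, p) pmf is close to the Poisson(np) pmf. A minimal sketch with illustrative parameters:

```python
from math import comb, exp, factorial

def binom_pmf(n, p, k):
    return comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(lam, k):
    return exp(-lam) * lam**k / factorial(k)

# Large n, small p: Binomial(n, p) is well approximated by Poisson(n*p)
n, p = 1000, 0.003
lam = n * p  # 3.0
max_diff = max(abs(binom_pmf(n, p, k) - poisson_pmf(lam, k)) for k in range(25))
# max_diff is small: the two pmfs nearly coincide
```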


Author(s):  
Carsten Wiuf ◽  
Michael P.H Stumpf

In this paper, we discuss statistical families with the property that if the distribution of a random variable X belongs to the family, then so does the distribution of Z ∼ Bi(X, p) for 0 ≤ p ≤ 1. (Here we take Z ∼ Bi(X, p) to mean that, given X = x, Z is a draw from the binomial distribution Bi(x, p).) Such a family is said to be closed under binomial subsampling. We characterize such families in terms of probability generating functions, and for families with finite moments of all orders we give a necessary and sufficient condition for the family to be closed under binomial subsampling. The results are illustrated with power series and other examples, and related to examples from mathematical biology. Finally, some issues concerning inference are discussed.
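A standard example of such a family is the Poisson: binomial subsampling (thinning) of a Poisson(λ) variable yields Poisson(λp), which follows from the probability generating function and can be checked by simulation. A small sketch:

```python
import math
import random

rng = random.Random(0)

def poisson_sample(lam, rng):
    """Knuth's multiplication method for Poisson sampling."""
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

lam, p = 4.0, 0.5
n = 100_000
zs = []
for _ in range(n):
    x = poisson_sample(lam, rng)
    # Z | X = x ~ Bi(x, p): keep each of the x events independently with prob p
    zs.append(sum(rng.random() < p for _ in range(x)))
mean = sum(zs) / n
var = sum((z - mean) ** 2 for z in zs) / n
# Closure under binomial subsampling: Z ~ Poisson(lam * p),
# so both mean and variance should be close to 2.0
```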


Author(s):  
Lacramioara Balan ◽  
Rajesh Paleti

Traditional crash databases that record police-reported injury severity data are prone to misclassification errors. Ignoring these errors in discrete ordered response models used for analyzing injury severity can lead to biased and inconsistent parameter estimates. In this study, a mixed generalized ordered response (MGOR) model was developed that quantifies misclassification rates in the injury severity variable and adjusts the bias in parameter estimates associated with misclassification. The proposed model does this by treating the observed injury severity outcome as a realization of a discrete random variable that depends on the true latent injury severity, which is unobservable to the analyst. The model was used to analyze misclassification rates in police-reported injury severity in the 2014 General Estimates System (GES) data. The model found that only 68.23% and 62.75% of possible and non-incapacitating injuries, respectively, were correctly recorded in the GES data. Moreover, comparative analysis showed that the MGOR model that ignores misclassification not only has lower data fit but also considerable bias in both the parameter and elasticity estimates. The model developed in this study can be used to analyze misclassification errors in ordinal response variables in other empirical contexts.
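The core idea, that the observed category distribution is the latent distribution filtered through a misclassification matrix, can be sketched with hypothetical numbers (none taken from the paper):

```python
# Hypothetical 3-level severity example: the observed category distribution
# mixes the latent (true) categories through the error rates.
true_probs = [0.5, 0.3, 0.2]   # assumed latent severity distribution
M = [                          # M[i][j] = P(observed = j | true = i), assumed
    [0.90, 0.10, 0.00],
    [0.15, 0.70, 0.15],
    [0.00, 0.20, 0.80],
]
observed = [sum(true_probs[i] * M[i][j] for i in range(3)) for j in range(3)]
# observed = [0.495, 0.30, 0.205]: a model that ignores M attributes these
# blended shares directly to the latent classes, biasing its estimates
```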


2020 ◽  
Vol 32 (5) ◽  
pp. 1018-1032 ◽  
Author(s):  
Noah Frazier-Logue ◽  
Stephen José Hanson

Multilayer neural networks have led to remarkable performance on many kinds of benchmark tasks in text, speech, and image processing. Nonlinear parameter estimation in hierarchical models is known to be subject to overfitting and misspecification. One approach to these estimation and related problems (e.g., saddle points, collinearity, feature discovery) is called Dropout. The Dropout algorithm removes hidden units according to a binomial random variable with probability [Formula: see text] prior to each update, creating random "shocks" to the network that are averaged over updates (thus creating weight sharing). In this letter, we reestablish an older parameter-search method and show that Dropout is a special case of this more general model, the stochastic delta rule (SDR), published originally in 1990. Unlike Dropout, SDR redefines each weight in the network as a random variable with mean [Formula: see text] and standard deviation [Formula: see text]. Each weight random variable is sampled on each forward activation, consequently creating an exponential number of potential networks with shared weights (accumulated in the mean values). Both parameters are updated according to prediction error, thus resulting in weight noise injections that reflect a local history of prediction error and local model averaging. SDR therefore implements a more sensitive local gradient-dependent simulated annealing per weight, converging in the limit to a Bayes optimal network. We run tests on standard benchmarks (CIFAR and ImageNet) using a modified version of DenseNet and show that SDR outperforms standard Dropout in top-5 validation error by approximately 13% with DenseNet-BC 121 on ImageNet, and we find various validation error improvements in smaller networks. We also show that SDR reaches the same accuracy that Dropout attains in 100 epochs in as few as 40 epochs, as well as improvements in training error by as much as 80%.
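A toy single-unit sketch (not the paper's DenseNet code; the gradient-driven updates of the means and standard deviations are omitted) contrasting a Bernoulli Dropout mask with SDR's per-pass weight sampling:

```python
import random

rng = random.Random(7)

def dropout_forward(w, x, p, rng):
    """Dropout: zero each unit's contribution with probability p via a Bernoulli mask."""
    return sum(wi * xi * (0.0 if rng.random() < p else 1.0)
               for wi, xi in zip(w, x))

def sdr_forward(mu, sigma, x, rng):
    """SDR: sample every weight from N(mu_i, sigma_i) on each forward pass."""
    return sum(rng.gauss(m, s) * xi for m, s, xi in zip(mu, sigma, x))

w, x, p = [1.0, 2.0, 3.0], [1.0, 1.0, 1.0], 0.5
mu, sigma = [1.0, 2.0, 3.0], [0.1, 0.1, 0.1]
n = 50_000
# Averaging over passes: Dropout output concentrates at (1-p) * w.x = 3,
# SDR output concentrates at mu.x = 6 (the means accumulate shared weights)
drop_avg = sum(dropout_forward(w, x, p, rng) for _ in range(n)) / n
sdr_avg = sum(sdr_forward(mu, sigma, x, rng) for _ in range(n)) / n
```

Dropout corresponds to a particular choice of weight noise in this scheme, which is the sense in which the paper presents it as a special case of SDR.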


1973 ◽  
Vol 11 (3) ◽  
pp. 362-364 ◽  
Author(s):  
P. A. Parker ◽  
R. N. Scott

2018 ◽  
Vol 10 (03) ◽  
pp. 1850030
Author(s):  
N. K. Sudev ◽  
K. P. Chithra ◽  
K. A. Germina ◽  
S. Satheesh ◽  
Johan Kok

Coloring the vertices of a graph [Formula: see text] according to certain conditions can be considered a random experiment, and a discrete random variable [Formula: see text] can be defined as the number of vertices having a particular color in a proper coloring of [Formula: see text]. The concepts of mean and variance, two important statistical measures, have been introduced into the theory of graph coloring, and the values of these parameters have been determined for a number of standard graphs. In this paper, we discuss the coloring parameters of the Mycielskian of certain standard graphs.
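A minimal illustration on a standard graph (the paper's Mycielskian constructions are not reproduced here): treat the color of a uniformly chosen vertex in a proper coloring as a discrete random variable and compute its mean and variance. The greedy rule below is just one convenient way to obtain a proper coloring:

```python
from collections import Counter
from itertools import count

def greedy_coloring(adj):
    """Proper coloring: each vertex gets the smallest color unused by its neighbors."""
    colors = {}
    for v in sorted(adj):
        used = {colors[u] for u in adj[v] if u in colors}
        colors[v] = next(c for c in count(1) if c not in used)
    return colors

# Cycle C5 (an odd cycle, so 3 colors are needed)
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
colors = greedy_coloring(adj)
# X = color of a uniformly chosen vertex; pmf given by color-class sizes
n = len(adj)
sizes = Counter(colors.values())
mean = sum(c * sz / n for c, sz in sizes.items())
var = sum((c - mean) ** 2 * sz / n for c, sz in sizes.items())
```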


2020 ◽  
Vol 07 (01) ◽  
pp. 2050009
Author(s):  
Francesco Strati ◽  
Luca G. Trussoni

In this paper, we shall propose a Monte Carlo simulation technique applied to a G2++ model: even when the number of simulated paths is small, our technique allows us to find a precise simulated deflator. In particular, we shall study the transition law of the discrete random variable [Formula: see text] in the time span [Formula: see text] conditional on the observation at time [Formula: see text], and we apply it in a recursive way to build the different paths of the simulation. We shall apply the proposed technique to the insurance industry, and in particular to the issue of pricing insurance contracts with embedded options and guarantees.
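A sketch of the recursive idea under simplified assumptions: a single Ornstein–Uhlenbeck factor (a G2++ model combines two correlated such factors plus a deterministic shift) simulated through its exact Gaussian transition law, so each step conditions only on the previous observation. Parameter values are purely illustrative:

```python
import math
import random

rng = random.Random(3)

def ou_step(x, a, sigma, dt, rng):
    """Exact transition of dx = -a*x dt + sigma dW over a step dt:
    x_{t+dt} | x_t ~ N(x_t * e^{-a*dt}, sigma^2 * (1 - e^{-2*a*dt}) / (2*a))."""
    mean = x * math.exp(-a * dt)
    std = sigma * math.sqrt((1.0 - math.exp(-2.0 * a * dt)) / (2.0 * a))
    return rng.gauss(mean, std)

# Illustrative parameters; applying the transition law recursively builds a path
a, sigma, dt = 0.1, 0.01, 1.0
path = [0.0]
for _ in range(10):
    path.append(ou_step(path[-1], a, sigma, dt, rng))
```

Because the transition law is exact rather than an Euler discretization, each simulated point is drawn from the correct conditional distribution regardless of the step size.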

