Analysis of Finite Buffer Queue: Maximum Entropy Probability Distribution With Shifted Fractional Geometric and Arithmetic Means

2015 ◽  
Vol 19 (2) ◽  
pp. 163-166 ◽  
Author(s):  
Amit Kumar Singh ◽  
Harendra Pratap Singh ◽  
Karmeshu

1984 ◽  
Vol R-33 (4) ◽  
pp. 353-357 ◽  
Author(s):  
James E. Miller ◽  
Richard W. Kulp ◽  
George E. Orr

2012 ◽  
Vol 2012 ◽  
pp. 1-17 ◽  
Author(s):  
Andrzej Chydzinski ◽  
Blazej Adamczyk

We present an analysis of the number of losses, caused by buffer overflows, in a finite-buffer queue with batch arrivals and autocorrelated interarrival times. Using the batch Markovian arrival process, we derive formulas for the average number of losses in a finite time interval and for the stationary loss ratio. In addition, several numerical examples are presented, illustrating the dependence of the number of losses on the average batch size, buffer size, system load, autocorrelation structure, and time.
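The loss quantities described in this abstract can also be approximated by simulation. The sketch below is not the paper's analytical BMAP approach: it assumes uncorrelated Poisson batch arrivals with geometrically distributed batch sizes and exponential service (a finite-capacity M[X]/M/1 queue), and all function and parameter names are illustrative.

```python
import random

def simulate_loss_ratio(capacity, arrival_rate, service_rate,
                        mean_batch, horizon, seed=0):
    """Monte Carlo estimate of the loss ratio (lost / arrived) in a
    finite-capacity M[X]/M/1 queue with geometric batch sizes.
    Customers arriving to a full system are lost."""
    rng = random.Random(seed)
    p = 1.0 / mean_batch                    # geometric batch size, mean = mean_batch
    t, in_system = 0.0, 0
    arrived = lost = 0
    next_arrival = rng.expovariate(arrival_rate)
    next_departure = float("inf")
    while t < horizon:
        if next_arrival <= next_departure:  # batch arrival event
            t = next_arrival
            batch = 1
            while rng.random() > p:         # sample a geometric batch size
                batch += 1
            arrived += batch
            accepted = min(batch, capacity - in_system)
            lost += batch - accepted        # overflow: excess customers are lost
            in_system += accepted
            if accepted and next_departure == float("inf"):
                next_departure = t + rng.expovariate(service_rate)
            next_arrival = t + rng.expovariate(arrival_rate)
        else:                               # departure event
            t = next_departure
            in_system -= 1
            next_departure = (t + rng.expovariate(service_rate)
                              if in_system else float("inf"))
    return lost / arrived if arrived else 0.0
```

Running it with a small capacity and heavy offered load yields a loss ratio near one, while a very large buffer yields essentially no loss, mirroring the dependence on buffer size and system load that the numerical examples illustrate.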


Author(s):  
MICHAEL J. MARKHAM

In an expert system having a consistent set of linear constraints, it is known that the Method of Tribus may be used to determine a probability distribution which exhibits maximised entropy. The method is extended here to include independence constraints (Accommodation). The paper discusses this extension and its limitations, then advances a technique for determining a small set of independencies which can be added to the linear constraints required in a particular representation of an expert system called a causal network, so that the Maximum Entropy and Causal Networks methodologies give matching distributions (Emulation). This technique may also be applied in cases where no initial independencies are given and the linear constraints are incomplete, in order to provide an optimal ME fill-in for the missing information.
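As a minimal illustration of the underlying maximum-entropy machinery (not the paper's accommodation of independence constraints), the distribution of maximum entropy on a finite state space subject to a single linear constraint on the mean has the exponential-family form p_k ∝ exp(−λk), and the Lagrange multiplier λ can be found by bisection. The function below is an illustrative sketch under that simplification:

```python
import math

def maxent_with_mean(n_states, target_mean, tol=1e-10):
    """Maximum-entropy distribution on {0, ..., n_states-1} subject to a
    fixed mean. Has the form p_k ∝ exp(-lam * k); the multiplier lam
    is found by bisection (the mean is decreasing in lam)."""
    def mean_for(lam):
        lw = [-lam * k for k in range(n_states)]
        m = max(lw)                       # log-sum-exp shift avoids overflow
        w = [math.exp(v - m) for v in lw]
        z = sum(w)
        return sum(k * wk for k, wk in enumerate(w)) / z
    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:   # mean too large -> increase lam
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    lw = [-lam * k for k in range(n_states)]
    m = max(lw)
    w = [math.exp(v - m) for v in lw]
    z = sum(w)
    return [wk / z for wk in w]
```

When the target mean equals that of the uniform distribution, the constraint is inactive and the method recovers the uniform distribution, the unconstrained entropy maximiser.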


2021 ◽  
Vol 118 (40) ◽  
pp. e2025782118 ◽
Author(s):  
Wei-Chia Chen ◽  
Juannan Zhou ◽  
Jason M. Sheltzer ◽  
Justin B. Kinney ◽  
David M. McCandlish

Density estimation in sequence space is a fundamental problem in machine learning that is also of great importance in computational biology. Due to the discrete nature and large dimensionality of sequence space, how best to estimate such probability distributions from a sample of observed sequences remains unclear. One common strategy for addressing this problem is to estimate the probability distribution using maximum entropy (i.e., calculating point estimates for some set of correlations based on the observed sequences and predicting the probability distribution that is as uniform as possible while still matching these point estimates). Building on recent advances in Bayesian field-theoretic density estimation, we present a generalization of this maximum entropy approach that provides greater expressivity in regions of sequence space where data are plentiful while still maintaining a conservative maximum entropy character in regions of sequence space where data are sparse or absent. In particular, we define a family of priors for probability distributions over sequence space with a single hyperparameter that controls the expected magnitude of higher-order correlations. This family of priors then results in a corresponding one-dimensional family of maximum a posteriori estimates that interpolate smoothly between the maximum entropy estimate and the observed sample frequencies. To demonstrate the power of this method, we use it to explore the high-dimensional geometry of the distribution of 5′ splice sites found in the human genome and to understand patterns of chromosomal abnormalities across human cancers.
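A toy analogue of such a one-parameter family: Dirichlet (pseudocount) smoothing also interpolates between the uniform maximum-entropy distribution (large α) and the raw sample frequencies (α → 0). This is only a simple stand-in for intuition, not the field-theoretic MAP estimator described in the abstract; the names below are illustrative.

```python
from collections import Counter

def smoothed_estimate(observed, alphabet, alpha):
    """Dirichlet-smoothed frequency estimate over a discrete sequence space.
    alpha -> infinity recovers the uniform (maximum-entropy) distribution;
    alpha -> 0 recovers the raw sample frequencies."""
    counts = Counter(observed)
    n = len(observed)
    k = len(alphabet)
    # posterior-mean estimate under a symmetric Dirichlet(alpha) prior
    return {s: (counts[s] + alpha) / (n + alpha * k) for s in alphabet}

# Example over a tiny two-letter sequence space:
alphabet = ["AA", "AC", "CA", "CC"]
sample = ["AA", "AA", "AC"]
raw = smoothed_estimate(sample, alphabet, 0.0)      # empirical frequencies
smooth = smoothed_estimate(sample, alphabet, 1e6)   # near-uniform estimate
```

Varying α traces out a one-dimensional family of estimates, loosely mirroring how the paper's hyperparameter controls the balance between expressivity where data are plentiful and maximum-entropy conservatism where they are sparse.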


2020 ◽  
Author(s):  
Wei-Chia Chen ◽  
Juannan Zhou ◽  
Jason M Sheltzer ◽  
Justin B Kinney ◽  
David M McCandlish

Abstract
Density estimation in sequence space is a fundamental problem in machine learning that is of great importance in computational biology. Due to the discrete nature and large dimensionality of sequence space, how best to estimate such probability distributions from a sample of observed sequences remains unclear. One common strategy for addressing this problem is to estimate the probability distribution using maximum entropy, i.e. calculating point estimates for some set of correlations based on the observed sequences and predicting the probability distribution that is as uniform as possible while still matching these point estimates. Building on recent advances in Bayesian field-theoretic density estimation, we present a generalization of this maximum entropy approach that provides greater expressivity in regions of sequence space where data are plentiful while still maintaining a conservative maximum entropy character in regions of sequence space where data are sparse or absent. In particular, we define a family of priors for probability distributions over sequence space with a single hyperparameter that controls the expected magnitude of higher-order correlations. This family of priors then results in a corresponding one-dimensional family of maximum a posteriori estimates that interpolate smoothly between the maximum entropy estimate and the observed sample frequencies. To demonstrate the power of this method, we use it to explore the high-dimensional geometry of the distribution of 5′ splice sites found in the human genome and to understand the accumulation of chromosomal abnormalities during cancer progression.


2012 ◽  
Vol 46 (3) ◽  
pp. 189-209
Author(s):  
Medhi Pallabi ◽  
Amit Choudhury
