Discrete Probability Distribution

2021 ◽  
pp. 65-82
Author(s):  
Anindya Ghosh ◽  
Bapi Saha ◽  
Prithwiraj Mal
1970 ◽  
Vol 7 (1) ◽  
pp. 124-133 ◽  
Author(s):  
Pushpa N. Rathie

Let P = {p_1, ···, p_N} be a finite discrete probability distribution. Then the entropy of the distribution P, introduced by Shannon [12], is defined as $$H(P) = -\sum p_i \log_D p_i$$. Throughout this paper, $$\sum$$ will stand for $$\sum_{i=1}^{N}$$ and logarithms will be taken to the base D (D > 1).
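For concreteness, a minimal sketch of this definition in Python; the function name and the example distribution are illustrative, and the base parameter plays the role of D:

```python
import math

def shannon_entropy(p, base=2):
    """Entropy H(P) = -sum p_i * log_D(p_i) of a finite discrete
    distribution P = {p_1, ..., p_N}, with logarithms to base D > 1.
    Terms with p_i = 0 contribute 0 by the usual convention."""
    return -sum(p_i * math.log(p_i, base) for p_i in p if p_i > 0)

# A uniform distribution over 4 outcomes has entropy log_2(4) = 2.
print(shannon_entropy([0.25, 0.25, 0.25, 0.25], base=2))  # 2.0
```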


Author(s):  
Federico D’Ambrosio ◽  
Hans L. Bodlaender ◽  
Gerard T. Barkema

In this paper, we consider several efficient data structures for the problem of sampling from a dynamically changing discrete probability distribution, where some prior information is known on the distribution of the rates, in particular the maximum and minimum rate, and where the number of possible outcomes N is large. We consider three basic data structures: the Acceptance–Rejection method, the Complete Binary Tree and the Alias method. These can be used as building blocks in a multi-level data structure, where at each of the levels one of the basic data structures can be used, with the top level selecting a group of events and the bottom level selecting an element from a group. Depending on assumptions on the distribution of the rates of outcomes, different combinations of the basic structures can be used. We prove that for particular data structures the expected time of sampling and update is constant when the rate distribution satisfies certain conditions. We show that for any distribution, by combining a tree structure with the Acceptance–Rejection method, an expected time of sampling and update of $$O\left( \log \log (r_{max}/r_{min})\right)$$ is achievable, where $$r_{max}$$ is the maximum rate and $$r_{min}$$ the minimum rate. We also discuss an implementation of a two-level Acceptance–Rejection data structure that allows expected constant time for sampling and amortized constant time for updates, assuming that $$r_{max}$$ and $$r_{min}$$ are known and the number of events is sufficiently large. We also present an experimental verification, highlighting the limits imposed by the constraints of a real-life setting.
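The paper's multi-level structures are not reproduced here, but the Acceptance–Rejection building block it names can be sketched in a few lines of Python (function names and the use of the standard random module are illustrative, not the authors' implementation). Given an upper bound r_max on the rates, it samples an index proportionally to its rate, and an update is just an array write:

```python
import random

def ar_sample(rates, r_max):
    """Acceptance-Rejection sampling: return an index i with probability
    proportional to rates[i], given an upper bound r_max >= max(rates).
    The expected number of trials is N * r_max / sum(rates)."""
    n = len(rates)
    while True:
        i = random.randrange(n)                 # candidate, chosen uniformly
        if random.random() * r_max < rates[i]:  # accept with prob rates[i]/r_max
            return i

# Updates cost O(1): changing rates[i] needs no rebuilding,
# as long as the new rate stays at or below r_max.
rates = [3.0, 1.0, 2.0, 4.0]
counts = [0] * len(rates)
for _ in range(10000):
    counts[ar_sample(rates, r_max=4.0)] += 1
print(counts)  # roughly proportional to [3, 1, 2, 4]
```

This illustrates why the rate spread r_max/r_min matters: when the rates are far below r_max, most candidates are rejected, which is what the paper's grouping by rate ranges mitigates.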


Author(s):  
Sicheng Zhao ◽  
Guiguang Ding ◽  
Yue Gao ◽  
Jungong Han

Existing works on image emotion recognition mainly assign the dominant emotion category or average dimension values to an image, based on the assumption that viewers can reach a consensus on the emotion of images. However, the image emotions perceived by viewers are subjective by nature and highly related to personal and situational factors. On the other hand, image emotions can be conveyed by different features, such as semantics and aesthetics. In this paper, we propose a novel machine learning approach that formulates the categorical image emotions as a discrete probability distribution (DPD). To associate emotions with the extracted visual features, we present a weighted multi-modal shared sparse learning method to learn the combination coefficients, with which the DPD of an unseen image can be predicted by linearly integrating the DPDs of the training images. The representation abilities of different modalities are jointly explored and the optimal weight of each modality is automatically learned. Extensive experiments on three datasets verify the superiority of the proposed method compared to the state-of-the-art.
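How the combination coefficients are learned is specific to the paper, but the prediction step the abstract describes (linearly integrating the training DPDs) can be sketched as follows. All names are illustrative, and the coefficient vector w is assumed to be given by the learning stage:

```python
import numpy as np

def predict_dpd(train_dpds, w):
    """Predict the discrete probability distribution (DPD) of an unseen
    image as a linear combination of the training images' DPDs.
    train_dpds: (n_train, n_emotions) array, one DPD per row.
    w: (n_train,) combination coefficients (sparse in the paper's model)."""
    p = train_dpds.T @ w       # weighted sum of training DPDs
    p = np.clip(p, 0.0, None)  # keep the result non-negative
    return p / p.sum()         # renormalize so the prediction is a valid DPD

# Toy example: 3 training images, 4 emotion categories.
train = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.6, 0.2, 0.1],
                  [0.2, 0.2, 0.3, 0.3]])
w = np.array([0.5, 0.3, 0.2])  # hypothetical learned coefficients
print(predict_dpd(train, w))
```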



