A LINK BETWEEN THE MAXIMUM ENTROPY APPROACH AND THE VARIATIONAL ENTROPY FORM

2011 · Vol 25 (22) · pp. 1821-1828
Author(s): E. V. VAKARIN, J. P. BADIALI

The maximum entropy approach operating with a quite general entropy measure and constraint is considered. It is demonstrated that for a conditional or parametrized probability distribution f(x|μ) there is a "universal" relation between the entropy rate and the functions appearing in the constraint. This relation allows one to translate the specifics of the observed behavior θ(μ) into the amount of information on the relevant random variable x at different values of the parameter μ. It is shown that the recently proposed variational formulation of the entropic functional can be obtained as a consequence of this relation, that is, from the maximum entropy principle. This resolves certain puzzling points that appeared in the variational approach.
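To make the setup concrete: in the standard maximum entropy construction the abstract builds on, one maximizes the entropy of f(x|μ) subject to a constraint ⟨g(x)⟩ = θ(μ) for each value of the parameter μ, and the entropy of the resulting distribution then tracks the observed behavior θ(μ). The following Python sketch illustrates this with a Shannon entropy measure; the constraint function g(x) = x² and the form of θ(μ) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: for each mu, maximize Shannon entropy of f(x|mu) subject to
# <g(x)> = theta(mu). The solution has the form f(x) ~ exp(-lam * g(x)),
# with the multiplier lam fixed by the constraint. g and theta are
# hypothetical choices for illustration only.
import numpy as np
from scipy.optimize import brentq

x = np.linspace(0.0, 1.0, 200)       # discretized support of x
g = x**2                             # hypothetical constraint function g(x)

def maxent_distribution(theta):
    """Return f(x) maximizing Shannon entropy subject to <g(x)> = theta."""
    def constraint_gap(lam):
        w = np.exp(-lam * g)
        f = w / w.sum()
        return f @ g - theta
    lam = brentq(constraint_gap, -200.0, 200.0)   # solve for the multiplier
    w = np.exp(-lam * g)
    return w / w.sum()

def entropy(f):
    return -np.sum(f * np.log(f + 1e-300))

# As theta(mu) varies with mu, so does the entropy of f(x|mu) -- the
# dependence the abstract's "universal" relation describes.
for mu in (0.1, 0.5, 0.9):
    theta = 0.1 + 0.25 * mu          # hypothetical observed behavior theta(mu)
    f = maxent_distribution(theta)
    print(f"mu={mu:.1f}  theta={theta:.3f}  S={entropy(f):.4f}")
```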

1980 · Vol 102 (3) · pp. 460-468
Author(s): J. N. Siddall, Ali Badawy

A new algorithm using the maximum entropy principle is introduced to estimate the probability distribution of a random variable directly from a ranked sample. It is demonstrated that almost all of the analytical probability distributions can be approximated by the new algorithm. A comparison is made between existing methods and the new algorithm, and examples are given of fitting the new distribution to an actual ranked sample.
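As a rough illustration of maximum entropy density estimation from a sample (a minimal sketch under simplifying assumptions, not the authors' ranked-sample algorithm), the Python code below fits a density of exponential form by matching the first two sample moments through the convex dual; the synthetic sample, support grid, and moment order are illustrative choices.

```python
# Sketch: maximum entropy density f(x) ~ exp(-l1*x - l2*x^2) matching the
# first two sample moments, found by minimizing the convex dual
# log Z(lam) + lam . m. Not the paper's algorithm; for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
sample = np.sort(rng.normal(loc=2.0, scale=0.5, size=100))  # a ranked sample
m = np.array([sample.mean(), (sample**2).mean()])           # sample moments

x = np.linspace(sample.min() - 2, sample.max() + 2, 2000)   # support grid
dx = x[1] - x[0]
T = np.vstack([x, x**2])             # sufficient statistics T_k(x) = x^k

def dual_and_grad(lam):
    """Dual objective and its gradient m - E[T]; the minimizer matches m."""
    a = -(lam @ T)
    amax = a.max()                   # log-sum-exp stabilization
    w = np.exp(a - amax)
    logZ = amax + np.log(w.sum() * dx)
    f = w / w.sum()                  # normalized discrete weights
    return logZ + lam @ m, m - T @ f

res = minimize(dual_and_grad, x0=np.zeros(2), jac=True, method="BFGS")
f = np.exp(-(res.x @ T)); f /= f.sum() * dx   # maximum entropy density
print("sample moments:", m)
print("fitted moments:", T @ f * dx)
```

With the Gaussian test sample used here, the fitted multipliers recover the exponential-quadratic (normal) form, which is the maximum entropy distribution for fixed first and second moments.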


Entropy · 2016 · Vol 18 (4) · pp. 111
Author(s): Hongshuang Li, Debing Wen, Zizi Lu, Yu Wang, Feng Deng

Author(s): Amos Golan

In this chapter I present the key ideas and develop the essential quantitative metrics needed for modeling and inference with limited information. I provide the necessary tools to study the traditional maximum entropy principle, which is the cornerstone of info-metrics. The chapter starts by defining the primary notions of information and entropy as they relate to probabilities and uncertainty. The unique properties of the entropy are explained. The derivations and discussion are extended to multivariate entropies and informational quantities. For completeness, I also discuss the complete list of the Shannon-Khinchin axioms behind the entropy measure. An additional derivation of information and entropy, due to the independently developed work of Wiener, is provided as well.
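The basic quantities the chapter defines can be illustrated in a few lines. The Python sketch below computes Shannon entropy, joint entropy, and mutual information for a discrete pair (X, Y); the joint distribution is an arbitrary example, not taken from the text.

```python
# Shannon entropy, joint entropy, and mutual information for a discrete
# pair (X, Y). The joint distribution p(x, y) is an arbitrary example.
import numpy as np

def H(p):
    """Shannon entropy in bits; 0 log 0 is treated as 0."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

pxy = np.array([[0.3, 0.1],          # joint distribution p(x, y)
                [0.1, 0.5]])
px, py = pxy.sum(axis=1), pxy.sum(axis=0)   # marginals

print("H(X)   =", H(px))
print("H(Y)   =", H(py))
print("H(X,Y) =", H(pxy.ravel()))
print("I(X;Y) =", H(px) + H(py) - H(pxy.ravel()))   # mutual information
```

The last line uses the identity I(X;Y) = H(X) + H(Y) − H(X,Y), one of the multivariate informational quantities the chapter develops.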


1990 · Vol 27 (2) · pp. 303-313
Author(s): Claudine Robert

The maximum entropy principle is used to model uncertainty by a maximum entropy distribution, subject to some appropriate linear constraints. We give an entropy concentration theorem (whose proof is based on large-deviation techniques) that provides a mathematical justification of this statistical modelling principle. We then indicate how it can be used in artificial intelligence, and how relevant prior knowledge is provided by some classical descriptive statistical methods. It appears, furthermore, that the maximum entropy principle yields a natural link between descriptive methods and some statistical structures.
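The setting of the theorem, maximizing entropy subject to linear constraints, is captured by Jaynes' classic die example: find probabilities p(1), …, p(6) of maximum entropy given only a prescribed mean. The Python sketch below (a standard illustration, not taken from the paper) solves it via the exponential form of the solution; the concentration theorem then says that, among all frequency vectors from N trials satisfying the same constraint, the entropy concentrates near this maximum as N grows.

```python
# Jaynes' die: maximize entropy over p(1..6) subject to the linear
# constraint <i> = 4.5. The solution is p_i ~ exp(-lam * i); we solve
# for the multiplier lam numerically.
import numpy as np
from scipy.optimize import brentq

i = np.arange(1, 7)
target_mean = 4.5

def mean_gap(lam):
    w = np.exp(-lam * i)
    return (w @ i) / w.sum() - target_mean

lam = brentq(mean_gap, -5.0, 5.0)    # the mean is monotone in lam
p = np.exp(-lam * i); p /= p.sum()
print("maxent p:", np.round(p, 4))
print("entropy :", -np.sum(p * np.log(p)))
```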

