Inferences from a network to a subnetwork and vice versa under an assumption of symmetry

2015 ◽  
Author(s):  
PierGianLuca Porta Mana ◽  
Emiliano Torre ◽  
Vahid Rostami

This note summarizes some mathematical relations between the probability distributions for the states of a network of binary units and a subnetwork thereof, under an assumption of symmetry. These relations are standard results of probability theory, but seem to be rarely used in neuroscience. Some of their consequences for inferences between network and subnetwork, especially in connection with the maximum-entropy principle, are briefly discussed. The meanings and applicability of the assumption of symmetry are also discussed.
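Concretely, if the N binary units are exchangeable, the network distribution is fixed by the probabilities p(k) that exactly k units are active, and the subnetwork distribution follows by hypergeometric marginalization. A minimal sketch of that standard identity (function and variable names are ours, not the note's):

```python
from math import comb

def subnetwork_counts(p_N, N, n):
    """Marginalize a symmetric (exchangeable) distribution over the
    activity count of N binary units down to a subnetwork of n units.
    p_N[k] = probability that exactly k of the N units are active."""
    q = [0.0] * (n + 1)
    for j in range(n + 1):
        for k in range(N + 1):
            # hypergeometric weight: pick j of the subnetwork's n units
            # among the k active ones, and n - j among the N - k inactive
            if j <= k and n - j <= N - k:
                q[j] += p_N[k] * comb(k, j) * comb(N - k, n - j) / comb(N, n)
    return q

# uniform distribution over activity counts of N = 4 units, subnetwork of n = 2
q = subnetwork_counts([0.2] * 5, 4, 2)
```

Since the uniform count distribution is symmetric under flipping all units, the resulting subnetwork counts are symmetric as well (q[0] = q[2]).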

1980 ◽  
Vol 102 (3) ◽  
pp. 460-468
Author(s):  
J. N. Siddall ◽  
Ali Badawy

A new algorithm based on the maximum entropy principle is introduced to estimate the probability distribution of a random variable directly from a ranked sample. It is demonstrated that almost all of the standard analytical probability distributions can be approximated by the new algorithm. A comparison is made between existing methods and the new algorithm, and examples are given of fitting the new distribution to an actual ranked sample.
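The paper's ranked-sample algorithm is not reproduced here; as a stand-in, the sketch below shows the exponential-family form that a maximum-entropy estimate takes under a single moment constraint, with the Lagrange multiplier found by bisection (all names illustrative):

```python
import numpy as np

def maxent_given_mean(values, target_mean, iters=100):
    """Maximum-entropy distribution on a finite support subject to a
    fixed mean: p_i proportional to exp(-lam * x_i), with the Lagrange
    multiplier lam located by bisection on the resulting mean."""
    x = np.asarray(values, dtype=float)
    lo, hi = -50.0, 50.0
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        p = np.exp(-lam * x)
        p /= p.sum()                 # normalize the Gibbs weights
        if p @ x > target_mean:
            lo = lam                 # mean too high -> increase lam
        else:
            hi = lam                 # mean too low  -> decrease lam
    return p

# maxent distribution on {0, 1, 2, 3} with mean 1.2
p = maxent_given_mean([0, 1, 2, 3], 1.2)
```

Additional linear constraints would add one multiplier each, turning the bisection into a multidimensional root-finding problem.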


Entropy ◽  
2018 ◽  
Vol 20 (9) ◽  
pp. 696 ◽  
Author(s):  
Sergio Davis ◽  
Diego González ◽  
Gonzalo Gutiérrez

A general framework for inference in dynamical systems is described, based on the language of Bayesian probability theory and making use of the maximum entropy principle. Taking the concept of a path as fundamental, the continuity equation and Cauchy’s equation for fluid dynamics arise naturally, while the specific information about the system can be included using the maximum caliber (or maximum path entropy) principle.
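As a toy illustration of the maximum caliber idea (not the authors' formalism), one can weight discrete paths by the exponential of a Lagrange multiplier times a path observable and normalize; the multiplier enforces the observable's mean:

```python
import itertools
import math

def max_caliber_weights(T, lam, obs=sum):
    """Maximum-caliber toy model: paths are binary strings of length T,
    weighted by exp(-lam * A(path)), where the observable A is here the
    number of 'up' steps. lam is the Lagrange multiplier fixing the
    mean of A. All names are illustrative."""
    paths = list(itertools.product([0, 1], repeat=T))
    w = [math.exp(-lam * obs(p)) for p in paths]
    Z = sum(w)                      # partition function over paths
    return {p: wi / Z for p, wi in zip(paths, w)}

probs = max_caliber_weights(3, 0.7)
mean_A = sum(sum(p) * q for p, q in probs.items())
```

Because the observable is additive over steps, the path measure factorizes into independent Bernoulli steps, which is the usual sanity check for this construction.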


1990 ◽  
Vol 27 (2) ◽  
pp. 303-313 ◽  
Author(s):  
Claudine Robert

The maximum entropy principle is used to model uncertainty by a maximum entropy distribution, subject to appropriate linear constraints. We give an entropy concentration theorem (whose proof is based on large-deviation techniques) that provides a mathematical justification of this statistical modelling principle. We then indicate how it can be used in artificial intelligence, and how relevant prior knowledge is provided by some classical descriptive statistical methods. Furthermore, the maximum entropy principle yields a natural link between descriptive methods and certain statistical structures.
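The flavor of the concentration theorem can be seen numerically: among all empirical frequency vectors compatible with a linear constraint, the one realized by the largest number of sequences is the maximum-entropy one. A small sketch (setup and names our own) for a three-valued variable on {0, 1, 2} constrained to mean 1, whose maxent solution is uniform:

```python
from math import comb, log

def concentration_demo(N=60):
    """Enumerate all empirical count vectors (n0, n1, n2) of N draws
    from {0, 1, 2} satisfying the mean-1 constraint, find the one
    realized by the most sequences (largest multinomial coefficient),
    and return it with its empirical entropy."""
    best, best_count = None, 0
    for n0 in range(N + 1):
        for n1 in range(N + 1 - n0):
            n2 = N - n0 - n1
            if n1 + 2 * n2 != N:        # mean constraint: total sum = N * 1
                continue
            count = comb(N, n0) * comb(N - n0, n1)   # multinomial coefficient
            if count > best_count:
                best, best_count = (n0, n1, n2), count
    H = sum(-n / N * log(n / N) for n in best if n > 0)
    return best, H

best, H = concentration_demo()   # dominant vector is the uniform (20, 20, 20)
```

The dominant vector's entropy equals log 3, the maximum-entropy value, and its dominance over competing vectors grows exponentially with N, which is the content of the concentration theorem.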


Author(s):  
Kai Yao ◽  
Jinwu Gao ◽  
Wei Dai

Entropy is a measure of the uncertainty associated with a variable whose value cannot be exactly predicted. In uncertainty theory, it has so far been quantified by logarithmic entropy. However, logarithmic entropy sometimes fails to measure the uncertainty. This paper proposes another type of entropy, named sine entropy, as a supplement, and explores its properties. After that, the maximum entropy principle is introduced, and arc-cosine distributed variables are proved to have the maximum sine entropy given expected value and variance.
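The abstract does not give the formula, so treat the following as an assumption: taking sine entropy to be S[ξ] = ∫ sin(π Φ(x)) dx for an uncertainty distribution Φ, as is common in uncertainty theory, a quick numerical check for a linear uncertain variable L(a, b):

```python
import math

def sine_entropy_linear(a, b, n=100_000):
    """Midpoint-rule estimate of the sine entropy of a linear uncertain
    variable L(a, b), assuming S = integral of sin(pi * Phi(x)) dx with
    Phi(x) = (x - a) / (b - a) on [a, b]."""
    dx = (b - a) / n
    # Phi at the midpoint of subinterval i is (i + 0.5) / n
    return sum(math.sin(math.pi * (i + 0.5) / n) for i in range(n)) * dx

S = sine_entropy_linear(0.0, 1.0)   # closed form: 2 / pi ≈ 0.6366
```

For comparison, the logarithmic entropy of L(0, 1) is 1/2, so the two measures assign different but finite values to the same variable.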

