(Generalized) Maximum Cumulative Direct, Residual, and Paired Φ Entropy Approach

Entropy, 2020, Vol. 22(1), p. 91
Author(s): Ingo Klein, Monika Doll

A distribution that maximizes an entropy can be found by applying two different principles. On the one hand, Jaynes (1957a,b) formulated the maximum entropy principle (MaxEnt) as the search for a distribution maximizing a given entropy under some given constraints. On the other hand, Kapur (1994) and Kesavan and Kapur (1989) introduced the generalized maximum entropy principle (GMaxEnt) as the derivation of an entropy for which a given distribution has the maximum entropy property under some given constraints. In this paper, both principles were considered for cumulative entropies. Such entropies depend either on the distribution function (direct), on the survival function (residual), or on both (paired). We incorporate cumulative direct, residual, and paired entropies in one approach called cumulative Φ entropies. Maximizing this entropy without any constraints produces an extremely U-shaped (=bipolar) distribution. Maximizing the cumulative entropy under the constraints of fixed mean and variance transforms a distribution in the direction of a bipolar distribution, as far as the constraints allow. A bipolar distribution represents so-called contradictory information, which is in contrast to minimum or no information. In the literature, to date, only a few maximum entropy distributions for cumulative entropies have been derived. In this paper, we extended the results to well-known flexible distributions (like the generalized logistic distribution) and derived some special distributions (like the skewed logistic, the skewed Tukey λ, and the extended Burr XII distribution). The generalized maximum entropy principle was applied to the generalized Tukey λ distribution and the Fechner family of skewed distributions. Finally, cumulative entropies were estimated under the assumption that the data were drawn from a maximum entropy distribution. This estimator was applied to daily S&P 500 returns and to time durations between mine explosions.
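As a hedged illustration of the estimation step mentioned at the end of the abstract, the sketch below computes empirical cumulative direct, residual, and paired entropies for the Shannon-type generator φ(u) = −u log u; the function names and the simulated returns standing in for the S&P 500 data are assumptions made purely for illustration.

```python
import numpy as np

def cumulative_entropies(x, eps=1e-12):
    """Empirical cumulative direct, residual, and paired entropies.

    Uses the Shannon-type generator phi(u) = -u*log(u); the paper's
    cumulative Phi entropies admit other generators as well.
    """
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    F = np.arange(1, n) / n              # empirical CDF on [x_i, x_{i+1})
    S = 1.0 - F                          # empirical survival function
    dx = np.diff(x)                      # lengths of the order-statistic gaps
    direct = -np.sum(F * np.log(F + eps) * dx)    # approximates -∫ F log F dx
    residual = -np.sum(S * np.log(S + eps) * dx)  # approximates -∫ S log S dx
    return direct, residual, direct + residual    # paired = direct + residual

# Illustrative call with simulated heavy-tailed "returns" (not the real data)
rng = np.random.default_rng(0)
sample = 0.01 * rng.standard_t(df=4, size=1000)
print(cumulative_entropies(sample))
```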


1989, Vol. 19(5), pp. 1042-1052
Author(s): H.K. Kesavan, J.N. Kapur


2020, Vol. 10(1)
Author(s): Elena Agliari, Francesco Alemanno, Adriano Barra, Orazio Antonio Barra, Alberto Fachechi, ...

In this work we apply statistical mechanics tools to infer cardiac pathologies over a sample of M patients whose heart rate variability has been recorded via a 24 h Holter device and who are divided into different classes according to their clinical status (providing a repository of labelled data). Considering the set of inter-beat interval sequences $$\{\mathbf{r}(i)\} = \{r_1(i), r_2(i), \ldots\}$$, with $$i=1,\ldots,M$$, we estimate their probability distribution $$P(\mathbf{r})$$ by exploiting the maximum entropy principle. By setting constraints on the first and on the second moment we obtain an effective pairwise $$(r_n, r_m)$$ model, whose parameters are shown to depend on the clinical status of the patient. In order to check this framework, we generate synthetic data from our model and show that their distribution is in excellent agreement with the one obtained from experimental data. Further, our model can be related to a one-dimensional spin-glass with quenched long-range couplings decaying with the spin–spin distance as a power-law. This allows us to speculate that the 1/f noise typical of heart-rate variability may stem from the interplay between the parasympathetic and orthosympathetic systems.
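A minimal sketch of the inference step described above, relying on the standard fact that the maximum entropy distribution constrained by first and second moments is a multivariate Gaussian, whose pairwise couplings correspond to the precision (inverse covariance) matrix; the variable names and the random data standing in for the Holter recordings are assumptions.

```python
import numpy as np

# Stand-in for the inter-beat interval sequences r(i), i = 1, ..., M
# (each row is one patient's sequence, truncated to a common length N).
rng = np.random.default_rng(1)
M, N = 50, 200
R = 0.8 + 0.05 * rng.standard_normal((M, N))

# Maximum entropy fit with first- and second-moment constraints:
# the resulting P(r) is a multivariate Gaussian with these parameters.
mu = R.mean(axis=0)                       # first moments <r_n>
C = np.cov(R, rowvar=False)               # second moments (covariance)
J = np.linalg.inv(C + 1e-6 * np.eye(N))   # pairwise couplings ~ precision matrix

# Generate synthetic sequences from the fitted model and compare moments.
synthetic = rng.multivariate_normal(mu, C, size=M)
print(np.allclose(synthetic.mean(axis=0), mu, atol=0.05))
```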



Author(s): Xiang Li, Baoding Liu

The concept of fuzzy entropy provides a quantitative measure of the uncertainty associated with a fuzzy variable. This paper proposes the maximum entropy principle for fuzzy variables: out of all membership functions satisfying the given constraints, choose the one with maximum entropy. The open question is the specific form of this maximum entropy membership function. The purpose of this paper is to solve the problem by means of the Euler–Lagrange equation.
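For orientation, the classical (probabilistic) analogue of this variational argument runs as follows; the paper's fuzzy-variable setting replaces the density by a membership function and Shannon's integrand by the authors' fuzzy entropy, so the display below is only a hedged sketch of the Euler–Lagrange technique, not the paper's own derivation. One maximizes
$$ -\int f(x)\ln f(x)\,dx \quad \text{subject to} \quad \int f(x)\,dx = 1, \qquad \int g_k(x)\,f(x)\,dx = m_k, $$
and the stationarity (Euler–Lagrange) condition for the Lagrangian $$\mathcal{L}(f) = -f\ln f + \lambda_0 f + \textstyle\sum_k \lambda_k g_k f$$ reads $$-\ln f(x) - 1 + \lambda_0 + \sum_k \lambda_k g_k(x) = 0,$$ so that $$f(x) \propto \exp\Big(\sum_k \lambda_k g_k(x)\Big),$$ with the multipliers fixed by the constraints.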



2011
Author(s): Bahruz Gadjiev, Ali Mohammad-Djafari, Jean-François Bercher, Pierre Bessiére


Entropy, 2021, Vol. 23(7), p. 911
Author(s): Steeve Zozor, Jean-François Bercher

In this paper, we focus on extended informational measures based on a convex function ϕ: entropies, extended Fisher information, and generalized moments. Both the generalization of the Fisher information and the moments rely on the definition of an escort distribution linked to the (entropic) functional ϕ. We revisit the usual maximum entropy principle, more precisely its inverse problem in which one starts from the distribution and the constraints, which leads to the introduction of state-dependent ϕ-entropies. We then examine interrelations between the extended informational measures and generalize relationships such as the Cramér–Rao inequality and the de Bruijn identity in this broader context. In this framework, the maximum entropy distributions play a central role. Of course, all the results derived in the paper include the usual ones as special cases.
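As a hedged illustration of how a convex ϕ generates a family of entropies, the snippet below evaluates a discrete ϕ-entropy of the form $$H_\phi[p] = -\sum_i \phi(p_i)$$ (one common sign convention for convex ϕ) for the Shannon choice ϕ(u) = u ln u and for a Tsallis-type choice; the discrete setting and the function names are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

def phi_entropy(p, phi):
    """Discrete phi-entropy H_phi[p] = -sum_i phi(p_i) for a convex phi."""
    p = np.asarray(p, dtype=float)
    return -np.sum(phi(p))

# Shannon case: phi(u) = u ln u (with the convention 0 ln 0 = 0)
shannon = lambda u: np.where(u > 0, u * np.log(u), 0.0)

# Tsallis-type case: phi(u) = (u**q - u) / (q - 1), here with q = 2
q = 2.0
tsallis = lambda u: (u**q - u) / (q - 1)

p = np.array([0.5, 0.3, 0.2])
print(phi_entropy(p, shannon))   # Shannon entropy in nats
print(phi_entropy(p, tsallis))   # Tsallis entropy of order q = 2
```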





1990, Vol. 27(2), pp. 303-313
Author(s): Claudine Robert

The maximum entropy principle is used to model uncertainty by a maximum entropy distribution, subject to some appropriate linear constraints. We give an entropy concentration theorem (whose proof is based on large deviation techniques) that provides a mathematical justification of this statistical modelling principle. We then indicate how it can be used in artificial intelligence, and how relevant prior knowledge is provided by some classical descriptive statistical methods. Furthermore, the maximum entropy principle establishes a natural link between descriptive methods and certain statistical structures.
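A minimal numerical sketch of this modelling principle, using the classical dice-style example: find the maximum entropy distribution on a finite support subject to a linear (mean) constraint. The specific support, target mean, and use of scipy's root finder are assumptions made for illustration.

```python
import numpy as np
from scipy.optimize import brentq

# Maximum entropy distribution on {1, ..., 6} with a prescribed mean,
# under the linear constraints sum_i p_i = 1 and sum_i i * p_i = target.
x = np.arange(1, 7)
target_mean = 4.5

def mean_gap(lam):
    w = np.exp(lam * x)          # MaxEnt form: p_i proportional to exp(lambda * x_i)
    p = w / w.sum()
    return p.dot(x) - target_mean

lam = brentq(mean_gap, -10, 10)  # solve for the Lagrange multiplier
p = np.exp(lam * x); p /= p.sum()
print(p, p.dot(x))               # probabilities tilt toward the larger faces
```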


