principle of maximum entropy
Recently Published Documents


TOTAL DOCUMENTS

174
(FIVE YEARS 35)

H-INDEX

23
(FIVE YEARS 4)

2022 ◽  
Author(s):  
Alireza Beygi ◽  
Haralampos Hatzikirou

By applying the principle of maximum entropy, we demonstrate the universality of the spatial distributions of the cone photoreceptors in the retinas of vertebrates. We obtain Lemaître's law as a special case of our formalism.


Entropy ◽  
2021 ◽  
Vol 23 (12) ◽  
pp. 1615
Author(s):  
Pedro Henrique Lima Alencar ◽  
Eva Nora Paton ◽  
José Carlos de Araújo

Many regions around the globe are subject to precipitation-data scarcity, which often limits the capacity of hydrological modeling. Entropy theory and the principle of maximum entropy can help hydrologists extract useful information from the scarce data available. In this work, we propose a new method to assess sub-daily precipitation features, such as duration and intensity, from daily precipitation using the principle of maximum entropy. Particularly in arid and semiarid regions, such sub-daily features are of central importance for modeling sediment transport and deposition. The obtained features were used as input to the SYPoME model (sediment yield using the principle of maximum entropy). The combined method was implemented in seven catchments in Northeast Brazil, with drainage areas ranging from 10⁻³ to 10² km², to assess sediment yield and delivery ratio. The results show significant improvement over conventional deterministic modeling, with a Nash–Sutcliffe efficiency (NSE) of 0.96 and an absolute error of 21% for our method, against an NSE of −4.49 and an absolute error of 105% for the deterministic approach.
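The maximum-entropy step this kind of method rests on can be sketched numerically: given only a mean constraint on a bounded support, the maximum-entropy distribution is a truncated exponential, and the Lagrange multiplier is found by root-finding. This is a generic illustration with made-up numbers, not the SYPoME implementation:

```python
import numpy as np
from scipy.optimize import brentq

# Support: discrete bins of a precipitation intensity variable (mm/h, made up)
x = np.linspace(1.0, 50.0, 200)
target_mean = 12.0  # hypothetical observed mean intensity

# With only a mean constraint, the maximum-entropy distribution on this
# support is p_i ∝ exp(-lam * x_i); solve for the Lagrange multiplier lam
# that reproduces the target mean.
def mean_given_lambda(lam):
    w = np.exp(-lam * x)
    p = w / w.sum()
    return p @ x

lam = brentq(lambda l: mean_given_lambda(l) - target_mean, -1.0, 1.0)
p = np.exp(-lam * x)
p /= p.sum()

assert abs(p @ x - target_mean) < 1e-6  # constraint is satisfied
```

The same machinery generalizes to several constraints (e.g. mean and variance), with one multiplier per constraint solved for simultaneously.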


2021 ◽  
Vol 3 (1) ◽  
pp. 2
Author(s):  
Marnix Van Soom ◽  
Bart de Boer

We derive a weakly informative prior for a set of ordered resonance frequencies from Jaynes’ principle of maximum entropy. The prior facilitates model selection problems in which both the number and the values of the resonance frequencies are unknown. It encodes a weak inductive bias, provides a reasonable density everywhere, is easily parametrizable, and is easy to sample. We hope that this prior can enable the use of robust evidence-based methods for a new class of problems, even in the presence of multiplets of arbitrary order.
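As a minimal illustration of sampling ordered quantities (not the paper's specific maximum-entropy prior), one can draw i.i.d. values from a base density and sort them; the sorted vector is then a draw from the corresponding order-statistic density. The uniform base density and the frequency band below are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw i.i.d. frequencies from a base density on a band and sort them,
# which enforces the ordering f_1 < f_2 < ... < f_n by construction.
def sample_ordered_frequencies(n, low=100.0, high=5000.0, size=1000):
    draws = rng.uniform(low, high, size=(size, n))  # base density: uniform (an assumption)
    return np.sort(draws, axis=1)

F = sample_ordered_frequencies(3)
assert np.all(np.diff(F, axis=1) >= 0)  # ordering holds in every sample
```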


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1356
Author(s):  
Demetris Koutsoyiannis ◽  
G.-Fivos Sargentis

While entropy was introduced in the second half of the 19th century into the international vocabulary as a scientific term, in the 20th century it became common in colloquial use. Popular imagination has loaded “entropy” with almost every negative quality in the universe, in life and in society, with a dominant meaning of disorder and disorganization. Exploring the history of the term and many different approaches to it, we show that entropy has a universal stochastic definition, which is not disorder. Hence, we contend that entropy should be used as a mathematical (stochastic) concept as rigorously as possible, free of metaphoric meanings. The accompanying principle of maximum entropy, which lies behind the Second Law, gives explanatory and inferential power to the concept, and promotes entropy as the mother of creativity and evolution. As the social sciences are often contaminated by subjectivity and ideological influences, we try to explore whether maximum entropy, applied to the distribution of a wealth-related variable, namely annual income, can give an objective description. Using publicly available income data, we show that income distribution is consistent with the principle of maximum entropy. The increase in entropy is associated with increases in society’s wealth, yet a standardized form of entropy can be used to quantify inequality. Historically, technology has played a major role in the development of, and increase in, the entropy of income. Such findings are contrary to the theory of ecological economics and other theories that use the term entropy from a Malthusian perspective.
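The standardized-entropy idea can be sketched as follows: bin an income sample, compute the Shannon entropy of the bin shares, and scale by its maximum value, the log of the number of bins. This is a schematic indicator with synthetic data; the paper's exact standardization may differ:

```python
import numpy as np

def normalized_entropy(income, bins=20):
    """Entropy of the binned income distribution, scaled to [0, 1].
    A value near 1 means incomes are spread evenly over the bins; lower
    values indicate concentration, a simple inequality indicator."""
    counts, _ = np.histogram(income, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]  # 0 * log(0) contributes nothing
    return -(p * np.log(p)).sum() / np.log(bins)

rng = np.random.default_rng(1)
equal = rng.uniform(20_000, 80_000, 10_000)   # fairly even synthetic incomes
unequal = rng.lognormal(10, 1.2, 10_000)      # heavy-tailed synthetic incomes
assert normalized_entropy(equal) > normalized_entropy(unequal)
```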


Philosophies ◽  
2021 ◽  
Vol 6 (3) ◽  
pp. 57
Author(s):  
Antony Lesage ◽  
Jean-Marc Victor

Is it possible to measure the dispersion of ex ante chances (i.e., chances “before the event”) among people, be it in gambling, health, or social opportunities? We explore this question and provide some tools, including a statistical test, to evidence the actual dispersion of ex ante chances in various areas, with a focus on chronic diseases. Using the principle of maximum entropy, we derive the distribution of the risk of becoming ill in the global population as well as in the population of affected people. We find that affected people are either at very low risk, like the overwhelming majority of the population, and were simply unlucky to become ill, or at extremely high risk and were bound to become ill.


2021 ◽  
Vol 5 (2) ◽  
pp. 26
Author(s):  
Vicente José Bevia ◽  
Clara Burgos Simón ◽  
Juan Carlos Cortés ◽  
Rafael J. Villanueva Micó

The Baranyi–Roberts model describes the dynamics of the volumetric densities of two interacting cell populations. We randomize this model by considering that the initial conditions are random variables whose distributions are determined from sample data using the principle of maximum entropy. Subsequently, we obtain the Liouville–Gibbs partial differential equation for the probability density function of the two-dimensional solution stochastic process. Because the exact solution of this equation is computationally unaffordable, we use a finite volume scheme to numerically approximate the aforementioned probability density function. From this key information, we design an optimization procedure to determine the best growth rates of the Baranyi–Roberts model, so that the expectation of the numerical solution is as close as possible to the sample data. The results show a good fit that allows for reliable predictions.
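A Monte Carlo sketch of the same idea propagates random initial conditions through the dynamics and estimates the expectation from the ensemble, instead of solving the Liouville-Gibbs equation by finite volumes. A single-population logistic model stands in here for the full Baranyi-Roberts system, and all parameter values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

rng = np.random.default_rng(2)

# Logistic growth as a stand-in for the Baranyi-Roberts dynamics
# (an assumption for brevity; parameters r and ymax are illustrative).
def growth(t, y, r=0.8, ymax=9.0):
    return r * y * (1.0 - y / ymax)

t_eval = np.linspace(0.0, 10.0, 50)
y0_samples = rng.normal(loc=0.1, scale=0.02, size=500)  # random initial condition

# Propagate each sampled initial condition and stack the trajectories.
paths = np.array([
    solve_ivp(growth, (0.0, 10.0), [max(y0, 1e-6)], t_eval=t_eval).y[0]
    for y0 in y0_samples
])
mean_path = paths.mean(axis=0)  # ensemble estimate of E[y(t)]
assert mean_path[-1] > mean_path[0]  # the expected density grows toward ymax
```

The finite-volume Liouville approach of the paper yields the full density, not just moments; the Monte Carlo version trades that completeness for simplicity.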


2021 ◽  
Author(s):  
Amit Singh ◽  
Sagar Chavan

The kappa distribution is versatile and yields nine different distributions depending on its parameter values. The study presents an entropy-based method for estimating the parameters of the four-parameter kappa distribution. At-site data of the annual maximum flood at 30 sites of the Krishna river basin are used for the study. The parameters estimated using the principle of maximum entropy (POME), the method of moments (MOM), L-moments, and the method of maximum likelihood (MLE) are compared using the Kolmogorov–Smirnov (K-S) test. The overall performance of POME, MLE, and L-moments is found to be comparable, whereas MOM shows the highest bias; both the entropy method and the L-moment method allow the four-parameter kappa distribution to fit the data well, and combining the two can further improve the parameter estimation of the four-parameter kappa distribution.
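The four-parameter kappa distribution is easiest to handle through its quantile function (Hosking's form, valid here for nonzero h and kappa), which also gives inverse-transform sampling for free. The parameter values below are illustrative, not fitted to the Krishna basin data:

```python
import numpy as np

# Quantile function of the four-parameter kappa distribution (Hosking, 1994):
#   x(F) = xi + (alpha/kappa) * (1 - ((1 - F**h)/h)**kappa)
# xi: location, alpha: scale, kappa and h: shape parameters (both nonzero here).
def kappa_quantile(F, xi, alpha, kappa, h):
    return xi + (alpha / kappa) * (1.0 - ((1.0 - F**h) / h) ** kappa)

rng = np.random.default_rng(3)
u = rng.uniform(0.01, 0.99, 5000)
# Inverse-transform sampling: uniform draws mapped through the quantile
# function become kappa-distributed "flood peaks" (illustrative parameters).
x = kappa_quantile(u, xi=100.0, alpha=30.0, kappa=0.1, h=0.2)

q = kappa_quantile(np.array([0.25, 0.5, 0.75]), 100.0, 30.0, 0.1, 0.2)
assert np.all(np.diff(q) > 0)  # a quantile function must increase in F
```

Special cases follow from the shape parameters: h = 1 gives the generalized Pareto and the limit h → 0 gives the GEV, which is why one family can cover so many distributions.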


2021 ◽  
Author(s):  
Wim Verkley ◽  
Camiel Severijns

Burgers and Onsager were pioneers in using statistical mechanics in the theory of turbulent fluid motion. Their approaches were, however, rather different. Whereas Onsager stayed close to the energy-conserving Hamiltonian systems of classical mechanics, Burgers explicitly exploited the fact that turbulent motion is forced and dissipative. The basic assumption of Burgers' approach is that forcing and dissipation balance on average, an assumption that leads to interesting conclusions concerning the statistics of turbulent flow but also to a few problems. A compilation and assessment of his work can be found in [1].

We have taken up the thread of Burgers' approach and rephrased it in terms of Jaynes' principle of maximum entropy. The principle of maximum entropy yields a statistical description in terms of a probability density function that is as noncommittal as possible while reproducing any given expectation values. In the spirit of Burgers, these expectation values are the average energy as well as the averages of the first and higher-order time-derivatives of the energy (or other global quantities). In [2] the method was applied to a system devised by Lorenz. By using constraints on the average energy and its first and second order time-derivatives, a satisfying description was produced of the system's statistics, including covariances between the different variables.

Burgers' approach can also be applied to the parametrization problem, i.e., the problem of how to deal statistically with scales of motion that cannot be resolved explicitly. Quite recently we showed this for two-dimensional turbulence on a doubly periodic flow domain, a system that is relevant as a first-order approximation of large-scale balanced flow in the atmosphere and oceans. Using a spectral description of the system, it is straightforward to separate resolved from unresolved scales, and by using a high-resolution reference model it is possible to study how well a parametrization performs by implementing it in the same model at a lower resolution. Based on two studies [3, 4], we will show how well the principle of maximum entropy works in tackling the problem of unresolved turbulent scales.

[1] F.T.M. Nieuwstadt and J.A. Steketee, Eds., 1995: Selected Papers of J.M. Burgers. Kluwer Academic, 650 pp.
[2] W.T.M. Verkley and C.A. Severijns, 2014: The maximum entropy principle applied to a dynamical system proposed by Lorenz. Eur. Phys. J. B, 87:7, https://doi.org/10.1140/epjb/e2013-40681-2 (open access).
[3] W.T.M. Verkley, P.C. Kalverla and C.A. Severijns, 2016: A maximum entropy approach to the parametrization of subgrid processes in two-dimensional flow. Quarterly Journal of the Royal Meteorological Society, 142, 2273-2283, https://doi.org/10.1002/qj.2817
[4] W.T.M. Verkley, C.A. Severijns and B.A. Zwaal, 2019: A maximum entropy approach to the interaction between small and large scales in two-dimensional turbulence. Quarterly Journal of the Royal Meteorological Society, 145, 2221-2236, https://doi.org/10.1002/qj.3554


Author(s):  
Martijn Veening

The maximization of entropy S within a closed system is accepted as an inevitability (the second law of thermodynamics) on grounds of statistical inference alone. The Maximum Entropy Production Principle (MEPP) states that not only is S maximized, but $\dot{S}$ as well: a system will dissipate as fast as possible. There is still no consensus on the general validity of the MEPP, even though it shows remarkable explanatory power (both qualitatively and quantitatively) and has been empirically demonstrated in many domains. In this theoretical paper I provide a generalization of entropy gradients to show that the MEPP actually follows from the same statistical inference as the second law of thermodynamics. For this generalization I only use the concepts of super-statespaces and microstate density. These concepts also allow for the abstraction of 'self-organizing criticality' to a bifurcating local difference in this density, and for a generalization of the fundamentally unresolved concepts of 'chaos' and 'order'.
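For reference, the quantities involved can be written compactly in standard notation (not necessarily the paper's):

```latex
S = -k_B \sum_i p_i \ln p_i \quad \text{(Gibbs entropy)}, \qquad
\frac{\mathrm{d}S}{\mathrm{d}t} \ge 0 \quad \text{(second law)},
```

with the MEPP asserting the stronger statement that, among admissible evolutions, the realized one maximizes $\dot{S}$.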

