Some Illustrations of Information Geometry in Biology and Physics

Author(s):  
C. T. J. Dodson

Many real processes have stochastic features which seem to be representable in some intuitive sense as 'close to Poisson', 'nearly random', 'nearly uniform' or, with binary variables, 'nearly independent'. Each of those particular reference states, defined by an equation, is unstable in the formal sense, but it is passed through or hovered about by the observed process. Information geometry gives precise meaning to nearness and neighbourhood in a state space of processes, naturally quantifying the proximity of a process to a particular state via an information-theoretic metric structure on smoothly parametrized families of probability density functions. We illustrate some aspects of the methodology through case studies: inhomogeneous statistical evolutionary rate processes for epidemics, amino acid spacings along protein chains, constrained disordering of crystals, distinguishing nearby signal distributions and testing pseudorandom number generators.
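The nearness-to-randomness idea can be made concrete with a small numerical sketch. Assuming the unit-mean gamma family parametrized by shape κ (where κ = 1 gives exponential, i.e. 'Poisson', spacings), the Fisher metric along κ is ψ′(κ) − 1/κ, and the information distance of a process from the random state is the geodesic length of that one-parameter path. This is an illustrative computation, not code taken from the paper:

```python
import numpy as np
from scipy.special import polygamma
from scipy.integrate import quad

def gamma_fisher_metric(kappa):
    # Fisher metric component along the shape direction of the
    # unit-mean gamma family: g(kappa) = psi'(kappa) - 1/kappa
    return polygamma(1, kappa) - 1.0 / kappa

def distance_to_poisson(kappa):
    # Information (geodesic) length from the exponential case kappa = 1
    # ("Poisson spacings") to a given shape kappa, integrating sqrt(g).
    lo, hi = min(1.0, kappa), max(1.0, kappa)
    val, _ = quad(lambda t: np.sqrt(gamma_fisher_metric(t)), lo, hi)
    return val

# A clustered process (kappa < 1) and a smoothed one (kappa > 1)
# both sit at a finite information distance from the random state.
print(distance_to_poisson(0.5), distance_to_poisson(2.0))
```

Distances computed this way quantify how far an observed spacing process has drifted from the unstable 'nearly random' reference state.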

2007 ◽  
Vol 64 (6) ◽  
pp. 2012-2028 ◽  
Author(s):  
A. R. Jameson

Most variables in meteorology are statistically heterogeneous. The statistics of data from several different locations can therefore be thought of as an amalgamation of information contained in several contributing probability density functions (PDFs) having different sets of parameters, different parametric forms, and different mean values. The frequency distribution of such data will then often be multimodal. Usually, however, in order to achieve better sampling, measurements of these variables over an entire set of data gathered at widely disparate locations are processed as though the data were statistically homogeneous, that is, as though they were fully characterized by just one PDF and one single set of parameters having one mean value. Is there, instead, a better way of treating the data that is consistent with this statistical heterogeneity? This question is addressed here using a statistical inversion technique developed by Tarantola based upon Bayesian methodology. Two examples of disdrometer measurements in real rain, one 16 h and the other 3 min long, reveal the presence of multiple mean values of the counts at all the different drop sizes. In both cases the heterogeneous rain can be decomposed into five to seven statistically homogeneous components, each characterized by its own steady drop size distribution. Concepts such as stratiform versus convective rain can be given more precise meaning in terms of the contributions each component makes to the rain. Furthermore, this discovery permits the explicit inclusion of statistical heterogeneity into some analytic theories.
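The decomposition of heterogeneous data into statistically homogeneous components can be illustrated with a toy example. The sketch below is not Tarantola's Bayesian inversion; it fits a simple two-component Gaussian mixture by EM to data pooled from two hypothetical 'locations', recovering one mean value per homogeneous component:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "heterogeneous" sample: two homogeneous sub-populations
# pooled together, mimicking data gathered at different locations.
data = np.concatenate([rng.normal(1.0, 0.3, 500),
                       rng.normal(4.0, 0.5, 500)])

def em_two_gaussians(x, iters=200):
    # Plain EM for a two-component 1-D Gaussian mixture (illustrative
    # only; the paper uses a Bayesian statistical inversion instead).
    mu = np.array([x.min(), x.max()])
    sig = np.array([x.std(), x.std()])
    w = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        pdf = w * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) \
            / (sig * np.sqrt(2 * np.pi))
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, standard deviations
        w = r.mean(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0))
    return w, mu, sig

w, mu, sig = em_two_gaussians(data)
print(mu)  # component means near the two sub-population means
```

Treating the pooled sample as a single homogeneous PDF would instead report one mean value near 2.5, characterizing neither sub-population.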


Entropy ◽  
2018 ◽  
Vol 20 (8) ◽  
pp. 574 ◽  
Author(s):  
Eun-jin Kim

Stochastic processes are ubiquitous in nature and laboratories, and play a major role across traditional disciplinary boundaries. These stochastic processes are described by different variables and are thus very system-specific. In order to elucidate the underlying principles governing different phenomena, it is extremely valuable to utilise a mathematical tool that is not specific to a particular system. We provide such a tool based on information geometry by quantifying the similarity and disparity between Probability Density Functions (PDFs) with a metric such that the distance between two PDFs increases with the disparity between them. Specifically, we invoke the information length L(t) to quantify the information change associated with a time-dependent PDF. L(t) is uniquely defined as a function of time for a given initial condition. We demonstrate the utility of L(t) in understanding information change and attractor structure in classical and quantum systems.
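A minimal numerical sketch of the information length, for a hypothetical Gaussian PDF whose mean relaxes as μ(t) = e^(−t) at fixed width σ; for this family L(t) = (1 − e^(−t))/σ analytically, which the discretized computation should reproduce:

```python
import numpy as np

def information_length(p, x, t):
    # Numerical information length:
    #   L(t) = \int_0^t sqrt( \int (1/p)(dp/dt')^2 dx ) dt'
    # p has shape (len(t), len(x)) and holds p(x, t) on a uniform grid.
    dx = x[1] - x[0]
    dpdt = np.gradient(p, t, axis=0)
    energy = (dpdt**2 / p).sum(axis=1) * dx      # squared information velocity
    return np.concatenate([[0.0], np.cumsum(np.sqrt(energy[:-1]) * np.diff(t))])

# Hypothetical example: Gaussian with mean mu(t) = exp(-t), fixed sigma.
sigma = 0.5
t = np.linspace(0.0, 5.0, 2001)
x = np.linspace(-4.0, 4.0, 801)
mu = np.exp(-t)
p = np.exp(-(x - mu[:, None])**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

L = information_length(p, x, t)
print(L[-1])  # close to the analytic value (1 - exp(-5)) / 0.5
```

L(t) grows monotonically and saturates as the PDF settles onto its attractor, which is the behaviour exploited in the paper's classical and quantum examples.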


2021 ◽  
Vol 13 (12) ◽  
pp. 2307
Author(s):  
J. Javier Gorgoso-Varela ◽  
Rafael Alonso Ponce ◽  
Francisco Rodríguez-Puerta

The diameter distributions of trees in 50 temporary sample plots (TSPs) established in Pinus halepensis Mill. stands were recovered from LiDAR metrics by using six probability density functions (PDFs): the Weibull (2P and 3P), Johnson's SB, beta, generalized beta and gamma-2P functions. The parameters were recovered from the first and the second moments of the distributions (mean and variance, respectively) by using parameter recovery models (PRM). Linear models were used to predict both moments from LiDAR data. In recovering the functions, the location parameters of the distributions were predetermined as the minimum diameter inventoried, and scale parameters were established as the maximum diameters predicted from LiDAR metrics. The Kolmogorov–Smirnov (KS) statistic (Dn), number of acceptances by the KS test, the Cramér–von Mises (W2) statistic, bias and mean square error (MSE) were used to evaluate the goodness of fits. The fits for the six recovered functions were compared with the fits to all measured data from 58 TSPs (LiDAR metrics could only be extracted from 50 of the plots). In the fitting phase, the location parameters were fixed at a suitable value determined according to the forestry literature (0.75·dmin). The linear models used to recover the two moments of the distributions and the maximum diameters determined from LiDAR data were accurate, with R2 values of 0.750, 0.724 and 0.873 for dg, dmed and dmax, respectively. Reasonable results were obtained with all six recovered functions. The goodness-of-fit statistics indicated that the beta function was the most accurate, followed by the generalized beta function. The Weibull-3P function provided the poorest fits, and the Weibull-2P and Johnson's SB functions also yielded poor fits to the data.
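The moment-based parameter recovery step can be sketched for the Weibull-2P case: the shape parameter is solved from the squared coefficient of variation, and the scale from the mean. This is an illustrative method-of-moments version with hypothetical data; in the paper the two moments are themselves predicted from LiDAR metrics by linear models:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as G
from scipy.optimize import brentq

def recover_weibull_2p(mean, var):
    # Method-of-moments parameter recovery for a Weibull-2P distribution:
    # solve the shape k from the squared coefficient of variation
    # var/mean^2 = Gamma(1 + 2/k)/Gamma(1 + 1/k)^2 - 1, then the scale.
    cv2 = var / mean**2
    f = lambda k: G(1 + 2 / k) / G(1 + 1 / k) ** 2 - 1 - cv2
    k = brentq(f, 0.2, 50.0)
    lam = mean / G(1 + 1 / k)
    return k, lam

# Hypothetical "diameter" sample; recover parameters from its two
# moments, then check the fit with a Kolmogorov-Smirnov test.
rng = np.random.default_rng(1)
d = stats.weibull_min.rvs(2.5, scale=15.0, size=500, random_state=rng)
k, lam = recover_weibull_2p(d.mean(), d.var())
print(k, lam, stats.kstest(d, 'weibull_min', args=(k, 0, lam)).pvalue)
```

The same recovery-then-test pattern applies to the other five PDFs, each with its own moment equations.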


2021 ◽  
Vol 502 (2) ◽  
pp. 1768-1784
Author(s):  
Yue Hu ◽  
A Lazarian

The velocity gradients technique (VGT) and the probability density functions (PDFs) of mass density are tools to study turbulence, magnetic fields, and self-gravity in molecular clouds. However, self-absorption can make the observed intensity significantly different from the underlying column density structures. In this work, we study the effects of self-absorption on the VGT and the intensity PDFs utilizing three synthetic emission lines of the CO isotopologues 12CO (1–0), 13CO (1–0), and C18O (1–0). We confirm that the performance of the VGT is insensitive to the radiative transfer effect. We numerically demonstrate the possibility of constructing 3D magnetic field tomography through the VGT. We find that the intensity PDFs change their shape from purely lognormal to a distribution that exhibits a power-law tail, depending on the optical depth, for supersonic turbulence. We conclude that the change in the CO isotopologues' intensity PDFs can be independent of self-gravity, which makes the intensity PDFs less reliable in identifying gravitationally collapsing regions. We compute the intensity PDFs for the star-forming region NGC 1333 and find that the change in intensity PDFs in observation agrees with our numerical results. The synergy of the VGT and the column density PDFs confirms that self-gravitating gas occupies a large volume in NGC 1333.
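The optical-depth effect on intensity PDFs can be caricatured with a toy saturation model, not the synthetic CO radiative transfer of the paper: intensities I = 1 − e^(−τ₀N) built from a lognormal column density N lose their lognormal shape, and their relative spread, as the optical-depth scale τ₀ grows:

```python
import numpy as np

rng = np.random.default_rng(2)
# Lognormal "column density" field, as expected for supersonic turbulence.
N = rng.lognormal(mean=0.0, sigma=0.8, size=100_000)

def observed_intensity(N, tau0):
    # Toy self-absorption: the emitted intensity saturates as the optical
    # depth tau = tau0 * N grows, distorting the observed intensity PDF
    # away from the underlying column density statistics.
    return 1.0 - np.exp(-tau0 * N)

for tau0 in (0.1, 1.0, 10.0):
    I = observed_intensity(N, tau0)
    print(tau0, I.std() / I.mean())  # relative spread shrinks as tau0 grows
```

This is why an intensity PDF shaped by optical depth alone can mimic, or mask, the signatures usually attributed to self-gravity.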


2020 ◽  
Vol 8 (1) ◽  
pp. 45-69
Author(s):  
Eckhard Liebscher ◽  
Wolf-Dieter Richter

We prove and describe in great detail a general method for constructing a wide range of multivariate probability density functions, and we introduce probabilistic models for a large variety of clouds of multivariate data points. In the present paper, the focus is on star-shaped distributions of arbitrary dimension, where, in the case of spherical distributions, dependence is modeled by a non-Gaussian density generating function.
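A minimal sketch of the density-generating-function idea in the spherical special case: a distribution with density p(x) ∝ g(‖x‖) can be sampled by drawing the radius from r^(d−1)·g(r) and an independent uniform direction. The generator g(r) = e^(−r) below is a hypothetical non-Gaussian choice, giving heavier tails than the Gaussian generator e^(−r²/2):

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_spherical(n, dim, g, r_max=10.0, grid=10_000):
    # Spherical distribution with density generating function g:
    # p(x) proportional to g(||x||). Sample the radius from the density
    # r**(dim-1) * g(r) by inverse transform on a grid, then multiply
    # by a uniformly distributed unit direction.
    r = np.linspace(1e-6, r_max, grid)
    w = r ** (dim - 1) * g(r)
    cdf = np.cumsum(w)
    cdf /= cdf[-1]
    radii = np.interp(rng.random(n), cdf, r)
    dirs = rng.normal(size=(n, dim))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return radii[:, None] * dirs

# 2-D cloud with the non-Gaussian Laplace-type generator g(r) = exp(-r).
X = sample_spherical(5000, 2, lambda r: np.exp(-r))
```

Star-shaped distributions generalize this construction by replacing the Euclidean norm with a star-body functional, so the same radius-times-direction sampling pattern carries over.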

