A Note on the Residual Entropy Function

1993 · Vol 7 (3) · pp. 413-420
Author(s): Pietro Muliere, Giovanni Parmigiani, Nicholas G. Polson

Interest in the informational content of truncation motivates the study of the residual entropy function, that is, the entropy of a right-truncated random variable as a function of the truncation point. In this note we show that, under mild regularity conditions, the residual entropy function characterizes the probability distribution. We also derive relationships among residual entropy, monotonicity of the failure rate, and stochastic dominance. Information-theoretic measures of distances between distributions are also revisited from a similar perspective. In particular, we study the residual divergence between two positive random variables and investigate some of its monotonicity properties. The results are relevant to information theory, reliability theory, search problems, and experimental design.
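To make the object being characterized concrete, here is a minimal Python sketch that evaluates a residual entropy function numerically. It uses the conditional-density convention (entropy of X given X > t); that convention and the exponential example are our assumptions, not taken from the paper.

```python
import numpy as np
from scipy import integrate, stats

def residual_entropy(pdf, sf, t):
    """Differential entropy of X | X > t for a positive random variable.

    pdf: density f(x); sf: survival function S(t) = P(X > t).
    One common convention for the residual entropy function; the
    paper's exact definition may differ in details.
    """
    s = sf(t)
    def integrand(x):
        fx = pdf(x)
        return -(fx / s) * np.log(fx / s) if fx > 0 else 0.0
    val, _ = integrate.quad(integrand, t, np.inf)
    return val

# Sanity check (our example): for an exponential distribution, memorylessness
# makes the residual entropy constant in t, namely 1 - log(rate).
rate = 2.0
X = stats.expon(scale=1.0 / rate)
for t in (0.0, 0.5, 2.0):
    print(t, residual_entropy(X.pdf, X.sf, t))   # ≈ 0.3069 each time
```

The constancy in t for the exponential case illustrates the characterization theme: a residual entropy function that is not constant already rules out the exponential family.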

Author(s): M. Vidyasagar

This chapter provides an introduction to some elementary aspects of information theory, including entropy in its various forms. Entropy refers to the level of uncertainty associated with a random variable (or more precisely, the probability distribution of the random variable). When there are two or more random variables, it is worthwhile to study the conditional entropy of one random variable with respect to another. The last concept is relative entropy, also known as the Kullback–Leibler divergence, which measures the “disparity” between two probability distributions. The chapter first considers convex and concave functions before discussing the properties of the entropy function, conditional entropy, uniqueness of the entropy function, and the Kullback–Leibler divergence.
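Since the chapter's three central quantities are all finite sums over probability vectors, a short sketch makes them concrete. The toy joint distribution below is our own example, not from the chapter.

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(p) in bits; 0 log 0 is taken as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

def conditional_entropy(joint):
    """H(Y | X) from a joint table with rows indexed by X, columns by Y."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1)
    return sum(px[i] * entropy(joint[i] / px[i])
               for i in range(len(px)) if px[i] > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits; needs q > 0 wherever p > 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / q[nz]))

joint = np.array([[0.25, 0.25],   # toy joint distribution of (X, Y)
                  [0.40, 0.10]])
print(entropy(joint.sum(axis=0)))        # H(Y)
print(conditional_entropy(joint))        # H(Y | X) <= H(Y), as conditioning cannot increase uncertainty
print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # "disparity" between two coins
```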


2017 · Vol 28 (7) · pp. 954-966
Author(s): Colin Bannard, Marla Rosner, Danielle Matthews

Of all the things a person could say in a given situation, what determines what is worth saying? Greenfield’s principle of informativeness states that right from the onset of language, humans selectively comment on whatever they find unexpected. In this article, we quantify this tendency using information-theoretic measures and report on a study in which we tested the counterintuitive prediction that children will produce words that have a low frequency given the context, because these will be most informative. Using corpora of child-directed speech, we identified adjectives that varied in how informative (i.e., unexpected) they were given the noun they modified. In an initial experiment ( N = 31) and in a replication ( N = 13), 3-year-olds heard an experimenter use these adjectives to describe pictures. The children’s task was then to describe the pictures to another person. As the information content of the experimenter’s adjective increased, so did children’s tendency to comment on the feature that adjective had encoded. Furthermore, our analyses suggest that children balance informativeness with a competing drive to ease production.
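The information content in question is surprisal, -log2 P(adjective | noun), estimated from corpus counts. A minimal sketch with hypothetical counts (the real study used corpora of child-directed speech):

```python
import math

# Hypothetical adjective-noun counts standing in for corpus data.
pair_counts = {("big", "dog"): 120, ("wet", "dog"): 6,
               ("red", "ball"): 80, ("soft", "ball"): 4}
noun_totals = {}
for (adj, noun), c in pair_counts.items():
    noun_totals[noun] = noun_totals.get(noun, 0) + c

def surprisal(adj, noun):
    """Information content -log2 P(adj | noun): the sense of 'informative' here."""
    return -math.log2(pair_counts[(adj, noun)] / noun_totals[noun])

print(surprisal("big", "dog"))  # expected adjective: low information content
print(surprisal("wet", "dog"))  # unexpected adjective: high information content
```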


2011 · Vol 61 (5) · pp. 415
Author(s): Madasu Hanmandlu, Anirban Das

Content-based image retrieval focuses on intuitive and efficient methods for retrieving images from databases based on the content of the images. A new entropy function that serves as a measure of the information content in an image, termed 'an information-theoretic measure', is devised in this paper. Among the various query paradigms, 'query by example' (QBE) is adopted to set a query image for retrieval from a large image database. Colour and texture features are extracted using the new entropy function, and the dominant colour is considered as a visual feature for a particular set of images. Colour and texture features thus constitute a two-dimensional feature vector for indexing the images; the low dimensionality of the feature vector speeds up the atomic query. Indices in a large database system help retrieve the images relevant to the query image without looking at every image in the database. The entropy values of colour and texture and the dominant colour are used to measure similarity. The utility of the proposed image retrieval system based on the information-theoretic measures is demonstrated on a benchmark dataset.

Defence Science Journal, 2011, 61(5), pp. 415-430, DOI: http://dx.doi.org/10.14429/dsj.61.1177
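A rough sketch of the indexing idea follows. The paper's own entropy function is not reproduced here; standard Shannon entropy of histograms stands in for it, and the gradient-based texture proxy is our assumption.

```python
import numpy as np

def histogram_entropy(values, bins=32):
    """Shannon entropy (bits) of a value histogram -- standard entropy used
    as a stand-in for the paper's own entropy function, which is not shown."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def feature_vector(image):
    """Two-dimensional (colour entropy, texture entropy) index, loosely
    following the paper's low-dimensional indexing idea. 'Texture' is
    approximated here by horizontal gradient magnitudes (an assumption)."""
    gray = image.mean(axis=2)
    grad = np.abs(np.diff(gray, axis=1))
    return np.array([histogram_entropy(image), histogram_entropy(grad)])

rng = np.random.default_rng(0)
query, candidate = rng.integers(0, 256, (2, 64, 64, 3))
d = np.linalg.norm(feature_vector(query) - feature_vector(candidate))
print("feature distance:", d)  # smaller distance = more similar under this index
```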


1973 · Vol 38 (2) · pp. 131-149
Author(s): John S. Justeson

Abstract: A framework is established for the application of information-theoretic concepts to the study of archaeological inference, ultimately to provide an estimate of the degree to which archaeologists, or anthropologists in general, can provide legitimate answers to the questions they investigate. Particular information-theoretic measures are applied to the design elements on the ceramics of a southwestern pueblo to show the methodological utility of information theory in helping to approach that limit.


2016 · Vol 113 (51) · pp. 14817-14822
Author(s): Masafumi Oizumi, Naotsugu Tsuchiya, Shun-ichi Amari

Assessment of causal influences is a ubiquitous and important subject across diverse research fields. Drawn from consciousness studies, integrated information is a measure that defines integration as the degree of causal influences among elements. Whereas pairwise causal influences between elements can be quantified with existing methods, quantifying multiple influences among many elements poses two major mathematical difficulties. First, overestimation occurs due to interdependence among influences if each influence is separately quantified in a part-based manner and then simply summed over. Second, it is difficult to isolate causal influences while avoiding noncausal confounding influences. To resolve these difficulties, we propose a theoretical framework based on information geometry for the quantification of multiple causal influences with a holistic approach. We derive a measure of integrated information, which is geometrically interpreted as the divergence between the actual probability distribution of a system and an approximated probability distribution where causal influences among elements are statistically disconnected. This framework provides intuitive geometric interpretations harmonizing various information theoretic measures in a unified manner, including mutual information, transfer entropy, stochastic interaction, and integrated information, each of which is characterized by how causal influences are disconnected. In addition to the mathematical assessment of consciousness, our framework should help to analyze causal relationships in complex systems in a complete and hierarchical manner.
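To make the "disconnection" picture concrete, the sketch below (our construction: a uniform input distribution and random transition probabilities) computes stochastic interaction, one of the measures the abstract lists, as the divergence between a two-element system's actual transition law and an approximation in which each element is driven only by its own past.

```python
import numpy as np

states = [(a, b) for a in (0, 1) for b in (0, 1)]
rng = np.random.default_rng(1)
T = rng.dirichlet(np.ones(4), size=4)   # T[i, j] = p(next state j | current state i)
p0 = np.full(4, 0.25)                   # uniform distribution over the current state

def marginal(element):
    """Transition law of one element given only its own past, p(x_e' | x_e)."""
    m = np.zeros((2, 2))
    for i, s in enumerate(states):
        for j, t in enumerate(states):
            m[s[element], t[element]] += p0[i] * T[i, j]
    return m / m.sum(axis=1, keepdims=True)

m1, m2 = marginal(0), marginal(1)
si = 0.0  # expected KL divergence between full and disconnected transition laws
for i, s in enumerate(states):
    for j, t in enumerate(states):
        q = m1[s[0], t[0]] * m2[s[1], t[1]]   # "disconnected" prediction
        if T[i, j] > 0:
            si += p0[i] * T[i, j] * np.log2(T[i, j] / q)
print("stochastic interaction (bits):", si)
```

The paper's integrated information uses a different, more careful disconnection (minimizing over a manifold of disconnected models); this sketch only shows the simplest member of the family.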


Author(s): Ryan Ka Yau Lai, Youngah Do

This article explores a method of creating confidence bounds for information-theoretic measures in linguistics, such as entropy, Kullback-Leibler Divergence (KLD), and mutual information. We show that a useful measure of uncertainty can be derived from simple statistical principles, namely the asymptotic distribution of the maximum likelihood estimator (MLE) and the delta method. Three case studies from phonology and corpus linguistics are used to demonstrate how to apply it and examine its robustness against common violations of its assumptions in linguistics, such as insufficient sample size and non-independence of data points.
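A minimal sketch of the approach for the entropy case, assuming the standard multinomial MLE plug-in estimator and the textbook delta-method variance (the article's own derivation may differ in details); the phoneme counts are hypothetical.

```python
import numpy as np
from scipy import stats

def entropy_ci(counts, alpha=0.05):
    """Plug-in (MLE) entropy in nats with a delta-method confidence interval.

    Asymptotic variance of the plug-in estimator:
    Var ≈ (1/n) * Var_p[-log p(X)], from the multinomial MLE plus delta method.
    """
    counts = np.asarray(counts, float)
    n = counts.sum()
    p = counts / n
    p = p[p > 0]
    h = -np.sum(p * np.log(p))
    var = (np.sum(p * np.log(p) ** 2) - h ** 2) / n
    z = stats.norm.ppf(1 - alpha / 2)
    half = z * np.sqrt(var)
    return h, (h - half, h + half)

# Toy phoneme counts (hypothetical). Small samples are exactly where the
# asymptotics are stressed -- one of the robustness questions examined.
h, (lo, hi) = entropy_ci([50, 30, 15, 5])
print(f"H = {h:.3f} nats, 95% CI = ({lo:.3f}, {hi:.3f})")
```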


2020 · Vol 92 (6) · pp. 51-58
Author(s): S.A. Solovyev

The article describes a method for reliability (probability of non-failure) analysis of structural elements based on p-boxes. An algorithm for constructing two p-boxes is shown. The first p-box is used in the absence of information about the shape of a random variable's probability distribution; the second is used when the probability distribution function is known but its parameters are given only as intervals. The reliability-analysis algorithm is illustrated with a numerical example: the reliability analysis of a flexural wooden beam by the wood-strength criterion. The result of the analysis is an interval bounding the probability of non-failure. Recommendations are given for narrowing the reliability bounds, which can reduce epistemic uncertainty. On the basis of the proposed approach, particular reliability-analysis methods can be developed for any structural element. Design equations are given for a comprehensive assessment of structural-element reliability as a system, taking into account all limit-state criteria.
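A minimal sketch of the second kind of p-box: the distribution family is fixed but its parameters are known only as intervals. All numbers below (the normal strength model, interval endpoints, and demand level) are hypothetical stand-ins, not values from the article.

```python
import numpy as np
from scipy import stats

mu_lo, mu_hi = 38.0, 42.0   # interval for mean bending strength, MPa (hypothetical)
sd_lo, sd_hi = 4.0, 6.0     # interval for its standard deviation, MPa (hypothetical)
demand = 28.0               # acting bending stress, MPa (deterministic here)

# P-box bounds on the strength CDF at the demand level: scan the parameter box.
grid = [(mu, sd) for mu in np.linspace(mu_lo, mu_hi, 21)
                 for sd in np.linspace(sd_lo, sd_hi, 21)]
cdf_vals = [stats.norm(mu, sd).cdf(demand) for mu, sd in grid]

# Failure probability P(strength <= demand) lies between these bounds, so the
# reliability (probability of non-failure) is the complementary interval.
pf_lo, pf_hi = min(cdf_vals), max(cdf_vals)
print(f"reliability interval: [{1 - pf_hi:.4f}, {1 - pf_lo:.4f}]")
```

Narrowing the parameter intervals (e.g., by additional testing) shrinks the resulting reliability interval, which is the sense in which epistemic uncertainty can be reduced.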


2021 · Vol 22 (1)
Author(s): James M. Kunert-Graf, Nikita A. Sakhanenko, David J. Galas

Abstract
Background: Permutation testing is often considered the “gold standard” for multi-test significance analysis, as it is an exact test that requires few assumptions about the distribution being computed. However, it can be computationally very expensive, particularly in its naive form, in which the full analysis pipeline is re-run after permuting the phenotype labels. This can become intractable in multi-locus genome-wide association studies (GWAS), in which the number of potential interactions to be tested is combinatorially large.
Results: In this paper, we develop an approach for permutation testing in multi-locus GWAS, specifically focusing on SNP–SNP–phenotype interactions using multivariable measures that can be computed from frequency count tables, such as those based in information theory. We find that the computational bottleneck in this process is the construction of the count tables themselves, and that this step can be eliminated at each iteration of the permutation testing by transforming the count tables directly. This leads to a speed-up by a factor of over 10³ for a typical permutation test compared to the naive approach. Additionally, the approach is insensitive to the number of samples, making it suitable for datasets with large numbers of samples.
Conclusions: The proliferation of large-scale datasets with genotype data for hundreds of thousands of individuals enables new and more powerful approaches for the detection of multi-locus genotype–phenotype interactions. Our approach significantly improves the computational tractability of permutation testing for these studies and is insensitive to the large number of samples in modern datasets. The code for performing these computations and replicating the figures in this paper is freely available at https://github.com/kunert/permute-counts.
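For orientation, here is the naive baseline the paper improves on: a permutation test for a count-table statistic (mutual information between a SNP pair and a phenotype), with toy data. The paper's speed-up transforms the count tables directly under each permutation instead of rebuilding them; that transformation is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
snp1, snp2 = rng.integers(0, 3, (2, n))   # genotypes coded 0/1/2 (toy data)
pheno = rng.integers(0, 2, n)             # binary phenotype (toy data)

def count_table(a, b, c):
    """3 x 3 x 2 frequency table built in one vectorized bincount pass."""
    code = (a * 3 + b) * 2 + c
    return np.bincount(code, minlength=18).reshape(3, 3, 2)

def mutual_info(table):
    """I(SNP pair; phenotype) in bits -- a statistic computable from the
    count table alone, the class of measures the paper targets."""
    p = table / table.sum()
    pg = p.sum(axis=2, keepdims=True)        # genotype-pair marginal
    pp = p.sum(axis=(0, 1), keepdims=True)   # phenotype marginal
    denom = pg * pp
    nz = p > 0
    return np.sum(p[nz] * np.log2(p[nz] / denom[nz]))

# Naive permutation test: rebuild the table after every label shuffle.
observed = mutual_info(count_table(snp1, snp2, pheno))
null = [mutual_info(count_table(snp1, snp2, rng.permutation(pheno)))
        for _ in range(1000)]
print("permutation p-value:", np.mean([x >= observed for x in null]))
```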


Author(s): Laurie Beth Feldman, Vidhushini Srinivasan, Rachel B. Fernandes, Samira Shaikh

Abstract: Twitter data from a crisis that impacted many English–Spanish bilinguals show that the direction of codeswitches is associated with the statistically documented tendency of individual speakers to prefer one language over the other in their tweets, as gleaned from their tweeting history. Further, lexical diversity, a measure of vocabulary richness derived from information-theoretic measures of uncertainty in communication, is greater in proximity to a codeswitch than in productions remote from a switch. The prospect of a role for lexical diversity in characterizing the conditions for a language switch suggests that communicative precision may induce conditions that attenuate constraints against language mixing.
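One simple entropy-based notion of lexical diversity, sketched below with invented tweet fragments; the study's actual operationalization of lexical diversity and its windowing around switches are not specified here.

```python
import math
from collections import Counter

def lexical_diversity(tokens):
    """Entropy (bits) of the unigram distribution -- one information-theoretic
    notion of vocabulary richness, in the spirit of the measure described."""
    counts = Counter(tokens)
    n = len(tokens)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Hypothetical fragments: a window adjacent to a codeswitch vs. a remote one.
near_switch = "quiero ver esa nueva exhibit downtown tonight amigos".split()
remote = "ok ok see you see you there there".split()
print(lexical_diversity(near_switch))  # richer vocabulary, higher entropy
print(lexical_diversity(remote))       # repetitive, lower entropy
```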


Entropy · 2021 · Vol 23 (7) · pp. 858
Author(s): Dongshan He, Qingyu Cai

In this paper, we present a derivation of the black hole area entropy based on the relationship between entropy and information. The curved space of a black hole allows objects to be imaged in the same way as a camera lens does. The maximal information that a black hole can gain is limited by both the Compton wavelength of the object and the diameter of the black hole. When an object falls into a black hole, its information disappears due to the no-hair theorem, and the entropy of the black hole increases correspondingly. The area entropy of a black hole can thus be obtained, which indicates that the Bekenstein–Hawking entropy is information entropy rather than thermodynamic entropy. The quantum corrections to black hole entropy are also obtained from the limit on the Compton wavelength of the captured particles, which makes the mass of a black hole naturally quantized. Our work provides an information-theoretic perspective for understanding the nature of black hole entropy.
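A compressed version of the dimensional bookkeeping behind the area law, as we read the abstract. This is a heuristic sketch with order-unity factors dropped; the paper's handling of the no-hair step and the exact factor of 1/4 is not reproduced.

```latex
% Heuristic sketch (order-unity factors dropped; the paper fixes them so
% that the Bekenstein-Hawking value S = A/4 emerges).
\begin{align*}
  r_s = \frac{2GM}{c^2}, \qquad A = 4\pi r_s^2
    &\quad\text{(horizon radius and area)} \\
  \lambda_C = \frac{\hbar}{m c} \sim 2 r_s
    \;\Rightarrow\; m_{\min} \sim \frac{\hbar c}{4GM}
    &\quad\text{(lightest particle the hole can register)} \\
  \Delta A = \frac{dA}{dM}\, m_{\min}
    = \frac{32\pi G^2 M}{c^4} \cdot \frac{\hbar c}{4GM}
    = 8\pi \ell_p^2
    &\quad\text{(area gained per one-bit capture, } \ell_p^2 = G\hbar/c^3\text{)} \\
  S \sim k_B \ln 2 \cdot \frac{A}{8\pi \ell_p^2}
    \;\propto\; \frac{k_B A}{\ell_p^2}
    &\quad\text{(entropy as lost information, scaling with area)}
\end{align*}
```

The discreteness of the minimum capturable mass at fixed horizon size is what makes the black hole mass spectrum naturally quantized in this picture.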

