Advances in Info-Metrics
Latest Publications


TOTAL DOCUMENTS: 19 (FIVE YEARS: 19)
H-INDEX: 0 (FIVE YEARS: 0)
Published By: Oxford University Press
ISBN: 9780190636685, 9780190636722

2020 ◽  
pp. 464-490
Author(s):  
Miquel Feixas ◽  
Mateu Sbert

Around seventy years ago, Claude Shannon, then working at Bell Laboratories, introduced information theory with the main purpose of dealing with the communication channel between source and receiver. The communication channel, or information channel as it later became known, establishes the shared information between the source or input and the receiver or output, both of which are represented by random variables, that is, by probability distributions over their possible states. The generality and flexibility of the information channel concept allow it to be applied robustly to numerous different areas of science and technology, and even the social sciences. In this chapter, we present examples of its application to selecting the best viewpoints of an object, segmenting an image, and computing the global illumination of a three-dimensional virtual scene. We hope that our examples will illustrate how practitioners of different disciplines can use it to organize and understand the interplay of information between the corresponding source and receiver.
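
As a minimal illustration of the information channel idea (not taken from the chapter), the following Python sketch computes the mutual information shared between an input and an output directly from a joint probability matrix; the matrix entries are invented for the example.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X;Y) in bits, computed from a joint probability matrix p(x, y)."""
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal of the source / input
    py = joint.sum(axis=0, keepdims=True)   # marginal of the receiver / output
    mask = joint > 0                        # skip zero-probability cells
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

# Toy channel: rows are source states, columns are receiver states (illustrative values only).
p_xy = np.array([[0.30, 0.10],
                 [0.05, 0.55]])
print(mutual_information(p_xy))  # shared information between input and output, in bits
```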


2020 ◽  
pp. 240-263
Author(s):  
Rosa Bernardini Papalia ◽  
Esteban Fernandez-Vazquez

Statistical information for empirical analysis is frequently available at a higher level of aggregation than is desired. Spatial disaggregation of socioeconomic data is complex because of the inherent properties and relationships of spatial data, namely spatial dependence and spatial heterogeneity. Spatial dependence, spatial heterogeneity, and the effect of scale raise major technical issues that strongly affect the accuracy of regional forecast disaggregation. In this chapter, we propose entropy-based spatial forecast disaggregation methods for count areal data that use all available information at each level of aggregation, even if it is incomplete. The proposed methods are validated through Monte Carlo simulations using ancillary information. An empirical application to real data is also presented.
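
As a hedged sketch of the general idea (not the authors' estimator), the Python snippet below disaggregates an aggregate count across areas by minimizing the Kullback-Leibler divergence to prior shares built from ancillary information, subject to an adding-up constraint and one additional aggregate moment; all numbers and the covariate are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def ce_disaggregate(total, prior_shares, covariate, covariate_aggregate):
    """Split an aggregate count `total` across areas by minimizing the
    Kullback-Leibler divergence to `prior_shares` (built from ancillary data),
    subject to the shares also reproducing a known aggregate of `covariate`."""
    prior = np.asarray(prior_shares, dtype=float)
    prior = prior / prior.sum()
    covariate = np.asarray(covariate, dtype=float)

    def kl(p):                                   # cross-entropy objective
        return float(np.sum(p * np.log(p / prior)))

    cons = (
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},                        # adding-up
        {"type": "eq", "fun": lambda p: p @ covariate - covariate_aggregate},  # ancillary moment
    )
    res = minimize(kl, prior, bounds=[(1e-9, 1.0)] * prior.size,
                   constraints=cons, method="SLSQP")
    return total * res.x                         # disaggregated counts by area

# Hypothetical example: 3 areas, prior shares from population, covariate = mean income.
counts = ce_disaggregate(total=1000,
                         prior_shares=[0.5, 0.3, 0.2],
                         covariate=[20.0, 35.0, 50.0],
                         covariate_aggregate=31.0)
print(counts.round(1))
```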


2020 ◽  
pp. 349-384
Author(s):  
Martyn Andrews ◽  
Alastair R. Hall ◽  
Rabeya Khatoon ◽  
James Lincoln

Motivated by empirical analyses in economics using repeated cross-sectional data, we propose info-metric (IM) methods for estimating the parameters of statistical models based on the information in population moment conditions that hold at the group level. Info-metric estimation can be viewed as the primal approach to a constrained optimization problem. The estimators can also be obtained via the dual approach to this optimization, known as generalized empirical likelihood (GEL). In Andrews, Hall, Khatoon, and Lincoln (2019), we provide a comprehensive framework for inference based on GEL with group-specific moment conditions. In this chapter, we compare the computational requirements of the primal and dual approaches. We also describe the IM/GEL inference framework in the context of a linear regression model that is estimated using the information that the mean of the error is zero for each group. For the latter setting, we use analytical arguments and a small simulation study to compare the properties of IM/GEL-based inferences to those of inferences based on certain extant methods. The IM/GEL methods are illustrated through an application to estimation of the returns to education in which the groups are defined via information on family background.
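
For concreteness, here is a hypothetical simulation of the group-level moment setup described above, estimated by one-step GMM with an identity weighting matrix rather than by the chapter's IM/GEL estimators; the data-generating process and group structure are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical repeated cross-section: outcome y, regressors X, and group labels.
n, n_groups = 500, 5
groups = rng.integers(0, n_groups, size=n)
x1 = 0.5 * groups + rng.normal(size=n)          # regressor whose mean differs across groups
X = np.column_stack([np.ones(n), x1])
beta_true = np.array([1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

def group_moments(beta):
    """Sample analogue of the group-level conditions E[y - x'beta | group g] = 0."""
    e = y - X @ beta
    return np.array([e[groups == g].mean() for g in range(n_groups)])

def gmm_objective(beta, W):
    m = group_moments(beta)
    return m @ W @ m

# One-step GMM with an identity weighting matrix (illustration only; the chapter's
# IM/GEL estimators solve a related primal/dual constrained optimization instead).
W = np.eye(n_groups)
beta_hat = minimize(gmm_objective, x0=np.zeros(2), args=(W,), method="BFGS").x
print(beta_hat)   # should be close to beta_true = [1.0, 0.5]
```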


2020 ◽  
pp. 493-506
Author(s):  
Yundong Tu

In this chapter, I propose a model averaging estimator for nonparametric models based on Shannon's entropy measure. The weights in the averaging estimator are chosen by maximizing Shannon's entropy measure, which aggregates both model uncertainty and data uncertainty. Finite sample simulation studies show that the proposed averaging estimator outperforms the local linear least squares estimator in terms of mean-squared error and outperforms the Mallows averaging estimator of Hansen (2007) when the signal-to-noise ratio is low. An empirical application of the proposed estimator to a wage equation illustrates its superiority in out-of-sample forecasts.
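
The chapter's exact weighting criterion is not reproduced here; as a loose, generic sketch of entropy-based weight selection, the snippet below chooses simplex weights over two candidate nonparametric fits by maximizing the Shannon entropy of the weights, subject to the averaged fit remaining close to the best single fit in-sample. The candidate estimators (Nadaraya-Watson rather than local linear), bandwidths, tolerance, and data are all invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def entropy_weights(predictions, y, tol=1.05):
    """Choose simplex weights over candidate estimators by maximizing the Shannon
    entropy of the weights, subject to the averaged fit doing nearly as well
    (within the factor `tol`) as the best single candidate in-sample.
    A generic maximum entropy weighting scheme, not the chapter's exact criterion."""
    P = np.asarray(predictions)                       # shape (n_models, n_obs)
    y = np.asarray(y)
    best_mse = min(np.mean((p - y) ** 2) for p in P)

    def neg_entropy(w):                               # minimize the negative entropy
        return float(np.sum(w * np.log(w + 1e-12)))

    cons = (
        {"type": "eq",   "fun": lambda w: w.sum() - 1.0},
        {"type": "ineq", "fun": lambda w: tol * best_mse - np.mean((w @ P - y) ** 2)},
    )
    n_models = P.shape[0]
    w0 = np.full(n_models, 1.0 / n_models)
    return minimize(neg_entropy, w0, bounds=[(0.0, 1.0)] * n_models,
                    constraints=cons, method="SLSQP").x

# Illustrative use: two Nadaraya-Watson fits with different (hypothetical) bandwidths.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, 200)
y = np.sin(x) + rng.normal(scale=0.2, size=200)

def nw_fit(bandwidth):
    K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bandwidth) ** 2)
    return (K @ y) / K.sum(axis=1)

preds = np.vstack([nw_fit(0.2), nw_fit(0.8)])
print(entropy_weights(preds, y))                     # averaging weights on the two fits
```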


2020 ◽  
pp. 400-430
Author(s):  
John Geweke ◽  
Garland Durham

Rényi divergence is a natural way to measure the rate of information flow in contexts like Bayesian updating. This chapter shows how Monte Carlo integration can be used to measure Rényi divergence when (as is often the case) only kernels of the relevant probability densities are available. The chapter further demonstrates that Rényi divergence is central to the convergence and efficiency of Monte Carlo integration procedures in which information flow is controlled. It uses this perspective to develop more flexible approaches to the controlled introduction of information; in the limited set of examples considered here, these alternatives enhance efficiency.
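
As one concrete version of this idea (a sketch, not necessarily the authors' construction), the estimator below computes the Rényi divergence D_alpha(p||q) from unnormalized log kernels of p and q using draws from q; the two log-mean-exp terms cancel the unknown normalizing constants. The Gaussian test case at the end is invented to check the estimate against the known closed form.

```python
import numpy as np

def renyi_divergence_mc(log_kp, log_kq, draws_from_q, alpha=0.5):
    """Monte Carlo estimate of the Renyi divergence D_alpha(p || q) when only
    unnormalized log kernels of p and q are available, using draws from q:
    D_alpha = (log E_q[(kp/kq)^alpha] - alpha * log E_q[kp/kq]) / (alpha - 1),
    which cancels the unknown normalizing constants of both kernels."""
    log_ratio = log_kp(draws_from_q) - log_kq(draws_from_q)

    def log_mean_exp(a):           # numerically stable log of a mean of exponentials
        m = a.max()
        return m + np.log(np.mean(np.exp(a - m)))

    return (log_mean_exp(alpha * log_ratio) - alpha * log_mean_exp(log_ratio)) / (alpha - 1.0)

# Hypothetical check against the closed form for two unit-variance Gaussians
# (the kernels deliberately omit the normalizing constants).
rng = np.random.default_rng(2)
mu_p, mu_q = 1.0, 0.0
draws_q = rng.normal(mu_q, 1.0, size=200_000)
estimate = renyi_divergence_mc(lambda x: -0.5 * (x - mu_p) ** 2,
                               lambda x: -0.5 * (x - mu_q) ** 2,
                               draws_q, alpha=0.5)
exact = 0.5 * (mu_p - mu_q) ** 2 * 0.5   # alpha * (mu_p - mu_q)^2 / 2 for equal variances
print(estimate, exact)
```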


2020 ◽  
pp. 161-184
Author(s):  
John Harte

A major goal of ecology is to predict patterns and changes in the abundance, distribution, and energetics of individuals and species in ecosystems. The maximum entropy theory of ecology (METE) predicts the functional forms and parameter values describing the central metrics of macroecology, including the distribution of abundances over all the species, metabolic rates over all individuals, spatial aggregation of individuals within species, and the dependence of species diversity on areas of habitat. In METE, the maximum entropy inference procedure is implemented using the constraints imposed by a few macroscopic state variables, including the number of species, total abundance, and total metabolic rate in an ecological community. Although the theory adequately predicts pervasive empirical patterns in relatively static ecosystems, there is mounting evidence that in ecosystems in which the state variables are changing rapidly, many of the predictions of METE systematically fail. Here we discuss the underlying logic and predictions of the static theory and then describe progress toward achieving a dynamic theory (DynaMETE) of macroecology capable of describing ecosystems undergoing rapid change as a result of disturbance. An emphasis throughout is on the tension between, and reconciliation of, two legitimate perspectives on ecology: that of the natural historian who studies the uniqueness of every ecosystem and the theorist seeking unification and generality.
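
As a highly simplified sketch of the maximum entropy recipe (only the abundance margin of a toy version of METE, not the full theory, which constrains a joint structure function over abundance and metabolic rate), the snippet below solves for the distribution of a species' abundance that maximizes Shannon entropy subject to a mean-abundance constraint set by the state variables; the community-size numbers are hypothetical.

```python
import numpy as np
from scipy.optimize import brentq

def maxent_abundance_distribution(S0, N0):
    """Toy MaxEnt calculation in the spirit of METE: the distribution over a
    species' abundance n = 1..N0 that maximizes Shannon entropy subject to the
    mean-abundance constraint E[n] = N0/S0 has the form P(n) ~ exp(-lam * n);
    solve numerically for the Lagrange multiplier lam."""
    n = np.arange(1, N0 + 1)
    target_mean = N0 / S0

    def mean_minus_target(lam):
        w = np.exp(-lam * n)
        return (n @ w) / w.sum() - target_mean

    lam = brentq(mean_minus_target, 1e-6, 10.0)   # bracket chosen for these state variables
    w = np.exp(-lam * n)
    return n, w / w.sum()

# Hypothetical community-level state variables: 50 species, 5,000 individuals.
n, p = maxent_abundance_distribution(S0=50, N0=5000)
print(p[:5])   # predicted probabilities of abundances 1..5
```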


2020 ◽  
pp. 145-158
Author(s):  
George Judge

In this chapter, we emphasize the connection between adaptive economic behavior and causal entropy maximization and suggest methods consistent with information recovery in an open dynamic economic system. This entropy-based causal adaptive behavior framework permits the use of a family of information-theoretic estimation and inference methods as a basis for linking the data and the unknown and unobservable system behavioral parameters. Several econometric models and applications are demonstrated, and economic-econometric implications of the information-theoretic approach are discussed. We end the chapter with a question concerning the use of traditional estimation and inference methods that do not have a connection to economic behavior and choice data.


2020 ◽  
pp. 81-112
Author(s):  
Bryan C. Daniels

From neurons to insects to societies, across biology we see impressive feats of collective information processing. What strategies do these systems use to perform useful computations? Moving toward an answer to this question, this chapter focuses on common challenges in inferring models of complicated distributed systems and how the perspective of information theory and statistical physics is useful for understanding collective behavior.


2020 ◽  
pp. 264-289
Author(s):  
F. Douglas Foster ◽  
Michael Stutzer

This chapter provides a simple method of ranking mutual funds’ probabilities of outperforming a benchmark portfolio. We show that ranking fund performance in this way is identical to ranking each fund’s portfolio with a generalized entropy, equivalent to an expected generalized power utility index that uses a risk-aversion coefficient specific to that fund. When the return differential between fund and benchmark portfolio (log gross) returns follows a Gaussian process, this ranking is equivalent to using a simple modification of the Information Ratio (1998). We develop and apply feasible parametric and nonparametric estimators to rank the performance of the small fraction of mutual funds that (from the results of a hypothesis test) could outperform the S&P 500 Index in the long run, and to estimate the fund-specific risk-aversion coefficients required for the ranking. We also argue that an auxiliary hypothesis that fund managers attempt to maximize the outperformance probability is no less plausible than an extant alternative behavioral hypothesis and is more parsimoniously parametrized.
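
As a hedged illustration of ranking by outperformance probability (in the spirit of the chapter and of Stutzer-type large-deviations indices, but not the authors' exact estimator), the snippet below estimates, from invented return series, the rate at which the probability of underperforming the benchmark decays; funds with larger decay rates would rank higher, and the maximizing tilt parameter plays a role analogous to the fund-specific risk-aversion coefficient.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def outperformance_decay_rate(fund_log_returns, bench_log_returns):
    """Nonparametric estimate of the large-deviations rate at which the probability
    of underperforming the benchmark decays with the horizon; ranking funds by this
    rate ranks them by estimated long-run outperformance probability. The maximizing
    tilt parameter theta is treated here (an assumption of this sketch) as the analogue
    of the fund-specific risk-aversion coefficient discussed in the chapter."""
    d = np.asarray(fund_log_returns) - np.asarray(bench_log_returns)  # excess log returns

    def neg_rate(theta):
        # minus the rate candidate:  log( (1/T) * sum_t exp(theta * d_t) )
        return np.log(np.mean(np.exp(theta * d)))

    # The maximizing theta is negative when the fund outperforms on average.
    res = minimize_scalar(neg_rate, bounds=(-50.0, 0.0), method="bounded")
    return -res.fun, res.x

# Invented monthly log returns for one hypothetical fund and the benchmark.
rng = np.random.default_rng(3)
bench = rng.normal(0.005, 0.04, size=240)
fund = bench + rng.normal(0.002, 0.02, size=240)
rate, theta = outperformance_decay_rate(fund, bench)
print(rate, theta)   # under Gaussian returns the rate is about half the squared information ratio
```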


Author(s):  
J. Michael Dunn ◽  
Amos Golan

In this chapter, we are interested in understanding the nature of information and its value. We focus on information that is used for making decisions, including related activities such as constructing models, performing inferences, and making predictions. Our discussion is mostly qualitative, and it touches on certain aspects of information as related to the sender, receiver, and a possible observer. Although our emphasis is on shedding more light on the concept of information for making decisions, we are not concerned here with the exact details of the decision process, or information processing itself. In addition to discussing information, our expedition takes us through the traditional notions of utility, prices, and risk, all of which, under certain conditions, relate to the value of information. Our main conclusion is that the value of information (used in decision making) is relative and subjective. Since information is relative, it can have more than one value, say a value for the sender, a value for the receiver, or even different values for different senders and receivers, and various values for various “eavesdroppers.” Of course, the value might be zero for any of these. Importantly, that value is inversely related to risk when the information is used for decision making. Although this conclusion is likely expected, we did argue for it in a way that relies on some fundamentals about both value and information.

