Quantifying Stimulus Discriminability: A Comparison of Information Theory and Ideal Observer Analysis

2005 ◽  
Vol 17 (4) ◽  
pp. 741-778 ◽  
Author(s):  
Eric E. Thomson ◽  
William B. Kristan

Performance in sensory discrimination tasks is commonly quantified using either information theory or ideal observer analysis. These two quantitative frameworks are often assumed to be equivalent; for example, higher mutual information is taken to imply better performance of an ideal observer in a stimulus estimation task. To the contrary, drawing on and extending previous results, we show that this assumption fails for five information-theoretic quantities (entropy, response-conditional entropy, specific information, equivocation, and mutual information). More positively, we show how these information measures can be used to calculate upper and lower bounds on ideal observer performance, and vice versa. The results show that the mathematical resources of ideal observer analysis are preferable to those of information theory for evaluating performance in a stimulus discrimination task. We also discuss the applicability of information theory to questions that ideal observer analysis cannot address.
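
As a minimal, self-contained illustration of why the two frameworks can disagree (a constructed counterexample, not one of the paper's own), the Python sketch below compares a binary symmetric channel with a binary erasure channel: the erasure channel carries more mutual information yet supports lower ideal-observer (MAP) accuracy.

```python
# Two hypothetical stimulus-response channels: higher mutual information
# does not imply higher ideal-observer (MAP) accuracy.
import numpy as np

def mutual_information(joint):
    """Mutual information (bits) of a joint stimulus-response distribution."""
    ps = joint.sum(axis=1, keepdims=True)   # stimulus marginal
    pr = joint.sum(axis=0, keepdims=True)   # response marginal
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

def map_accuracy(joint):
    """Ideal-observer accuracy: for each response, guess the most likely stimulus."""
    return float(joint.max(axis=0).sum())

# Channel A: binary symmetric channel, 10% confusions, equiprobable stimuli.
bsc = 0.5 * np.array([[0.9, 0.1],
                      [0.1, 0.9]])
# Channel B: binary erasure channel, 40% of trials give an uninformative response.
bec = 0.5 * np.array([[0.6, 0.0, 0.4],
                      [0.0, 0.6, 0.4]])

for name, joint in [("BSC(0.1)", bsc), ("BEC(0.4)", bec)]:
    print(name, "MI =", round(mutual_information(joint), 3),
          "bits; MAP accuracy =", round(map_accuracy(joint), 3))
# BEC(0.4) has the higher mutual information but the lower ideal-observer accuracy.
```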

Entropy ◽  
2018 ◽  
Vol 20 (7) ◽  
pp. 540 ◽  
Author(s):  
Subhashis Hazarika ◽  
Ayan Biswas ◽  
Soumya Dutta ◽  
Han-Wei Shen

Uncertainty in the scalar values of an ensemble dataset is often represented by the collection of their corresponding isocontours. Various techniques, such as contour boxplots, contour variability plots, glyphs, and probabilistic marching cubes, have been proposed to analyze and visualize ensemble isocontours. All of these techniques assume that a scalar value of interest is already known to the user; little work has been done on guiding users to select the scalar values for such uncertainty analysis. Moreover, analyzing and visualizing a large collection of ensemble isocontours for a selected scalar value has its own challenges, and interpreting the resulting visualizations is also a difficult task. In this work, we propose a new information-theoretic approach to address these issues. Using specific information measures that estimate the predictability and surprise of specific scalar values, we evaluate the overall uncertainty associated with all the scalar values in an ensemble system. This helps scientists understand the effects of uncertainty on different data features. To understand in finer detail the contribution of individual members to the uncertainty of the ensemble isocontours at a selected scalar value, we propose a conditional-entropy-based algorithm to quantify the individual contributions. This can simplify analysis and visualization for systems with many members by identifying the members that contribute most to the overall uncertainty. We demonstrate the efficacy of our method by applying it to real-world datasets from materials science, weather forecasting, and ocean simulation experiments.
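
A hedged sketch of the kind of isovalue screening described here (a generic crossing-probability entropy, not the paper's exact specific-information measures): each candidate isovalue is scored by the mean binary entropy of the per-cell indicator that an ensemble member exceeds the isovalue.

```python
# Score candidate isovalues of an ensemble of 2D scalar fields by a simple
# uncertainty proxy: mean binary entropy of the "member exceeds isovalue" indicator.
import numpy as np

def isovalue_uncertainty(ensemble, isovalue):
    """ensemble: array of shape (members, ny, nx). Returns mean binary entropy (bits)."""
    above = ensemble > isovalue                       # indicator per member and cell
    p = above.mean(axis=0)                            # crossing probability per cell
    p = np.clip(p, 1e-12, 1 - 1e-12)
    h = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))  # binary entropy per cell
    return float(h.mean())

rng = np.random.default_rng(0)
ensemble = rng.normal(size=(30, 64, 64)).cumsum(axis=2)   # toy correlated fields
candidates = np.linspace(ensemble.min(), ensemble.max(), 20)
scores = [isovalue_uncertainty(ensemble, v) for v in candidates]
best = candidates[int(np.argmax(scores))]
print("most uncertain isovalue (toy data):", round(float(best), 2))
```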


2018 ◽  
Vol 33 (3) ◽  
pp. 438-459 ◽  
Author(s):  
S. Zarezadeh ◽  
M. Asadi ◽  
S. Eftekhar

The signature matrix of an n-component three-state network (system), which depends only on the network structure, is a useful tool for comparing the reliability and stochastic properties of networks. In this paper, we consider a three-state network with states up, partial performance, and down. We assume that the network remains in the up state for a random time T1 and then moves to the partial-performance state until it fails at time T > T1. Signature-based expressions for the conditional entropy of T given T1, the joint entropy, the Kullback-Leibler (K-L) information, and the mutual information of the lifetimes T and T1 are presented. It is shown that the K-L information and the mutual information between T1 and T depend only on the network structure (i.e., only on the signature matrix of the network). Some signature-based stochastic comparisons are also made to compare the K-L information of the state lifetimes in two different three-state networks. Upper and lower bounds for the K-L divergence and the mutual information between T1 and T are investigated. Finally, the results are extended to n-component multi-state networks. Several examples are examined graphically and numerically.
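
For reference, the generic definitions that the signature-based expressions specialize (standard forms for an absolutely continuous joint density f of (T1, T); the notation here is not taken from the paper) are:

```latex
\begin{aligned}
H(T \mid T_1) &= -\int\!\!\int f(t_1,t)\,\log f(t \mid t_1)\,dt_1\,dt, &
H(T_1, T) &= H(T_1) + H(T \mid T_1),\\
I(T_1; T) &= H(T) - H(T \mid T_1), &
K(f_{T_1}\,\|\,f_{T}) &= \int f_{T_1}(t)\,\log\frac{f_{T_1}(t)}{f_{T}(t)}\,dt .
\end{aligned}
```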


This chapter presents a higher-order-logic formalization of the main concepts of information theory (Cover & Thomas, 1991), such as the Shannon entropy and mutual information, using formalizations of the foundational theories of measure, Lebesgue integration, and probability. The main results of the chapter include formalizations of the Radon-Nikodym derivative and the Kullback-Leibler (KL) divergence (Coble, 2010); the latter provides a unified framework within which most of the commonly used measures of information can be defined. The chapter gives general definitions that are valid for both the discrete and continuous cases, and then proves the corresponding reduced expressions when the measures considered are absolutely continuous over finite spaces.
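
A sketch of this unified, KL-based view on finite spaces (standard identities, written here in ordinary notation rather than the higher-order-logic formalization):

```latex
\begin{aligned}
D(p\,\|\,q) &= \sum_{x} p(x)\,\log\frac{p(x)}{q(x)}, &
I(X;Y) &= D\!\left(p_{XY}\,\|\,p_X\,p_Y\right), &
H(X) &= \log|\mathcal{X}| - D\!\left(p_X\,\|\,u_{\mathcal{X}}\right),
\end{aligned}
```

where u denotes the uniform distribution on the finite space.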


2015 ◽  
Vol 105 (1) ◽  
pp. 9-17 ◽  
Author(s):  
G. Hughes ◽  
N. McRoberts ◽  
F. J. Burnett

Binary predictors are used in a wide range of crop protection decision-making applications. Such predictors provide a simple analytical apparatus for the formulation of evidence related to risk factors, for use in the process of Bayesian updating of probabilities of crop disease. For diagrammatic interpretation of diagnostic probabilities, the receiver operating characteristic is available. Here, we view binary predictors from the perspective of diagnostic information. After a brief introduction to the basic information theoretic concepts of entropy and expected mutual information, we use an example data set to provide diagrammatic interpretations of expected mutual information, relative entropy, information inaccuracy, information updating, and specific information. Our information graphs also illustrate correspondences between diagnostic information and diagnostic probabilities.
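
The following Python sketch, using illustrative numbers rather than the paper's data set, shows the basic machinery: Bayesian updating of a crop-disease probability by a binary predictor, and the expected mutual information between the predictor and the true disease status computed from their joint distribution.

```python
# Bayesian updating with a binary predictor, plus expected mutual information.
import numpy as np

prior = 0.2            # assumed prior probability of disease (hypothetical)
sens, spec = 0.8, 0.9  # assumed sensitivity and specificity of the predictor

# Posterior probabilities after a positive or a negative prediction (Bayes' rule).
p_pos = sens * prior + (1 - spec) * (1 - prior)
post_given_pos = sens * prior / p_pos
post_given_neg = (1 - sens) * prior / (1 - p_pos)

# Joint distribution over (disease status, prediction) and expected mutual information.
joint = np.array([[sens * prior,             (1 - sens) * prior],
                  [(1 - spec) * (1 - prior), spec * (1 - prior)]])
pd_ = joint.sum(axis=1, keepdims=True)   # disease marginal
pp = joint.sum(axis=0, keepdims=True)    # prediction marginal
emi = float(np.sum(joint * np.log2(joint / (pd_ @ pp))))

print("P(disease | +) =", round(post_given_pos, 3))
print("P(disease | -) =", round(post_given_neg, 3))
print("expected mutual information =", round(emi, 3), "bits")
```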


2021 ◽  
Author(s):  
CHU PAN

Using information measures to infer biological regulatory networks can capture nonlinear relationships between variables, but doing so is computationally challenging and there has been no convenient tool available. Here we describe Informeasure, an R package devoted to quantifying nonlinear dependence between variables in biological regulatory networks from an information-theoretic perspective. The package compiles most of the information measures currently available: mutual information, conditional mutual information, interaction information, partial information decomposition, and part mutual information. The first estimator is used to infer bivariate networks, while the last four are dedicated to the analysis of trivariate networks. The base installation of this turn-key package allows users to apply these information measures out of the box. Informeasure is implemented in R and is available as an R/Bioconductor package at https://bioconductor.org/packages/Informeasure.
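
As a generic illustration of the bivariate and trivariate quantities such a package estimates (plug-in estimates from discretized data; this is not the Informeasure API), the sketch below computes mutual information and conditional mutual information from contingency tables.

```python
# Plug-in estimates of I(X;Y) and I(X;Y|Z) from discretized data.
import numpy as np

def mi(counts):
    """I(X;Y) in bits from a 2D contingency table (or joint distribution)."""
    p = counts / counts.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (px @ py)[nz])))

def cmi(counts3):
    """I(X;Y|Z) in bits from a 3D contingency table indexed (x, y, z)."""
    p = counts3 / counts3.sum()
    total = 0.0
    for z in range(p.shape[2]):
        pz = p[:, :, z].sum()
        if pz > 0:
            total += pz * mi(p[:, :, z] / pz)
    return total

rng = np.random.default_rng(1)
z = rng.integers(0, 3, 2000)
x = (z + rng.integers(0, 2, 2000)) % 3   # x depends on z
y = (z + rng.integers(0, 2, 2000)) % 3   # y depends on z, not directly on x
counts3 = np.zeros((3, 3, 3))
np.add.at(counts3, (x, y, z), 1)
print("I(X;Y)   =", round(mi(counts3.sum(axis=2)), 3), "bits")
print("I(X;Y|Z) =", round(cmi(counts3), 3), "bits")   # near zero: no direct edge
```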


2021 ◽  
Vol 10 (2) ◽  
pp. 82-102
Author(s):  
Omdutt Sharma ◽  
Pratiksha Tiwari ◽  
Priti Gupta

Information theory is a tool for measuring uncertainty; it is now used to solve various challenging problems that involve hybridizing information theory with fuzzy sets, rough sets, vague sets, and related structures. To solve challenging problems in scientific data analysis and visualization, various authors have recently been working on such hybrid information measures. In this paper, using the relations between information measures, several measures are proposed for fuzzy rough sets. First, an entropy measure is derived from a fuzzy rough similarity measure; then, corresponding to this entropy measure, other measures such as mutual information, joint entropy, and conditional entropy are also proposed. Some properties of these measures are studied. The proposed measures are then compared with existing measures to demonstrate their efficiency. Further, the proposed measures are applied to pattern recognition, medical diagnosis, and a real-life decision-making problem on incorporating software into the curriculum at the Department of Statistics.
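
For orientation, the Shannon-type identities that such derived measures are typically constructed to mirror (generic forms, not the paper's fuzzy rough definitions) are:

```latex
H(A,B) = H(A) + H(B \mid A), \qquad
I(A;B) = H(A) + H(B) - H(A,B) = H(A) - H(A \mid B).
```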


Author(s):  
QINGHUA HU ◽  
DAREN YU

Yager's entropy was proposed to compute the information of a fuzzy indiscernibility relation. In this paper we present a novel interpretation of Yager's entropy from the point of view of the discernibility power of a relation. Some basic definitions in Shannon's information theory are then generalized based on Yager's entropy. We introduce joint entropy, conditional entropy, mutual information, and relative entropy to compute the information changes produced by operations on fuzzy indiscernibility relations. Conditional entropy and relative conditional entropy are proposed to measure the information increment, which is interpreted as the significance of an attribute in the fuzzy rough set model. As an application, we redefine the independence of an attribute set, reducts, and relative reducts in the fuzzy rough set model based on Yager's entropy. Experimental results show that the proposed approach is suitable for fuzzy and numeric data reduction.
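
A hedged Python sketch of one plausible reading of this construction (not necessarily the paper's exact definitions): a relation entropy built from the fuzzy cardinality of each object's indiscernibility class, a joint entropy via the elementwise-minimum intersection of relations, and a conditional entropy as their difference, usable as an attribute-significance score.

```python
# One plausible Yager-style construction for fuzzy indiscernibility relations
# (illustrative definitions and toy values, not the paper's exact formulas).
import numpy as np

def relation_entropy(R):
    """R: n x n fuzzy indiscernibility (similarity) matrix with values in [0, 1]."""
    n = R.shape[0]
    card = R.sum(axis=1)                        # fuzzy cardinality of each class
    return float(-np.mean(np.log2(card / n)))

def joint_entropy(R, S):
    return relation_entropy(np.minimum(R, S))   # intersection of the two relations

def conditional_entropy(S, R):
    return joint_entropy(R, S) - relation_entropy(R)

# Toy relations induced by two attributes on four objects (hypothetical values).
R_a = np.array([[1.0, 0.8, 0.2, 0.1],
                [0.8, 1.0, 0.3, 0.2],
                [0.2, 0.3, 1.0, 0.9],
                [0.1, 0.2, 0.9, 1.0]])
R_b = np.ones((4, 4)) * 0.5 + 0.5 * np.eye(4)   # a nearly uninformative attribute

# Significance of attribute b given a: how much extra discernibility it adds.
print("H(a) =", round(relation_entropy(R_a), 3))
print("H(b | a) =", round(conditional_entropy(R_b, R_a), 3))
```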


1998 ◽  
Vol 10 (7) ◽  
pp. 1731-1757 ◽  
Author(s):  
Nicolas Brunel ◽  
Jean-Pierre Nadal

In the context of parameter estimation and model selection, it is only quite recently that a direct link between the Fisher information and information-theoretic quantities has been exhibited. We give an interpretation of this link within the standard framework of information theory. We show that, in the context of population coding, the mutual information between the activity of a large array of neurons and a stimulus to which the neurons are tuned is naturally related to the Fisher information. In the light of this result, we consider the optimization of the tuning-curve parameters in the case of neurons responding to a stimulus represented by an angular variable.
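
In the large-population limit, the relation exhibited in this line of work takes the following standard form, for a scalar stimulus θ with prior p(θ) and population Fisher information J(θ):

```latex
I(\theta;\mathbf{r}) \;\simeq\; H(\theta) \;-\; \tfrac{1}{2}\int p(\theta)\,\log\!\frac{2\pi e}{J(\theta)}\,d\theta ,
```

that is, the mutual information approaches the stimulus entropy minus the average entropy of an efficient Gaussian estimator with variance 1/J(θ).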


SPE Journal ◽  
2014 ◽  
Vol 19 (04) ◽  
pp. 648-661 ◽  
Author(s):  
Duc H. Le ◽  
Albert C. Reynolds

Given a suite of potential surveillance operations, we define surveillance optimization as the problem of choosing the operation that gives the minimum expected value of P90 minus P10 of a specified reservoir variable J (e.g., cumulative oil production) obtained by conditioning J to the observed data. Two questions can be posed: (1) Which surveillance operation is expected to provide the greatest uncertainty reduction in J? and (2) What is the expected value of the reduction in uncertainty that would be achieved if we were to undertake each surveillance operation to collect the associated data and then history match the data obtained? In this work, we extend and apply a conceptual idea that we recently proposed for surveillance optimization to 2D and 3D waterflooding problems. Our method is based on information theory, in which the mutual information between J and the random observed data vector Dobs is estimated by use of an ensemble of prior reservoir models. This mutual information reflects the strength of the relationship between J and the potential observed data and provides a qualitative answer to Question 1. Question 2 is answered by calculating the conditional entropy of J to generate an approximation of the expected value of the reduction in P90 minus P10 of J. The reliability of our method depends on obtaining a good estimate of the mutual information. We consider several ways to estimate the mutual information and suggest how a good estimate can be chosen. We validate the results of our proposed method with an exhaustive history-matching procedure. The methodology provides an approximate way to decide which data should be collected to maximize the uncertainty reduction in a specified reservoir variable and to estimate the reduction in uncertainty that could be obtained. We expect this paper to stimulate significant research on the application of information theory and to lead to practical methods and workflows for surveillance optimization.
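
A hedged sketch of the ranking step under a joint-Gaussian approximation (one simple estimator of the mutual information and of the expected spread reduction, not necessarily the estimators used in the paper): candidate surveys are ranked by the mutual information between J and the data each survey would observe, estimated from an ensemble of prior models.

```python
# Rank candidate surveillance operations via a Gaussian approximation to I(J; D),
# estimated from an ensemble of prior models, and approximate the expected
# reduction in the P90 - P10 spread of J.
import numpy as np

def gaussian_mi_and_reduction(J, D):
    """J: (n_ens,) samples of the objective; D: (n_ens, n_d) simulated observed data."""
    X = np.column_stack([J, D])
    S = np.cov(X, rowvar=False)
    sJJ, sJD, sDD = S[0, 0], S[0, 1:], S[1:, 1:]
    cond_var = sJJ - sJD @ np.linalg.solve(sDD, sJD)   # Var(J | D) under Gaussianity
    mi = 0.5 * np.log(sJJ / cond_var)                  # mutual information in nats
    spread = 2 * 1.2816                                # P90 - P10 = 2 * z_0.90 * sigma
    reduction = spread * (np.sqrt(sJJ) - np.sqrt(cond_var))
    return mi, reduction

rng = np.random.default_rng(2)
n = 200
latent = rng.normal(size=n)
J = 3.0 * latent + rng.normal(size=n)                  # toy reservoir objective
D_a = latent[:, None] + 0.3 * rng.normal(size=(n, 2))  # informative survey
D_b = rng.normal(size=(n, 2))                          # uninformative survey
for name, D in [("survey A", D_a), ("survey B", D_b)]:
    mi, red = gaussian_mi_and_reduction(J, D)
    print(name, "MI =", round(mi, 2), "nats; expected P90-P10 reduction =", round(red, 2))
```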


2017 ◽  
Vol 107 (2) ◽  
pp. 158-162 ◽  
Author(s):  
G. Hughes ◽  
N. McRoberts ◽  
F. J. Burnett

Predictive systems in disease management often incorporate weather data among the disease risk factors, and sometimes this comes in the form of forecast weather data rather than observed weather data. In such cases, it is useful to have an evaluation of the operational weather forecast, in addition to the evaluation of the disease forecasts provided by the predictive system. Typically, weather forecasts and disease forecasts are evaluated using different methodologies. However, the information theoretic quantity expected mutual information provides a basis for evaluating both kinds of forecast. Expected mutual information is an appropriate metric for the average performance of a predictive system over a set of forecasts. Both relative entropy (a divergence, measuring information gain) and specific information (an entropy difference, measuring change in uncertainty) provide a basis for the assessment of individual forecasts.
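
In their standard forms, for a forecast z about a disease state S with prior p(s), these two single-forecast quantities are:

```latex
D\big(p(\cdot \mid z)\,\|\,p(\cdot)\big) = \sum_{s} p(s \mid z)\,\log\frac{p(s \mid z)}{p(s)}, \qquad
I_{\mathrm{spec}}(z) = H(S) - H(S \mid z),
```

the first a relative entropy (information gain), the second a specific information (change in uncertainty); averaged over forecasts z, both equal the expected mutual information I(S; Z), the whole-system metric referred to above.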

