Confounding Ghost Channels and Causality: A New Approach to Causal Information Flows

Author(s):  
Nihat Ay

Information theory provides a fundamental framework for the quantification of information flows through channels, formally, Markov kernels. However, quantities such as mutual information and conditional mutual information do not necessarily reflect the causal nature of such flows. We argue that this is often the result of conditioning based on σ-algebras that are not associated with the given channels. We propose a version of the (conditional) mutual information based on families of σ-algebras that are coupled with the underlying channel. This leads to filtrations which allow us to prove a corresponding causal chain rule as a basic requirement within the presented approach.
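As background (a standard identity, not a result of the paper), the classical chain rule that any causal analogue is expected to mirror decomposes mutual information as

$$ I(X ; Y, Z) \;=\; I(X ; Z) \;+\; I(X ; Y \mid Z), $$

and the paper's proposal replaces the conditioning in the second term by σ-algebras coupled to the channel, so that an analogous decomposition survives for the proposed causal quantities.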

Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 862
Author(s):  
Sungyeop Lee ◽  
Junghyo Jo

Deep learning methods have achieved outstanding performance in various fields. A fundamental question is why they are so effective. Information theory provides a potential answer by interpreting the learning process as the information transmission and compression of data. The information flows can be visualized on the information plane of the mutual information among the input, hidden, and output layers. In this study, we examine how the information flows are shaped by the network parameters, such as depth, sparsity, weight constraints, and hidden representations. Here, we adopt autoencoders as models of deep learning, because (i) they have clear guidelines for their information flows, and (ii) they come in many variants, such as vanilla, sparse, tied, variational, and label autoencoders. We measure their information flows using Rényi’s matrix-based α-order entropy functional. As learning progresses, they show a typical fitting phase where the amounts of input-to-hidden and hidden-to-output mutual information both increase. In the last stage of learning, however, some autoencoders show a simplifying phase, previously called the “compression phase”, where input-to-hidden mutual information diminishes. In particular, the sparsity regularization of hidden activities amplifies the simplifying phase. However, tied, variational, and label autoencoders do not have a simplifying phase. Nevertheless, all autoencoders have similar reconstruction errors for training and test data. Thus, the simplifying phase does not seem to be necessary for the generalization of learning.
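The matrix-based α-order Rényi entropy functional cited here is usually estimated from normalized Gram matrices of layer activations. As an editor's sketch (assuming a Gaussian kernel and toy data, not the authors' code), the layer-to-layer mutual information could be computed roughly as follows:

```python
import numpy as np

def gram_matrix(x, sigma=1.0):
    """Gaussian Gram matrix, normalized so that its trace is 1."""
    sq = np.sum(x**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2 * x @ x.T
    k = np.exp(-d2 / (2 * sigma**2))
    k = k / np.sqrt(np.outer(np.diag(k), np.diag(k)))   # unit diagonal
    return k / x.shape[0]

def renyi_entropy(a, alpha=1.01):
    """Matrix-based alpha-order entropy: log2(sum_i lambda_i^alpha) / (1 - alpha)."""
    eigvals = np.clip(np.linalg.eigvalsh(a), 0, None)   # guard tiny negative eigenvalues
    return np.log2(np.sum(eigvals**alpha)) / (1 - alpha)

def joint_entropy(a, b, alpha=1.01):
    """Joint entropy from the normalized Hadamard product of two Gram matrices."""
    ab = a * b
    return renyi_entropy(ab / np.trace(ab), alpha)

def mutual_information(x, h, alpha=1.01, sigma=1.0):
    """I_alpha(X; H) = S_alpha(X) + S_alpha(H) - S_alpha(X, H)."""
    a, b = gram_matrix(x, sigma), gram_matrix(h, sigma)
    return renyi_entropy(a, alpha) + renyi_entropy(b, alpha) - joint_entropy(a, b, alpha)

# toy usage: MI between random "inputs" and their noisy "hidden activations"
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 10))
h = np.tanh(x @ rng.normal(size=(10, 4))) + 0.1 * rng.normal(size=(200, 4))
print(mutual_information(x, h))
```

With α close to 1 this estimator approximates Shannon mutual information; tracking the two quantities I(input; hidden) and I(hidden; output) over training epochs gives the information-plane trajectories discussed in the abstract.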


2020 ◽  
Author(s):  
Mireille Conrad ◽  
Renaud B Jolivet

Information theory has become an essential tool of modern neuroscience. It can, however, be difficult to apply in experimental contexts when acquisition of very large datasets is prohibitive. Here, we compare the relative performance of two information theoretic measures, mutual information and transfer entropy, for the analysis of information flow and energetic consumption at synapses. We show that transfer entropy outperforms mutual information in terms of reliability of estimates for small datasets. However, we also show that a detailed understanding of the underlying neuronal biophysics is essential for properly interpreting the results obtained with transfer entropy. We conclude that when time and experimental conditions permit, mutual information might provide an easier-to-interpret alternative. Finally, we apply both measures to the study of energetic optimality of information flow at thalamic relay synapses in the visual pathway. We show that both measures recapitulate the experimental finding that these synapses are tuned to optimally balance information flowing through them with the energetic consumption associated with that synaptic and neuronal activity. Our results highlight the importance of conducting systematic computational studies prior to applying information theoretic tools to experimental data.

Author summary: Information theory has become an essential tool of modern neuroscience. It is routinely used to evaluate how much information flows from external stimuli to various brain regions or individual neurons. It is also used to evaluate how information flows between brain regions, between neurons, across synapses, or in neural networks. Information theory offers multiple measures to do that. Two of the most popular are mutual information and transfer entropy. While these measures are related to each other, they differ in one important aspect: transfer entropy reports a directional flow of information, whereas mutual information does not. Here, we proceed to a systematic evaluation of their respective performances and trade-offs from the perspective of an experimentalist looking to apply these measures to binarized spike trains. We show that transfer entropy might be a better choice than mutual information when time for experimental data collection is limited, as it appears less affected by systematic biases induced by a relative lack of data. Transmission delays and integration properties of the output neuron can, however, complicate this picture, and we provide an example of the effect this has on both measures. We conclude that when time and experimental conditions permit, mutual information, especially when estimated using a method referred to as the ‘direct’ method, might provide an easier-to-interpret alternative. Finally, we apply both measures in the biophysical context of evaluating the energetic optimality of information flow at thalamic relay synapses in the visual pathway. We show that both measures capture the original experimental finding that those synapses are tuned to optimally balance information flowing through them with the concomitant energetic consumption associated with that synaptic and neuronal activity.
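As an illustration added by the editor (not the authors' implementation), the plug-in versions of both quantities can be written compactly for binarized spike trains; the following NumPy sketch uses toy data in place of recorded responses:

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in MI (bits) between two binary sequences of equal length."""
    joint = np.zeros((2, 2))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    mi = 0.0
    for i in range(2):
        for j in range(2):
            if joint[i, j] > 0:
                mi += joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
    return mi

def transfer_entropy(x, y):
    """Plug-in TE (bits) from x to y with one-bin histories:
    TE = sum p(y_next, y_past, x_past) log2[ p(y_next|y_past, x_past) / p(y_next|y_past) ]."""
    counts = np.zeros((2, 2, 2))                  # indices: y_next, y_past, x_past
    for t in range(len(y) - 1):
        counts[y[t + 1], y[t], x[t]] += 1
    p = counts / counts.sum()
    p_yp_xp = p.sum(axis=0)                       # p(y_past, x_past)
    p_yn_yp = p.sum(axis=2)                       # p(y_next, y_past)
    p_yp = p_yp_xp.sum(axis=1)                    # p(y_past)
    te = 0.0
    for yn in range(2):
        for yp in range(2):
            for xp in range(2):
                if p[yn, yp, xp] > 0:
                    te += p[yn, yp, xp] * np.log2(
                        p[yn, yp, xp] * p_yp[yp] / (p_yp_xp[yp, xp] * p_yn_yp[yn, yp])
                    )
    return te

# toy usage: y is a delayed, noisy copy of x, so TE(x -> y) should be clearly positive
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1) & rng.integers(0, 2, 5000)
print(mutual_information(x, y), transfer_entropy(x, y))
```

Both estimators are biased upward for short recordings, which is exactly the small-dataset regime the abstract examines; the ‘direct’ method mentioned above is a separate, repeat-based estimation procedure not shown here.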


Author(s):  
David Sutter ◽  
Omar Fawzi ◽  
Renato Renner

A central question in quantum information theory is to determine how well lost information can be reconstructed. Crucially, the corresponding recovery operation should perform well without knowing the information to be reconstructed. In this work, we show that the quantum conditional mutual information measures the performance of such recovery operations. More precisely, we prove that the conditional mutual information $I(A:C \mid B)$ of a tripartite quantum state $\rho_{ABC}$ can be bounded from below by its distance to the closest recovered state $\mathcal{R}_{B \to BC}(\rho_{AB})$, where the $C$-part is reconstructed from the $B$-part only and the recovery map $\mathcal{R}_{B \to BC}$ merely depends on $\rho_{BC}$. One particular application of this result implies the equivalence between two different approaches to define topological order in quantum systems.
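For orientation (an editor's note; the precise distance measure used in the paper may differ from this commonly quoted form), the conditional mutual information is built from von Neumann entropies,

$$ I(A:C \mid B)_\rho \;=\; S(\rho_{AB}) + S(\rho_{BC}) - S(\rho_{B}) - S(\rho_{ABC}), $$

and recoverability bounds of the kind described above are often stated as

$$ I(A:C \mid B)_\rho \;\ge\; -2\log_2 F\!\big(\rho_{ABC},\, \mathcal{R}_{B\to BC}(\rho_{AB})\big), $$

where $F$ is the fidelity; a small conditional mutual information therefore forces the recovered state to be close to the original.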


2013 ◽  
Vol 135 (6) ◽  
Author(s):  
Guoliang Wang ◽  
Hongyi Li

This paper considers the H∞ control problem for a class of singular Markovian jump systems (SMJSs) in which the jumping signal is not always available. The main contribution is a new approach to designing a mode-independent (MI) H∞ controller by exploiting the nonfragile method. Based on this method, a unified control approach is presented that establishes a direct connection between mode-dependent (MD) and mode-independent controllers, with existence conditions for both given in terms of linear matrix inequalities. Moreover, three further cases of the transition probability rate matrix (TRPM) are analyzed: with elementwise bounded uncertainties, partially unknown, and to be designed, respectively. Numerical examples demonstrate the effectiveness of the proposed methods.
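As an editorial aside, the LMI machinery behind such existence conditions can be illustrated with the standard bounded real lemma for an ordinary LTI system (deliberately simpler than the singular Markovian jump setting of the paper); a minimal cvxpy sketch might read:

```python
import numpy as np
import cvxpy as cp

# Toy stable plant: x' = A x + B w, z = C x + D w (not the SMJS model of the paper)
A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])
B = np.array([[1.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m, p = 2, 1, 1

P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable(nonneg=True)

# Bounded real lemma: ||G||_inf < gamma iff some P > 0 makes this block matrix negative definite
M = cp.bmat([
    [A.T @ P + P @ A, P @ B,              C.T],
    [B.T @ P,         -gamma * np.eye(m), D.T],
    [C,               D,                  -gamma * np.eye(p)],
])
M = 0.5 * (M + M.T)   # already symmetric when P = P.T; explicit symmetrization is harmless
eps = 1e-6
constraints = [P >> eps * np.eye(n), M << -eps * np.eye(n + m + p)]

prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve()
print("H-infinity norm upper bound:", gamma.value)
```

The mode-dependent and mode-independent conditions in the paper are coupled families of such inequalities, one per jump mode, with additional terms for the transition rates.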


PLoS ONE ◽  
2016 ◽  
Vol 11 (12) ◽  
pp. e0166868 ◽  
Author(s):  
Osvaldo A. Rosso ◽  
Raydonal Ospina ◽  
Alejandro C. Frery

2021 ◽  
Vol 2021 (9) ◽  
Author(s):  
Alex May

We prove a theorem showing that the existence of “private” curves in the bulk of AdS implies two regions of the dual CFT share strong correlations. A private curve is a causal curve which avoids the entanglement wedge of a specified boundary region $\mathcal{U}$. The implied correlation is measured by the conditional mutual information $I(\mathcal{V}_1 : \mathcal{V}_2 \mid \mathcal{U})$, which is $O(1/G_N)$ when a private causal curve exists. The regions $\mathcal{V}_1$ and $\mathcal{V}_2$ are specified by the endpoints of the causal curve and the placement of the region $\mathcal{U}$. This gives a causal perspective on the conditional mutual information in AdS/CFT, analogous to the causal perspective on the mutual information given by earlier work on the connected wedge theorem. We give an information theoretic argument for our theorem, along with a bulk geometric proof. In the geometric perspective, the theorem follows from the maximin formula and entanglement wedge nesting. In the information theoretic approach, the theorem follows from resource requirements for sending private messages over a public quantum channel.
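As a reference point (the standard definition, not a result specific to this paper), the conditional mutual information appearing here is built from entanglement entropies of the boundary regions:

$$ I(\mathcal{V}_1 : \mathcal{V}_2 \mid \mathcal{U}) \;=\; S(\mathcal{V}_1 \mathcal{U}) + S(\mathcal{V}_2 \mathcal{U}) - S(\mathcal{U}) - S(\mathcal{V}_1 \mathcal{V}_2 \mathcal{U}), $$

so an $O(1/G_N)$ value signals correlations at the same order as the leading holographic entanglement entropies.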


1987 ◽  
Vol 19 (3) ◽  
pp. 385-394 ◽  
Author(s):  
J R Roy

When information theory is used to develop forecasting models, two alternative approaches are available, based either on Shannon entropy or on Kullback information gain. In this paper, a new approach is presented, which combines the usually superior statistical inference powers of the Kullback procedure with the advantages of the availability of calibrated ‘elasticity’ parameters in the Shannon approach. Situations are discussed where the combined approach is preferable to either of the two existing procedures, and the principles are illustrated with the help of a small numerical example.
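To make the contrast concrete (standard formulations, not taken from the paper), the Shannon approach selects the distribution $p$ of maximum entropy subject to calibration constraints, while the Kullback approach minimizes the information gain relative to a prior $q$ under the same constraints:

$$ \max_{p} \; -\sum_i p_i \log p_i \qquad \text{versus} \qquad \min_{p} \; \sum_i p_i \log \frac{p_i}{q_i}. $$

The combined approach proposed here aims to retain the Kullback-style inference while keeping the calibrated elasticity parameters of the Shannon formulation.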

