Mutual information rate between stationary Gaussian processes

2020 ◽ Vol 7 ◽ pp. 100107
Author(s): Arash Komaee
Entropy ◽ 2021 ◽ Vol 23 (5) ◽ pp. 533
Author(s): Milan S. Derpich ◽ Jan Østergaard

We present novel data-processing inequalities relating the mutual information and the directed information in systems with feedback. The internal blocks within such systems are restricted only to be causal mappings, but may be non-linear, time-varying, and randomized by their own external random inputs; such randomized blocks can realize any stochastic mapping and can represent, for example, source encoders, decoders, or even communication channels. Moreover, the involved signals can be arbitrarily distributed. Our first main result relates mutual and directed information and can be interpreted as a law of conservation of information flow. Our second main result is a pair of data-processing inequalities (one being the conditional version of the other) between nested pairs of random sequences entirely within the closed loop. Our third main result introduces and characterizes the notion of in-the-loop (ITL) transmission rate for channel coding scenarios in which the messages are internal to the loop. Interestingly, in this case the conventional notions of transmission rate, associated with the entropy of the messages, and of channel capacity, based on maximizing the mutual information between the messages and the output, turn out to be inadequate. Instead, as we show, the ITL transmission rate is the unique notion of rate for which a channel code attains zero error probability if and only if the ITL rate does not exceed the corresponding directed information rate from messages to decoded messages. We apply our data-processing inequalities to show that the supremum of achievable (in the usual channel coding sense) ITL transmission rates is upper bounded by the supremum of the directed information rate across the communication channel. Moreover, we present an example in which this upper bound is attained.
Finally, we further illustrate the applicability of our results by discussing how they make possible the generalization of two fundamental inequalities known in networked control literature.
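The gap between mutual and directed information under feedback can be illustrated on a toy example. The sketch below (an illustration under assumed parameters, not a construction from the paper) computes both the mutual information I(X^2; Y^2) and Massey's directed information I(X^2 → Y^2) = I(X_1; Y_1) + I(X_1, X_2; Y_2 | Y_1) for a two-step binary symmetric channel whose second input is fed back from the first output; the crossover probability 0.1 and the feedback rule X_2 = Y_1 are hypothetical choices.

```python
import numpy as np

def kl_bits(p, q):
    # sum p * log2(p / q) over entries where p > 0
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

eps = 0.1
bsc = np.array([[1 - eps, eps], [eps, 1 - eps]])  # bsc[x, y] = p(y | x)

# Joint pmf p[x1, x2, y1, y2] for a feedback loop:
# X1 ~ Bern(1/2), Y1 = X1 xor N1, X2 = Y1 (feedback), Y2 = X2 xor N2.
p = np.zeros((2, 2, 2, 2))
for x1 in range(2):
    for y1 in range(2):
        for y2 in range(2):
            p[x1, y1, y1, y2] = 0.5 * bsc[x1, y1] * bsc[y1, y2]

# Mutual information I(X^2; Y^2)
px = p.sum(axis=(2, 3))                       # p(x1, x2)
py = p.sum(axis=(0, 1))                       # p(y1, y2)
I_mut = kl_bits(p, px[:, :, None, None] * py[None, None, :, :])

# Directed information I(X^2 -> Y^2) = I(X1; Y1) + I(X1, X2; Y2 | Y1)
p_x1y1 = p.sum(axis=(1, 3))
I1 = kl_bits(p_x1y1,
             p_x1y1.sum(1, keepdims=True) * p_x1y1.sum(0, keepdims=True))
p_y1 = p.sum(axis=(0, 1, 3))
p_x1x2y1 = p.sum(axis=3)
p_y1y2 = p.sum(axis=(0, 1))
# I(X1,X2; Y2 | Y1) = sum p * log2( p * p(y1) / (p(x1,x2,y1) * p(y1,y2)) )
denom = (p_x1x2y1[:, :, :, None] * p_y1y2[None, None, :, :]
         / p_y1[None, None, :, None])
I2 = kl_bits(p, denom)
I_dir = I1 + I2
```

Because the second channel input is a function of the first output, the directed information (1 − h2(0.1) ≈ 0.53 bits) is strictly smaller than the mutual information (exactly 1 bit): the feedback path inflates I(X^2; Y^2) with information that never crossed the channel, which is exactly why directed information is the operative quantity in closed-loop settings.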


2015 ◽ Vol 113 (5) ◽ pp. 1342-1357
Author(s): Davide Bernardi ◽ Benjamin Lindner

How sensory neurons encode and process time-dependent signals in sequences of action potentials is still a challenging theoretical problem. Although, with some effort, it is possible to quantify the flow of information in the model-free framework of Shannon's information theory, this yields just a single number, the mutual information rate. This rate does not indicate which aspects of the stimulus are encoded. Several studies have identified mechanisms at the cellular and network level leading to low- or high-pass filtering of information, i.e., the selective coding of slow or fast stimulus components. However, these findings rely on an approximation, specifically, on the qualitative behavior of the coherence function, an approximate frequency-resolved measure of information flow, whose quality is generally unknown. Here, we develop an assumption-free method to measure a frequency-resolved information rate about a time-dependent Gaussian stimulus. We demonstrate its application for three paradigmatic descriptions of neural firing: an inhomogeneous Poisson process that carries a signal in its instantaneous firing rate; an integrator neuron (stochastic integrate-and-fire model) driven by a time-dependent stimulus; and the synchronous spikes fired by two commonly driven integrator neurons. In agreement with previous coherence-based estimates, we find that Poisson and integrate-and-fire neurons are broadband and low-pass filters of information, respectively. The band-pass information filtering observed in the coherence of synchronous spikes is confirmed by our frequency-resolved information measure in some but not all parameter configurations. Our results also explicitly show how the response-response coherence can fail as an upper bound on the information rate.
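The coherence-based estimate that this abstract contrasts with its assumption-free method is the classical Gaussian-channel lower bound on the information rate, R_LB = −∫ log2(1 − C(f)) df, where C(f) is the stimulus-response coherence. The numpy sketch below (not the authors' estimator; the "response" here is just a hypothetical low-pass-filtered copy of a Gaussian stimulus plus independent noise) estimates the coherence by Welch-style segment averaging and shows the resulting low-pass shape of the frequency-resolved bound.

```python
import numpy as np

def welch_csd(x, y, nseg):
    """Cross-spectral density averaged over non-overlapping Hann-windowed segments."""
    win = np.hanning(nseg)
    nblocks = len(x) // nseg
    acc = np.zeros(nseg // 2 + 1, dtype=complex)
    for k in range(nblocks):
        xs = np.fft.rfft(win * x[k * nseg:(k + 1) * nseg])
        ys = np.fft.rfft(win * y[k * nseg:(k + 1) * nseg])
        acc += np.conj(xs) * ys
    return acc / nblocks

rng = np.random.default_rng(1)
fs, nseg = 1000.0, 512            # assumed sampling rate (Hz) and segment length
n = nseg * 400
stim = rng.standard_normal(n)     # Gaussian stimulus
# hypothetical "response": low-pass filtered stimulus plus independent noise
resp = np.convolve(stim, 0.2 * 0.8 ** np.arange(50), mode="same")
resp = resp + 0.5 * rng.standard_normal(n)

Sxx = welch_csd(stim, stim, nseg).real
Syy = welch_csd(resp, resp, nseg).real
Sxy = welch_csd(stim, resp, nseg)
C = np.abs(Sxy) ** 2 / (Sxx * Syy)        # coherence, in [0, 1]

freqs = np.fft.rfftfreq(nseg, d=1.0 / fs)
df = freqs[1] - freqs[0]
R_lb = -np.sum(np.log2(1.0 - C)) * df     # lower bound on the info rate, bits/s
```

For this low-pass system the coherence (and hence the integrand of R_LB) is largest at low frequencies, the signature of low-pass information filtering; the abstract's point is that such coherence-based pictures are only approximate, which motivates the frequency-resolved measure developed in the paper.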

