Directed Data-Processing Inequalities for Systems with Feedback

Entropy ◽  
2021 ◽  
Vol 23 (5) ◽  
pp. 533
Author(s):  
Milan S. Derpich ◽  
Jan Østergaard

We present novel data-processing inequalities relating the mutual information and the directed information in systems with feedback. The internal deterministic blocks within such systems are restricted only to be causal mappings, but are otherwise allowed to be non-linear and time-varying; randomized by their own external random inputs, they can yield any stochastic mapping. These randomized blocks can, for example, represent source encoders, decoders, or even communication channels. Moreover, the involved signals can be arbitrarily distributed. Our first main result relates mutual and directed information and can be interpreted as a law of conservation of information flow. Our second main result is a pair of data-processing inequalities (one the conditional version of the other) between nested pairs of random sequences entirely within the closed loop. Our third main result introduces and characterizes the notion of in-the-loop (ITL) transmission rate for channel coding scenarios in which the messages are internal to the loop. Interestingly, in this case the conventional notions of transmission rate, associated with the entropy of the messages, and of channel capacity, based on maximizing the mutual information between the messages and the output, turn out to be inadequate. Instead, as we show, the ITL transmission rate is the unique notion of rate for which a channel code attains zero error probability if and only if such an ITL rate does not exceed the corresponding directed information rate from messages to decoded messages. We apply our data-processing inequalities to show that the supremum of achievable (in the usual channel coding sense) ITL transmission rates is upper bounded by the supremum of the directed information rate across the communication channel. Moreover, we present an example in which this upper bound is attained. Finally, we further illustrate the applicability of our results by discussing how they enable the generalization of two fundamental inequalities known in the networked control literature.
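For readers unfamiliar with the central quantity, the directed information from X^n to Y^n is conventionally defined as I(X^n -> Y^n) = sum_{i=1}^{n} I(X^i; Y_i | Y^{i-1}). The sketch below computes a plug-in estimate of this standard definition from many sampled paths; the binary alphabet, the horizon and the toy memoryless channel are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (not the paper's construction): plug-in estimate of the
# directed information I(X^n -> Y^n) = sum_i I(X^i; Y_i | Y^{i-1}) for short
# discrete sequences, from the empirical distribution over many sample paths.
import numpy as np
from collections import Counter

def conditional_mutual_information(triples):
    """I(A; B | C) in bits from a list of (a, b, c) samples (plug-in estimate)."""
    n = len(triples)
    n_abc = Counter(triples)
    n_c = Counter(c for _, _, c in triples)
    n_ac = Counter((a, c) for a, _, c in triples)
    n_bc = Counter((b, c) for _, b, c in triples)
    cmi = 0.0
    for (a, b, c), count in n_abc.items():
        cmi += (count / n) * np.log2(count * n_c[c] / (n_ac[(a, c)] * n_bc[(b, c)]))
    return cmi

def directed_information(X, Y):
    """X, Y: arrays of shape (num_paths, horizon) holding discrete symbols."""
    di = 0.0
    for i in range(X.shape[1]):
        triples = [(tuple(X[k, :i + 1]), Y[k, i], tuple(Y[k, :i]))
                   for k in range(X.shape[0])]
        di += conditional_mutual_information(triples)
    return di

# Toy feedback-free check: Y_i is a noisy copy of X_i, so the directed
# information essentially equals the mutual information I(X^n; Y^n).
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(20000, 3))
Y = np.where(rng.random(X.shape) < 0.9, X, 1 - X)
print(directed_information(X, Y))
```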

Author(s):  
D. Van Dyck

An (electron) microscope can be considered as a communication channel that transfers structural information between an object and an observer. In electron microscopy this information is carried by electrons. According to Shannon's theory, the maximal information rate (or capacity) of a communication channel is given by C = B log2(1 + S/N) bits/s, where B is the bandwidth, and S and N are the average signal power and the noise power at the output, respectively. We will now apply this result to study the information transfer in an electron microscope. For simplicity we will assume the object and the image to be one-dimensional (the results can straightforwardly be generalized). An imaging device can be characterized by its transfer function, which describes the magnitude with which a spatial frequency g is transferred through the device; n denotes the noise. Usually, the resolution ρ of the instrument is defined from the cut-off 1/ρ beyond which no spatial information is transferred.
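As a quick illustration of the quoted formula, the following sketch evaluates C = B log2(1 + S/N) for a hypothetical bandwidth and signal-to-noise ratio; the numbers are invented for illustration and are not values from the article.

```python
# Minimal sketch of the Shannon capacity formula quoted above.
import math

def channel_capacity(bandwidth_hz: float, signal_power: float, noise_power: float) -> float:
    """C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + signal_power / noise_power)

# Example: 1 MHz bandwidth and a 100:1 signal-to-noise power ratio (hypothetical).
print(channel_capacity(1e6, 100.0, 1.0))  # roughly 6.66e6 bits/s
```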


1978 ◽  
Vol 17 (01) ◽  
pp. 36-40 ◽  
Author(s):  
J.-P. Durbec ◽  
Jaqueline Cornée ◽  
P. Berthezene

The practice of systematic examinations in hospitals and the increasing development of automatic data processing permit the storing of a great deal of information about a large number of patients belonging to different diagnosis groups. To predict or to characterize these diagnosis groups, some descriptors are particularly useful, while others carry no information. Data screening based on the properties of mutual information and on the log cross-product ratios in contingency tables is developed. The most useful descriptors are selected. For each one the characterized groups are specified. This approach has been performed on a set of binary (presence-absence) radiological variables. Four diagnosis groups are concerned: cancer of the pancreas, chronic calcifying pancreatitis, non-calcifying pancreatitis and probable pancreatitis. Only twenty of the three hundred and forty initial radiological variables are selected. The presence of each corresponding sign is associated with one or more diagnosis groups.
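The two screening quantities mentioned above can be illustrated on a 2x2 sign-by-diagnosis contingency table; the counts below are invented for illustration and do not come from the study.

```python
# Minimal sketch: mutual information between a binary sign and a diagnosis
# group, and the log cross-product (odds) ratio, from a 2x2 table of counts.
import numpy as np

def mutual_information_bits(table):
    """I(sign; group) in bits from a contingency table of counts."""
    p = np.asarray(table, dtype=float)
    p /= p.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))

def log_cross_product_ratio(table):
    """log[(n11 * n22) / (n12 * n21)] for a 2x2 table of counts."""
    (n11, n12), (n21, n22) = np.asarray(table, dtype=float)
    return float(np.log((n11 * n22) / (n12 * n21)))

# Hypothetical counts: rows = sign present/absent, columns = two diagnosis groups.
table = [[30, 5],
         [10, 55]]
print(mutual_information_bits(table), log_cross_product_ratio(table))
```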


Energies ◽  
2019 ◽  
Vol 12 (18) ◽  
pp. 3429 ◽  
Author(s):  
Chu ◽  
Yuan ◽  
Hu ◽  
Pan ◽  
Pan

With the increasing size and flexibility of modern grid-connected wind turbines, advanced control algorithms are urgently needed, especially for multi-degree-of-freedom control of the blade pitches and the sizable rotor. However, the complex dynamics of wind turbines are difficult to model in a simplified state-space form for advanced control design considering stability. In this paper, grey-box parameter identification of critical mechanical models is systematically studied without an excitation experiment, and the applicability of different methods is compared from the viewpoint of control design. Firstly, through mechanism analysis, the Hammerstein structure is adopted for mechanical-side modeling of wind turbines. Under closed-loop control across the whole wind speed range, the structural identifiability of the drive-train model is analyzed qualitatively. Then, mutual information among the identified variables is calculated to quantitatively reveal the relationship between identification accuracy and the variables’ relevance. Next, methods such as subspace identification, recursive least-squares identification and optimal identification are compared for a two-mass model and a tower model. Finally, multivariable datasets are produced for study through a high-fidelity simulation of a 2 MW wind turbine in the GH Bladed software. The results show that the Hammerstein structure is effective in simplifying the modeling process, and that closed-loop identification of a two-mass model without an excitation experiment is feasible. Meanwhile, it is found that the variables’ relevance has an obvious influence on identification accuracy, for which mutual information is a good indicator: higher mutual information often yields better accuracy. Additionally, the three identification methods show diverse performance levels, indicating their application potential for different control design algorithms. In contrast, grey-box optimal parameter identification is the most promising for advanced control design considering stability, although its simplified representation of complex mechanical dynamics needs additional dynamic compensation, which will be studied in future work.
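As an illustration of one of the compared methods, the sketch below implements a generic recursive least-squares update for a linear-in-parameters regression; the regressor construction for the actual two-mass drive-train model is not shown, and the forgetting factor and toy data are assumptions, not the paper's settings.

```python
# Minimal sketch of a recursive least-squares (RLS) update with forgetting,
# for a generic regression y_k = phi_k^T theta + e_k.
import numpy as np

def rls_update(theta, P, phi, y, forgetting=0.99):
    """One RLS step; returns the updated parameter estimate and covariance."""
    phi = phi.reshape(-1, 1)
    gain = (P @ phi) / (forgetting + float(phi.T @ P @ phi))
    error = y - float(phi.T @ theta)           # prediction error
    theta = theta + gain * error               # parameter update
    P = (P - gain @ phi.T @ P) / forgetting    # covariance update
    return theta, P

# Toy usage: recover theta_true = [2.0, -0.5] from noisy regression data.
rng = np.random.default_rng(1)
theta_true = np.array([[2.0], [-0.5]])
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)
for _ in range(500):
    phi = rng.standard_normal(2)
    y = float(phi @ theta_true) + 0.01 * rng.standard_normal()
    theta, P = rls_update(theta, P, phi, y)
print(theta.ravel())  # close to [2.0, -0.5]
```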


2019 ◽  
pp. 18-28
Author(s):  
Boris Andrievsky ◽  
Yury Orlov

The paper is devoted to the numerical performance evaluation of the speed-gradient algorithms recently developed in (Orlov et al., 2018; Orlov et al., 2019) for controlling the energy of sine-Gordon spatially distributed systems with several in-domain actuators. The influence of level quantization of the state-feedback control signal (possibly coupled with time sampling) on the steady-state energy error and the closed-loop system stability is investigated in a simulation study. The following types of quantization are taken into account: time sampling of the control signal; level quantization of the control signal, continuous in time; level quantization jointly with time sampling; control signal transmission over a binary communication channel with a time-invariant first-order coder; transmission over a binary communication channel with a first-order coder and time-based zooming; and transmission over a binary communication channel with an adaptive first-order coder. The resulting impact on the closed-loop performance is assessed for each type of quantization involved.
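The sketch below illustrates, under simplifying assumptions, the flavor of a one-bit coder with time-based zooming of the kind listed above: only the sign of the prediction error is transmitted, and coder and decoder shrink the step size according to the same schedule, so the estimate can be reconstructed from the bit stream alone. It is not the authors' coder; the test signal, initial step size and zooming factor are invented for illustration.

```python
# Minimal sketch of a one-bit (binary) coder with time-based zooming.
import numpy as np

def binary_coder_with_zooming(signal, step0=1.0, zoom=0.995):
    estimate, step = 0.0, step0
    bits, reconstruction = [], []
    for u in signal:
        bit = 1 if u >= estimate else -1   # one bit transmitted per sample
        estimate += bit * step             # decoder applies the same update
        step *= zoom                       # time-based zooming schedule
        bits.append(bit)
        reconstruction.append(estimate)
    return np.array(bits), np.array(reconstruction)

# Toy usage: track a slowly varying control signal with one bit per sample.
t = np.linspace(0.0, 10.0, 500)
u = np.sin(0.5 * t)
_, u_hat = binary_coder_with_zooming(u, step0=0.5, zoom=0.995)
print(float(np.abs(u - u_hat)[-1]))  # residual error on the order of the final step
```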


2015 ◽  
Vol 113 (5) ◽  
pp. 1342-1357 ◽  
Author(s):  
Davide Bernardi ◽  
Benjamin Lindner

The encoding and processing of time-dependent signals into sequences of action potentials of sensory neurons is still a challenging theoretical problem. Although, with some effort, it is possible to quantify the flow of information in the model-free framework of Shannon's information theory, this yields just a single number, the mutual information rate. This rate does not indicate which aspects of the stimulus are encoded. Several studies have identified mechanisms at the cellular and network level leading to low- or high-pass filtering of information, i.e., the selective coding of slow or fast stimulus components. However, these findings rely on an approximation, specifically, on the qualitative behavior of the coherence function, an approximate frequency-resolved measure of information flow, whose quality is generally unknown. Here, we develop an assumption-free method to measure a frequency-resolved information rate about a time-dependent Gaussian stimulus. We demonstrate its application for three paradigmatic descriptions of neural firing: an inhomogeneous Poisson process that carries a signal in its instantaneous firing rate; an integrator neuron (stochastic integrate-and-fire model) driven by a time-dependent stimulus; and the synchronous spikes fired by two commonly driven integrator neurons. In agreement with previous coherence-based estimates, we find that Poisson and integrate-and-fire neurons are broadband and low-pass filters of information, respectively. The band-pass information filtering observed in the coherence of synchronous spikes is confirmed by our frequency-resolved information measure in some but not all parameter configurations. Our results also explicitly show how the response-response coherence can fail as an upper bound on the information rate.
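The coherence-based estimate that this work goes beyond can be illustrated as follows: for a Gaussian stimulus, the stimulus-response coherence C(f) yields the standard frequency-resolved lower bound R_LB = -∫ log2(1 - C(f)) df on the mutual information rate. The sketch below uses an invented filtered-plus-noise surrogate response in place of recorded spike trains; it shows only this conventional bound, not the assumption-free method introduced in the paper.

```python
# Minimal sketch: coherence-based lower bound on the mutual information rate.
import numpy as np
from scipy.signal import coherence

fs = 1000.0  # sampling rate in Hz (illustrative)
rng = np.random.default_rng(2)
stimulus = rng.standard_normal(200_000)
# Hypothetical "response": low-pass filtered stimulus plus noise, standing in
# for a measured neural response.
kernel = np.exp(-np.arange(50) / 10.0)
response = np.convolve(stimulus, kernel, mode="same") + rng.standard_normal(stimulus.size)

f, C = coherence(stimulus, response, fs=fs, nperseg=1024)
info_density = -np.log2(1.0 - np.clip(C, 0.0, 1.0 - 1e-12))   # bits/s per Hz
# Trapezoidal integration over frequency gives the rate bound in bits/s.
rate_lower_bound = float(np.sum(0.5 * (info_density[1:] + info_density[:-1]) * np.diff(f)))
print(rate_lower_bound)
```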


2017 ◽  
Vol 825 ◽  
pp. 704-742 ◽  
Author(s):  
Jose M. Pozo ◽  
Arjan J. Geers ◽  
Maria-Cruz Villa-Uriol ◽  
Alejandro F. Frangi

Flow complexity is related to a number of phenomena in science and engineering and has been approached from the perspective of chaotic dynamical systems, ergodic processes or mixing of fluids, just to name a few. To the best of our knowledge, all existing methods to quantify flow complexity are only valid for infinite time evolution, for closed systems or for mixing of two substances. We introduce an index of flow complexity coined interlacing complexity index (ICI), valid for a single-phase flow in an open system with inlet and outlet regions, involving finite times. ICI is based on Shannon’s mutual information (MI), and inspired by an analogy between inlet–outlet open flow systems and communication systems in communication theory. The roles of transmitter, receiver and communication channel are played, respectively, by the inlet, the outlet and the flow transport between them. A perfectly laminar flow in a straight tube can be compared to an ideal communication channel where the transmitted and received messages are identical and hence the MI between input and output is maximal. For more complex flows, generated by more intricate conditions or geometries, the ability to discriminate the outlet position by knowing the inlet position is decreased, reducing the corresponding MI. The behaviour of the ICI has been tested with numerical experiments on diverse flow cases. The results indicate that the ICI provides a sensitive complexity measure with intuitive interpretation in a diversity of conditions and in agreement with other observations, such as Dean vortices and subjective visual assessments. As a crucial component of the ICI formulation, we also introduce the natural distribution of streamlines and the natural distribution of world-lines, with invariance properties with respect to the cross-section used to parameterize them, valid for any type of mass-preserving flow.
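The inlet-outlet analogy can be made concrete with a simplified plug-in estimate of the mutual information between binned inlet and outlet positions of traced streamlines. This is not the ICI's exact formulation (which relies on the natural distributions introduced in the paper), and the two synthetic transport models below are assumptions chosen only to contrast laminar-like and well-mixed behaviour.

```python
# Minimal sketch: MI between binned inlet and outlet positions of streamlines.
import numpy as np

def mutual_information_bits(x_inlet, y_outlet, bins=16):
    """Plug-in MI (bits) between binned inlet and outlet coordinates."""
    joint, _, _ = np.histogram2d(x_inlet, y_outlet, bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    mask = p > 0
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))

rng = np.random.default_rng(3)
inlet = rng.uniform(0.0, 1.0, 50_000)
# Laminar-like transport: the outlet position tracks the inlet position closely,
# so the MI is high; a well-mixed flow scrambles the positions and the MI drops.
outlet_laminar = np.clip(inlet + 0.01 * rng.standard_normal(inlet.size), 0.0, 1.0)
outlet_mixed = rng.uniform(0.0, 1.0, inlet.size)
print(mutual_information_bits(inlet, outlet_laminar),
      mutual_information_bits(inlet, outlet_mixed))
```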


Entropy ◽  
2019 ◽  
Vol 21 (8) ◽  
pp. 778 ◽  
Author(s):  
Amos Lapidoth ◽  
Christoph Pfister

Two families of dependence measures between random variables are introduced. They are based on the Rényi divergence of order α and the relative α-entropy, respectively, and both dependence measures reduce to Shannon’s mutual information when their order α is one. The first measure shares many properties with the mutual information, including the data-processing inequality, and can be related to the optimal error exponents in composite hypothesis testing. The second measure does not satisfy the data-processing inequality, but appears naturally in the context of distributed task encoding.
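For reference, the Rényi divergence of order α that underlies the first family is conventionally defined as D_α(P‖Q) = (1/(α−1)) log Σ_x P(x)^α Q(x)^{1−α}, and it tends to the Kullback-Leibler divergence as α → 1. The sketch below evaluates this standard definition (in bits); the distributions are illustrative and the paper's dependence measures themselves are not implemented here.

```python
# Minimal sketch of the Rényi divergence of order alpha (in bits) and its
# Kullback-Leibler limit as alpha -> 1, for discrete distributions with full support.
import numpy as np

def renyi_divergence_bits(p, q, alpha):
    p, q = np.asarray(p, float), np.asarray(q, float)
    if np.isclose(alpha, 1.0):  # KL divergence limit
        return float(np.sum(p * np.log2(p / q)))
    return float(np.log2(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0))

p = [0.7, 0.2, 0.1]
q = [0.4, 0.4, 0.2]
print(renyi_divergence_bits(p, q, 0.5),
      renyi_divergence_bits(p, q, 0.999),  # close to the KL divergence
      renyi_divergence_bits(p, q, 1.0))
```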

