Nonextensive Variational Principles for Rate Distortion Theory and the Information Bottleneck Method

Author(s): R. C. Venkatesan


Author(s): David Abel, Dilip Arumugam, Kavosh Asadi, Yuu Jinnai, Michael L. Littman, ...

State abstraction can give rise to models of environments that are both compressed and useful, thereby enabling efficient sequential decision making. In this work, we offer the first formalism and analysis of the trade-off between compression and performance that arises in state abstraction for Apprenticeship Learning. We build on rate-distortion theory, the classic Blahut-Arimoto algorithm, and the information bottleneck method to develop an algorithm for computing state abstractions that approximate the optimal trade-off between compression and performance. We illustrate the power of this algorithmic structure to offer insights into effective abstraction, compression, and reinforcement learning through a mixture of analysis, visuals, and experimentation.
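The abstract builds on the classic Blahut-Arimoto iteration for rate-distortion. As a point of reference only, here is a minimal NumPy sketch of that standard iteration, not the authors' state-abstraction variant; the names `p_x`, `dist`, and `beta` are illustrative:

```python
import numpy as np

def blahut_arimoto(p_x, dist, beta, n_iters=200, tol=1e-9):
    """Classic Blahut-Arimoto iteration for the rate-distortion trade-off.

    p_x  : (n,) source distribution p(x)
    dist : (n, m) distortion matrix d(x, x_hat)
    beta : trade-off multiplier (larger beta -> lower distortion, higher rate)
    Returns the encoder p(x_hat | x) and the reproduction marginal q(x_hat).
    """
    n, m = dist.shape
    q = np.full(m, 1.0 / m)                      # uniform initial marginal
    for _ in range(n_iters):
        # Encoder update: p(x_hat|x) proportional to q(x_hat) * exp(-beta * d(x, x_hat))
        log_p = np.log(q)[None, :] - beta * dist
        p_cond = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        p_cond /= p_cond.sum(axis=1, keepdims=True)
        # Marginal update: q(x_hat) = sum_x p(x) p(x_hat|x)
        q_new = p_x @ p_cond
        if np.max(np.abs(q_new - q)) < tol:
            return p_cond, q_new
        q = q_new
    return p_cond, q
```

Alternating these two updates performs coordinate descent on the rate-distortion Lagrangian; sweeping `beta` traces out the rate-distortion curve that the state-abstraction trade-off above is modeled on.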


Entropy, 2019, Vol. 21 (10), p. 976
Author(s): Thanh Tang Nguyen, Jaesik Choi

While rate-distortion theory compresses data under a distortion constraint, the information bottleneck (IB) generalizes it to learning problems by replacing the distortion constraint with a constraint on relevant information. In this work, we further extend IB to multiple Markov bottlenecks (i.e., latent variables that form a Markov chain), which we call the Markov information bottleneck (MIB) and which fits the setting of stochastic neural networks (SNNs) better than the original IB does. We show that the bottlenecks of a non-collapsed MIB cannot simultaneously achieve their individual information optima, and we therefore devise a compromise objective. With MIB, we take the novel perspective that each layer of an SNN is a bottleneck whose learning goal is to encode relevant information from the data in compressed form. Inference from a hidden layer to the output layer is then interpreted as a variational approximation to that layer's decoding of relevant information in the MIB. As a consequence of this perspective, the maximum likelihood estimation (MLE) principle for SNNs becomes a special case of the variational MIB. We show that, compared with MLE, the variational MIB encourages better information flow in SNNs both in principle and in practice, and it empirically improves classification, adversarial robustness, and multi-modal learning on MNIST.
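For orientation, the per-layer objective MIB chains together reduces, in the single-bottleneck case, to the familiar variational IB loss (in the style of Alemi et al.'s deep variational IB). A minimal PyTorch sketch of that single-layer loss follows; it is not the paper's multi-bottleneck compromise objective, and the function and argument names are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def vib_loss(logits, mu, logvar, targets, beta):
    """Single-bottleneck variational IB loss (illustrative sketch).

    logits     : decoder output q(y|t) for a sampled code t, shape (B, C)
    mu, logvar : parameters of the Gaussian encoder p(t|x), shape (B, D)
    targets    : class labels, shape (B,)
    beta       : weight on the compression term I(X;T)
    """
    # Relevance term: cross-entropy is a variational bound tied to I(T;Y)
    ce = F.cross_entropy(logits, targets)
    # Compression term: KL( N(mu, exp(logvar)) || N(0, I) ) upper-bounds I(X;T)
    kl = -0.5 * torch.mean(
        torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    )
    return ce + beta * kl
```

Setting `beta = 0` recovers ordinary maximum-likelihood training, which mirrors the paper's claim that MLE is a special case of the variational MIB.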


2021, pp. 30-35
Author(s): Vadim Gribunin, Andrey Timonov
Purpose of the article: to optimize the choice of information security tools in a multi-level automated system, taking into account higher levels, the quality indicators of the tools, and the overall financial budget, and to demonstrate that these problems are analogous to well-known problems from communication theory. Research method: optimal choice of information security tools based on risk analysis and the method of Lagrange multipliers; optimal bit-budget allocation based on the water-filling optimization algorithm; optimal placement of information security tools in a multi-level automated system based on bisection search. Obtained result: the article shows analogies between several problems of communication theory and the optimal choice of information security tools. The well-known problem of optimally choosing information security tools is solved using rate-distortion theory, and the well-known problem of optimally allocating the purchase budget is solved by analogy with the problem of distributing transmitter power. For the first time, the problem of optimally placing information security tools in a multi-level automated system is solved by analogy with distributing a total bit budget among quantizers.
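The abstract leans on two classical routines, water-filling and bisection search, which combine naturally: bisection finds the "water level" at which the budget is exactly spent. A minimal NumPy sketch of that generic allocation, under the assumption that per-channel cost behaves like noise power (the names `noise` and `budget` are illustrative, not the article's notation), might look like this:

```python
import numpy as np

def waterfilling(noise, budget, tol=1e-9):
    """Water-filling allocation via bisection search.

    Spend a total `budget` across channels so that
    power_i = max(level - noise_i, 0), where `level` is the common
    water level found by bisection on the total amount spent.
    """
    lo, hi = noise.min(), noise.max() + budget   # bracket the water level
    while hi - lo > tol:
        level = 0.5 * (lo + hi)
        spent = np.maximum(level - noise, 0.0).sum()
        if spent > budget:
            hi = level    # level too high: allocation overspends
        else:
            lo = level    # level too low: allocation underspends
    return np.maximum(0.5 * (lo + hi) - noise, 0.0)

# Example: allocate 10 units across four channels of increasing "noise"
alloc = waterfilling(np.array([1.0, 2.0, 3.0, 5.0]), 10.0)
```

In the article's analogy, channels correspond to levels of the automated system and power to the protection budget, so the same structure carries over to allocating funds for information security tools.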

