A Separation Principle for Control in the Age of Deep Learning

Author(s): Alessandro Achille, Stefano Soatto

We review the problem of defining and inferring a state for a control system based on complex, high-dimensional, highly uncertain measurement streams, such as videos. Such a state, or representation, should contain all and only the information needed for control and discount nuisance variability in the data. It should also have finite complexity, ideally modulated depending on available resources. This representation is what we want to store in memory in lieu of the data, as it separates the control task from the measurement process. For the trivial case with no dynamics, a representation can be inferred by minimizing the information bottleneck Lagrangian in a function class realized by deep neural networks. The resulting representation has much higher dimension than the data (already in the millions) but is smaller in the sense of information content, retaining only what is needed for the task. This process also yields representations that are invariant to nuisance factors and have maximally independent components. We extend these ideas to the dynamic case, where the representation is the posterior density of the task variable given the measurements up to the current time, which is in general much simpler than the prediction density maintained by the classical Bayesian filter. Again, this can be finitely parameterized using a deep neural network, and some applications are already beginning to emerge. No explicit assumption of Markovianity is needed; instead, complexity trades off approximation of an optimal representation, including the degree of Markovianity.
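The information bottleneck Lagrangian that the abstract minimizes over deep networks has a classical discrete ancestor that can be solved with a simple self-consistent iteration. A minimal NumPy sketch of that discrete case (the function name `ib_encoder` and the toy joint distribution are illustrative, not taken from the paper, which works with deep-network function classes):

```python
import numpy as np

def ib_encoder(p_xy, beta, n_t=2, iters=200, seed=0):
    """Iterative Information Bottleneck (discrete case):
    find an encoder p(t|x) minimizing I(X;T) - beta * I(T;Y),
    keeping only the information about X that is relevant to Y."""
    rng = np.random.default_rng(seed)
    eps = 1e-12
    p_x = p_xy.sum(axis=1)                        # marginal p(x)
    p_y_x = p_xy / p_x[:, None]                   # conditional p(y|x)
    p_t_x = rng.dirichlet(np.ones(n_t), size=p_xy.shape[0])
    for _ in range(iters):
        p_t = p_t_x.T @ p_x + eps                 # marginal p(t)
        # decoder p(y|t) by Bayes' rule
        p_y_t = (p_t_x * p_x[:, None]).T @ p_y_x / p_t[:, None]
        # KL(p(y|x) || p(y|t)) for every (x, t) pair
        ratio = (p_y_x[:, None, :] + eps) / (p_y_t[None, :, :] + eps)
        kl = np.einsum('xy,xty->xt', p_y_x, np.log(ratio))
        # encoder update: p(t|x) proportional to p(t) * exp(-beta * KL)
        logits = np.log(p_t)[None, :] - beta * kl
        p_t_x = np.exp(logits - logits.max(axis=1, keepdims=True))
        p_t_x /= p_t_x.sum(axis=1, keepdims=True)
    return p_t_x
```

Inputs x with identical predictive distributions p(y|x) receive identical encodings, which is the discrete analogue of the nuisance invariance the abstract describes: variability in x that carries no task information is discarded.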

2009, Vol. 21 (4), pp. 911–959
Author(s): Stefan Klampfl, Robert Legenstein, Wolfgang Maass

Independent component analysis (or blind source separation) is assumed to be an essential component of sensory processing in the brain and could provide a less redundant representation of the external world. Another powerful processing strategy is the optimization of internal representations according to the information bottleneck method. This method would allow extracting preferentially those components from high-dimensional sensory input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. However, there is a lack of models that could explain how spiking neurons could learn to execute either of these two processing strategies. We show in this article how stochastically spiking neurons with refractoriness could in principle learn in an unsupervised manner to carry out both information bottleneck optimization and the extraction of independent components. We derive suitable learning rules, which extend the well-known BCM rule, from abstract information optimization principles. These rules simultaneously keep the firing rate of the neuron within a biologically realistic range.
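The rules the paper derives extend the classical BCM rule, whose key ingredient is a sliding modification threshold that tracks the recent mean squared firing rate and thereby keeps the rate bounded. A minimal rate-based sketch of that base rule only (the function name `bcm_train`, the parameter values, and the rectified-rate neuron are illustrative assumptions; the paper's actual rules are derived for stochastically spiking neurons with refractoriness):

```python
import numpy as np

def bcm_train(X, eta=0.05, tau_theta=50.0, epochs=5, seed=0):
    """Classical BCM rule: dw = eta * y * (y - theta) * x, with a
    sliding threshold theta tracking <y^2>. Since theta grows with
    the squared rate, runaway potentiation is self-limiting, which
    keeps the firing rate in a bounded range."""
    rng = np.random.default_rng(seed)
    w = 0.1 * rng.random(X.shape[1])         # small positive initial weights
    theta = 1.0                              # initial modification threshold
    for _ in range(epochs):
        for x in X:
            y = max(w @ x, 0.0)              # rectified firing rate
            w += eta * y * (y - theta) * x   # BCM weight update
            theta += (y**2 - theta) / tau_theta  # sliding threshold <y^2>
    return w, theta
```

Because potentiation occurs only when the response exceeds the sliding threshold, a neuron trained on competing input patterns gradually becomes selective for one of them, which is the behavior the information-theoretic extensions build on.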


2005
Author(s): John W. Ruffner, Kaleb McDowell, Victor J. Paul, Harry J. Zywiol, Todd T. Mortsfield, ...

2008
Author(s): Helena Broberg, Michael Hildebrandt, Salvatore Massaiu, Per Oivind Braarud

2011
Author(s): Daniel Gartenberg, Malcolm McCurry, Greg Trafton

2011
Author(s): Yukio Horiguchi, Keisuke Yasuda, Hiroaki Nakanishi, Tetsuo Sawaragi

2011
Author(s): Maryam Ashoori, Catherine Burns, Kathryn Momtahan, Barbara d'Entremont
