Adaptive Optimal Control Without Weight Transport

2012 ◽  
Vol 24 (6) ◽  
pp. 1487-1518 ◽  
Author(s):  
Lakshminarayan V. Chinta ◽  
Douglas B. Tweed

Many neural control systems are at least roughly optimized, but how is optimal control learned? There are algorithms for this purpose, but in their current forms, they are not suited for biological neural networks because they rely on a type of communication that is not available in the brain, namely, weight transport—transmitting the strengths, or “weights,” of individual synapses to other synapses and neurons. Here we show how optimal control can be learned without weight transport. Our method involves a set of simple mechanisms that can compensate for the absence of weight transport in the brain and so may be useful for neural computation generally.
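The abstract does not spell out the mechanisms here, but the weight-transport problem itself is easy to illustrate: in backpropagation, the error signal sent to an upstream layer is computed with the transpose of the downstream weight matrix, which a biological synapse has no way to read out. The NumPy sketch below contrasts that step with a feedback-alignment-style workaround that routes the error through a fixed random feedback matrix instead; it is a minimal illustration of the general idea, not the authors' method, and all sizes and constants are assumptions.

```python
# Minimal NumPy sketch of the weight-transport issue, NOT the authors' method.
# Standard backprop sends the error back through W2.T (weight transport);
# a feedback-alignment-style variant uses a fixed random matrix B instead.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem for a 2-layer network (sizes are illustrative).
n_in, n_hid, n_out = 4, 16, 1
X = rng.normal(size=(256, n_in))
Y = np.sin(X @ rng.normal(size=(n_in, n_out)))

W1 = rng.normal(scale=0.5, size=(n_in, n_hid))
W2 = rng.normal(scale=0.5, size=(n_hid, n_out))
B = rng.normal(scale=0.5, size=(n_hid, n_out))    # fixed random feedback weights

def mse():
    return float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))

lr = 0.01
print("MSE before training:", round(mse(), 4))
for step in range(2000):
    # Forward pass
    h = np.tanh(X @ W1)
    y_hat = h @ W2
    err = y_hat - Y                                # dLoss/dy_hat for squared error

    # Backward pass: B.T replaces W2.T, so no weight transport is needed
    delta_h = (err @ B.T) * (1.0 - h ** 2)

    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)
print("MSE after training :", round(mse(), 4))
```

With the random feedback matrix B, the hidden layer still receives usable error information, because the forward weights tend to align with the feedback pathway over training.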

2021 ◽  
Vol 14 ◽  
Author(s):  
Hyojin Bae ◽  
Sang Jeong Kim ◽  
Chang-Eop Kim

One of the central goals in systems neuroscience is to understand how information is encoded in the brain, and the standard approach is to identify the relation between a stimulus and a neural response. However, the features of a stimulus are typically defined by the researcher's hypothesis, which can bias the conclusions. To illustrate these potential biases, we simulate four likely scenarios using deep neural networks trained on the image classification dataset CIFAR-10 and show how they can lead to selecting suboptimal or irrelevant features, or to overestimating the network's feature representation or noise correlation. Additionally, we present studies investigating neural coding principles in biological neural networks to which our points apply. This study aims not only to highlight the importance of careful assumptions and interpretations regarding the neural response to stimulus features, but also to suggest that comparative study of deep and biological neural networks from a machine learning perspective can be an effective strategy for understanding the coding principles of the brain.
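The four scenarios are not reproduced here, but the general analysis pattern at issue can be sketched: how strongly a layer appears to "encode" a feature depends on which candidate feature the researcher regresses the responses against. The toy example below uses synthetic activations in place of CIFAR-10-trained network responses; all variable names and the encoding-score metric are illustrative assumptions.

```python
# Hedged sketch of the analysis pattern the abstract warns about: the apparent
# strength of feature encoding depends on the researcher's choice of feature.
# Synthetic data stand in for real network activations.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_stim, n_units = 500, 100

# Ground-truth latent feature actually driving the "network" responses.
true_feature = rng.normal(size=(n_stim, 1))
responses = (true_feature @ rng.normal(size=(1, n_units))
             + 0.5 * rng.normal(size=(n_stim, n_units)))

# Two researcher-defined candidate features: one close to the truth,
# one only weakly correlated with it (a plausibly "suboptimal" choice).
good_feature = true_feature + 0.2 * rng.normal(size=(n_stim, 1))
poor_feature = 0.3 * true_feature + rng.normal(size=(n_stim, 1))

def encoding_score(feature, responses):
    """Cross-validated R^2 of predicting each unit from the feature, averaged."""
    scores = [cross_val_score(Ridge(alpha=1.0), feature, responses[:, i], cv=5).mean()
              for i in range(responses.shape[1])]
    return float(np.mean(scores))

print("encoding score, well-chosen feature  :", round(encoding_score(good_feature, responses), 3))
print("encoding score, poorly chosen feature:", round(encoding_score(poor_feature, responses), 3))
```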


1997 ◽  
Vol 20 (1) ◽  
pp. 80-80 ◽ 
Author(s):  
Paul Skokowski

Biological neural computation relies a great deal on architecture, which constrains the types of content that can be processed by distinct modules in the brain. Though artificial neural networks are useful tools and give insight, they cannot be relied upon yet to give definitive answers to problems in cognition. Knowledge re-use may be driven more by architectural inheritance than by epistemological drives.


2013 ◽  
Vol 25 (2) ◽  
pp. 374-417 ◽  
Author(s):  
Manuel Lagang ◽  
Lakshminarayan Srinivasan

The closed-loop operation of brain-machine interfaces (BMI) provides a framework to study the mechanisms behind neural control through a restricted output channel, with emerging clinical applications to stroke, degenerative disease, and trauma. Despite significant empirically driven improvements in closed-loop BMI systems, a fundamental, experimentally validated theory of closed-loop BMI operation is lacking. Here we propose a compact model based on stochastic optimal control to describe the brain in skillfully operating canonical decoding algorithms. The model produces goal-directed BMI movements with sensory feedback and intrinsically noisy neural output signals. Various experimentally validated phenomena emerge naturally from this model, including performance deterioration with bin width, compensation of biased decoders, and shifts in tuning curves between arm control and BMI control. Analysis of the model provides insight into possible mechanisms underlying these behaviors, with testable predictions. Spike binning may erode performance in part from intrinsic control-dependent constraints, regardless of decoding accuracy. In compensating decoder bias, the brain may incur an energetic cost associated with action potential production. Tuning curve shifts, seen after the mastery of a BMI-based skill, may reflect the brain's implementation of a new closed-loop control policy. The direction and magnitude of tuning curve shifts may be altered by decoder structure, ensemble size, and the costs of closed-loop control. Looking forward, the model provides a framework for the design and simulated testing of an emerging class of BMI algorithms that seek to directly exploit the presence of a human in the loop.
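As a rough illustration of one of these effects (not the paper's model), the sketch below simulates a one-dimensional cursor driven by a velocity decoder that averages noisy "neural" commands within each bin. Even when the feedback policy is retuned for every bin width, coarser bins leave more residual error, because corrections can only be issued once per bin while decoder noise acts for the whole bin. The setup, decoder, and parameters are assumptions chosen for illustration only.

```python
# Minimal closed-loop BMI sketch, not the paper's model: a 1-D cursor driven
# by a velocity decoder that averages noisy "neural" commands within each bin.
# The "brain" retunes a simple deadbeat feedback policy for every bin width,
# yet coarser bins still leave more residual tracking error.
import numpy as np

rng = np.random.default_rng(2)

def run_trial(bin_width, total_samples=200, dt=0.01, noise=0.5, target=1.0):
    n_bins = total_samples // bin_width
    gain = 1.0 / (dt * bin_width)       # deadbeat gain adapted to the bin width
    pos, errors = 0.0, []
    for b in range(n_bins):
        intended = gain * (target - pos)                  # feedback once per bin
        samples = intended + noise * rng.normal(size=bin_width)
        decoded_velocity = samples.mean()                 # decoder: bin average
        pos += decoded_velocity * dt * bin_width          # integrate for the bin
        if b > 0:                                         # skip the initial reach
            errors.append(abs(target - pos))
    return float(np.mean(errors))

for bw in (1, 5, 20, 50):
    err = np.mean([run_trial(bw) for _ in range(500)])
    print(f"bin width {bw:3d} samples -> steady-state error {err:.4f}")
```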


2021 ◽  
Author(s):  
Daniel B. Ehrlich ◽  
John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings for how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations, and not with traditional sensory models of working memory. In task-optimized recurrent neural networks we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.
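The distinction can be made concrete with a toy example (not the authors' task design): a sensory code stores the cue itself and applies the rule only when the probe arrives, whereas a contingency code stores, during the delay, the mapping from each possible probe to the required action. The sketch below, with a made-up cue/probe rule, shows why the two codes predict identical overt behavior on standard trials, which is why a disambiguating task is needed.

```python
# Toy illustration of the distinction, not the paper's task: a "sensory" code
# stores the cue identity, while a "contingency" code stores the mapping from
# each possible upcoming probe to the required action.
CUES = ("A", "B")
PROBES = ("X", "Y")

def rule(cue, probe):
    """Hypothetical task rule: respond 'go' only to the probe paired with the cue."""
    return "go" if (cue, probe) in {("A", "X"), ("B", "Y")} else "nogo"

def sensory_code(cue):
    # Store the stimulus itself; the rule is applied only when the probe arrives.
    return {"remembered_cue": cue}

def contingency_code(cue):
    # Store, for every possible probe, the action it will require.
    return {probe: rule(cue, probe) for probe in PROBES}

def respond_from_sensory(memory, probe):
    return rule(memory["remembered_cue"], probe)

def respond_from_contingency(memory, probe):
    return memory[probe]            # the probe simply indexes the prepared mapping

for cue in CUES:
    for probe in PROBES:
        assert (respond_from_sensory(sensory_code(cue), probe)
                == respond_from_contingency(contingency_code(cue), probe))
print("Both codes yield identical overt behavior on standard trials.")
print("Contingency code for cue 'A':", contingency_code("A"))
```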


2021 ◽  
Author(s):  
Quan Wan ◽  
Jorge A. Menendez ◽  
Bradley R. Postle

How does the brain prioritize among the contents of working memory to appropriately guide behavior? Using inverted encoding modeling (IEM), previous work (Wan et al., 2020) showed that unprioritized memory items (UMI) are actively represented in the brain but in a “flipped”, or opposite, format compared to prioritized memory items (PMI). To gain insight into the mechanisms underlying the UMI-to-PMI representational transformation, we trained recurrent neural networks (RNNs) with an LSTM architecture to perform a 2-back working memory task. Visualization of the LSTM hidden layer activity using Principal Component Analysis (PCA) revealed that the UMI representation is rotationally remapped to that of PMI, and this was quantified and confirmed via demixed PCA. The application of the same analyses to the EEG dataset of Wan et al. (2020) revealed similar rotational remapping between the UMI and PMI representations. These results identify rotational remapping as a candidate neural computation employed in dynamic prioritization among the contents of working memory.
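Training the 2-back LSTM is beyond a short example, but the analysis step can be sketched on synthetic hidden-state data: project UMI and PMI patterns onto shared principal components and estimate the orthogonal transform carrying one representation onto the other (here via orthogonal Procrustes rather than demixed PCA). The data, dimensions, and the planted 90-degree rotation below are assumptions for illustration, not results from the paper.

```python
# Sketch of the analysis step only (training the 2-back LSTM is omitted):
# given hidden-state patterns for the same items while unprioritized (UMI)
# and while prioritized (PMI), project onto shared principal components and
# estimate the rotation mapping one representation onto the other.
# Synthetic data stand in for real LSTM states; a 90-degree rotation is planted.
import numpy as np
from sklearn.decomposition import PCA
from scipy.linalg import orthogonal_procrustes

rng = np.random.default_rng(3)
n_items, n_units, n_latent = 40, 64, 2

# Item-specific latent codes embedded in a 2-D subspace of the 64-D state space.
latents = rng.normal(size=(n_items, n_latent))
embed = np.linalg.qr(rng.normal(size=(n_units, n_latent)))[0]   # orthonormal embedding

theta = np.deg2rad(90.0)
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])

umi = latents @ embed.T + 0.05 * rng.normal(size=(n_items, n_units))
pmi = latents @ rot.T @ embed.T + 0.05 * rng.normal(size=(n_items, n_units))

# Shared low-dimensional projection, then Procrustes to recover the rotation.
pca = PCA(n_components=2).fit(np.vstack([umi, pmi]))
umi_pc, pmi_pc = pca.transform(umi), pca.transform(pmi)
R, _ = orthogonal_procrustes(umi_pc, pmi_pc)
angle = np.degrees(np.arccos(np.clip(np.trace(R) / 2.0, -1.0, 1.0)))
print(f"estimated remapping angle in the top-2 PC plane: {angle:.1f} degrees")
```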


2021 ◽  
pp. 1-25 ◽ 
Author(s):  
Yang Shen ◽  
Julia Wang ◽  
Saket Navlakha

Abstract A fundamental challenge at the interface of machine learning and neuroscience is to uncover computational principles that are shared between artificial and biological neural networks. In deep learning, normalization methods such as batch normalization, weight normalization, and their many variants help to stabilize hidden unit activity and accelerate network training, and these methods have been called one of the most important recent innovations for optimizing deep networks. In the brain, homeostatic plasticity represents a set of mechanisms that also stabilize and normalize network activity to lie within certain ranges, and these mechanisms are critical for maintaining normal brain function. In this article, we discuss parallels between artificial and biological normalization methods at four spatial scales: normalization of a single neuron's activity, normalization of synaptic weights of a neuron, normalization of a layer of neurons, and normalization of a network of neurons. We argue that both types of methods are functionally equivalent—that is, both push activation patterns of hidden units toward a homeostatic state, where all neurons are equally used—and we argue that such representations can improve coding capacity, discrimination, and regularization. As a proof of concept, we develop an algorithm, inspired by a neural normalization technique called synaptic scaling, and show that this algorithm performs competitively against existing normalization methods on several data sets. Overall, we hope this bidirectional connection will inspire neuroscientists and machine learners in three ways: to uncover new normalization algorithms based on established neurobiological principles; to help quantify the trade-offs of different homeostatic plasticity mechanisms used in the brain; and to offer insights about how stability may not hinder, but may actually promote, plasticity.
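The paper's algorithm is not reproduced here, but the core idea of synaptic scaling is simple to sketch: each unit tracks a running average of its own activity and multiplicatively rescales its incoming weights so that the average drifts toward a target level, keeping all hidden units roughly equally used. The toy layer, target rate, and constants below are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of synaptic-scaling-style homeostatic normalization, not the
# authors' algorithm: each unit multiplicatively rescales its incoming weights
# so that its running-average activity drifts toward a target value.
import numpy as np

rng = np.random.default_rng(4)
n_in, n_hid = 20, 10
W = np.abs(rng.normal(scale=0.5, size=(n_in, n_hid)))   # nonnegative incoming weights
W[:, :3] *= 5.0                                         # a few units start overactive

target_rate = 1.0          # desired time-averaged activity per hidden unit
avg_rate = np.ones(n_hid)  # running estimate of each unit's activity
tau, eta = 0.9, 0.05       # averaging constant and (slow) scaling rate

for step in range(500):
    x = np.abs(rng.normal(size=n_in))       # nonnegative "presynaptic" activity
    h = np.maximum(x @ W, 0.0)              # ReLU hidden activity
    avg_rate = tau * avg_rate + (1 - tau) * h
    # Multiplicatively rescale each unit's incoming weights toward the target,
    # with the per-step change capped to keep the homeostatic adjustment slow.
    scale = np.clip(1.0 + eta * (target_rate - avg_rate) / target_rate, 0.9, 1.1)
    W *= scale

print("per-unit average activity after scaling:", np.round(avg_rate, 2))
```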


1993 ◽  
Vol 115 (2B) ◽  
pp. 392-401 ◽  
Author(s):  
Rahmat Shoureshi

The fundamental concept of feedback to control dynamic systems has played a major role in many areas of engineering. Increases in complexity and more stringent requirements have introduced new challenges for control systems. This paper presents an introduction to and appreciation for intelligent control systems and their application areas, and justifies the need for them. A specific problem related to automated human comfort control is discussed. Some analytical derivations related to neural networks and fuzzy optimal control as elements of the proposed intelligent control systems, along with experimental results, are presented. A brief glossary of common terminology used in this area is included.
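The paper's derivations are not reproduced here, but the flavor of a fuzzy comfort controller can be sketched: fuzzify the temperature error with triangular membership functions, apply a small rule base, and defuzzify by weighted average (zero-order Sugeno style) into a heater command. The membership ranges, rules, and output levels below are illustrative assumptions, not the paper's design.

```python
# Minimal fuzzy controller sketch for thermal comfort, purely illustrative:
# triangular membership functions over the temperature error (setpoint minus
# measured), three rules with crisp consequents, and weighted-average
# defuzzification into a heating command in percent.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_heater_command(setpoint_c, measured_c):
    error = setpoint_c - measured_c
    # Fuzzify the error into three linguistic sets.
    too_warm = tri(error, -6.0, -3.0, 0.0)
    comfortable = tri(error, -2.0, 0.0, 2.0)
    too_cold = tri(error, 0.0, 3.0, 6.0)
    # Rule base: each rule maps a fuzzy condition to a crisp heater level (%).
    rules = [(too_warm, 0.0), (comfortable, 40.0), (too_cold, 100.0)]
    total = sum(weight for weight, _ in rules)
    if total == 0.0:
        return 0.0 if error < 0 else 100.0   # saturate outside the defined range
    return sum(weight * output for weight, output in rules) / total

for temp in (18.0, 21.0, 22.0, 24.0):
    print(f"measured {temp:.1f} C -> heater {fuzzy_heater_command(22.0, temp):.0f}%")
```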

