Engineering recurrent neural networks from task-relevant manifolds and dynamics

2019 ◽  
Author(s):  
Eli Pollock ◽  
Mehrdad Jazayeri

Abstract
Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility and explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.

Author Summary
Neurons in the brain form intricate networks that can produce a vast array of activity patterns. To support goal-directed behavior, the brain must adjust the connections between neurons so that network dynamics can perform desirable computations on behaviorally relevant variables. A fundamental goal in computational neuroscience is to provide an understanding of how network connectivity aligns the dynamics in the brain to the dynamics needed to track those variables. Here, we develop a mathematical framework for creating recurrent neural network models that can address this problem. Specifically, we derive a set of linear equations that constrain the connectivity to afford a direct mapping of task-relevant dynamics onto network activity. We demonstrate the utility of this technique by creating and analyzing a set of network models that can perform a simple working memory task.
We then extend the approach to show how additional constraints can furnish networks whose dynamics are controlled flexibly by external inputs. Finally, we exploit the flexibility of this technique to explore the robustness and capacity limitations of recurrent networks. This network synthesis method provides a powerful means for generating and validating hypotheses about how task-relevant computations can emerge from network dynamics.
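The linear-constraint idea described in the abstract can be illustrated with a toy sketch (the sizes, tuning curves, and variable names below are illustrative assumptions, not the authors' code): if the desired population states on a ring-shaped manifold are collected as columns of a matrix, then requiring each of them to be a fixed point of a linear rate network reduces to a linear equation in the recurrent weights.

```python
import numpy as np

# Hypothetical minimal sketch: embed a ring manifold in the activity of a
# linear rate network and solve a linear system for the recurrent weights.
N, K = 64, 128                              # neurons, sample points on the ring

theta = np.linspace(0, 2 * np.pi, K, endpoint=False)   # ring coordinate
prefs = np.linspace(0, 2 * np.pi, N, endpoint=False)   # preferred angles

# Desired population activity on the manifold: cosine tuning curves.
R = np.cos(prefs[:, None] - theta[None, :])            # (N, K) target states

# Fixed points of tau * dr/dt = -r + W r require W R = R.
# Solve this linear constraint on W in the least-squares sense.
W = R @ np.linalg.pinv(R)

# Every sampled point on the ring is now (approximately) a fixed point.
residual = np.max(np.abs(W @ R - R))
print(residual < 1e-8)
```

Because the cosine-tuned states span only a two-dimensional subspace, the least-squares solution is simply a projector onto that subspace; richer task dynamics (e.g. drift-diffusion along the ring) would add further linear constraints on `W`.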

2014 ◽  
Vol 538 ◽  
pp. 167-170
Author(s):  
Hui Zhong Mao ◽  
Chen Qiao ◽  
Wen Feng Jing ◽  
Xi Chen ◽  
Jin Qin Mao

This paper presents the global convergence theory of the discrete-time uniform pseudo projection anti-monotone network with a quasi-symmetric matrix, which removes the constraints on the connection matrix. The theory widens the range of applications of the discrete-time uniform pseudo projection anti-monotone network and is valid for many kinds of discrete recurrent neural network models.
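To make the object of study concrete, here is a generic discrete-time projection-type recurrent iteration (an illustrative assumption, not the paper's exact model or its convergence conditions): the state is updated through a symmetric, contractive weight matrix and then projected onto a box, and the iteration settles to a fixed point.

```python
import numpy as np

# Generic illustration of a discrete-time projection recurrent network:
#   x_{k+1} = P(W x_k + b),  with P a projection onto a box.
# With a symmetric W of spectral norm < 1 the update is a contraction,
# so the iteration converges globally to a unique fixed point.
rng = np.random.default_rng(1)
n = 10
A = rng.normal(size=(n, n))
W = 0.4 * (A + A.T) / np.linalg.norm(A + A.T, 2)   # symmetric, ||W|| = 0.4
b = rng.normal(size=n)

project = lambda x: np.clip(x, -1.0, 1.0)          # projection onto [-1, 1]^n

x = rng.normal(size=n)
for _ in range(200):
    x_next = project(W @ x + b)
    if np.linalg.norm(x_next - x) < 1e-12:
        break
    x = x_next

fixed_point_residual = np.linalg.norm(project(W @ x + b) - x)
print(fixed_point_residual < 1e-10)
```

The projection is nonexpansive, so the contraction argument carries through; the paper's contribution is establishing convergence under weaker (quasi-symmetric) conditions on the connection matrix than the toy case shown here.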


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Soroosh Shahtalebi ◽  
Seyed Farokh Atashzar ◽  
Olivia Samotus ◽  
Rajni V. Patel ◽  
Mandar S. Jog ◽  
...  

2002 ◽  
Vol 14 (6) ◽  
pp. 557-564 ◽  
Author(s):  
Wenwei Yu ◽  
Daisuke Nishikawa ◽  
Yasuhiro Ishikawa ◽  
Hiroshi Yokoi ◽  
...  

The purpose of this research was to develop a tendon-driven electrical prosthetic hand, characterized by its mechanical torque-velocity converter and a mechanism that allows distal actuators to assist proximal joint torque. To cope with the time delay and nonlinear properties of the prosthetic hand, a controller based on a Jordan network, a type of recurrent neural network, is proposed. Experiments confirmed the stability of the controller when tracking static wire tensions. Finally, the next prototype of the prosthetic hand based on these methods is introduced.
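The defining feature of a Jordan network is that the hidden layer receives the network's previous output as feedback, which gives a controller memory suited to time-delayed plants. A minimal sketch (all sizes, weights, and names here are hypothetical, not the paper's controller):

```python
import numpy as np

# Minimal Jordan-network sketch: the hidden layer sees the current input
# concatenated with the *previous output*, unlike an Elman network, which
# feeds back the hidden state instead.
class JordanNetwork:
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, n_in + n_out))
        self.W_out = rng.normal(scale=0.5, size=(n_out, n_hidden))
        self.y = np.zeros(n_out)              # fed-back output state

    def step(self, u):
        z = np.concatenate([u, self.y])       # input + previous output
        h = np.tanh(self.W_in @ z)            # hidden activation
        self.y = np.tanh(self.W_out @ h)      # new output, stored for feedback
        return self.y

net = JordanNetwork(n_in=2, n_hidden=8, n_out=1)
outputs = [net.step(np.array([np.sin(0.1 * t), 1.0])) for t in range(50)]
print(len(outputs))
```

In a control setting the weights would be trained (e.g. by backpropagation through time) against the measured wire tensions; the sketch only shows the forward recurrence that provides the delay-compensating memory.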

