Flexible Working Memory Through Selective Gating and Attentional Tagging

2021 ◽  
Vol 33 (1) ◽  
pp. 1-40 ◽  
Author(s):  
Wouter Kruijne ◽  
Sander M. Bohte ◽  
Pieter R. Roelfsema ◽  
Christian N. L. Olivers

Working memory is essential: it serves to guide intelligent behavior of humans and nonhuman primates when task-relevant stimuli are no longer present to the senses. Moreover, complex tasks often require that multiple working memory representations can be flexibly and independently maintained, prioritized, and updated according to changing task demands. Thus far, neural network models of working memory have been unable to offer an integrative account of how such control mechanisms can be acquired in a biologically plausible manner. Here, we present WorkMATe, a neural network architecture that models cognitive control over working memory content and learns the appropriate control operations needed to solve complex working memory tasks. Key components of the model include a gated memory circuit that is controlled by internal actions, encoding of sensory information through untrained connections, and a neural circuit that matches sensory inputs to memory content. The network is trained by means of a biologically plausible reinforcement learning rule that relies on attentional feedback and reward prediction errors to guide synaptic updates. We demonstrate that the model successfully acquires policies to solve classical working memory tasks, such as delayed recognition and delayed pro-saccade/anti-saccade tasks. In addition, the model solves much more complex tasks, including the hierarchical 12-AX task and the ABAB ordered recognition task, both of which require an agent to independently store and update multiple items in memory. Furthermore, the control strategies that the model acquires for these tasks subsequently generalize to new task contexts with novel stimuli, thus bringing symbolic production rule qualities to a neural network architecture. As such, WorkMATe provides a new solution for the neural implementation of flexible memory control.
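The abstract's key mechanism, a gated memory circuit controlled by internal actions, with sensory input encoded through untrained connections and a matching circuit comparing input to memory, can be illustrated with a minimal sketch. This is not the authors' implementation; the block layout, encoder, and cosine-based match operation are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, untrained random projection: encodes a 4-d stimulus as an 8-d memory code.
ENCODER = rng.standard_normal((8, 4))

def encode(stimulus):
    return ENCODER @ stimulus

class GatedMemory:
    """Independently gated memory blocks, updated only by explicit internal actions."""
    def __init__(self, n_blocks=2, dim=8):
        self.blocks = np.zeros((n_blocks, dim))

    def apply_gate(self, action, code):
        # action 0 = maintain (no block changes); action k > 0 = store code in block k-1
        if action > 0:
            self.blocks[action - 1] = code

    def match(self, code):
        # Cosine similarity between the input code and each stored memory
        norms = np.linalg.norm(self.blocks, axis=1) * np.linalg.norm(code) + 1e-9
        return (self.blocks @ code) / norms

mem = GatedMemory()
a = encode(np.array([1.0, 0.0, 0.0, 0.0]))
b = encode(np.array([0.0, 1.0, 0.0, 0.0]))
mem.apply_gate(1, a)   # internal action: store stimulus A in block 0
mem.apply_gate(2, b)   # internal action: store stimulus B in block 1
mem.apply_gate(0, encode(np.array([0.0, 0.0, 1.0, 0.0])))  # maintain: memory unchanged
sims = mem.match(a)    # block 0 matches stimulus A best
```

Because updates happen only through explicit gate actions, two items can be maintained and replaced independently, which is the property the complex tasks in the paper (e.g., 12-AX) exploit.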

2019 ◽  
Author(s):  
Wouter Kruijne ◽  
Sander M. Bohte ◽  
Pieter R. Roelfsema ◽  
Christian N. L. Olivers

Abstract
Working memory is essential for intelligent behavior as it serves to guide behavior of humans and nonhuman primates when task-relevant stimuli are no longer present to the senses. Moreover, complex tasks often require that multiple working memory representations can be flexibly and independently maintained, prioritized, and updated according to changing task demands. Thus far, neural network models of working memory have been unable to offer an integrative account of how such control mechanisms are implemented in the brain and how they can be acquired in a biologically plausible manner. Here, we present WorkMATe, a neural network architecture that models cognitive control over working memory content and learns the appropriate control operations needed to solve complex working memory tasks. Key components of the model include a gated memory circuit that is controlled by internal actions, encoding of sensory information through untrained connections, and a neural circuit that matches sensory inputs to memory content. The network is trained by means of a biologically plausible reinforcement learning rule that relies on attentional feedback and reward prediction errors to guide synaptic updates. We demonstrate that the model successfully acquires policies to solve classical working memory tasks, such as delayed match-to-sample and delayed pro-saccade/antisaccade tasks. In addition, the model solves much more complex tasks, including the hierarchical 12-AX task and the ABAB ordered recognition task, both of which require an agent to independently store and update multiple items in memory. Furthermore, the control strategies that the model acquires for these tasks subsequently generalize to new task contexts with novel stimuli. As such, WorkMATe provides a new solution for the neural implementation of flexible memory control.

Author Summary
Working memory, the ability to briefly store sensory information and use it to guide behavior, is a cornerstone of intelligent behavior. Existing neural network models of working memory typically focus on how information is stored and maintained in the brain, but do not address how memory content is controlled: how the brain can selectively store only stimuli that are relevant for a task, or how different stimuli can be maintained in parallel and subsequently replaced or updated independently according to task demands. The models that do implement control mechanisms are typically not trained in a biologically plausible manner and do not explain how the brain learns such control. Here, we present WorkMATe, a neural network architecture that implements flexible cognitive control and learns to apply these control mechanisms using a biologically plausible reinforcement learning method. We demonstrate that the model acquires control policies to solve a range of both simple and more complex tasks. Moreover, the acquired control policies generalize to new situations, as with human cognition. This way, WorkMATe provides new insights into the neural organization of working memory beyond mere storage and retrieval.
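The hierarchical 12-AX task mentioned in both abstracts is a standard benchmark for independent memory control: the agent must remember an outer digit context (1 or 2) and an inner letter (A or B) separately, and respond "R" only to an X completing an A-X pair in context 1, or a Y completing a B-Y pair in context 2. A reference oracle (a plain rule-based scorer, not a neural model) makes the two independent memory demands explicit:

```python
def twelve_ax_targets(seq):
    """Return the correct response ('L' or 'R') for each symbol in a 12-AX stream."""
    digit, prev = None, None
    out = []
    for s in seq:
        if s in "12":
            digit = s  # outer context must be stored until the next digit arrives
        hit = (digit == "1" and prev == "A" and s == "X") or \
              (digit == "2" and prev == "B" and s == "Y")
        out.append("R" if hit else "L")
        prev = s       # inner letter is only needed for one step
    return out
```

Note that the digit and the letter live on different timescales: the digit may need to be held across many intervening pairs, while the letter is overwritten at every step. This is why the task requires storing and updating the two items independently.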


1991 ◽  
Vol 3 (3) ◽  
pp. 375-385 ◽  
Author(s):  
A. D. Back ◽  
A. C. Tsoi

A new neural network architecture involving either a local-feedforward global-feedforward structure and/or a local-recurrent global-feedforward structure is proposed. A learning rule minimizing a mean square error criterion is derived. The performance of this algorithm (the local-recurrent global-feedforward architecture) is compared with the local-feedforward global-feedforward architecture. It is shown that the local-recurrent global-feedforward model performs better than the local-feedforward global-feedforward model.
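In the local-recurrent global-feedforward idea described above, recurrence lives inside individual synapses (each acts as a small IIR filter with its own feedback), while the network between layers remains strictly feedforward. A minimal sketch of one such synaptic filter, with example coefficients chosen purely for illustration:

```python
def iir_synapse(x, b, a):
    """Direct-form IIR filter: y[n] = sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k],
    with a[0] assumed to be 1. The feedback terms give the synapse local recurrence."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y.append(acc)
    return y

# Impulse response of a first-order recurrent synapse: y[n] = x[n] + 0.5*y[n-1]
resp = iir_synapse([1.0, 0.0, 0.0, 0.0], b=[1.0], a=[1.0, -0.5])
```

Setting `a = [1.0]` (no feedback terms) reduces the same synapse to an FIR filter, i.e., the local-feedforward case the paper compares against.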


2020 ◽  
Vol 28 (5-8) ◽  
pp. 356-371 ◽  
Author(s):  
Andrea Bocincova ◽  
Christian N. L. Olivers ◽  
Mark G. Stokes ◽  
Sanjay G. Manohar

2020 ◽  
Vol 2020 (10) ◽  
pp. 54-62
Author(s):  
Oleksii Vasyliev
The problem of applying neural networks to calculate the ratings used in banking when deciding whether or not to grant loans to borrowers is considered. The task is to determine the rating function of a borrower from a set of statistical data on the effectiveness of loans provided by the bank. When constructing a regression model to calculate the rating function, its general form must be known in advance; the task then reduces to calculating the parameters that enter the expression for the rating function. In contrast, when neural networks are used, there is no need to specify the general form of the rating function. Instead, a particular neural network architecture is chosen and its parameters are fitted on the basis of the statistical data. Importantly, the same neural network architecture can be used to process different sets of statistical data. The disadvantages of using neural networks include the need to fit a large number of parameters, and the absence of a universal algorithm for determining the optimal neural network architecture. As an example of using neural networks to determine a borrower's rating, a model system is considered in which the borrower's rating is given by a known non-analytical rating function. A neural network with two hidden layers, containing three and two neurons respectively with sigmoid activation functions, is used for the modeling. It is shown that the neural network recovers the borrower's rating function with acceptable accuracy.
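The setup described in the abstract can be sketched as follows: a network with two hidden layers of three and two sigmoid neurons, fitted by gradient descent to reproduce a known rating function. The synthetic rating function, the two borrower features, and the training hyperparameters below are illustrative assumptions, not the author's data.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Synthetic "statistical data": ratings generated by a known rule (an assumption here).
X = rng.uniform(0, 1, size=(200, 2))                    # two borrower features
y = (0.6 * X[:, 0] + 0.4 * np.sqrt(X[:, 1]))[:, None]   # target rating in (0, 1)

# Parameters for a 2 -> 3 -> 2 -> 1 network, as in the abstract's architecture.
W1, b1 = rng.standard_normal((2, 3)), np.zeros(3)
W2, b2 = rng.standard_normal((3, 2)), np.zeros(2)
W3, b3 = rng.standard_normal((2, 1)), np.zeros(1)

def forward(X):
    h1 = sigmoid(X @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return h1, h2, sigmoid(h2 @ W3 + b3)

def mse(pred):
    return float(np.mean((pred - y) ** 2))

lr = 0.2
loss0 = mse(forward(X)[2])
for _ in range(2000):
    h1, h2, out = forward(X)
    # Backpropagation deltas (sigmoid derivative is s * (1 - s))
    d3 = (out - y) * out * (1 - out)
    d2 = (d3 @ W3.T) * h2 * (1 - h2)
    d1 = (d2 @ W2.T) * h1 * (1 - h1)
    W3 -= lr * h2.T @ d3 / len(X); b3 -= lr * d3.mean(0)
    W2 -= lr * h1.T @ d2 / len(X); b2 -= lr * d2.mean(0)
    W1 -= lr * X.T @ d1 / len(X);  b1 -= lr * d1.mean(0)
pred = forward(X)[2]
loss1 = mse(pred)
```

The sigmoid output keeps predicted ratings inside (0, 1), matching the scale of a normalized rating function; the training loop shows the "large number of parameters" being fitted without ever specifying the rating function's analytical form.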

