Sequence structure organizes items in varied latent states of working memory

eLife, 2021, Vol 10
Author(s): Qiaoli Huang, Huihui Zhang, Huan Luo

In memory experiences, events do not exist independently but are linked with each other via structure-based organization. Structure context strongly influences memory behavior, but how it is implemented in the brain remains unknown. Here, we combined magnetoencephalography (MEG) recordings, computational modeling, and impulse-response approaches to probe the latent states when subjects held a list of items in working memory (WM). We demonstrate that sequence context reorganizes WM items into distinct latent states, i.e., they are reactivated at different latencies during WM retention, and the reactivation profiles further correlate with recency behavior. In contrast, memorizing the same list of items without sequence task requirements weakens the recency effect and elicits comparable neural reactivations. Computational modeling further reveals a dominant function of sequence context, rather than passive memory decay, in characterizing the recency effect. Taken together, sequence structure context shapes the way WM items are stored in the human brain and essentially influences memory behavior.

2020
Author(s): Qiaoli Huang, Huihui Zhang, Huan Luo

Abstract: In memory experiences, events do not exist independently but are linked with each other via structure-based organization. Structure knowledge strongly influences memory behavior, but how it is implemented in the brain remains unknown. Here, we combined magnetoencephalography (MEG) recordings, computational modeling, and impulse-response approaches to probe the latent states when subjects held a list of items in working memory (WM). We demonstrate that sequence structure reorganizes WM items into distinct latent states, i.e., they are reactivated at different latencies, and the reactivation profiles further correlate with recency behavior. In contrast, memorizing the same list of items without sequence requirements disrupts the recency effect and elicits comparable reactivations. Finally, computational modeling reveals a dominant function of high-level representations that characterize the abstract sequence structure, rather than low-level information decay, in mediating sequence memory. Taken together, sequence structure shapes the way WM items are stored in the brain and essentially influences memory behavior.
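The impulse-response approach described above lends itself to a compact illustration. Below is a minimal sketch, on simulated data, of time-resolved decoding after a neutral impulse: a classifier trained on item-evoked sensor patterns is slid across the post-impulse retention period, and the latency of peak decoding evidence is read off. All array shapes, names, and the logistic-regression choice are illustrative assumptions, not the authors' MEG pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical shapes: 200 localizer trials, 64 sensors, 2 item classes.
n_trials, n_sensors, n_times = 200, 64, 100
X_loc = rng.normal(size=(n_trials, n_sensors))   # item-evoked patterns
y_loc = rng.integers(0, 2, size=n_trials)        # item identity labels

# Impulse epoch: trials x sensors x time points after the impulse.
X_imp = rng.normal(size=(n_trials, n_sensors, n_times))
y_imp = rng.integers(0, 2, size=n_trials)        # memorized item labels

# Train on item-evoked activity, then test at each post-impulse latency.
clf = LogisticRegression(max_iter=1000).fit(X_loc, y_loc)
evidence = np.array([clf.score(X_imp[:, :, t], y_imp)
                     for t in range(n_times)])

# Reactivation latency = time of peak decoding; the papers' claim is
# that this latency differs systematically across serial positions.
print(f"peak decoding at sample {int(np.argmax(evidence))}")
```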


2019
Author(s): Wouter Kruijne, Sander M. Bohte, Pieter R. Roelfsema, Christian N. L. Olivers

Abstract: Working memory is essential for intelligent behavior, as it serves to guide the behavior of humans and nonhuman primates when task-relevant stimuli are no longer present to the senses. Moreover, complex tasks often require that multiple working memory representations be flexibly and independently maintained, prioritized, and updated according to changing task demands. Thus far, neural network models of working memory have been unable to offer an integrative account of how such control mechanisms are implemented in the brain and how they can be acquired in a biologically plausible manner. Here, we present WorkMATe, a neural network architecture that models cognitive control over working memory content and learns the appropriate control operations needed to solve complex working memory tasks. Key components of the model include a gated memory circuit that is controlled by internal actions, encoding of sensory information through untrained connections, and a neural circuit that matches sensory inputs to memory content. The network is trained by means of a biologically plausible reinforcement learning rule that relies on attentional feedback and reward prediction errors to guide synaptic updates. We demonstrate that the model successfully acquires policies to solve classical working memory tasks, such as delayed match-to-sample and delayed pro-saccade/antisaccade tasks. In addition, the model solves much more complex tasks, including the hierarchical 12-AX task and the ABAB ordered recognition task, both of which demand that an agent independently store and update multiple items in memory. Furthermore, the control strategies that the model acquires for these tasks generalize to new task contexts with novel stimuli. As such, WorkMATe provides a new solution for the neural implementation of flexible memory control.

Author Summary: Working memory, the ability to briefly store sensory information and use it to guide behavior, is a cornerstone of intelligent behavior. Existing neural network models of working memory typically focus on how information is stored and maintained in the brain, but do not address how memory content is controlled: how the brain can selectively store only stimuli that are relevant for a task, or how different stimuli can be maintained in parallel and subsequently replaced or updated independently according to task demands. The models that do implement control mechanisms are typically not trained in a biologically plausible manner and do not explain how the brain learns such control. Here, we present WorkMATe, a neural network architecture that implements flexible cognitive control and learns to apply these control mechanisms using a biologically plausible reinforcement learning method. We demonstrate that the model acquires control policies to solve a range of both simple and more complex tasks. Moreover, the acquired control policies generalize to new situations, as with human cognition. In this way, WorkMATe provides new insights into the neural organization of working memory beyond mere storage and retrieval.
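The gating idea at the core of WorkMATe can be sketched in a few lines. The toy below uses hand-set gate actions on a delayed match-to-sample trial (illustrative names and sizes; in the published model the gating policy is learned from reward prediction errors and attentional feedback rather than scripted):

```python
import numpy as np

rng = np.random.default_rng(1)
N_FEAT, N_BLOCKS = 8, 2   # stimulus encoding size, memory blocks

class GatedMemory:
    """Toy gated store: internal actions decide which block is written."""
    def __init__(self):
        self.blocks = np.zeros((N_BLOCKS, N_FEAT))

    def gate(self, stimulus, action):
        # action in {0..N_BLOCKS-1}: overwrite that block;
        # action == N_BLOCKS: hold (write nothing).
        if action < N_BLOCKS:
            self.blocks[action] = stimulus

    def match(self, stimulus):
        # Match circuit: similarity of the input to each stored block.
        return self.blocks @ stimulus

mem = GatedMemory()
sample = rng.normal(size=N_FEAT)
probe = sample + 0.1 * rng.normal(size=N_FEAT)   # noisy repeat

mem.gate(sample, action=0)          # internal action: store the sample
mem.gate(probe, action=N_BLOCKS)    # internal action: hold memory
is_match = mem.match(probe)[0] > 0.5 * (sample @ sample)
print("match" if is_match else "non-match")
```

Because writes happen only through explicit gate actions, separate blocks can hold different items and be updated independently, which is exactly the capacity the 12-AX and ABAB tasks demand.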


2019
Author(s): Eli Pollock, Mehrdad Jazayeri

Abstract: Many cognitive processes involve transformations of distributed representations in neural populations, creating a need for population-level models. Recurrent neural network models fulfill this need, but there are many open questions about how their connectivity gives rise to dynamics that solve a task. Here, we present a method for finding the connectivity of networks for which the dynamics are specified to solve a task in an interpretable way. We apply our method to a working memory task by synthesizing a network that implements a drift-diffusion process over a ring-shaped manifold. We also use our method to demonstrate how inputs can be used to control network dynamics for cognitive flexibility, and we explore the relationship between representation geometry and network capacity. Our work fits within the broader context of understanding neural computations as dynamics over relatively low-dimensional manifolds formed by correlated patterns of neurons.

Author Summary: Neurons in the brain form intricate networks that can produce a vast array of activity patterns. To support goal-directed behavior, the brain must adjust the connections between neurons so that network dynamics can perform desirable computations on behaviorally relevant variables. A fundamental goal in computational neuroscience is to provide an understanding of how network connectivity aligns the dynamics in the brain to the dynamics needed to track those variables. Here, we develop a mathematical framework for creating recurrent neural network models that can address this problem. Specifically, we derive a set of linear equations that constrain the connectivity to afford a direct mapping of task-relevant dynamics onto network activity. We demonstrate the utility of this technique by creating and analyzing a set of network models that can perform a simple working memory task. We then extend the approach to show how additional constraints can furnish networks whose dynamics are controlled flexibly by external inputs. Finally, we exploit the flexibility of this technique to explore the robustness and capacity limitations of recurrent networks. This network synthesis method provides a powerful means for generating and validating hypotheses about how task-relevant computations can emerge from network dynamics.
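A linear caricature may help make the synthesis idea concrete. The sketch below assumes linear rate units and a pure memory manifold with zero drift (the paper's networks additionally implement drift-diffusion along the ring and use nonlinear units): sampled bump-shaped population states define the ring, and the recurrent weights are obtained by solving the linear fixed-point equations in the least-squares sense.

```python
import numpy as np

n_neurons, n_points = 64, 128

# Sample the ring manifold as bump-shaped tuning curves r(theta).
thetas = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
prefs = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)
R = np.exp(np.cos(thetas[None, :] - prefs[:, None]) - 1.0)  # neurons x points

# Memory requires each sampled state to be a fixed point of
# tau * dr/dt = -r + W @ r, i.e. W @ R = R: linear equations in W,
# solved here by least squares (minimum-norm solution).
W = np.linalg.lstsq(R.T, R.T, rcond=None)[0].T

# Check: a state initialized on the manifold barely moves.
r = R[:, 10].copy()
for _ in range(500):
    r = r + 0.1 * (-r + W @ r)
print("drift from the manifold state:", np.linalg.norm(r - R[:, 10]))
```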


1998, Vol 21 (6), p. 833
Author(s): Roman Borisyuk, Galina Borisyuk, Yakov Kazanovich

Synchronization of neural activity in oscillatory neural networks is a general principle of information processing in the brain at both preattentional and attentional levels. This is confirmed by a model of attention based on an oscillatory neural network with a central element and models of feature binding and working memory based on multi-frequency oscillations.
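A star-coupled phase-oscillator toy model conveys the central-element idea (a minimal Kuramoto-style sketch with illustrative parameters, not the authors' network): peripheral oscillators interact only through the central element, and each one's phase coherence with it indexes whether its assembly is synchronized, i.e., attended.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 10, 0.5                        # peripheral units, coupling strength
w = rng.normal(1.0, 0.02, size=n)     # peripheral natural frequencies
w0, dt, steps = 1.0, 0.01, 5000       # central frequency, step, duration

phi = rng.uniform(0, 2 * np.pi, size=n)   # peripheral phases
phi0 = 0.0                                # central-element phase

for _ in range(steps):
    # Star topology: peripherals couple only to the central oscillator.
    dphi = w + k * np.sin(phi0 - phi)
    dphi0 = w0 + (k / n) * np.sum(np.sin(phi - phi0))
    phi, phi0 = phi + dt * dphi, phi0 + dt * dphi0

# Coherence with the central element (1 = fully phase-locked).
print(f"coherence: {np.abs(np.mean(np.exp(1j * (phi - phi0)))):.2f}")
```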


2010, Vol 61 (2), pp. 120-124
Author(s): Ladislav Zjavka

Generalization of Patterns by Identification with Polynomial Neural Network

Artificial neural networks (ANNs) generally classify patterns according to how the patterns are related, responding to related patterns with similar outputs. Polynomial neural networks (PNNs) are capable of organizing themselves in response to features (relations) of the data. The polynomial neural network for identification of dependences of variables (D-PNN) describes a functional dependence of the input variables (not of entire patterns). It approximates the hyper-surface of this function with multi-parametric particular polynomials, forming its functional output as a generalization of the input patterns. This new type of neural network is based on the GMDH polynomial neural network and was designed by the author. The D-PNN operates in a way closer to how the brain learns than the ANN does; the ANN is in principle a simplified form of the PNN, in which the combinations of input variables are missing.
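The GMDH building block that the PNN family composes can be written in a few lines (a sketch on illustrative data; D-PNN itself forms multi-parametric particular polynomials rather than a single unit): a full quadratic polynomial in a pair of input variables, fitted by least squares.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative data: y is a function of two input variables plus noise.
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 1.5 * x1 - 0.8 * x2 + 0.5 * x1 * x2 + rng.normal(0, 0.05, 200)

# GMDH unit: y ≈ a + b*x1 + c*x2 + d*x1*x2 + e*x1^2 + f*x2^2.
A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# A full polynomial network stacks such pairwise units layer by layer,
# keeping the best-performing combinations of input variables.
print("RMSE:", np.sqrt(np.mean((y - A @ coef) ** 2)))
```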


2015, Vol 113 (9), pp. 3159-3171
Author(s): Caroline D. B. Luft, Alan Meeson, Andrew E. Welchman, Zoe Kourtzi

Learning the structure of the environment is critical for interpreting the current scene and predicting upcoming events. However, the brain mechanisms that support our ability to translate knowledge about scene statistics to sensory predictions remain largely unknown. Here we provide evidence that learning of temporal regularities shapes representations in early visual cortex that relate to our ability to predict sensory events. We tested the participants' ability to predict the orientation of a test stimulus after exposure to sequences of leftward- or rightward-oriented gratings. Using fMRI decoding, we identified brain patterns related to the observers' visual predictions rather than stimulus-driven activity. Decoding of predicted orientations following structured sequences was enhanced after training, while decoding of cued orientations following exposure to random sequences did not change. These predictive representations appear to be driven by the same large-scale neural populations that encode actual stimulus orientation and to be specific to the learned sequence structure. Thus our findings provide evidence that learning temporal structures supports our ability to predict future events by reactivating selective sensory representations as early as in primary visual cortex.
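The decoding measure reduces to standard cross-validated classification of multivoxel patterns. The sketch below runs on simulated data (illustrative shapes, with a logistic-regression classifier standing in for the authors' decoder): above-chance accuracy for the orientation the observer should predict, rather than the one on the screen, is the quantity the study tracks before and after training.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

# Simulated voxel patterns: trials x voxels; labels are the orientation
# the observer should predict (0 = leftward, 1 = rightward).
n_trials, n_voxels = 120, 300
y = rng.integers(0, 2, size=n_trials)
X = rng.normal(size=(n_trials, n_voxels))
X[:, :20] += 0.3 * (2 * y[:, None] - 1)   # weak prediction-related signal

# Cross-validated decoding of the predicted orientation; the training
# effect would appear as this accuracy rising for structured sequences.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 0.50)")
```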


2014, Vol 116 (8), pp. 1006-1016
Author(s): Hsiu-Wen Tsai, Paul W. Davenport

Respiratory load compensation is a sensory-motor reflex generated in the brain stem respiratory neural network. The nucleus of the solitary tract (NTS) is thought to be the primary structure to process the respiratory load-related afferent activity and contribute to the modification of the breathing pattern by sending efferent projections to other structures in the brain stem respiratory neural network. The sensory pathway and motor responses of respiratory load compensation have been studied extensively; however, the mechanism of neurogenesis of load compensation is still unknown. A variety of studies has shown that inhibitory interconnections among the brain stem respiratory groups play critical roles for the genesis of respiratory rhythm and pattern. The purpose of this study was to examine whether inhibitory glycinergic neurons in the NTS were activated by external and transient tracheal occlusions (ETTO) in anesthetized animals. The results showed that ETTO produced load compensation responses with increased inspiratory, expiratory, and total breath time, as well as elevated activation of inhibitory glycinergic neurons in the caudal NTS (cNTS) and intermediate NTS (iNTS). Vagotomized animals receiving transient respiratory loads did not exhibit these load compensation responses. In addition, vagotomy significantly reduced the activation of inhibitory glycinergic neurons in the cNTS and iNTS. The results suggest that these activated inhibitory glycinergic neurons in the NTS might be essential for the neurogenesis of load compensation responses in anesthetized animals.


Author(s): Yosef Grodzinsky

Abstract: The prospects of a cognitive neuroscience of syntax are considered with respect to the functional neuroanatomy of two seemingly independent systems: working memory and syntactic representation and processing. It is proposed that these two systems are more closely related than previously supposed. In particular, it is claimed that a sentence with anaphoric dependencies involves several working memories, each entrusted with a different linguistic function. Components of working memory reside in the left inferior frontal gyrus, which is associated with Broca's region. When lesioned, this area manifests comprehension disruptions in the ability to analyze intra-sentential dependencies, suggesting that working memory spans syntactic computations. Unifying these considerations about working memory with a purely syntactic approach to Broca's region leads to the conclusion that mechanisms that compute transformations, and no other syntactic relations, reside in this area.

