Emergence of prefrontal neuron maturation properties by training recurrent neural networks in cognitive tasks

2020 ◽  
Author(s):  
Yichen Henry Liu ◽  
Junda Zhu ◽  
Christos Constantinidis ◽  
Xin Zhou

Abstract: Working memory and response inhibition are functions that mature relatively late in life, after adolescence, paralleling the maturation of the prefrontal cortex. The link between behavioral and neural maturation is not obvious, however, making it challenging to understand how neural activity underlies the maturation of cognitive function. To gain insight into the nature of the observed changes in prefrontal activity between adolescence and adulthood, we investigated the progressive changes in unit activity of recurrent neural networks (RNNs) as they were trained to perform working memory and response inhibition tasks. Changes in network activity over the course of training paralleled those observed between adolescence and adulthood, including increased delay period activity during working memory tasks and increased activation in antisaccade tasks. These findings reveal universal properties underlying the neuronal computations behind cognitive tasks and explicate the nature of the changes that occur as a result of developmental maturation.
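The kind of working memory task these networks are trained on can be sketched as a trial generator that produces input and target time series for an RNN. The channel layout, epoch lengths, and noise level below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def make_delayed_response_trial(stim=0, n_stim=2, fix_t=10, stim_t=5,
                                delay_t=20, resp_t=5, rng=None):
    """Build one trial of a hypothetical delayed-response task:
    fixation -> brief stimulus -> memory delay -> response window.
    Inputs: one fixation channel plus n_stim one-hot stimulus channels.
    Target: report the remembered stimulus only after the delay."""
    rng = rng or np.random.default_rng()
    T = fix_t + stim_t + delay_t + resp_t
    x = np.zeros((T, 1 + n_stim))                # input  (time, channels)
    y = np.zeros((T, n_stim))                    # target (time, channels)
    x[:fix_t + stim_t + delay_t, 0] = 1.0        # fixation cue until response epoch
    x[fix_t:fix_t + stim_t, 1 + stim] = 1.0      # transient stimulus presentation
    y[fix_t + stim_t + delay_t:, stim] = 1.0     # delayed report of the stimulus
    x += 0.05 * rng.standard_normal(x.shape)     # sensory noise
    return x, y
```

A network trained on such trials must bridge the delay internally, which is where the delay period activity discussed in the abstract arises.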

iScience ◽  
2021 ◽  
pp. 103178

2012 ◽  
Vol 24 (2) ◽  
pp. 523-540 ◽  
Author(s):  
Dimitrije Marković ◽  
Claudius Gros

A massively recurrent neural network responds to input stimuli on one side and, on the other, is autonomously active in the absence of sensory inputs. Stimulus and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
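Entropy-driven adaptation of a neuron's gain and threshold can be illustrated with Triesch's intrinsic-plasticity rule, which drives a sigmoidal neuron's output distribution toward the maximum-entropy (exponential) distribution with a fixed mean. This is a standard stand-in for the kind of nonsynaptic adaptation the abstract describes, not the paper's exact update.

```python
import numpy as np

def intrinsic_plasticity(x_stream, mu=0.2, eta=0.01):
    """Adapt gain a and threshold b of a sigmoidal neuron
    y = sigmoid(a*x + b) so the output distribution approaches an
    exponential with mean mu (Triesch's intrinsic-plasticity rule)."""
    a, b = 1.0, 0.0
    ys = []
    for x in x_stream:
        y = 1.0 / (1.0 + np.exp(-(a * x + b)))
        # Stochastic gradient of the KL divergence to the target distribution:
        db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
        da = eta / a + x * db
        a += da
        b += db
        ys.append(y)
    return a, b, np.array(ys)
```

Run on a stream of Gaussian inputs, the neuron's mean activity settles near the target mean `mu` while its gain and threshold self-organize, illustrating how intrinsic parameters can be shaped by an entropy objective alone.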


2010 ◽  
Vol 22 (2) ◽  
pp. 292-306 ◽  
Author(s):  
Hwamee Oh ◽  
Hoi-Chung Leung

In this fMRI study, we investigated prefrontal cortex (PFC) and visual association regions during selective information processing. We recorded behavioral responses and neural activity during a delayed recognition task with a cue presented during the delay period. A specific cue (“Face” or “Scene”) was used to indicate which one of the two initially viewed pictures of a face and a scene would be tested at the end of a trial, whereas a nonspecific cue (“Both”) was used as control. As expected, the specific cues facilitated behavioral performance (faster response times) compared to the nonspecific cue. A postexperiment memory test showed that the items cued to remember were better recognized than those not cued. The fMRI results showed largely overlapping activations across the three cue conditions in dorsolateral and ventrolateral PFC, dorsomedial PFC, posterior parietal cortex, ventral occipito-temporal cortex, dorsal striatum, and pulvinar nucleus. Among those regions, dorsomedial PFC and inferior occipital gyrus remained active during the entire postcue delay period. Differential activity was mainly found in the association cortices. In particular, the parahippocampal area and posterior superior parietal lobe showed significantly enhanced activity during the postcue period of the scene condition relative to the face and nonspecific conditions. No regions showed differentially greater responses to the face cue. Our findings suggest that a better representation of visual information in working memory may depend on enhancing the more specialized visual association areas or their interaction with PFC.


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 2416 ◽  
Author(s):  
András Szilágyi ◽  
István Zachar ◽  
Anna Fedor ◽  
Harold P. de Vladar ◽  
Eörs Szathmáry

Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain, which accounts for new variations in theory space. We present a model for Darwinian evolutionary search for candidate solutions in the brain.

Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory.

Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing the variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation, and novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors.

Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture can be used for fast search among stored solutions (by selection) and for evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
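The attractor-network component can be sketched as a classical Hopfield network with Hebbian storage and recall from a noisy cue; palimpsest forgetting and the winners-share-all selection loop are omitted here, so this is only the storage-and-recall substrate the architecture builds on.

```python
import numpy as np

def hopfield_store(patterns):
    """Hebbian outer-product weights for an attractor network (no self-coupling).
    patterns: array of shape (n_patterns, n_units) with entries in {-1, +1}."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, probe, sweeps=10):
    """Asynchronous recall: each sweep updates every unit in turn,
    so the state descends into the nearest stored attractor."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s
```

Noisy recall of a corrupted cue converging back onto a stored pattern is exactly the mechanism that, with added noise, supplies heritable variation in the abstract's evolutionary-search scheme.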


2021 ◽  
Author(s):  
Daniel B. Ehrlich ◽  
John D. Murray

Real-world tasks require coordination of working memory, decision making, and planning, yet these cognitive functions have disproportionately been studied as independent modular processes in the brain. Here we propose that contingency representations, defined as mappings for how future behaviors depend on upcoming events, can unify working memory and planning computations. We designed a task capable of disambiguating distinct types of representations. Our experiments revealed that human behavior is consistent with contingency representations, and not with traditional sensory models of working memory. In task-optimized recurrent neural networks we investigated possible circuit mechanisms for contingency representations and found that these representations can explain neurophysiological observations from prefrontal cortex during working memory tasks. Finally, we generated falsifiable predictions for neural data to identify contingency representations in neural data and to dissociate different models of working memory. Our findings characterize a neural representational strategy that can unify working memory, planning, and context-dependent decision making.


2016 ◽  
Author(s):  
Thomas Miconi

Abstract: Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are biologically implausible and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
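The flavor of a reward-modulated perturbation rule can be sketched on a toy problem: inject exploratory noise into the output, accumulate an eligibility trace correlating the perturbation with the input, and apply it weighted by the reward's deviation from a running baseline, delivered only at trial end. The scalar task and all parameters below are illustrative, not Miconi's actual network or rule.

```python
import numpy as np

def perturbation_trial(w, x, rng, sigma=0.5):
    """One trial: perturb the output, record an eligibility trace,
    and receive a single delayed reward at the end of the trial."""
    noise = sigma * rng.standard_normal()
    y = w @ x + noise
    reward = -(y - 1.0) ** 2       # toy task: produce output 1.0
    elig = noise * x               # correlate perturbation with input
    return elig, reward

def train(n_trials=2000, dim=3, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    x = np.ones(dim) / dim         # fixed input for the toy task
    r_bar = 0.0                    # running reward baseline
    for _ in range(n_trials):
        elig, r = perturbation_trial(w, x, rng)
        w += lr * (r - r_bar) * elig   # phasic, end-of-trial reward drives learning
        r_bar += 0.05 * (r - r_bar)
    return w
```

No per-timestep error signal is ever used: the weight change depends only on the trial's final reward relative to recent history, which is the property the abstract highlights.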


2022 ◽  
Author(s):  
Leo Kozachkov ◽  
John Tauber ◽  
Mikael Lundqvist ◽  
Scott L Brincat ◽  
Jean-Jacques Slotine ◽  
...  

Working memory has long been thought to arise from sustained spiking/attractor dynamics. However, recent work has suggested that short-term synaptic plasticity (STSP) may help maintain attractor states over gaps in time with little or no spiking. To determine whether STSP endows additional functional advantages, we trained artificial recurrent neural networks (RNNs) with and without STSP to perform an object working memory task. We found that RNNs with and without STSP were both able to maintain memories over distractors presented in the middle of the memory delay. However, RNNs with STSP showed activity that was similar to that seen in the cortex of monkeys performing the same task. By contrast, RNNs without STSP showed activity that was less brain-like. Further, RNNs with STSP were more robust to noise and network degradation than RNNs without STSP. These results show that STSP not only helps maintain working memories but also makes neural networks more robust.
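STSP models of this kind are commonly built on Tsodyks-Markram-style synaptic dynamics, in which each spike facilitates a utilization variable and depletes a resource variable; a minimal Euler-integration sketch is below, with parameter values that are illustrative rather than taken from the paper.

```python
import numpy as np

def stsp_step(u, x, spike, dt=1.0, U=0.2, tau_f=1500.0, tau_d=200.0):
    """One Euler step of Tsodyks-Markram-style short-term synaptic plasticity:
    u (facilitation, ~residual calcium) and x (depression, ~vesicle resources).
    The momentary synaptic efficacy is the product u * x."""
    if spike:
        u = u + U * (1.0 - u)        # presynaptic spike boosts facilitation
        x = x - u * x                # and consumes resources (with updated u)
    u += dt * (U - u) / tau_f        # facilitation decays slowly to baseline U
    x += dt * (1.0 - x) / tau_d      # resources recover faster
    return u, x, u * x
```

Because `u` decays on the slow timescale `tau_f`, a synapse retains a trace of recent spiking even through a silent gap, which is how STSP can bridge delays with little or no spiking.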


2021 ◽  
Author(s):  
Quan Wan ◽  
Jorge A. Menendez ◽  
Bradley R. Postle

How does the brain prioritize among the contents of working memory to appropriately guide behavior? Using inverted encoding modeling (IEM), previous work (Wan et al., 2020) showed that unprioritized memory items (UMI) are actively represented in the brain but in a “flipped”, or opposite, format compared to prioritized memory items (PMI). To gain insight into the mechanisms underlying the UMI-to-PMI representational transformation, we trained recurrent neural networks (RNNs) with an LSTM architecture to perform a 2-back working memory task. Visualization of the LSTM hidden layer activity using Principal Component Analysis (PCA) revealed that the UMI representation is rotationally remapped to that of the PMI, and this was quantified and confirmed via demixed PCA. The application of the same analyses to the EEG dataset of Wan et al. (2020) revealed similar rotational remapping between the UMI and PMI representations. These results identify rotational remapping as a candidate neural computation employed in dynamic prioritization among the contents of working memory.
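The PCA step itself can be sketched directly: project the hidden-state activity matrix onto its leading principal components via SVD. The rotating trajectory below is a synthetic stand-in for LSTM hidden-layer activity, not the paper's data, and plain PCA is used rather than demixed PCA.

```python
import numpy as np

def pca_project(H, k=2):
    """Project activity H (time x units) onto its top-k principal components,
    returning the projection and the explained-variance ratios."""
    Hc = H - H.mean(axis=0)                  # center each unit over time
    U, S, Vt = np.linalg.svd(Hc, full_matrices=False)
    evr = S**2 / (S**2).sum()                # variance explained per component
    return Hc @ Vt[:k].T, evr

# Synthetic stand-in for hidden-layer activity: a planar rotation
# (two latent dimensions) randomly embedded in 20 units, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 100)
latent = np.stack([np.cos(t), np.sin(t)], axis=1)
H = latent @ rng.standard_normal((2, 20)) + 0.01 * rng.standard_normal((100, 20))
Z, evr = pca_project(H, k=2)
```

If the underlying dynamics are a rotation within a low-dimensional plane, the top two components capture nearly all the variance and the projected trajectory `Z` traces out that rotation, which is the kind of structure the visualization in the abstract reveals.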

