attractor network
Recently Published Documents

TOTAL DOCUMENTS: 114 (five years: 27)
H-INDEX: 21 (five years: 3)

2021 · Vol 118 (49) · pp. e2026092118
Author(s): Vezha Boboeva, Alberto Pezzotta, Claudia Clopath

Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained underexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behavior of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely, serial position effects, contiguity and forward asymmetry effects, and the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates, and continuous and/or end-of-list distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example, in the form of weak random stimuli during recall. Finally, we predict that, although the statistics of the encoded memories have a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.
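Not from the paper itself: a minimal pure-Python sketch of the core mechanism the abstract describes, an autoassociative network whose firing-rate adaptation (modeled as an adaptive threshold) destabilizes the currently retrieved memory, so the network cannot simply stay put. All sizes and constants here are illustrative choices, not the paper's parameters.

```python
import random

random.seed(0)
N, P = 120, 4                 # neurons, stored memories (toy sizes)

# Random binary (+1/-1) memories and a Hebbian autoassociative weight matrix
xi = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]
J = [[0.0 if i == j else sum(xi[m][i] * xi[m][j] for m in range(P)) / N
      for j in range(N)] for i in range(N)]

def overlap(s, mu):
    """Correlation between the current state and stored memory mu."""
    return sum(s[i] * xi[mu][i] for i in range(N)) / N

s = xi[0][:]                  # cue the network with memory 0
theta = [0.0] * N             # adaptive thresholds = firing-rate adaptation
g, tau = 1.5, 10.0            # adaptation strength (>1 destabilizes) and time constant

overlaps = []
for sweep in range(200):
    for i in range(N):        # asynchronous threshold-unit updates
        h = sum(J[i][j] * s[j] for j in range(N)) - theta[i]
        s[i] = 1 if h >= 0 else -1
    for i in range(N):        # adaptation chases the active state, undermining it
        theta[i] += (g * s[i] - theta[i]) / tau
    overlaps.append(overlap(s, 0))
# adaptation eventually expels the state from the cued attractor: the seed of
# the memory-to-memory transitions the paper studies
```

Without the `theta` term this is a standard Hopfield-style network that would stay in memory 0 forever; with it, the cued attractor is only transiently stable.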


2021 · pp. 1-33
Author(s): Kevin Berlemont, Jean-Pierre Nadal

In experiments on perceptual decision making, individuals learn a categorization task through trial-and-error protocols. We explore the capacity of a decision-making attractor network to learn a categorization task through reward-based, Hebbian-type modifications of the weights incoming from the stimulus-encoding layer. For the latter, we assume a standard layer of a large number of stimulus-specific neurons. Within the general framework of Hebbian learning, we hypothesize that the learning rate is modulated by the reward at each trial. Surprisingly, we find that when the coding layer has been optimized in view of the categorization task, such reward-modulated Hebbian learning (RMHL) fails to efficiently extract the category membership. In previous work, we showed that the attractor neural network's nonlinear dynamics accounts for behavioral confidence in sequences of decision trials. Taking advantage of these findings, we propose that learning is controlled by confidence, as computed from the neural activity of the decision-making attractor network. Here we show that this confidence-controlled, reward-based Hebbian learning efficiently extracts categorical information from the optimized coding layer. The proposed learning rule is local and, in contrast to RMHL, does not require storing the average rewards obtained on previous trials. In addition, we find that the confidence-controlled learning rule achieves near-optimal performance. In accordance with this result, we show that the learning rule approximates a gradient descent method on a reward-maximizing cost function.
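The idea of a confidence-controlled, reward-based Hebbian rule can be caricatured in a few lines. This is a loose sketch, not the authors' model: the coding layer, the `tanh`-based confidence proxy, and every parameter are invented here for illustration; the key feature retained is that the learning rate is scaled by `(1 - confidence)` and that no running average of past rewards is stored.

```python
import math
import random

random.seed(1)
D = 20                          # stimulus-encoding neurons (toy size)
w = [0.0] * D                   # plastic weights from coding layer to decision side

def encode(category):
    # Hypothetical tuned coding layer: one half of the neurons prefers each category
    base = [1.0 if (i < D // 2) == (category > 0) else 0.2 for i in range(D)]
    return [b + 0.1 * random.gauss(0, 1) for b in base]

def decide(r):
    h = sum(wi * ri for wi, ri in zip(w, r))
    choice = 1 if h >= 0 else -1
    confidence = abs(math.tanh(h))   # stand-in for attractor-derived confidence
    return choice, confidence

eta = 0.05
for trial in range(500):
    cat = random.choice([-1, 1])
    r = encode(cat)
    choice, conf = decide(r)
    reward = 1.0 if choice == cat else -1.0
    # Confidence-controlled, reward-based Hebbian update: the (1 - conf) factor
    # makes uncertain trials drive learning; no stored average reward is needed
    for i in range(D):
        w[i] += eta * (1.0 - conf) * reward * choice * r[i]

accuracy = sum(decide(encode(c))[0] == c
               for c in [random.choice([-1, 1]) for _ in range(100)]) / 100
```

As confidence saturates on well-classified stimuli, learning self-terminates, which is one way to read the rule's stability advantage over plain RMHL.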


2021
Author(s): Wen Yang, Jinjian Wu, Leida Li, Weisheng Dong, Guangming Shi

2021
Author(s): Mehdi Fallahnezhad, Julia Le Mero, Xhensjana Zenelaj, Jean Vincent, Christelle Rochefort, ...

Head direction (HD) cells, key neuronal elements in the mammalian navigation system, are hypothesized to act as a continuous attractor network, in which temporal coordination between cell members is maintained under different brain states or external sensory conditions, resembling a unitary neural representation of direction. Whether and how multiple identified HD signals in anatomically separate HD cell structures are part of a single and unique attractor network is currently unknown. By manipulating the cerebellum, we identified pairs of thalamic and retrosplenial HD cells that lose their temporal coordination in the absence of external sensory drive, while the neuronal coordination within each of these brain regions remained intact. Further, we show that distinct cerebellar mechanisms are involved in the stability of direction representation depending on external sensory conditions. These results put forward a new role for the cerebellum in mediating stable and coordinated HD neuronal activity toward a unitary thalamocortical representation of direction.
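The continuous attractor hypothesized here is classically pictured as a ring of HD cells with local excitation and global inhibition, sustaining a single activity bump that stores the current heading. A toy sketch of that textbook picture (not this paper's thalamo-retrosplenial circuit; kernel shape and constants are arbitrary):

```python
import math

N = 60                                      # HD cells with preferred directions on a ring
pref = [2 * math.pi * i / N for i in range(N)]

# Continuous-attractor kernel: local excitation minus uniform global inhibition
W = [[1.2 * math.exp(3.0 * (math.cos(pref[i] - pref[j]) - 1.0)) - 0.3
      for j in range(N)] for i in range(N)]

r = [max(0.0, math.cos(p)) for p in pref]   # seed an activity bump at 0 degrees

for step in range(100):
    inp = [sum(W[i][j] * r[j] for j in range(N)) for i in range(N)]
    r = [max(0.0, x) for x in inp]          # rectified rates
    peak_rate = max(r) or 1.0
    r = [x / peak_rate for x in r]          # crude normalization (global inhibition)

peak = max(range(N), key=lambda i: r[i])    # the bump persists where it was seeded
```

The point of the sketch is persistence without input: the bump is a line attractor state, and in the paper's terms, keeping bumps in separate structures pointing the same way is what requires extra (here cerebellar) coordination.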


2021 · Vol 12 (1)
Author(s): Anne Löffler, Anastasia Sylaidi, Zafeirios Fountas, Patrick Haggard

Changes of Mind are a striking example of our ability to flexibly reverse decisions and change our own actions. Previous studies largely focused on Changes of Mind in decisions about perceptual information. Here we report reversals of decisions that require integrating multiple classes of information: (1) perceptual evidence, (2) higher-order, voluntary intentions, and (3) motor costs. In an adapted version of the random-dot motion task, participants moved to a target that matched both the external (exogenous) evidence about dot-motion direction and a preceding internally-generated (endogenous) intention about which colour to paint the dots. Movement trajectories revealed whether and when participants changed their mind about the dot-motion direction, or additionally changed their mind about which colour to choose. Our results show that decision reversals about colour intentions are less frequent in participants with stronger intentions (Exp. 1) and when motor costs of intention pursuit are lower (Exp. 2). We further show that these findings can be explained by a hierarchical, multimodal Attractor Network Model that continuously integrates higher-order voluntary intentions with perceptual evidence and motor costs. Our model thus provides a unifying framework in which voluntary actions emerge from a dynamic combination of internal action tendencies and external environmental factors, each of which can be subject to Change of Mind.
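The qualitative findings (stronger intentions and higher motor costs make reversals rarer) can be reproduced by a much cruder caricature than the paper's hierarchical multimodal model: two mutually inhibiting units, one biased by the intention, the other taxed by the motor cost, with a mid-trial leadership flip counted as a Change of Mind. Everything below is a hypothetical sketch, not the authors' implementation.

```python
def run_trial(evidence, intention_bias, motor_cost, steps=200, dt=0.05):
    """Two competing units: the intended target vs the alternative.
    evidence(t) is signed perceptual input (+ favors the intended target);
    a flip of the leading unit mid-movement counts as a Change of Mind."""
    x_int, x_alt = 0.0, 0.0
    leaders = []
    for t in range(steps):
        e = evidence(t)
        # mutual inhibition; intention biases one unit, motor cost taxes reversal
        d_int = dt * (-x_int + max(0.0, e) + intention_bias - x_alt)
        d_alt = dt * (-x_alt + max(0.0, -e) - motor_cost - x_int)
        x_int, x_alt = x_int + d_int, x_alt + d_alt
        leaders.append(x_int >= x_alt)
    changed = any(a != b for a, b in zip(leaders, leaders[1:]))
    return leaders[-1], changed

# Evidence that first supports the intended target, then reverses
flip = lambda t: 1.0 if t < 60 else -1.0

weak = run_trial(flip, intention_bias=0.1, motor_cost=0.1)    # reverses
strong = run_trial(flip, intention_bias=0.8, motor_cost=0.3)  # sticks with intention
```

In this caricature the leader's advantage integrates `evidence + bias + cost` exactly (the recurrent terms cancel in the difference), so a reversal occurs only when late contrary evidence outweighs intention strength plus switching cost, mirroring both experimental effects.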


2021 · pp. 236-247
Author(s): Mario González, Ángel Sánchez, David Dominguez, Francisco B. Rodríguez

2020
Author(s): Vezha Boboeva, Alberto Pezzotta, Claudia Clopath

Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained unexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behaviour of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely serial position effects, contiguity and forward asymmetry effects, as well as the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates and (continuous/end-of-list) distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example in the form of weak random stimuli during recall. Moreover, we predict that although the statistics of the encoded memories have a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.


2020
Author(s): Divyansh Mittal, Rishikesh Narayanan

Grid cells in the medial entorhinal cortex manifest multiple firing fields, patterned to tessellate external space with triangles. Although two-dimensional continuous attractor network (CAN) models have offered remarkable insights about grid-patterned activity generation, their functional stability in the presence of biological heterogeneities remains unexplored. In this study, we systematically incorporated three distinct forms of intrinsic and synaptic heterogeneities into a rate-based CAN model driven by virtual trajectories, developed here to mimic animal traversals and improve computational efficiency. We found that increasing degrees of biological heterogeneities progressively disrupted the emergence of grid-patterned activity and resulted in progressively larger perturbations in neural activity. Quantitatively, grid score and spatial information associated with neural activity reduced progressively with increasing degree of heterogeneities, and perturbations were primarily confined to low-frequency neural activity. We postulated that suppressing low-frequency perturbations could ameliorate the disruptive impact of heterogeneities on grid-patterned activity. To test this, we formulated a strategy to introduce intrinsic neuronal resonance, a physiological mechanism to suppress low-frequency activity, in our rate-based neuronal model by incorporating filters that mimicked resonating conductances. We confirmed the emergence of grid-patterned activity in homogeneous CAN models built with resonating neurons and assessed the impact of heterogeneities on these models. Strikingly, CAN models with resonating neurons were resilient to the incorporation of heterogeneities and exhibited stable grid-patterned firing, through suppression of low-frequency components in neural activity. Our analyses suggest a universal role for intrinsic neuronal resonance, an established mechanism in biological neurons to suppress low-frequency neural activity, in stabilizing heterogeneous network physiology.

SIGNIFICANCE STATEMENT
A central theme that governs the functional design of biological networks is their ability to sustain stable function despite widespread parametric variability. However, several theoretical and modeling frameworks employ unnatural homogeneous networks in assessing network function owing to the enormous analytical or computational costs involved in assessing heterogeneous networks. Here, we investigate the impact of biological heterogeneities on a powerful two-dimensional continuous attractor network implicated in the emergence of patterned neural activity. We show that network function is disrupted by biological heterogeneities, but is stabilized by intrinsic neuronal resonance, a physiological mechanism that suppresses low-frequency perturbations. As low-frequency perturbations are pervasive across biological systems, mechanisms that suppress low-frequency components could form a generalized route to stabilize heterogeneous biological networks.
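The low-frequency suppression attributed to resonating conductances can be illustrated with a one-variable filter: subtracting a slow low-pass copy of the input is the high-pass arm of a resonator and attenuates slow components. This is a generic sketch of the principle, not the paper's conductance-based filter; `tau` and the test frequencies are arbitrary.

```python
import math

def resonator_gain(freq_hz, tau=0.1, dt=0.001, cycles=20):
    """Steady-state amplitude gain of r(t) = u(t) - m(t), where m(t) low-pass
    filters u(t) with time constant tau. Subtracting the slow variable mimics
    the low-frequency suppression of a resonating conductance."""
    m, amp = 0.0, 0.0
    steps = int(cycles / (freq_hz * dt))
    for k in range(steps):
        u = math.sin(2.0 * math.pi * freq_hz * k * dt)
        m += dt * (u - m) / tau            # slow low-pass variable
        if k > steps // 2:                 # measure after transients settle
            amp = max(amp, abs(u - m))
    return amp

g_low, g_high = resonator_gain(0.5), resonator_gain(8.0)
# slow input is strongly attenuated, faster input passes nearly unchanged
```

Analytically the gain is ωτ/√(1 + ω²τ²), so perturbations well below 1/(2πτ) Hz are suppressed, which is the stabilizing property the abstract invokes.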


2020 · pp. 260-362
Author(s): Edmund T. Rolls

The hippocampal system provides a beautiful example of how different classes of neuronal network in the brain work together as a system to implement episodic memory, the memory for particular recent events. The hippocampus contains spatial view neurons in primates including humans, which provide a representation of locations in viewed space. These representations can be combined with object and temporal representations to provide an episodic memory about what happened where and when. A key part of the system is the CA3 system with its recurrent collateral connections that provide a single attractor network for these associations to be learned. The computational generation of time, encoded by time cells in the hippocampus, is described, and this leads to a theory of hippocampal replay and reverse replay. The computational operation of a key part of the architecture, the recall of memories to the neocortex, is described.
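The role of the CA3 recurrent collaterals as a single attractor network is commonly illustrated by pattern completion: a partial episodic cue settles into the nearest stored pattern. A pure-Python caricature of that operation (not Rolls's model; sizes, the Hebbian rule, and the 30% corruption level are arbitrary):

```python
import random

random.seed(2)
N, P = 150, 5                  # "CA3 cells" and stored episodic patterns (toy sizes)

xi = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(P)]
# recurrent-collateral weights learned by a Hebbian autoassociative rule
J = [[0.0 if i == j else sum(xi[m][i] * xi[m][j] for m in range(P)) / N
      for j in range(N)] for i in range(N)]

def recall(cue, sweeps=5):
    """Iterate the recurrent dynamics so the state settles into an attractor."""
    s = cue[:]
    for _ in range(sweeps):
        s = [1 if sum(J[i][j] * s[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return s

# a partial episodic cue: memory 0 with 30% of its elements corrupted
cue = [(-b if random.random() < 0.3 else b) for b in xi[0]]
completed = recall(cue)
match = sum(a == b for a, b in zip(completed, xi[0])) / N   # near-complete recall
```

In the chapter's terms, this is the step that recovers the full "what-where-when" association from any one of its components before the recalled pattern is returned to neocortex.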

