attractor dynamics
Recently Published Documents


TOTAL DOCUMENTS: 183 (five years: 50)
H-INDEX: 24 (five years: 4)

2022 ◽  
Author(s):  
Leo Kozachkov ◽  
John Tauber ◽  
Mikael Lundqvist ◽  
Scott L Brincat ◽  
Jean-Jacques Slotine ◽  
...  

Working memory has long been thought to arise from sustained spiking/attractor dynamics. However, recent work has suggested that short-term synaptic plasticity (STSP) may help maintain attractor states over gaps in time with little or no spiking. To determine if STSP endows additional functional advantages, we trained artificial recurrent neural networks (RNNs) with and without STSP to perform an object working memory task. We found that RNNs with and without STSP were both able to maintain memories over distractors presented in the middle of the memory delay. However, RNNs with STSP showed activity that was similar to that seen in the cortex of monkeys performing the same task. By contrast, RNNs without STSP showed activity that was less brain-like. Further, RNNs with STSP were more robust to noise and network degradation than RNNs without STSP. These results show that STSP not only helps maintain working memories but also makes neural networks more robust.
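The mechanism the abstract refers to can be sketched in a few lines: in Mongillo-style STSP, a facilitation variable u and a depression variable x scale each presynaptic neuron's effective weight, so a stimulus leaves a synaptic trace even after spiking fades. This is a generic, minimal illustration with made-up parameters, not the trained networks from the study.

```python
import numpy as np

# Minimal rate-RNN sketch with short-term synaptic plasticity (STSP):
# facilitation u and depression x modulate effective weights.
# All parameters are illustrative, not taken from the paper.
rng = np.random.default_rng(0)
N = 50
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # recurrent weights

dt, tau = 1.0, 10.0           # integration step, rate time constant (ms)
tau_f, tau_d = 1500.0, 200.0  # facilitation / depression recovery (ms)
U = 0.3                       # baseline release probability

def step(r, u, x, inp):
    """One Euler step; u and x scale each presynaptic neuron's output."""
    W_eff = W * (u * x)                      # column j scaled by u[j] * x[j]
    dr = (-r + np.tanh(W_eff @ r + inp)) * dt / tau
    du = ((U - u) / tau_f + U * (1.0 - u) * r) * dt
    dx = ((1.0 - x) / tau_d - u * x * r) * dt
    return r + dr, np.clip(u + du, 0.0, 1.0), np.clip(x + dx, 0.0, 1.0)

r, u, x = np.zeros(N), np.full(N, U), np.ones(N)
for t in range(200):
    stim = 0.5 if t < 50 else 0.0            # brief input, then a gap
    r, u, x = step(r, u, x, stim)
# Facilitation decays slowly (tau_f >> tau), so u still deviates from
# its baseline U after the input ends: an "activity-silent" trace.
```

The key design point is the separation of timescales: tau_f is two orders of magnitude longer than the rate time constant, which is what lets the synaptic variables bridge gaps in spiking.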


2021 ◽  
Author(s):  
Anna Kutschireiter ◽  
Melanie A Basnak ◽  
Rachel I Wilson ◽  
Jan Drugowitsch

Efficient navigation requires animals to track their position, velocity and heading direction (HD). Bayesian inference provides a principled framework for estimating these quantities from unreliable sensory observations, yet little is known about how and where Bayesian algorithms could be implemented in the brain's neural networks. Here, we propose a class of recurrent neural networks that track both a dynamic HD estimate and its associated uncertainty. They do so according to a circular Kalman filter, a statistically optimal algorithm for circular estimation. Our network generalizes standard ring attractor models by encoding uncertainty in the amplitude of a bump of neural activity. More generally, we show that near-Bayesian integration is inherent in ring attractor networks, as long as their connectivity strength allows them to sufficiently deviate from the attractor state. Furthermore, we identify the basic network motifs required to implement Bayesian inference and show that these motifs are present in the Drosophila HD system connectome. Overall, our work demonstrates that the Drosophila HD system can in principle implement a dynamic Bayesian inference algorithm in a biologically plausible manner, consistent with recent findings that suggest ring-attractor dynamics underlie the Drosophila HD system.
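The central encoding idea above can be made concrete with a population-vector readout: the angle of the vector gives the HD estimate, and its length (tied to the bump's amplitude and sharpness) gives the certainty. The neuron count and von Mises bump shape below are illustrative assumptions, not parameters from the model.

```python
import numpy as np

# Sketch: decoding a heading estimate and its certainty from a bump of
# activity on a ring of HD cells. Bump shape and size are hypothetical.
N = 64
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred HDs

def bump(mu, kappa):
    """Von Mises-shaped bump centred on heading mu; kappa ~ certainty."""
    return np.exp(kappa * np.cos(theta - mu))

def decode(a):
    """Population vector: heading estimate and mean resultant length."""
    z = np.sum(a * np.exp(1j * theta))
    return np.angle(z) % (2.0 * np.pi), np.abs(z) / np.sum(a)

mu_sharp, conf_sharp = decode(bump(1.0, kappa=4.0))
mu_broad, conf_broad = decode(bump(1.0, kappa=1.0))
# Both bumps decode to the same heading, but the sharper bump yields a
# longer resultant vector, i.e. a more certain estimate.
```

This is the sense in which amplitude can carry uncertainty: downstream circuits reading the population vector get both quantities from the same activity pattern.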


Author(s):  
Christopher S Dunham ◽  
Sam Lilak ◽  
Joel Hochstetter ◽  
Alon Loeffler ◽  
Ruomin Zhu ◽  
...  

Abstract Numerous studies suggest critical dynamics may play a role in information processing and task performance in biological systems. However, studying critical dynamics in these systems can be challenging due to many confounding biological variables that limit access to the physical processes underpinning critical dynamics. Here we offer a perspective on the use of abiotic, neuromorphic nanowire networks as a means to investigate critical dynamics in complex adaptive systems. Neuromorphic nanowire networks are composed of metallic nanowires and possess metal-insulator-metal junctions. These networks self-assemble into a highly interconnected, variable-density structure and exhibit nonlinear electrical switching properties and information processing capabilities. We highlight key dynamical characteristics observed in neuromorphic nanowire networks, including persistent fluctuations in conductivity with power law distributions, hysteresis, chaotic attractor dynamics, and avalanche criticality. We posit that neuromorphic nanowire networks can function effectively as tunable abiotic physical systems for studying critical dynamics and leveraging criticality for computation.
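One of the dynamical signatures mentioned above, power-law-distributed fluctuations, is typically assessed by estimating the distribution's exponent from observed event sizes. The sketch below draws synthetic avalanche sizes from a power law and recovers the exponent with the standard continuous maximum-likelihood estimator; in an experiment, the sizes would instead come from thresholded conductance fluctuations, and the chosen exponent and cutoff here are arbitrary.

```python
import numpy as np

# Power-law exponent estimation, Clauset-style continuous MLE.
# Synthetic data stand in for measured avalanche sizes.
rng = np.random.default_rng(42)
alpha, s_min, n = 2.0, 1.0, 50_000

# Inverse-CDF sampling from p(s) ~ s^(-alpha) for s >= s_min
u = rng.random(n)
sizes = s_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

# Continuous maximum-likelihood estimate of the exponent
alpha_hat = 1.0 + n / np.sum(np.log(sizes / s_min))
```

In practice one would also test the power-law fit against alternatives (e.g. lognormal) before claiming criticality; the MLE alone only quantifies the exponent.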


2021 ◽  
Author(s):  
Jake P Stroud ◽  
Kei Watanabe ◽  
Takafumi Suzuki ◽  
Mark G Stokes ◽  
Máté Lengyel

Working memory involves the short-term maintenance of information and is critical in many tasks. The neural circuit mechanisms underlying this information maintenance are thought to rely on persistent activities resulting from attractor dynamics. However, how information is loaded into working memory for subsequent maintenance remains poorly understood. A pervasive assumption is that information loading requires inputs that are similar to the persistent activities expressed during maintenance. Here, we show through mathematical analysis and numerical simulations that optimal inputs are instead largely orthogonal to persistent activities and naturally generate the rich transient dynamics that are characteristic of prefrontal cortex (PFC) during working memory. By analysing recordings from monkeys performing a memory-guided saccade task, and using a novel, theoretically principled metric, we show that PFC exhibits the hallmarks of optimal information loading. Our theory unifies previous, seemingly conflicting theories of memory maintenance based on attractor or purely sequential dynamics, and reveals a normative principle underlying the widely observed phenomenon of dynamic coding in PFC. These results suggest that optimal information loading may be a key component of attractor dynamics characterising various cognitive functions and cortical areas, including long-term memory and navigation in the hippocampus, and decision making in the PFC.
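The claim that optimal inputs are largely orthogonal to persistent activity has a simple intuition in non-normal linear dynamics: an input pulse orthogonal to the readout direction can be rotated into it and transiently amplified. The two-unit system below is a toy illustration of that effect under assumed dynamics, not the paper's analysis; e1 plays the role of the "persistent" readout direction and e2 feeds it through a strong weight k.

```python
import numpy as np

# Toy non-normal linear network: dx/dt = A x.
# A is hypothetical; it is chosen only to exhibit transient amplification.
k = 10.0
A = np.array([[-1.0, k],
              [0.0, -1.0]])

def peak_readout(x0, T=5.0, dt=0.001):
    """Euler-integrate from x0 and track the peak response along e1."""
    x = np.array(x0, dtype=float)
    peak = abs(x[0])
    for _ in range(int(T / dt)):
        x = x + dt * (A @ x)
        peak = max(peak, abs(x[0]))
    return peak

peak_aligned = peak_readout([1.0, 0.0])     # input along the readout mode
peak_orthogonal = peak_readout([0.0, 1.0])  # input orthogonal to it
# Analytically, the orthogonal pulse yields x1(t) = k t e^{-t},
# peaking at k/e, while the aligned pulse only decays from 1.
```

For k > e the orthogonal input wins, which is the flavor of the result: loading information through directions orthogonal to the persistent mode exploits the network's transient dynamics.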


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Brad K Hulse ◽  
Hannah Haberkern ◽  
Romain Franconville ◽  
Daniel B Turner-Evans ◽  
Shinya Takemura ◽  
...  

Flexible behaviors over long timescales are thought to engage recurrent neural networks in deep brain regions, which are experimentally challenging to study. In insects, recurrent circuit dynamics in a brain region called the central complex (CX) enable directed locomotion, sleep, and context- and experience-dependent spatial navigation. We describe the first complete electron-microscopy-based connectome of the Drosophila CX, including all its neurons and circuits at synaptic resolution. We identified new CX neuron types, novel sensory and motor pathways, and network motifs that likely enable the CX to extract the fly’s head-direction, maintain it with attractor dynamics, and combine it with other sensorimotor information to perform vector-based navigational computations. We also identified numerous pathways that may facilitate the selection of CX-driven behavioral patterns by context and internal state. The CX connectome provides a comprehensive blueprint necessary for a detailed understanding of network dynamics underlying sleep, flexible navigation, and state-dependent action selection.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mattia Rosso ◽  
Pieter J. Maes ◽  
Marc Leman

Abstract Rhythmic joint coordination is ubiquitous in daily-life human activities. In order to coordinate their actions towards shared goals, individuals need to co-regulate their timing and move together at the collective level of behavior. Remarkably, basic forms of coordinated behavior tend to emerge spontaneously as long as two individuals are exposed to each other’s rhythmic movements. The present study investigated the dynamics of spontaneous dyadic entrainment, and more specifically how they depend on the sensory modalities mediating informational coupling. By means of a novel interactive paradigm, we showed that dyadic entrainment systematically takes place during a minimalistic rhythmic task despite explicit instructions to ignore the partner. Crucially, the interaction was organized by clear dynamics in a modality-dependent fashion. Our results showed highly consistent coordination patterns in visually-mediated entrainment, whereas we observed more chaotic and more variable profiles in the auditorily-mediated counterpart. The proposed experimental paradigm yields empirical evidence for the overwhelming tendency of dyads to behave as coupled rhythmic units. In the context of our experimental design, it showed that coordination dynamics differ according to the availability and nature of perceptual information. Interventions aimed at rehabilitating, teaching or training sensorimotor functions can ultimately be informed and optimized by such fundamental knowledge.


2021 ◽  
Author(s):  
Harrison Ritz ◽  
Amitai Shenhav

Abstract When faced with distraction, we can focus more on goal-relevant information (targets) or focus less on goal-conflicting information (distractors). How people decide to distribute cognitive control across targets and distractors remains unclear. To help address this question, we developed a parametric attentional control task with a graded manipulation of both target discriminability and distractor interference. We found that participants exert independent control over target and distractor processing. We measured control adjustments through the influence of incentives and previous conflict on target and distractor sensitivity, finding that these have dissociable influences on control. Whereas incentives preferentially led to target enhancement, conflict on the previous trial preferentially led to distractor suppression. These distinct drivers of control altered sensitivity to targets and distractors early in the trial and were promptly followed by reactive reconfiguration towards task-appropriate feature sensitivity. Finally, we provide a process-level account of these findings by showing that these control adjustments are well captured by an evidence accumulation model with attractor dynamics over feature weights. These results help establish a process-level account of control configuration that provides new insights into how multivariate attentional signals are optimized to achieve task goals.
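The notion of independent control over target and distractor processing can be sketched as an evidence-accumulation model in which the two feature channels enter with separate gains. This is a bare drift-diffusion illustration with hypothetical parameters; the paper's full model additionally places attractor dynamics over the feature weights themselves, which is omitted here.

```python
import numpy as np

# Drift-diffusion sketch with independent target/distractor gains.
# All parameter values (bound, noise, coherences) are made up.
rng = np.random.default_rng(1)

def accumulate(target_coh, distractor_coh, g_t, g_d,
               bound=1.0, dt=0.01, noise=0.5, max_t=5.0):
    """Accumulate gained evidence to a bound; return (choice, RT)."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        drift = g_t * target_coh + g_d * distractor_coh
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return (x > 0, t)

# With the distractor gain suppressed, a conflicting distractor
# (negative coherence) no longer pulls the decision off target:
choices = [accumulate(0.5, -0.5, g_t=1.0, g_d=0.0)[0] for _ in range(200)]
```

Raising g_d back to 1.0 in this sketch cancels the drift entirely (0.5 - 0.5 = 0), reducing performance to chance, which is the sense in which the two gains act as independent control knobs.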


2021 ◽  
Vol 17 (8) ◽  
pp. e1009296
Author(s):  
Tatsuya Haga ◽  
Tomoki Fukai

Our cognition relies on the ability of the brain to segment hierarchically structured events on multiple scales. Recent evidence suggests that the brain performs this event segmentation based on the structure of state-transition graphs behind sequential experiences. However, the underlying circuit mechanisms are poorly understood. In this paper, we propose an extended attractor network model for graph-based hierarchical computation which we call the Laplacian associative memory. This model generates multiscale representations for communities (clusters) of associative links between memory items, and the scale is regulated by the heterogeneous modulation of inhibitory circuits. We analytically and numerically show that these representations correspond to graph Laplacian eigenvectors, a popular method for graph segmentation and dimensionality reduction. Finally, we demonstrate that our model exhibits chunked sequential activity patterns resembling hippocampal theta sequences. Our model connects graph theory and attractor dynamics to provide a biologically plausible mechanism for abstraction in the brain.
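The graph-theoretic core the abstract invokes is standard spectral clustering: the sign pattern of the Fiedler vector (the Laplacian eigenvector with the second-smallest eigenvalue) separates two communities. A minimal sketch on a toy graph (two triangles joined by one edge); the graph is an illustration, not one of the paper's state-transition graphs.

```python
import numpy as np

# Community detection via the graph Laplacian's Fiedler vector.
# Toy graph: two 3-node cliques {0,1,2} and {3,4,5} joined by edge (2,3).
A = np.zeros((6, 6))
for i, j in [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A        # graph Laplacian L = D - A
eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
fiedler = eigvecs[:, 1]               # second-smallest eigenvalue's vector
labels = fiedler > 0                  # community assignment by sign
```

In the model's terms, these eigenvectors are what the attractor network's multiscale representations converge to; the inhibitory modulation selects how many of them (i.e. how fine a segmentation) is expressed.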

