network attractor
Recently Published Documents


TOTAL DOCUMENTS: 8 (five years: 4)
H-INDEX: 3 (five years: 1)

2021 ◽ Vol 53 (2) ◽ pp. 907-928
Author(s): C. Lakshmi, K. Thenmozhi, C. Venkatesan, A. Seshadhri, John Bosco Balaguru Rayappan, ...

2021 ◽ Vol 11 (1)
Author(s): Jennifer Creaser, Peter Ashwin, Claire Postlethwaite, Juliane Britz

Abstract
The brain is intrinsically organized into large-scale networks that constantly re-organize on multiple timescales, even when the brain is at rest. The timing of these dynamics is crucial for sensation, perception, cognition, and ultimately consciousness, but the underlying dynamics governing the constant reorganization and switching between networks are not yet well understood. Electroencephalogram (EEG) microstates are brief periods of stable scalp topography that have been identified as the electrophysiological correlate of resting-state networks defined by functional magnetic resonance imaging. Spatiotemporal microstate sequences maintain high temporal resolution and have been shown to be scale-free with long-range temporal correlations. Previous attempts to model EEG microstate sequences have failed to capture this crucial property and so cannot fully capture the dynamics; this paper answers the call for more sophisticated modeling approaches. We present a dynamical model that exhibits a noisy network attractor between nodes that represent the microstates. Using an excitable network between four nodes, we can reproduce the transition probabilities between microstates but not the heavy-tailed residence time distributions. We present two extensions to this model: first, an additional hidden node at each state; second, an additional layer that controls the switching frequency in the original network. Introducing either extension to the network gives the flexibility to capture these heavy tails. We compare the model-generated sequences to microstate sequences from EEG data collected from healthy subjects at rest. For the first extension, we show that the hidden nodes 'trap' the trajectories, allowing control of the residence times at each node. For the second extension, we show that two nodes in the controlling layer are sufficient to model the long residence times. Finally, we show that in addition to capturing the residence time distributions and transition probabilities of the sequences, these two models capture additional properties of the sequences, including interspersed long and short residence times and long-range temporal correlations in line with the data as measured by the Hurst exponent.
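The qualitative effect of the first extension can be illustrated with a toy simulation. This is a minimal sketch, not the authors' model: it replaces the stochastic differential equations of the noisy network attractor with a discrete four-state switching process, and the hidden "trap" node at each state is modeled by assumed probabilities `p_trap` and `p_escape` (all parameter names are hypothetical). While trapped, the trajectory cannot switch, which lengthens residence times at that node.

```python
import random

def simulate_microstates(steps, p_switch=0.2, p_trap=0.05, p_escape=0.02, seed=1):
    """Four-state switching process; each state has a hidden 'trap' node."""
    rng = random.Random(seed)
    state, trapped = 0, False
    seq = []
    for _ in range(steps):
        seq.append(state)
        if trapped:
            if rng.random() < p_escape:
                trapped = False  # leave the hidden node; switching can resume
        elif rng.random() < p_trap:
            trapped = True       # enter the hidden node: a long residence begins
        elif rng.random() < p_switch:
            state = rng.choice([s for s in range(4) if s != state])
    return seq

def residence_times(seq):
    """Lengths of consecutive runs of the same state."""
    runs, run = [], 1
    for prev, cur in zip(seq, seq[1:]):
        if cur == prev:
            run += 1
        else:
            runs.append(run)
            run = 1
    runs.append(run)
    return runs
```

Raising `p_trap` or lowering `p_escape` inflates the tail of the residence-time distribution without changing which transitions are possible, which is the separation of roles the abstract attributes to the hidden nodes.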


2019
Author(s): Maria Mørreaunet, Martin Hägglund

Abstract
The firing pattern of grid cells in rats has been shown to exhibit elastic distortions that compress and shear the pattern, suggesting that the grid is locally anchored. Anchoring points may need to be learned to account for different environments. We recorded grid cells in animals encountering a novel environment. The grid pattern was not stable but moved between the first few sessions, in a manner predicted by the animals' running behavior. Using a learning continuous attractor network model, we show that learning distributed anchoring points may lead to such grid-field movement, as well as to the previously observed shearing and compression distortions. The model further predicted topological defects comprising a pentagonal/heptagonal break in the pattern. Grids recorded in large environments were shown to exhibit such topological defects. Taken together, the final pattern may be a compromise between local network attractor states driven by self-motion signals and distributed anchoring inputs from place cells.
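The "compromise" in the last sentence can be sketched in one dimension. This toy is an assumption-laden illustration, not the paper's continuous attractor network: a periodic grid phase is first advanced by path integration (self-motion) and then pulled toward a learned anchoring point; the function name, `gain`, and `period` are all hypothetical.

```python
def update_phase(phase, motion, anchor, gain=0.1, period=1.0):
    """One step of a 1-D grid phase: path integration, then anchoring."""
    # Path integration: advance the phase by the animal's displacement.
    phase = (phase + motion) % period
    # Anchoring input: pull the phase toward the stored anchor point,
    # using the shortest signed distance on the periodic interval.
    error = ((anchor - phase + period / 2) % period) - period / 2
    return (phase + gain * error) % period
```

With zero motion the phase drifts toward the anchor; with persistent motion it settles between the path-integration estimate and the anchor, mirroring the compromise between self-motion-driven attractor states and distributed anchoring inputs described above.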


2004 ◽ Vol 161 (3) ◽ pp. 129-142
Author(s): Wei Zhang, Zhiming Wu, Genke Yang

ICANN ’93 ◽ 1993 ◽ pp. 27-30
Author(s): Mitsuyuki Nakao, Kazuhiko Watanabe, Yoshinari Mizutani, Mitsuaki Yamamoto
