Intrinsic Adaptation in Autonomous Recurrent Neural Networks

2012 ◽  
Vol 24 (2) ◽  
pp. 523-540 ◽  
Author(s):  
Dimitrije Marković ◽  
Claudius Gros

A massively recurrent neural network responds, on the one hand, to input stimuli and is, on the other hand, autonomously active in the absence of sensory inputs. Stimuli and information processing depend crucially on the quality of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. In the presence of these intrinsic adaptation processes, we observe three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flow, which are quite insensitive to external stimuli, interspersed with chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
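
The adaptation mechanism described above can be made concrete with a small sketch. Below is a minimal, illustrative implementation of an entropy-driven intrinsic plasticity rule for a single sigmoidal rate neuron, adapting its gain and threshold; the specific gradient rule (Triesch-style ascent toward an exponential firing-rate distribution) and all parameter values are assumptions for illustration, not the authors' exact update.

```python
import numpy as np

# Sketch: intrinsic (nonsynaptic) adaptation of the gain a and threshold b
# of a sigmoidal rate neuron, driven by entropy maximization. The update
# (gradient rule targeting an exponential rate distribution with mean MU,
# in the style of Triesch 2005) is an illustrative assumption.

rng = np.random.default_rng(0)
ETA, MU = 0.01, 0.1                # learning rate, target mean activity

a, b = 1.0, 0.0                    # gain and threshold (intrinsic parameters)
for _ in range(10_000):
    x = rng.normal()               # stand-in for the neuron's recurrent input
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))       # sigmoidal firing rate
    db = ETA * (1.0 - (2.0 + 1.0 / MU) * y + y * y / MU)
    da = ETA / a + x * db          # gain update is coupled to the threshold update
    a, b = a + da, b + db

print(f"adapted gain a = {a:.3f}, threshold b = {b:.3f}")
```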

2020 ◽  
Vol 375 (1799) ◽  
pp. 20190231 ◽  
Author(s):  
David Tingley ◽  
Adrien Peyrache

A major task in the history of neurophysiology has been to relate patterns of neural activity to ongoing external stimuli. More recently, this approach has branched out to relating current neural activity patterns to external stimuli or experiences that occurred in the past or future. Here, we review the large body of methodological approaches used towards this goal and assess the assumptions each makes with reference to the statistics of neural data that are commonly observed. These methods fall primarily into two categories: those that quantify zero-lag relationships without examining temporal evolution, termed reactivation, and those that quantify the temporal structure of changing activity patterns, termed replay. However, no two studies use the exact same approach, which prevents an unbiased comparison between findings and makes individual observations hard to verify. Such observations should instead be validated by multiple and, if possible, previously established tests. This will help the community to speak a common language and will eventually provide tools to study, more generally, the organization of neuronal patterns in the brain. This article is part of the Theo Murphy meeting issue ‘Memory reactivation: replaying events past, present and future’.
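
To make the distinction concrete, here is a toy sketch contrasting the two families of measures on synthetic binned spike counts; both measures are deliberately simplified stand-ins (a PCA-based reactivation strength and a rank-order replay score), not any single study's pipeline.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
task = rng.poisson(3.0, size=(20, 50)).astype(float)   # binned counts, task epoch
rest = rng.poisson(3.0, size=(20, 200)).astype(float)  # binned counts, rest epoch

# Reactivation: a zero-lag co-activation measure. Project rest activity onto
# the dominant principal component of the task-epoch correlation matrix
# (simplified from the PCA "reactivation strength" family of measures).
z_task = stats.zscore(task, axis=1)
_, evecs = np.linalg.eigh(np.corrcoef(z_task))
pc1 = evecs[:, -1]                          # strongest co-activation pattern
z_rest = stats.zscore(rest, axis=1)
react = (pc1 @ z_rest) ** 2                 # reactivation strength per time bin

# Replay: a temporal-order measure. Rank-correlate each cell's peak-firing
# time in a candidate rest event against its peak time in the task template.
rho, p = stats.spearmanr(task.argmax(axis=1), rest[:, :50].argmax(axis=1))

print(f"mean reactivation strength: {react.mean():.2f}")
print(f"replay rank-order rho = {rho:.2f} (p = {p:.2f})")
```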


2021 ◽  
Vol 2094 (3) ◽  
pp. 032041
Author(s):  
S I Bartsev ◽  
G M Markova

Abstract The study compares two methods for identifying the stimulus received by an artificial neural network from the neural activity pattern that corresponds to the period of storing information about this stimulus in working memory. We used simple recurrent neural networks trained to pass the delayed matching-to-sample test. Neural activity was recorded during the pause between stimuli. The analysis of neural excitation patterns showed that, during the delayed matching-to-sample test, the networks encoded variables relevant to the task and that their activity patterns were dynamic. The method of centroids identified the type of the received stimulus with up to 75% accuracy, while the neural-network-based decoder achieved 100% accuracy. In addition, the decoder was applied to determine the minimal set of neurons whose activity was most significant for stimulus recognition.
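
A minimal sketch of the centroid approach, on synthetic hidden-state vectors rather than the paper's networks: each stimulus class is summarized by the mean delay-period activity vector of its training trials, and test trials are labeled by the nearest centroid. All data, dimensions, and noise levels here are illustrative assumptions.

```python
import numpy as np

# Sketch: classify which stimulus a recurrent network is holding in working
# memory from its delay-period hidden-state vector, by nearest class centroid.
# Hidden states are synthetic stand-ins, not the paper's recorded activity.

rng = np.random.default_rng(2)
n_hidden, n_trials = 16, 200
labels = rng.integers(0, 2, size=n_trials)           # two stimulus types
means = rng.normal(size=(2, n_hidden))               # class-dependent activity
states = means[labels] + 0.8 * rng.normal(size=(n_trials, n_hidden))

train, test = np.arange(0, 150), np.arange(150, n_trials)

# One centroid per stimulus class, computed on training trials only.
centroids = np.stack([states[train][labels[train] == c].mean(axis=0)
                      for c in (0, 1)])

# Label each test trial by Euclidean distance to the nearest centroid.
dists = np.linalg.norm(states[test, None, :] - centroids[None], axis=2)
pred = dists.argmin(axis=1)
print(f"centroid decoding accuracy: {(pred == labels[test]).mean():.0%}")
```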


F1000Research ◽  
2016 ◽  
Vol 5 ◽  
pp. 2416 ◽  
Author(s):  
András Szilágyi ◽  
István Zachar ◽  
Anna Fedor ◽  
Harold P. de Vladar ◽  
Eörs Szathmáry

Background: The fact that surplus connections and neurons are pruned during development is well established. We complement this selectionist picture with a proof-of-principle model of evolutionary search in the brain that accounts for new variations in theory space. We present a model of Darwinian evolutionary search for candidate solutions in the brain.
Methods: We combine known components of the brain – recurrent neural networks (acting as attractors), the action selection loop and implicit working memory – to provide the appropriate Darwinian architecture. We employ a population of attractor networks with palimpsest memory. The action selection loop is employed with winners-share-all dynamics to select for candidate solutions that are transiently stored in implicit working memory.
Results: We document two processes: selection of stored solutions and evolutionary search for novel solutions. During the replication of candidate solutions, attractor networks occasionally produce recombinant patterns, increasing the variation on which selection can act. Combinatorial search acts on multiplying units (activity patterns) with hereditary variation; novel variants appear due to (i) noisy recall of patterns from the attractor networks, (ii) noise during transmission of candidate solutions as messages between networks, and (iii) spontaneously generated, untrained patterns in spurious attractors.
Conclusions: Attractor dynamics of recurrent neural networks can be used to model Darwinian search. The proposed architecture supports fast search among stored solutions (by selection) and evolutionary search when novel candidate solutions are generated in successive iterations. Since all the suggested components are present in advanced nervous systems, we hypothesize that the brain could implement a truly evolutionary combinatorial search system, capable of generating novel variants.
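
As a rough illustration of this architecture, the sketch below runs a Darwinian loop in which a Hebbian attractor network stores the current population of candidate solutions and noisy recall supplies variation (including recombinants from crosstalk and untrained spurious attractors), with the fitter half of candidates seeding the next round. The fitness function, pattern size, and noise levels are illustrative assumptions, not the published model.

```python
import numpy as np

# Sketch of a Darwinian search loop with a Hopfield-style attractor network
# standing in for the memory component. Everything here is a toy assumption.

rng = np.random.default_rng(3)
N, POP, NOISE = 64, 20, 0.05
target = rng.choice([-1, 1], size=N)                 # unknown optimum
fitness = lambda p: (p == target).mean()             # toy fitness function

def recall(W, pattern, steps=10):
    """Noisy recall: flip some bits, then relax in the attractor network."""
    s = np.where(rng.random(N) < NOISE, -pattern, pattern)
    for _ in range(steps):
        s = np.sign(W @ s + 1e-9)                    # synchronous update
    return s.astype(int)

pop = [rng.choice([-1, 1], size=N) for _ in range(POP)]
for gen in range(50):
    # Store the current candidates with a Hebbian (palimpsest-like) rule;
    # crosstalk between patterns creates recombinant and spurious attractors.
    W = sum(np.outer(p, p) for p in pop) / N
    np.fill_diagonal(W, 0.0)
    offspring = [recall(W, p) for p in pop]          # variation via noisy recall
    # Selection: the fitter half of the combined pool seeds the next round.
    pop = sorted(pop + offspring, key=fitness, reverse=True)[:POP]

print(f"best fitness after search: {fitness(pop[0]):.2f}")
```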


2018 ◽  
Author(s):  
Patrick Krauss ◽  
Marc Schuster ◽  
Verena Dietrich ◽  
Achim Schilling ◽  
Holger Schulze ◽  
...  

Abstract
Recurrent neural networks are complex non-linear systems, capable of ongoing activity in the absence of driving inputs. The dynamical properties of these systems, in particular their long-time attractor states, are determined on the microscopic level by the connection strengths w_ij between the individual neurons. However, little is known about the extent to which network dynamics is tunable on a more coarse-grained level by the statistical features of the weight matrix. In this work, we investigate the dynamical impact of three statistical parameters: density (the fraction of non-zero connections), balance (the ratio of excitatory to inhibitory connections), and symmetry (the fraction of neuron pairs with w_ij = w_ji). By computing a ‘phase diagram’ of network dynamics, we find that balance is the essential control parameter: its gradual increase from negative to positive values drives the system from oscillatory behavior into a chaotic regime, and eventually into stationary fixed points. Only directly at the border of the chaotic regime do the neural networks display rich but regular dynamics, thus enabling actual information processing. These results suggest that the brain, too, is fine-tuned to the ‘edge of chaos’ by assuring a proper balance between excitatory and inhibitory neural connections.

Author summary
Computations in the brain need to be both reproducible and sensitive to changing input from the environment. It has been shown that recurrent neural networks can meet these simultaneous requirements only in a particular dynamical regime, called the edge of chaos in non-linear systems theory. Here, we demonstrate that recurrent neural networks can easily be tuned to this critical regime of optimal information processing by assuring a proper ratio of excitatory and inhibitory connections between the neurons. This result is in line with several micro-anatomical studies of the cortex, which frequently confirm that the excitatory-inhibitory balance is strictly conserved there. Furthermore, it turns out that neural dynamics is largely independent of the total density of connections, a feature that explains how the brain can remain functional during periods of growth or decay. Finally, we find that too many symmetric connections are detrimental to this critical dynamical regime, but may in turn be useful for pattern completion tasks.
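
A sketch of how the three statistical parameters can be imposed on a random weight matrix, followed by a short run of the autonomous dynamics. The parameterization here (balance b as the excess fraction of excitatory entries, symmetry s as the share of mirrored pairs) is an assumed convention for illustration, not the paper's exact definition.

```python
import numpy as np

# Sketch: build a random weight matrix with prescribed density d, balance b,
# and symmetry s, then run the autonomous rate dynamics. All conventions and
# parameter values are illustrative assumptions.

rng = np.random.default_rng(4)
N = 100

def make_weights(d, b, s):
    mask = rng.random((N, N)) < d                    # density: non-zero fraction
    signs = np.where(rng.random((N, N)) < (1 + b) / 2, 1.0, -1.0)  # E/I balance
    W = mask * signs * rng.random((N, N))
    sym = rng.random((N, N)) < s                     # symmetry: w_ij = w_ji pairs
    iu = np.triu_indices(N, 1)
    W_t = W.T.copy()
    W[iu] = np.where(sym[iu], W_t[iu], W[iu])        # mirror selected pairs
    np.fill_diagonal(W, 0.0)
    return W

W = make_weights(d=0.2, b=0.0, s=0.3)                # near-balanced network
x = rng.normal(size=N)
for _ in range(500):                                 # autonomous dynamics, no input
    x = np.tanh(W @ x)
print(f"activity std after 500 steps: {x.std():.3f}")  # ~0: fixed point; >0: ongoing
```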


Author(s):  
Sou Nobukawa ◽  
Nobuhiko Wagatsuma ◽  
Takashi Ikeda ◽  
Chiaki Hasegawa ◽  
Mitsuru Kikuchi ◽  
...  

Abstract
Synchronization of neural activity, especially in the gamma band, contributes to perceptual functions. In several psychiatric disorders, deficits of perceptual functions are reflected in synchronization abnormalities. A plausible cause of this impairment is an alteration in the balance between excitation and inhibition (E/I balance); a disruption of the E/I balance leads to abnormal neural interactions reminiscent of pathological states. Moreover, the local lateral excitatory-excitatory synaptic connections in the cortex exhibit excitatory postsynaptic potentials (EPSPs) that follow a log-normal amplitude distribution. This long-tailed distribution is considered an important factor for the emergence of spatiotemporal neural activity. In this context, we hypothesized that manipulating the EPSP distribution under abnormal E/I balance conditions would provide insights into psychiatric disorders characterized by deficits in perceptual functions, potentially revealing the mechanisms underlying pathological neural behaviors. In this study, we evaluated the synchronization of neural activity with external periodic stimuli in spiking neural networks, under both E/I balance and imbalance, with and without a long-tailed EPSP amplitude distribution. The results showed that high-frequency external stimuli lead to a decrease in the degree of synchronization as the ratio of excitatory to inhibitory neurons increases, in the presence, but not in the absence, of high-amplitude EPSPs. This monotonic reduction can be interpreted as autonomous, strong-EPSP-dependent spiking activity selectively interfering with the responses to external stimuli. This observation is consistent with pathological findings. Thus, our modeling approach has the potential to improve the understanding of the steady-state response in both healthy and pathological states.
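
As a toy illustration of two ingredients above, the sketch below draws EPSP amplitudes from a log-normal (long-tailed) distribution versus a matched control, and scores synchronization to a periodic stimulus with a phase-locking value on synthetic spike times. The full spiking-network simulation of the study is not reproduced here, and all parameters are assumptions.

```python
import numpy as np

# Sketch: long-tailed EPSP amplitudes plus a simple synchronization measure.
# Spike times are synthetic; distribution parameters are illustrative.

rng = np.random.default_rng(5)

def epsp_amplitudes(n, long_tailed=True):
    if long_tailed:
        return rng.lognormal(mean=-0.7, sigma=1.0, size=n)   # heavy tail
    return np.abs(rng.normal(loc=0.7, scale=0.3, size=n))    # matched control

def plv(spike_times, stim_freq):
    """Phase-locking value of spikes relative to a periodic stimulus."""
    phases = (2 * np.pi * stim_freq * spike_times) % (2 * np.pi)
    return np.abs(np.exp(1j * phases).mean())

w = epsp_amplitudes(1000)
print(f"EPSP mean {w.mean():.2f}, max {w.max():.2f} (long-tailed case)")

# Toy spikes: one spike per 40 Hz stimulus cycle, with 2 ms jitter.
stim_freq = 40.0
spikes = np.arange(0, 1, 1 / stim_freq)
spikes = spikes + rng.normal(0, 2e-3, size=spikes.size)
print(f"PLV at {stim_freq:.0f} Hz: {plv(spikes, stim_freq):.2f}")
```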

