Sequential propagation and routing of activity in a cortical network

2021
Author(s): Juan Luis Riquelme, Mike Hemberger, Gilles Laurent, Julijana Gjorgjieva

Single spikes can trigger repeatable sequences of spikes in cortical networks. The mechanisms that support reliable propagation from such small events and their functional consequences for network computations remain unclear. We investigated the conditions in which single spikes trigger reliable and temporally precise sequences in a network model constrained by experimental measurements from turtle cortex. We examined the roles of connectivity, synaptic strength, and spontaneous activity in the generation of sequences. Sparse but strong connections support sequence propagation, while dense but weak connections modulate propagation reliability. Unsupervised clustering reveals that sequences can be decomposed into sub-sequences corresponding to divergent branches of strongly connected neurons. The sparse backbone of strong connections defines few failure points where activity can be selectively gated, enabling the controlled routing of activity. These results reveal how repeatable sequences of activity can be triggered, sustained, and controlled, with significant implications for cortical computations.
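As a loose illustration (not the authors' turtle-cortex model), the sketch below builds a toy network of leaky threshold units in which a dense mesh of weak connections coexists with a sparse chain of strong ones; a single initial spike then propagates along the strong backbone. All sizes, weights, and thresholds are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # number of neurons (hypothetical size)
steps = 60                                # simulation steps

# Dense but weak background connectivity plus a sparse backbone of strong synapses.
w_weak = (rng.random((n, n)) < 0.10) * 0.02          # ~10% connectivity, weak weights
backbone = np.zeros((n, n))
chain = rng.permutation(n)[:30]                       # a chain of strongly coupled neurons
for pre, post in zip(chain[:-1], chain[1:]):
    backbone[post, pre] = 1.2                         # strong connection, enough to reach threshold
w = w_weak + backbone
np.fill_diagonal(w, 0.0)

threshold, decay = 1.0, 0.9
v = np.zeros(n)                                       # membrane potentials (arbitrary units)
spikes = np.zeros(n, dtype=bool)
spikes[chain[0]] = True                               # the single triggering spike

sequence = []
for t in range(steps):
    v = decay * v + w @ spikes                        # leaky integration of synaptic input
    spikes = v >= threshold
    v[spikes] = 0.0                                   # reset neurons that fired
    sequence.extend((t, int(i)) for i in np.flatnonzero(spikes))

print(sequence[:10])   # (time step, neuron) pairs: activity follows the strong backbone
```

In this toy setting, zeroing a single strong link in `backbone` halts propagation at that point, which is the toy analogue of the selective gating at failure points described above.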

1996, Vol 75 (1), pp. 217-232
Author(s): J. Xing, G. L. Gerstein

1. Mechanisms underlying cortical reorganization were studied using a three-layered neural network model with neuronal groups already formed in the cortical layer.
2. Dynamic changes induced in cortex by behavioral training or intracortical microstimulation (ICMS) were simulated. Both manipulations resulted in reassembly of neuronal groups and formation of stimulus-dependent assemblies. Receptive fields of neurons and cortical representation of inputs also changed. Many neurons that had been weakly responsive or silent became active.
3. Several types of learning models were examined in simulating behavioral training, ICMS-induced dynamic changes, deafferentation, or cortical lesion. Different learning models best reproduced the experimental data from different manipulations, suggesting that more than one plasticity mechanism might be able to induce dynamic changes in cortex.
4. After skin or cortical stimulation ceased, as spontaneous activity continued, the stimulus-dependent assemblies gradually reverted into structure-dependent neuronal groups. However, relationships among individual neurons and the identities of many neurons did not return to their original states. Thus a different set of neurons would be recruited by the same training stimulus sequence on its next presentation.
5. We also reproduced several typical long-term reorganizations caused by pathological manipulations such as cortical lesions, input loss, and digit fusion.
6. In summary, with Hebbian plasticity rules on lateral connections, the network model is capable of reproducing most characteristics of experiments on cortical reorganization. We propose that an important mechanism underlying cortical plastic changes is the formation of temporary assemblies related to the receipt of strongly synchronized, localized input. Such stimulus-dependent assemblies can be dissolved by spontaneous activity after removal of the stimuli.
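A minimal sketch of the proposed assembly mechanism is given below; it is not the three-layered model from the study. It applies a plain Hebbian rule with row normalization to lateral weights, so that synchronized, localized input builds a stimulus-dependent assembly and uncorrelated "spontaneous" activity afterwards gradually dilutes it. All parameters and the normalization scheme are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # cortical-layer neurons (hypothetical)
eta = 0.05                                # Hebbian learning rate (assumed)
target_sum = 10.0                         # per-neuron total lateral input after normalization (assumed)

w_lat = rng.random((n, n)) * 0.1          # lateral connection weights
np.fill_diagonal(w_lat, 0.0)

def hebbian_step(w, activity):
    """Strengthen lateral connections between co-active neurons, then renormalize each row."""
    w = w + eta * np.outer(activity, activity)        # Hebbian co-activity term
    np.fill_diagonal(w, 0.0)
    w *= target_sum / w.sum(axis=1, keepdims=True)    # crude normalization keeps total input bounded
    return w

# Strongly synchronized, localized input repeatedly co-activates one subset of neurons...
assembly_input = np.zeros(n)
assembly_input[:20] = 1.0
for _ in range(200):
    w_lat = hebbian_step(w_lat, assembly_input)       # a stimulus-dependent assembly forms

# ...while uncorrelated spontaneous activity afterwards gradually dilutes those weights.
for _ in range(200):
    w_lat = hebbian_step(w_lat, (rng.random(n) < 0.1).astype(float))
```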


2011, Vol 105 (2), pp. 757-778
Author(s): Malte J. Rasch, Klaus Schuch, Nikos K. Logothetis, Wolfgang Maass

A major goal of computational neuroscience is the creation of computer models for cortical areas whose response to sensory stimuli resembles that of cortical areas in vivo in important aspects. It is seldom considered whether the simulated spiking activity is realistic (in a statistical sense) in response to natural stimuli. Because certain statistical properties of spike responses have been suggested to facilitate computations in the cortex, acquiring a realistic firing regime in cortical network models might be a prerequisite for analyzing their computational functions. We present a characterization and comparison of the statistical response properties of the primary visual cortex (V1) in vivo and in silico in response to natural stimuli. We recorded from multiple electrodes in area V1 of 4 macaque monkeys and developed a large state-of-the-art network model for a 5 × 5-mm patch of V1 composed of 35,000 neurons and 3.9 million synapses that integrates previously published anatomical and physiological details. By quantitative comparison of the model response to the "statistical fingerprint" of responses in vivo, we find that our model for a patch of V1 responds to the same movie in a way that matches the statistical structure of the recorded data surprisingly well. The deviation between the firing regime of the model and the in vivo data is on the same level as deviations among monkeys and sessions. This suggests that, despite strong simplifications and abstractions of cortical network models, they are nevertheless capable of generating realistic spiking activity. To reach a realistic firing state, it was not only necessary to include both N-methyl-D-aspartate (NMDA) and GABAB synaptic conductances in our model, but also to markedly increase the strength of excitatory synapses onto inhibitory neurons (>2-fold) in comparison to literature values, hinting at the importance of carefully adjusting the effect of inhibition to achieve realistic dynamics in current network models.
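The adjustment highlighted in the abstract, strengthening excitatory synapses onto inhibitory neurons well beyond literature values, can be written down as a parameter-level sketch. The snippet below uses placeholder conductances and a guessed scaling factor, together with the standard Jahr-Stevens form of the NMDA magnesium block; none of the numbers are taken from the paper, and GABAB kinetics are not modeled here.

```python
import numpy as np

# Literature-style peak synaptic conductances (placeholder values, nS; not taken from the paper).
g_ampa, g_nmda, g_gaba_a, g_gaba_b = 1.0, 0.15, 1.0, 0.10

# Excitatory synapses onto inhibitory neurons are scaled up more than 2-fold,
# as the abstract reports was necessary; the exact factor used here is a guess.
exc_to_inh_scale = 2.5
g_ampa_on_inh = exc_to_inh_scale * g_ampa
g_nmda_on_inh = exc_to_inh_scale * g_nmda

def nmda_mg_block(v_mv, mg_mm=1.0):
    """Voltage-dependent magnesium block of the NMDA conductance (Jahr-Stevens form)."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * np.exp(-0.062 * v_mv))

v_rest = -65.0                                     # membrane potential, mV
print(g_nmda_on_inh * nmda_mg_block(v_rest))       # effective NMDA conductance at rest
```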


NeuroImage, 2010, Vol 52 (3), pp. 956-972
Author(s): Alberto Mazzoni, Kevin Whittingstall, Nicolas Brunel, Nikos K. Logothetis, Stefano Panzeri

2014, Vol 5
Author(s): Frédéric Lavigne, Francis Avnaïm, Laurent Dumercy

2002, Vol 35 (1), pp. 63-87
Author(s): Shimon Marom, Goded Shahaf

Contents:
1. Introduction
1.1 Outline
1.2 Universals versus realizations in the study of learning and memory
2. Large random cortical networks developing ex vivo
2.1 Preparation
2.2 Measuring electrical activity
3. Spontaneous development
3.1 Activity
3.2 Connectivity
4. Consequences of spontaneous activity: pharmacological manipulations
4.1 Structural consequences
4.2 Functional consequences
5. Effects of stimulation
5.1 Response to focal stimulation
5.2 Stimulation-induced changes in connectivity
6. Embedding functionality in real neural networks
6.1 Facing the physiological definition of ‘reward’: two classes of theories
6.2 Closing the loop
7. Concluding remarks
8. Acknowledgments
9. References

The phenomena of learning and memory are inherent to neural systems that differ from each other markedly. The differences, at the molecular, cellular and anatomical levels, reflect the wealth of possible instantiations of two neural learning and memory universals: (i) an extensive functional connectivity that enables a large repertoire of possible responses to stimuli; and (ii) sensitivity of the functional connectivity to activity, allowing for selection of adaptive responses. These universals can now be fully realized in ex-vivo developing neuronal networks due to advances in multi-electrode recording techniques and desktop computing. Applied to the study of ex-vivo networks of neurons, these approaches provide a unique view into learning and memory in networks, over a wide range of spatio-temporal scales. In this review, we summarize experimental data obtained from large random developing ex-vivo cortical networks. We describe how these networks are prepared, their structure, stages of functional development, and the forms of spontaneous activity they exhibit (Sections 2–4). In Section 5 we describe studies that seek to characterize the rules of activity-dependent changes in neural ensembles and their relation to monosynaptic rules. In Section 6, we demonstrate that it is possible to embed functionality into ex-vivo networks, that is, to teach them to perform desired firing patterns in both time and space. This requires ‘closing a loop’ between the network and the environment. Section 7 emphasizes the potential of ex-vivo developing cortical networks in the study of neural learning and memory universals. This may be achieved by combining closed-loop experiments and ensemble-defined rules of activity-dependent change.
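The ‘closing the loop’ idea of Section 6 amounts to a training protocol in which stimulation continues until the network's response at a chosen electrode meets a criterion, at which point the stimulus is removed. The sketch below is a schematic stand-in rather than the authors' experimental procedure: stimulate_and_record is a hypothetical stub, and the criterion, window, and response probability are arbitrary assumptions.

```python
import random

def stimulate_and_record():
    """Hypothetical stand-in for delivering a focal stimulus and checking whether a chosen
    electrode responds within a selected latency window (a real setup would query MEA hardware)."""
    return random.random() < 0.3

def closed_loop_training(target_ratio=0.5, window=10, max_stimuli=1000):
    """Stimulate repeatedly; remove the stimulus once the recent response ratio
    at the target electrode reaches the criterion."""
    responses = []
    for count in range(1, max_stimuli + 1):
        responses.append(stimulate_and_record())
        recent = responses[-window:]
        if len(recent) == window and sum(recent) / window >= target_ratio:
            return count                 # criterion reached; stimulation stops here
    return None                          # criterion never reached within the budget

print(closed_loop_training())
```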


2014
Author(s): Christoph Hartmann, Andreea Lazar, Jochen Triesch

Trial-to-trial variability and spontaneous activity of cortical recordings have been suggested to reflect intrinsic noise. This view is currently challenged by mounting evidence for structure in these phenomena: trial-to-trial variability decreases following stimulus onset and can be predicted by previous spontaneous activity. This spontaneous activity is similar in magnitude and structure to evoked activity and can predict decisions. All of the observed neuronal properties described above can be accounted for, at an abstract computational level, by the sampling hypothesis, according to which response variability reflects stimulus uncertainty. However, a mechanistic explanation at the level of neural circuit dynamics is still missing.

In this study, we demonstrate that all of these phenomena can be accounted for by a noise-free self-organizing recurrent neural network model (SORN). It combines spike-timing-dependent plasticity (STDP) and homeostatic mechanisms in a deterministic network of excitatory and inhibitory McCulloch-Pitts neurons. The network self-organizes in response to spatio-temporally varying input sequences.

We find that the key properties of neural variability mentioned above develop in this model as the network learns to perform sampling-like inference. Importantly, the model shows high trial-to-trial variability although it is fully deterministic. This suggests that the trial-to-trial variability in neural recordings may not reflect intrinsic noise. Rather, it may reflect a deterministic approximation of sampling-like learning and inference. The simplicity of the model suggests that these correlates of the sampling theory are canonical properties of recurrent networks that learn with a combination of STDP and homeostatic plasticity mechanisms.

Author Summary: Neural recordings seem very noisy. If the exact same stimulus is shown to an animal multiple times, the neural response will vary. In fact, the activity of a single neuron shows many features of a stochastic process. Furthermore, in the absence of a sensory stimulus, cortical spontaneous activity has a magnitude comparable to the activity observed during stimulus presentation. These findings have led to a widespread belief that neural activity is indeed very noisy. However, recent evidence indicates that individual neurons can operate very reliably and that the spontaneous activity in the brain is highly structured, suggesting that much of the noise may in fact be signal. One hypothesis regarding this putative signal is that it reflects a form of probabilistic inference through sampling. Here we show that the key features of neural variability can be accounted for in a completely deterministic network model through self-organization. As the network learns a model of its sensory inputs, the deterministic dynamics give rise to sampling-like inference. Our findings show that the notorious variability in neural recordings does not need to be seen as evidence for a noisy brain. Instead it may reflect sampling-like inference emerging from a self-organized learning process.
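For readers unfamiliar with the SORN architecture, the sketch below shows a minimal version of its core update loop: deterministic binary threshold units, an STDP-like rule on excitatory-to-excitatory weights, synaptic normalization, and intrinsic plasticity of thresholds. Network sizes, learning rates, the target rate, and the toy input are assumed values, and several details of the published model (structured input sequences, structural plasticity) are omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
n_e, n_i = 200, 40                         # excitatory / inhibitory units (sizes are guesses)
eta_stdp, eta_ip, h_ip = 0.004, 0.01, 0.1  # plasticity rates and target rate (assumed values)

w_ee = rng.random((n_e, n_e)) * (rng.random((n_e, n_e)) < 0.05)  # sparse, plastic E->E weights
np.fill_diagonal(w_ee, 0.0)
w_ei = rng.random((n_e, n_i)) * 0.5        # inhibitory-to-excitatory weights (fixed)
w_ie = rng.random((n_i, n_e)) * 0.5        # excitatory-to-inhibitory weights (fixed)
t_e = rng.random(n_e) * 0.5                # excitatory thresholds (adjusted by intrinsic plasticity)
t_i = rng.random(n_i) * 0.5                # inhibitory thresholds (fixed)

def normalize_rows(w):
    """Homeostatic synaptic normalization: incoming E->E weights of each unit sum to one."""
    s = w.sum(axis=1, keepdims=True)
    s[s == 0] = 1.0
    return w / s

x = (rng.random(n_e) < h_ip).astype(float)  # excitatory state vector
y = np.zeros(n_i)                           # inhibitory state vector

for t in range(1000):
    drive = (rng.random(n_e) < 0.02).astype(float)  # toy external drive (the model uses structured sequences)
    x_new = (w_ee @ x - w_ei @ y + drive - t_e > 0).astype(float)  # deterministic threshold update
    y = (w_ie @ x - t_i > 0).astype(float)
    # STDP-like rule: potentiate pre(t)->post(t+1) pairs, depress post(t)->pre(t+1) pairs.
    w_ee += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    w_ee = normalize_rows(np.clip(w_ee, 0.0, None))
    t_e += eta_ip * (x_new - h_ip)          # intrinsic plasticity drives units toward the target rate
    x = x_new
```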

