Assembly formation is stabilized by Parvalbumin neurons and accelerated by Somatostatin neurons

2021
Author(s): Fereshteh Lagzi, Martha Canto Bustos, Anne-Marie Oswald, Brent Doiron

Abstract Learning entails preserving features of the external world in the brain's neuronal representations, and manifests itself as strengthened interactions between neurons within assemblies. Hebbian synaptic plasticity is thought to be one mechanism by which correlations in spiking promote assembly formation during learning. While spike-timing-dependent plasticity (STDP) rules for excitatory synapses have been well characterized, inhibitory STDP rules remain incompletely described, particularly with respect to sub-classes of inhibitory interneurons. Here, we report that in layer 2/3 of mouse orbitofrontal cortex, inhibition from parvalbumin (PV) interneurons onto excitatory (E) neurons follows a symmetric STDP function and mediates homeostasis of E-neuron firing rates, whereas inhibition from somatostatin (SOM) interneurons follows an asymmetric, Hebbian STDP rule. We incorporate these findings into both large-scale simulations and mean-field models to investigate how these differences in plasticity affect network dynamics and assembly formation. We find that plasticity of SOM inhibition builds lateral inhibitory connections and increases competition between assemblies, reflected in amplified correlations between neurons within an assembly and anti-correlations between assemblies. An additional finding is that the emergence of tuned PV inhibition depends on the interaction between the SOM and PV STDP rules. Altogether, we show that incorporating differential inhibitory STDP rules promotes assembly formation through competition, while enhanced inhibition both within and between assemblies protects new representations from degradation after the training input is removed.
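The two inhibitory STDP rules reported above can be illustrated with simple exponential kernels. A minimal sketch, where the amplitude `A` and time constant `tau` are assumed illustrative values, not the measured ones:

```python
import numpy as np

# Illustrative inhibitory STDP kernels; A and tau are assumed values for the
# sketch, not the measured parameters. dt = t_post - t_pre, in ms.
A, tau = 0.01, 20.0

def symmetric_stdp(dt):
    """Symmetric rule (as reported for PV->E synapses): potentiation for
    near-coincident spikes regardless of spike order."""
    return A * np.exp(-np.abs(dt) / tau)

def asymmetric_stdp(dt):
    """Hebbian, asymmetric rule (as reported for SOM->E synapses): potentiation
    when pre precedes post (dt > 0), depression otherwise."""
    return np.where(dt > 0, A * np.exp(-dt / tau), -A * np.exp(dt / tau))

dts = np.linspace(-100.0, 100.0, 201)
kernel_pv = symmetric_stdp(dts)    # even function of dt
kernel_som = asymmetric_stdp(dts)  # sign flips at dt = 0
```

Plotting `kernel_pv` and `kernel_som` against `dts` reproduces the qualitative shapes: a bump centered at zero lag for the PV rule, and an odd, order-dependent curve for the SOM rule.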

2018
Vol 29 (3), pp. 937-951
Author(s): Gabriel Koch Ocker, Brent Doiron

Abstract The synaptic connectivity of cortex is plastic, with experience shaping the ongoing interactions between neurons. Theoretical studies of spike timing-dependent plasticity (STDP) have focused on either just pairs of neurons or large-scale simulations. A simple analytic account for how fast spike time correlations affect both microscopic and macroscopic network structure is lacking. We develop a low-dimensional mean field theory for STDP in recurrent networks and show the emergence of assemblies of strongly coupled neurons with shared stimulus preferences. After training, this connectivity is actively reinforced by spike train correlations during the spontaneous dynamics. Furthermore, the stimulus coding by cell assemblies is actively maintained by these internally generated spiking correlations, suggesting a new role for noise correlations in neural coding. Assembly formation has often been associated with firing rate-based plasticity schemes; our theory provides an alternative and complementary framework, where fine temporal correlations and STDP form and actively maintain learned structure in cortical networks.
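As a toy caricature of the correlation-driven assembly formation described above (far simpler than the paper's mean-field theory; the correlation values, learning rate, and soft weight bound are all assumptions):

```python
# Toy mean-field caricature of assembly formation: mean within- and
# between-assembly weights grow under a correlation-gated Hebbian drive with a
# soft upper bound. All parameter values here are illustrative assumptions.
c_within, c_between = 0.2, 0.02   # assumed spike-train correlations
eta, w_max = 0.1, 1.0             # learning rate, soft weight bound

def evolve(w, corr, steps=200):
    """Iterate dw = eta * corr * (w_max - w): stronger spike-train
    correlations drive the mean weight toward the bound faster."""
    for _ in range(steps):
        w += eta * corr * (w_max - w)
    return w

w_within = evolve(0.1, c_within)    # within-assembly weights saturate
w_between = evolve(0.1, c_between)  # between-assembly weights lag behind
```

After training, the within-assembly mean weight sits near the bound while the between-assembly weight lags well below it, the hallmark of assembly structure.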


2019
Vol 16 (1)
Author(s): Włodzisław Duch, Dariusz Mikołajewski

Abstract Despite great progress in understanding the functions and structures of the central nervous system (CNS), the brain stem remains one of the least understood systems. We know that the brain stem acts as a decision station preparing the organism to act in a specific way, but such functions are difficult to model with sufficient precision to replicate experimental data, owing to the scarcity of data and the complexity of large-scale simulations of brain stem structures. The approach proposed in this article retains some ideas of previous models and provides a more precise computational realization that enables qualitative interpretation of the functions played by different network states. The simulations are aimed primarily at investigating the general switching mechanisms that may be executed in brain stem neural networks, as well as at studying how these mechanisms depend on basic neural network features: basic ionic channels, accommodation, and the influence of noise.


2021
Author(s): Lyndsay Kerr, Duncan Sproul, Ramon Grima

The accurate establishment and maintenance of DNA methylation patterns is vital for mammalian development, and disruption of these processes causes human disease. Our understanding of DNA methylation mechanisms has been facilitated by mathematical modelling, particularly by stochastic simulations. Mega-base-scale variation in DNA methylation patterns is observed in development, cancer and ageing, yet the mechanisms generating these patterns remain poorly understood, and the computational cost of stochastic simulations prevents them from being applied to such large genomic regions. Here we test the utility of three different mean-field models for predicting large-scale DNA methylation patterns. By comparison with stochastic simulations, we show that a cluster mean-field model accurately predicts the statistical properties of steady-state DNA methylation patterns, including the mean and variance of methylation levels calculated across a system of CpG sites, as well as the covariance and correlation of methylation levels between neighbouring sites. We also demonstrate that a cluster mean-field model can be used within an approximate Bayesian computation framework to accurately infer model parameters from data. Because mean-field models can be solved numerically in a few seconds, our work demonstrates their utility for understanding the processes underpinning large-scale DNA methylation patterns.


2018
Author(s): Matteo di Volo, Alberto Romagnoni, Cristiano Capone, Alain Destexhe

Abstract Accurate population models are needed to build very large-scale neural models, but their derivation is difficult for realistic networks of neurons, in particular when nonlinear properties are involved, such as conductance-based interactions and spike-frequency adaptation. Here, we consider such models based on networks of adaptive exponential integrate-and-fire (AdEx) excitatory and inhibitory neurons. Using a master equation formalism, we derive a mean-field model of such networks and compare it to the full network dynamics. The mean-field model correctly predicts the average spontaneous activity levels in asynchronous irregular regimes similar to in vivo activity. It also captures the transient temporal response of the network to complex external inputs. Finally, the mean-field model quantitatively describes regimes where high and low activity states alternate (UP-DOWN state dynamics), leading to slow oscillations. We conclude that such mean-field models are "biologically realistic" in the sense that they capture both spontaneous and evoked activity, and they naturally appear as candidates for building very large-scale models involving multiple brain areas.
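A single AdEx neuron, the building block of the networks described above, can be sketched with forward-Euler integration. The parameter values below are common textbook choices, assumed here rather than taken from the paper's network model:

```python
import math

# Forward-Euler sketch of one adaptive exponential integrate-and-fire (AdEx)
# neuron. Parameters are typical textbook values, assumed for this sketch.
# Units: mV, ms, pA, nS, pF.
C, gL, EL = 281.0, 30.0, -70.6
VT, DT = -50.4, 2.0
tau_w, a, b = 144.0, 4.0, 80.5
V_reset, V_peak = -70.6, 0.0

def run_adex(I=1000.0, t_end=1000.0, dt=0.05):
    """Drive the neuron with a constant current I (pA); return spike times (ms)."""
    V, w, t, spikes = EL, 0.0, 0.0, []
    while t < t_end:
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_peak:          # spike: reset voltage, increment adaptation
            V = V_reset
            w += b
            spikes.append(t)
        t += dt
    return spikes

spikes = run_adex()
isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]  # inter-spike intervals
```

Spike-frequency adaptation, one of the nonlinearities the mean-field model must capture, appears here as inter-spike intervals that lengthen as the adaptation variable `w` accumulates.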


2019
Author(s): Niels Trusbak Haumann, Minna Huotilainen, Peter Vuust, Elvira Brattico

Abstract The accuracy of electroencephalography (EEG) and magnetoencephalography (MEG) is challenged by overlapping sources from within the brain. This lack of accuracy severely limits the possibilities and reliability of modern stimulation protocols in basic research and clinical diagnostics. As a solution, we here introduce a theory of stochastic neuronal spike timing probability densities for describing large-scale spiking activity in neural networks, and a novel spike density component analysis (SCA) method for isolating specific neural sources. Three studies are conducted based on 564 cases of evoked responses to auditory stimuli from 94 human subjects, each measured with 60 EEG electrodes and 306 MEG sensors. In the first study we show that large-scale spike timing (but not non-encephalographic artifacts) in MEG/EEG waveforms can be modeled with Gaussian probability density functions with high accuracy (median 99.7%-99.9% variance explained), while gamma and sine functions fail to describe the MEG and EEG waveforms. In the second study we confirm that SCA can isolate a specific evoked response of interest. Our findings indicate that the mismatch negativity (MMN) response is accurately isolated with SCA, while principal component analysis (PCA) fails to suppress interference from overlapping brain activity, e.g. from P3a and alpha waves, and independent component analysis (ICA) distorts the evoked response. Finally, we confirm that SCA accurately reveals inter-individual variation in evoked brain responses by replicating findings relating individual traits to MMN variations.
The findings of this paper suggest that the commonly overlapping neural sources in single-subject or patient data can be separated more accurately by applying the introduced theory of large-scale spike timing and the SCA method than by PCA or ICA.
Significance statement: Electroencephalography (EEG) and magnetoencephalography (MEG) are among the most widely applied non-invasive brain recording methods in humans. They are the only methods that measure brain function directly and at sub-second time resolution. However, in modern research and clinical diagnostics the brain responses of interest often cannot be isolated because of interfering signals from other ongoing brain activity. For the first time, we introduce a theory and method for mathematically describing and isolating overlapping brain signals, based on prior intracranial in vivo research on brain cells in monkey and human neural networks. Three studies mutually support our theory and suggest that a new level of accuracy in MEG/EEG can be achieved by applying the procedures presented in this paper.
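The first study's model comparison (Gaussian versus sine components scored by variance explained) can be sketched on synthetic data; the waveform shape, component shapes, and noise level below are all assumptions, not the paper's recordings:

```python
import numpy as np

# Synthetic sketch: an evoked-response peak shaped as a Gaussian probability
# density in time, scored by variance explained against a Gaussian and a sine
# candidate component. All shapes and the noise level are assumptions.
t = np.linspace(0.0, 0.4, 401)                          # time (s)
true_peak = np.exp(-0.5 * ((t - 0.15) / 0.02) ** 2)     # Gaussian-shaped response
rng = np.random.default_rng(0)
waveform = true_peak + 0.05 * rng.standard_normal(t.size)

def variance_explained(component):
    """Best least-squares scaling of `component`, then the fraction of the
    waveform's variance accounted for by the scaled component."""
    beta = component @ waveform / (component @ component)
    resid = waveform - beta * component
    return 1.0 - resid.var() / waveform.var()

r2_gauss = variance_explained(np.exp(-0.5 * ((t - 0.15) / 0.02) ** 2))
r2_sine = variance_explained(np.sin(2 * np.pi * t / 0.4))
```

The Gaussian component explains most of the variance while the matched-period sine does not, mirroring the paper's comparison (here on synthetic data only).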


2008
Vol 4 (S259), pp. 467-478
Author(s): Detlef Elstner, Oliver Gressel, Günther Rüdiger

Abstract Recent simulations of supernova-driven turbulence within the ISM support the existence of a large-scale dynamo. With a growth time of about two hundred million years, the dynamo is quite fast, in contradiction to many assertions in the literature. We here present details on the scaling of the dynamo effect within the simulations and discuss global mean-field models based on the adopted turbulence coefficients. The results are compared to global simulations of the magneto-rotational instability.


Author(s): Yu Qi, Jiangrong Shen, Yueming Wang, Huajin Tang, Hang Yu, ...

Spiking neural networks (SNNs) are considered biologically plausible and power-efficient on neuromorphic hardware. However, unlike the brain, most existing SNN algorithms use fixed network topologies and connection relationships. This paper proposes a method to learn network connections and link weights jointly. The connection structures are optimized by the spike-timing-dependent plasticity (STDP) rule with timing information, and the link weights are optimized by a supervised algorithm. The connection structures and the weights are learned alternately until a termination condition is satisfied. Experiments are carried out on four benchmark datasets. Our approach outperforms classical learning methods such as STDP, Tempotron, SpikeProp, and a state-of-the-art supervised algorithm. In addition, the learned structures reduce the number of connections by about 24%, thus improving the computational efficiency of the network.
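The structure-learning half of the alternating scheme can be caricatured as scoring each connection with a pair-based STDP trace and keeping only the strongest fraction; the spike trains, scoring rule, and 75% keep ratio below are illustrative assumptions, not the paper's algorithm:

```python
import math

# Hypothetical sketch of STDP-based connection pruning. The spike trains,
# scoring rule, and keep ratio are illustrative assumptions.
TAU = 20.0  # STDP time constant (ms)

def stdp_score(pre_spikes, post_spikes):
    """Sum exponentially decayed pair contributions: pre-before-post pairs
    add (causal), post-before-pre pairs subtract (acausal)."""
    score = 0.0
    for tp in pre_spikes:
        for tq in post_spikes:
            dt = tq - tp
            score += math.exp(-abs(dt) / TAU) * (1.0 if dt > 0 else -1.0)
    return score

def prune(connections, spike_trains, keep_fraction=0.75):
    """connections: list of (pre_id, post_id); spike_trains: id -> spike times (ms)."""
    ranked = sorted(connections,
                    key=lambda c: stdp_score(spike_trains[c[0]], spike_trains[c[1]]),
                    reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_fraction))]

spike_trains = {0: [10.0, 50.0], 1: [12.0, 52.0], 2: [48.0], 3: [5.0]}
kept = prune([(0, 1), (0, 2), (3, 0), (2, 1)], spike_trains)
```

Connection (0, 1), whose presynaptic spikes reliably precede the postsynaptic ones, scores highest and survives, while the acausal (0, 2) is pruned; weight learning on `kept` would then alternate with further pruning rounds.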


2020
Author(s): Subhashini Sivagnanam, Wyatt Gorman, Donald Doherty, Samuel A. Neymotin, Stephen Fang, ...

Biophysically detailed modeling provides an unmatched method to integrate data from many disparate experimental studies, and to manipulate and explore the resulting brain circuit simulation with high precision. We developed a detailed model of brain motor cortex circuits, simulating over 10,000 biophysically detailed neurons and 30 million synaptic connections. Optimization and evaluation of the cortical model parameters and responses were achieved via parameter exploration using grid-search parameter sweeps and evolutionary algorithms. This involves running tens of thousands of simulations, with each simulated second of the full circuit model requiring approximately 50 core-hours. This paper describes our experience in setting up and using Google Cloud Platform (GCP) with Slurm to run these large-scale simulations. We describe best practices and solutions to the issues that arose during the process, and present preliminary results from running the simulations on GCP.
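Sizing such a grid-search sweep is simple arithmetic; the sketch below multiplies out parameter combinations and core-hours, where the parameter names, ranges, and the `run_sim.sh` sbatch template are invented for illustration:

```python
import itertools

# Hypothetical sweep sizing: parameter names, ranges, and the run_sim.sh
# sbatch template are invented for illustration.
grid = {
    "exc_rate": [0.5, 1.0, 2.0],
    "inh_gain": [0.8, 1.0, 1.2],
    "seed": list(range(10)),
}
combos = list(itertools.product(*grid.values()))

CORE_HOURS_PER_SIM_SECOND = 50   # the paper's estimate for the full circuit
SIM_SECONDS_PER_RUN = 1
total_core_hours = len(combos) * SIM_SECONDS_PER_RUN * CORE_HOURS_PER_SIM_SECOND

# One Slurm submission per parameter combination (commands built, not executed).
commands = ["sbatch run_sim.sh " + " ".join(f"{k}={v}" for k, v in zip(grid, c))
            for c in combos]
```

Even this modest hypothetical grid (90 combinations at one simulated second each) costs 4,500 core-hours, which is why elastic cloud allocation with Slurm matters at this scale.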

