The interaction between neural populations: Additive versus diffusive coupling

2021 ◽  
Author(s):  
Marinho Antunes Lopes ◽  
Khalid Hamandi ◽  
Jiaxiang Zhang ◽  
Jen Creaser

Models of networks of populations of neurons commonly assume that the interactions between neural populations are via additive or diffusive coupling. With additive coupling, a population's activity is affected by the sum of the activities of neighbouring populations. In contrast, with diffusive coupling, a neural population is affected by the sum of the differences between its activity and the activity of its neighbours. These two coupling functions have been used interchangeably for similar applications. Here, we show that the choice of coupling can lead to strikingly different brain network dynamics. We focus on a model of seizure transitions that has been used in the literature with both additive and diffusive coupling. We consider networks with two and three nodes, and large random and scale-free networks with 64 nodes. We further assess functional networks inferred from magnetoencephalography (MEG) from people with epilepsy and healthy controls. To characterize the seizure dynamics on these networks, we use the escape time, the brain network ictogenicity (BNI) and the node ictogenicity (NI), which are measures of the network's global and local ability to generate seizures. Our main result is that the level of ictogenicity of a network is strongly dependent on the coupling function. We find that people with epilepsy have higher additive BNI than controls, as hypothesized, whereas the diffusive BNI yields the opposite result. Moreover, individual nodes that are more likely to drive seizures with one type of coupling are more likely to prevent seizures with the other. Our results on the MEG networks and evidence from the literature suggest that additive coupling may be a better modelling choice than diffusive coupling, at least for BNI and NI studies. Thus, we highlight the need to motivate and validate the choice of coupling in future studies.
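To make the distinction concrete, here is a minimal numerical sketch of the two coupling functions on a two-node network with a toy first-order node model (the local dynamics, parameter values and network are illustrative and are not the seizure-transition model used in the study):

```python
import numpy as np

def coupling_additive(x, A):
    # node i receives sum_j A[i, j] * x[j]  (neighbours' absolute activity)
    return A @ x

def coupling_diffusive(x, A):
    # node i receives sum_j A[i, j] * (x[j] - x[i])  (activity differences only)
    return A @ x - A.sum(axis=1) * x

def steady_state(coupling, A, mu, K=0.5, dt=0.01, steps=5000):
    """Euler-integrate the toy dynamics dx/dt = (mu - x) + K * coupling(x, A)."""
    x = mu.copy()
    for _ in range(steps):
        x = x + dt * ((mu - x) + K * coupling(x, A))
    return x

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])      # two mutually connected nodes
mu = np.array([0.2, 1.0])       # each node's intrinsic baseline activity

print("additive :", steady_state(coupling_additive, A, mu))   # ~[0.93, 1.47]: both activities pushed up
print("diffusive:", steady_state(coupling_diffusive, A, mu))  # ~[0.40, 0.80]: activities pulled together
```

Even in this toy setting the two couplings behave qualitatively differently: additive input raises the overall level of activity, whereas diffusive input only equalizes activity across nodes, which is consistent with the coupling-dependent ictogenicity reported above.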

2021 ◽  
Vol 44 (1) ◽  
Author(s):  
Rava Azeredo da Silveira ◽  
Fred Rieke

Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative, and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlations, and we present a new approach to the issue. Throughout this review, we develop a geometrical picture of how noise correlations impact the neural code. Expected final online publication date for the Annual Review of Neuroscience, Volume 44 is July 2021. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
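The geometrical point can be sketched with a two-neuron Gaussian encoding model: whether shared variability helps or hurts depends on how the noise covariance is oriented relative to the signal direction. A minimal illustration using the linear Fisher information (the tuning direction and variances below are arbitrary choices, not taken from the review):

```python
import numpy as np

# Two neurons whose mean responses change with the stimulus s along the direction df.
df = np.array([1.0, 1.0])            # signal direction f'(s) (illustrative tuning)
var = 1.0                            # single-neuron response variance

def linear_fisher_info(df, cov):
    """Linear Fisher information I = f'(s)^T C^{-1} f'(s) for a Gaussian population code."""
    return df @ np.linalg.solve(cov, df)

for rho in (-0.5, 0.0, 0.5):         # noise correlation between the two neurons
    cov = var * np.array([[1.0, rho],
                          [rho, 1.0]])
    print(f"rho = {rho:+.1f} -> Fisher information = {linear_fisher_info(df, cov):.2f}")
```

For this signal direction the information is 2/(1+rho): noise correlated along the signal direction degrades the code, anticorrelated noise improves it, and rotating the signal direction reverses the effect, which is the kind of fine-structure dependence emphasized above.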


2022 ◽  
Vol 27 (1) ◽  
pp. 1-30
Author(s):  
Mengke Ge ◽  
Xiaobing Ni ◽  
Xu Qi ◽  
Song Chen ◽  
Jinglei Huang ◽  
...  

The brain is a large-scale complex network with scale-free, small-world, and modularity properties, which underpin its highly efficient operation as a massive system. In this article, we propose to synthesize brain-network-inspired interconnections for large-scale network-on-chips. First, we propose a method to generate brain-network-inspired topologies with limited scale-free and power-law small-world properties, which have a low total link length and an extremely low average hop count, approximately proportional to the logarithm of the network size. In addition, for large-scale applications, we exploit the modularity of the brain-network-inspired topologies and present an application mapping method, including task mapping and deterministic deadlock-free routing, to minimize the power consumption and hop count. Finally, the cycle-accurate simulator BookSim2 is used to validate the architecture performance with different synthetic traffic patterns and large-scale test cases, including real-world communication networks for graph processing applications. Experiments show that, compared with other topologies and methods, the brain-network-inspired network-on-chips (NoCs) generated by the proposed method present significantly lower average hop count and lower average latency. Especially in graph processing applications with power-law, tightly coupled inter-core communication, the brain-network-inspired NoC has up to 70% lower average hop count and 75% lower average latency than mesh-based NoCs.
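As a rough illustration of why such topologies win on hop count, the sketch below builds a generic scale-free, clustered graph (a stand-in, not the paper's generation algorithm) and compares its average hop count against a conventional 8x8 mesh of the same size:

```python
import networkx as nx

def avg_hop_count(G):
    """Average shortest-path length (in hops) over all node pairs."""
    return nx.average_shortest_path_length(G)

n = 64  # 64-node network, comparable to a modest NoC

# Generic scale-free surrogate with extra clustering (NOT the paper's method):
# each new node attaches preferentially to m existing nodes, closing triangles
# with probability p, which adds the clustering/modularity flavour.
brainlike = nx.powerlaw_cluster_graph(n=n, m=3, p=0.3, seed=1)

# Baseline: an 8x8 2D mesh, the conventional NoC topology.
mesh = nx.grid_2d_graph(8, 8)

print("brain-like topology: avg hops = %.2f" % avg_hop_count(brainlike))
print("8x8 mesh           : avg hops = %.2f" % avg_hop_count(mesh))
```

The mesh's average hop count grows with the linear dimension of the chip, whereas a scale-free topology with hub routers keeps it near-logarithmic in the node count; the synthesis method described above aims to obtain this while also keeping total link length low.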


2020 ◽  
Author(s):  
Paul Triebkorn ◽  
Joelle Zimmermann ◽  
Leon Stefanovski ◽  
Dipanjan Roy ◽  
Ana Solodkin ◽  
...  

Using The Virtual Brain (TVB, thevirtualbrain.org) simulation platform, we explored, for 50 individual adult human brains (ages 18-80), how personalized connectome-based brain network modelling captures various empirical observations as measured by functional magnetic resonance imaging (fMRI) and electroencephalography (EEG). We compare simulated activity based on individual structural connectomes (SC), inferred from diffusion weighted imaging, with fMRI and EEG in the resting state. We systematically explore the role of the following model parameters: conduction velocity, global coupling and graph theoretical features of individual SC. First, a subspace of the parameter space is identified for each subject that results in realistic brain activity, i.e. reproducing the following prominent features of empirical EEG-fMRI activity: the topology of resting-state fMRI functional connectivity (FC), functional connectivity dynamics (FCD), electrophysiological oscillations in the delta (3-4 Hz) and alpha (8-12 Hz) frequency ranges and their bimodality, i.e. low and high energy modes. Interestingly, FCD fit, bimodality and static FC fit are highly correlated. They all show their optimum in the same range of global coupling. In other words, only when our local model is in a bistable regime are we able to generate switching of modes in our global network. Second, our simulations reveal the explicit network mechanisms that lead to electrophysiological oscillations, their bimodal behaviour and inter-regional differences. Third, we discuss the biological interpretability of the Stefanescu-Jirsa-Hindmarsh-Rose-3D model when embedded in the large-scale brain network, and the mechanisms underlying the emergence of bimodality of the neural signal. With the present study, we set the cornerstone for a systematic catalogue of spatiotemporal brain activity regimes generated with the connectome-based brain simulation platform The Virtual Brain.

Author Summary: In order to understand brain dynamics, we use numerical simulations of brain network models. Combining the structural backbone of the brain, that is, the white matter fibres connecting distinct regions in the grey matter, with dynamical systems describing the activity of neural populations, we are able to simulate brain function on a large scale. In order to make accurate predictions with this network, it is crucial to determine optimal model parameters. We here use an explorative approach to adjust model parameters to individual brain activity, showing that subjects have their own optimal point in the parameter space, depending on their brain structure and function. At the same time, we investigate the relation between bistable phenomena on the scale of neural populations and the changes in functional connectivity on the brain network scale. Our results are important for future modelling approaches trying to make accurate predictions of brain function.
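The core of the parameter exploration described above is a goodness-of-fit between simulated and empirical functional connectivity, scanned over global coupling. A minimal sketch of that fitting step (the FC matrices here are random placeholders; in the actual workflow the simulated FC would come from a TVB run at each coupling value):

```python
import numpy as np

def fc_fit(fc_sim, fc_emp):
    """Pearson correlation between the upper triangles of two FC matrices,
    the usual static-FC goodness-of-fit in connectome-based modelling."""
    iu = np.triu_indices_from(fc_emp, k=1)
    return np.corrcoef(fc_sim[iu], fc_emp[iu])[0, 1]

rng = np.random.default_rng(0)
n_regions = 68
fc_emp = np.corrcoef(rng.standard_normal((n_regions, 200)))   # placeholder "empirical" FC

best = None
for g in np.linspace(0.0, 1.0, 11):        # sweep over the global coupling parameter
    # Placeholder: in the real pipeline fc_sim is the FC of a simulation run at coupling g.
    fc_sim = np.corrcoef(rng.standard_normal((n_regions, 200)))
    score = fc_fit(fc_sim, fc_emp)
    if best is None or score > best[1]:
        best = (g, score)

print("best global coupling (placeholder data): g = %.1f, FC fit = %.3f" % best)
```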


2017 ◽  
Author(s):  
Matthew G. Perich ◽  
Juan A. Gallego ◽  
Lee E. Miller

Long-term learning of language, mathematics, and motor skills likely requires plastic changes in the cortex, but behavior often requires faster changes, sometimes based even on single errors. Here, we show evidence of one mechanism by which the brain can rapidly develop new motor output, seemingly without altering the functional connectivity between or within cortical areas. We recorded simultaneously from hundreds of neurons in the premotor (PMd) and primary motor (M1) cortices, and computed models relating these neural populations throughout adaptation to reaching movement perturbations. We found a signature of learning in the “null subspace” of PMd with respect to M1. Earlier experiments have shown that null subspace activity allows the motor cortex to alter preparatory activity without directly influencing M1. In our experiments, the null subspace planning activity evolved with the adaptation, yet the “potent” mapping that captures information sent to M1 was preserved. Our results illustrate a population-level mechanism within the motor cortices to adjust the output from one brain area to its downstream structures that could be exploited throughout the brain for rapid, online behavioral adaptation.
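The null/potent decomposition referred to here can be sketched with a linear map from PMd to M1 population activity: the potent mapping is the fitted matrix, and the null subspace is the set of PMd patterns that leave the predicted M1 output unchanged. A minimal illustration with synthetic data (population sizes and the plain least-squares fit are assumptions for the sketch, not the study's exact pipeline):

```python
import numpy as np
from scipy.linalg import lstsq, null_space

rng = np.random.default_rng(0)
n_pmd, n_m1, T = 50, 30, 1000             # illustrative population sizes and time points

X_pmd = rng.standard_normal((T, n_pmd))   # placeholder PMd population activity
W_true = rng.standard_normal((n_pmd, n_m1))
X_m1 = X_pmd @ W_true + 0.1 * rng.standard_normal((T, n_m1))   # placeholder M1 activity

# "Potent" mapping: fit W so that X_m1 is approximated by X_pmd @ W (least squares).
W, *_ = lstsq(X_pmd, X_m1)

# Null subspace of PMd with respect to M1: directions v with v @ W = 0, i.e. PMd
# activity patterns that do not change the predicted M1 output.
null_dirs = null_space(W.T)               # columns span the null subspace
print("null subspace dimensionality:", null_dirs.shape[1])

# Sanity check: shifting PMd activity along a null direction leaves predicted M1 unchanged.
v = null_dirs[:, 0]
print("max |change in predicted M1|:", np.max(np.abs((X_pmd + v) @ W - X_pmd @ W)))
```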


2020 ◽  
Author(s):  
James C. Pang ◽  
Leonardo L. Gollo ◽  
James A. Roberts

Synchronization is a collective mechanism by which oscillatory networks achieve their functions. Factors driving synchronization include the network’s topological and dynamical properties. However, how these factors drive the emergence of synchronization in the presence of potentially disruptive external inputs, such as stochastic perturbations, is not well understood, particularly for real-world systems such as the human brain. Here, we aim to systematically address this problem using a large-scale model of the human brain network (i.e., the human connectome). The results show that the model can produce complex synchronization patterns transitioning between incoherent and coherent states. When nodes in the network are coupled at some critical strength, a counterintuitive phenomenon emerges where the addition of noise increases the synchronization of global and local dynamics, with structural hub nodes benefiting the most. This stochastic synchronization effect is found to be driven by the intrinsic hierarchy of neural timescales of the brain and the heterogeneous complex topology of the connectome. Moreover, the effect coincides with clustering of node phases and node frequencies and strengthening of the functional connectivity of some of the connectome’s subnetworks. Overall, the work provides broad theoretical insights into the emergence and mechanisms of stochastic synchronization, highlighting its putative contribution to the network integration underpinning brain function.
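The basic ingredients (coupled oscillators on a fixed network, stochastic input, and a synchrony measure) can be sketched with a noisy Kuramoto model. The coupling matrix, frequencies and noise levels below are illustrative placeholders rather than the empirical connectome and fitted parameters, and whether noise raises the order parameter depends on operating near the critical coupling, as described above:

```python
import numpy as np

def order_parameter(theta):
    """Kuramoto order parameter R in [0, 1]; R = 1 means full phase synchrony."""
    return np.abs(np.mean(np.exp(1j * theta)))

def mean_sync(A, omega, K, noise, T=10000, dt=0.01, seed=0):
    """Euler-Maruyama integration of a noisy Kuramoto network:
       dtheta_i = [omega_i + K * sum_j A_ij sin(theta_j - theta_i)] dt + noise dW_i."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, size=len(omega))
    R = 0.0
    for t in range(T):
        pairwise = np.sin(theta[None, :] - theta[:, None])   # [i, j] = sin(theta_j - theta_i)
        theta = theta + dt * (omega + K * (A * pairwise).sum(axis=1)) \
                      + noise * np.sqrt(dt) * rng.standard_normal(len(omega))
        if t >= T // 2:                                       # average R after transients
            R += order_parameter(theta)
    return R / (T - T // 2)

rng = np.random.default_rng(1)
n = 32
A = np.triu((rng.random((n, n)) < 0.2).astype(float), 1)
A = A + A.T                                  # random undirected coupling matrix (placeholder)
omega = rng.normal(1.0, 0.3, size=n)         # heterogeneous natural frequencies

K = 0.05                                      # coupling strength (illustrative, near onset)
for noise in (0.0, 0.2, 0.4):
    print(f"noise = {noise:.1f} -> time-averaged order parameter R = {mean_sync(A, omega, K, noise):.3f}")
```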


2019 ◽  
Author(s):  
Caroline Garcia Forlim ◽  
Siavash Haghiri ◽  
Sandra Düzel ◽  
Simone Kühn

In recent years, there has been a massive effort to analyze the topological properties of brain networks. Yet, one of the challenging questions in the field is how to construct brain networks based on the connectivity values derived from neuroimaging methods. From a theoretical point of view, it is plausible that the brain has evolved to minimize the energetic costs of information processing, and therefore to maximize efficiency, as well as to redirect its function in an adaptive fashion, that is, to be resilient. A brain network with such features, when characterized using graph analysis, would present small-world and scale-free properties. In this paper, we focused on how the brain network is constructed by introducing and testing an alternative method: k-nearest neighbor (kNN). In addition, we compared the kNN method with one of the most common methods in neuroscience, namely the density threshold. We performed our analyses on functional connectivity matrices derived from resting-state fMRI of a large imaging cohort (N=434) of young and older healthy participants. The topology of the networks was characterized by the graph measures degree, characteristic path length, clustering coefficient and small-worldness. In addition, we verified whether kNN produces scale-free networks. We showed that networks built by kNN presented advantages over traditional thresholding methods, namely greater small-worldness (linked to network efficiency) than networks derived by means of density thresholds and, moreover, scale-free properties (linked to network resilience), which the density threshold did not produce. A brain network with such properties would have advantages in terms of efficiency, rapid adaptive reconfiguration and resilience, features of brain networks that are relevant for plasticity and cognition as well as for neurological diseases such as stroke and dementia.

Highlights:
- A novel thresholding method for brain networks based on k-nearest neighbors (kNN)
- kNN applied on resting-state fMRI from a large cohort of healthy subjects (BASE-II)
- kNN-built networks present greater small-world properties than the density threshold
- kNN-built networks present scale-free properties whereas the density threshold did not
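A minimal sketch of the two graph-construction schemes on a connectivity matrix follows; the symmetrization rule and parameter choices (k, density) are assumptions for illustration, not necessarily the authors' exact pipeline:

```python
import numpy as np
import networkx as nx

def knn_graph(fc, k):
    """Keep, for every node, an edge to its k strongest neighbours (then symmetrize)."""
    n = fc.shape[0]
    adj = np.zeros((n, n), dtype=bool)
    np.fill_diagonal(fc, -np.inf)               # never pick self-connections
    for i in range(n):
        neighbours = np.argsort(fc[i])[-k:]     # indices of the k largest correlations
        adj[i, neighbours] = True
    adj = adj | adj.T                           # one common symmetrization choice (an assumption)
    return nx.from_numpy_array(adj.astype(int))

def density_graph(fc, density):
    """Keep the globally strongest edges so that the given fraction of edges survives."""
    n = fc.shape[0]
    iu = np.triu_indices(n, k=1)
    weights = fc[iu]
    n_keep = int(round(density * len(weights)))
    cutoff = np.sort(weights)[-n_keep]
    adj = np.zeros((n, n), dtype=int)
    keep = weights >= cutoff
    adj[iu[0][keep], iu[1][keep]] = 1
    return nx.from_numpy_array(adj + adj.T)

# Placeholder FC matrix standing in for a resting-state fMRI correlation matrix.
rng = np.random.default_rng(0)
fc = np.corrcoef(rng.standard_normal((90, 200)))

G_knn = knn_graph(fc.copy(), k=8)
G_den = density_graph(fc, density=0.1)
print("kNN graph    :", G_knn.number_of_edges(), "edges, clustering =", round(nx.average_clustering(G_knn), 3))
print("density graph:", G_den.number_of_edges(), "edges, clustering =", round(nx.average_clustering(G_den), 3))
```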


2016 ◽  
Author(s):  
S.J. Hanson ◽  
D. Mastrovito ◽  
C. Hanson ◽  
J. Ramsey ◽  
C. Glymour

Scale-free networks (SFN) arise from simple growth processes, which can encourage efficient, centralized and fault-tolerant communication (1). Recently it has been shown that stable network hub structure is governed by a phase transition at exponents above 2.0, causing a dramatic change in network structure, including a loss of global connectivity, an increasing minimum dominating node set, and a shift towards connectivity growth outpacing node growth. Is this SFN shift identifiable in atypical brain activity? The Pareto distribution on the hub degree D, P(D) ∼ D^(-β), is a signature of scale-free networks. During resting state, we assess degree exponents across a large range of neurotypical and atypical subjects. We use graph complexity theory to provide a predictive theory of brain network structure. Results: We show that neurotypical resting-state fMRI brain activity possesses scale-free Pareto exponents (1.8, s.e. 0.01) in a single individual scanned over 66 days, as well as in 60 different individuals (1.8, s.e. 0.02). We also show that 60 individuals with Autistic Spectrum Disorder and 60 individuals with Schizophrenia have significantly higher (>2.0) scale-free exponents (2.4, s.e. 0.03, and 2.3, s.e. 0.04, respectively), indicating more fractionated and less controllable dynamics in the brain networks revealed in resting state. Finally, we show that the exponent values vary with phenotypic measures of disease severity, indicating that the global topology of the network itself can provide specific diagnostic biomarkers for atypical brain activity.
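The degree-exponent estimate described above can be sketched with a standard maximum-likelihood power-law fit to a graph's degree sequence; the graph below is a synthetic stand-in, and the estimator choice (a Clauset-style MLE with a fixed lower cutoff) is an assumption rather than the authors' exact pipeline:

```python
import numpy as np
import networkx as nx

def powerlaw_exponent_mle(degrees, d_min=2):
    """Maximum-likelihood estimate of beta in P(D) ~ D^(-beta) for the degree-distribution
    tail (continuous approximation with the d_min - 0.5 correction), plus its standard error."""
    tail = np.asarray([d for d in degrees if d >= d_min], dtype=float)
    beta = 1.0 + len(tail) / np.sum(np.log(tail / (d_min - 0.5)))
    se = (beta - 1.0) / np.sqrt(len(tail))
    return beta, se

# Placeholder scale-free graph standing in for an fMRI-derived brain network.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=0)
beta, se = powerlaw_exponent_mle([d for _, d in G.degree()])
print(f"estimated degree exponent: {beta:.2f} (se {se:.2f})")
```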


Author(s):  
Davor Curic ◽  
Victorita E. Ivan ◽  
David T. Cuesta ◽  
Ingrid M. Esteves ◽  
Majid H. Mohajerani ◽  
...  

Observations of neurons in a resting brain and neurons in cultures often display spontaneous scale-free collective dynamics in the form of information cascades, also called “neuronal avalanches”. This has motivated the so-called critical brain hypothesis, which posits that the brain is self-tuned to a critical point or regime, separating exponentially growing dynamics from quiescent states, to achieve optimality. Yet, how such optimality of information transmission is related to behaviour, and whether it persists under behavioural transitions, has remained a fundamental knowledge gap. Here, we aim to tackle this challenge by studying behavioural transitions in mice using two-photon calcium imaging of the retrosplenial cortex -- an area of the brain well positioned to integrate sensory, mnemonic, and cognitive information by virtue of its strong connectivity with the hippocampus, medial prefrontal cortex, and primary sensory cortices. Our work shows that the response of the underlying neural population to behavioural transitions can vary significantly between different sub-populations, such that one needs to take the structural and functional network properties of these sub-populations into account to understand the properties at the total population level. Specifically, we show that the retrosplenial cortex contains at least one sub-population capable of switching between two different scale-free regimes, indicating an intricate relationship between behaviour and the optimality of neuronal response at the subgroup level. This calls for a potential reinterpretation of the emergence of self-organized criticality in neuronal systems.
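The scale-free cascades mentioned here are usually quantified by binning the population activity, defining an avalanche as a maximal run of active time bins, and examining the distribution of avalanche sizes. A minimal sketch on a synthetic raster (bin width, activity threshold and firing rate are illustrative assumptions, not the study's analysis parameters):

```python
import numpy as np

def avalanche_sizes(raster, threshold=0):
    """Split a binned population raster (bins x neurons) into avalanches:
    maximal runs of bins whose population spike count exceeds `threshold`,
    returning the total spike count (size) of each run."""
    counts = raster.sum(axis=1)
    sizes, current = [], 0
    for c in counts:
        if c > threshold:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

# Placeholder binary raster standing in for binned two-photon activity (assumption).
rng = np.random.default_rng(0)
raster = (rng.random((5000, 100)) < 0.02).astype(int)

sizes = avalanche_sizes(raster)
print("number of avalanches:", len(sizes))
print("mean / max avalanche size:", sizes.mean().round(2), "/", sizes.max())
```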


2018 ◽  
Vol 15 (8) ◽  
pp. 743-750 ◽  
Author(s):  
Kresimir Ukalovic ◽  
Sijia Cao ◽  
Sieun Lee ◽  
Qiaoyue Tang ◽  
Mirza Faisal Beg ◽  
...  

Background: Recent work on Alzheimer's disease (AD) diagnosis focuses on neuroimaging modalities; however, these methods are expensive, invasive, and not available to all patients. Ocular imaging of biomarkers, such as drusen in the peripheral retina, could provide an alternative method to diagnose AD. Objective: This study compares macular and peripheral drusen load in control and AD eyes. Methods: Postmortem eye tissues were obtained from donors with a neuropathological diagnosis of AD. Retinas from normal donors were processed and categorized into younger (<55 years) and older (>55 years) groups. After fixation and dissection, 3-6 mm punches of RPE/choroid were taken in macular and peripheral (temporal, superior, and inferior) retinal regions. Oil red O-positive drusen were counted and grouped into two size categories: small (<63 μm) and intermediate (63-125 μm). Results: There was a significant increase in the total number of macular and peripheral hard drusen in older, compared to younger, normal eyes (p<0.05). Intermediate hard drusen were more commonly found in the temporal region of AD eyes compared to older normal eyes, even after controlling for age (p<0.05). Among the brain and eye tissues from AD donors, there was a significant relationship between cerebral amyloid angiopathy (CAA) severity and the number of temporal intermediate hard drusen (r=0.78, p<0.05). Conclusion: Imaging temporal drusen in the eye may be of benefit for diagnosing and monitoring the progression of AD. Our results on CAA severity and temporal intermediate drusen in the AD eye are novel. Future studies are needed to further understand the interactions between CAA and drusen formation.

