Large-Scale Simulations of the Brain: Is There a “Right” Level of Detail?

Author(s):  
Edoardo Datteri
2019 ◽  
Vol 16 (1) ◽  
Author(s):  
Włodzisław Duch ◽  
Dariusz Mikołajewski

Abstract: Despite great progress in understanding the functions and structures of the central nervous system (CNS), the brain stem remains one of its least understood systems. We know that the brain stem acts as a decision station preparing the organism to act in a specific way, but such functions are difficult to model with sufficient precision to replicate experimental data, owing to the scarcity of data and the complexity of large-scale simulations of brain stem structures. The approach proposed in this article retains some ideas of previous models and provides a more precise computational realization that enables qualitative interpretation of the functions played by different network states. Simulations are aimed primarily at investigating general switching mechanisms that may be executed in brain stem neural networks, as well as at studying how these mechanisms depend on basic neural network features: basic ionic channels, accommodation, and the influence of noise.


2020 ◽  
Author(s):  
Subhashini Sivagnanam ◽  
Wyatt Gorman ◽  
Donald Doherty ◽  
Samuel A Neymotin ◽  
Stephen Fang ◽  
...  

Biophysically detailed modeling provides an unmatched method to integrate data from many disparate experimental studies, and to manipulate and explore with high precision the resulting brain circuit simulation. We developed a detailed model of brain motor cortex circuits, simulating over 10,000 biophysically detailed neurons and 30 million synaptic connections. Optimization and evaluation of the cortical model parameters and responses were achieved via parameter exploration using grid-search parameter sweeps and evolutionary algorithms. This involves running tens of thousands of simulations, with each simulated second of the full circuit model requiring approximately 50 core hours. This paper describes our experience in setting up and using Google Cloud Platform (GCP) with Slurm to run these large-scale simulations. We describe best practices and solutions to the issues that arose during the process, and present preliminary results from running simulations on GCP.
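A grid-search sweep under Slurm is typically organized as a mapping from an array-task index to one point in a parameter grid. The sketch below illustrates that pattern; the parameter names and values are hypothetical placeholders, not the model's actual parameters.

```python
import itertools
import os

# Hypothetical parameter grid for a cortical circuit sweep; the names
# and values are illustrative, not taken from the paper.
GRID = {
    "exc_weight": [0.5, 1.0, 2.0],
    "inh_weight": [0.5, 1.0, 2.0],
    "bkg_rate_hz": [5.0, 10.0],
}

def combos(grid):
    """Enumerate every parameter combination in a fixed (sorted-key) order."""
    keys = sorted(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def params_for_task(grid, task_id):
    """Map a Slurm array index to one grid point, so each array task
    (sbatch --array=0-N) runs exactly one simulation."""
    return list(combos(grid))[task_id]

if __name__ == "__main__":
    # Slurm exposes the array index via SLURM_ARRAY_TASK_ID.
    task_id = int(os.environ.get("SLURM_ARRAY_TASK_ID", 0))
    print(params_for_task(GRID, task_id))
```

Launching the sweep then reduces to `sbatch --array=0-17 run_sim.sh`, with each task reading its own parameter set.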


2021 ◽  
Author(s):  
Fereshteh Lagzi ◽  
Martha Canto Bustos ◽  
Anne-Marie Oswald ◽  
Brent Doiron

Abstract: Learning entails preserving the features of the external world in the neuronal representations of the brain, and manifests itself in the form of strengthened interactions between neurons within assemblies. Hebbian synaptic plasticity is thought to be one mechanism by which correlations in spiking promote assembly formation during learning. While spike-timing-dependent plasticity (STDP) rules for excitatory synapses have been well characterized, inhibitory STDP rules remain incomplete, particularly with respect to sub-classes of inhibitory interneurons. Here, we report that in layer 2/3 of the orbitofrontal cortex of mice, inhibition from parvalbumin (PV) interneurons onto excitatory (E) neurons follows a symmetric STDP function and mediates homeostasis in E-neuron firing rates. However, inhibition from somatostatin (SOM) interneurons follows an asymmetric, Hebbian STDP rule. We incorporate these findings in both large-scale simulations and mean-field models to investigate how these differences in plasticity impact network dynamics and assembly formation. We find that plasticity of SOM inhibition builds lateral inhibitory connections and increases competition between assemblies. This is reflected in amplified correlations between neurons within an assembly and anti-correlations between assemblies. An additional finding is that the emergence of tuned PV inhibition depends on the interaction between SOM and PV STDP rules. Altogether, we show that incorporation of differential inhibitory STDP rules promotes assembly formation through competition, while enhanced inhibition both within and between assemblies protects new representations from degradation after the training input is removed.
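The two plasticity windows contrasted above can be sketched with standard exponential STDP kernels. The amplitudes and time constants below are illustrative assumptions, not the values measured in the experiments.

```python
import numpy as np

def symmetric_stdp(dt_ms, a=1.0, tau_ms=20.0):
    """Symmetric window (as reported for PV -> E synapses): the weight
    change depends only on |dt|, regardless of spike order."""
    return a * np.exp(-np.abs(dt_ms) / tau_ms)

def asymmetric_stdp(dt_ms, a_plus=1.0, a_minus=1.0, tau_ms=20.0):
    """Asymmetric, Hebbian-like window (as reported for SOM -> E):
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    dt = np.asarray(dt_ms, dtype=float)
    return np.where(dt > 0,
                    a_plus * np.exp(-dt / tau_ms),
                    -a_minus * np.exp(dt / tau_ms))
```

The symmetric kernel potentiates for either spike order (homeostatic), while the asymmetric kernel changes sign with spike order (Hebbian), which is the distinction the models build on.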


Author(s):  
Gonzalo Marcelo Ramírez-Ávila ◽  
Stéphanie Depickère ◽  
Imre M. Jánosi ◽  
Jason A. C. Gallas

Abstract: Large-scale brain simulations require the investigation of large networks of realistic neuron models, usually represented by sets of differential equations. Here we report a detailed fine-scale study of the dynamical response over extended parameter ranges of a computationally inexpensive model, the two-dimensional Rulkov map, which reproduces well the spiking and spiking-bursting activity of real biological neurons. In addition, we provide evidence of the existence of nested arithmetic progressions among periodic pulsing and bursting phases of Rulkov’s neuron. We find that specific remarkably complex nested sequences of periodic neural oscillations can be expressed as simple linear combinations of pairs of certain basal periodicities. Moreover, such nested progressions are robust and can be observed abundantly in diverse control parameter planes which are described in detail. We believe such findings add significantly to the knowledge of Rulkov neuron dynamics and are potentially helpful in large-scale simulations of the brain and other complex neuron networks.
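One widely used form of the two-dimensional Rulkov map couples a fast variable x (spike-generating) to a slow variable y (burst-modulating). The minimal sketch below uses commonly quoted illustrative parameter values (alpha near 4, mu and sigma small), which are assumptions for demonstration rather than the specific settings explored in the paper.

```python
import numpy as np

def rulkov_step(x, y, alpha=4.1, mu=0.001, sigma=0.001):
    """One iteration of the chaotic two-dimensional Rulkov map:
    x_{n+1} = alpha / (1 + x_n**2) + y_n   (fast variable)
    y_{n+1} = y_n - mu * (x_n - sigma)     (slow variable)
    """
    return alpha / (1.0 + x * x) + y, y - mu * (x - sigma)

def simulate(n_steps, x0=-1.0, y0=-3.0, **params):
    """Iterate the map and return the fast-variable trace."""
    xs = np.empty(n_steps)
    x, y = x0, y0
    for i in range(n_steps):
        x, y = rulkov_step(x, y, **params)
        xs[i] = x
    return xs
```

Because each step is just an algebraic update rather than an ODE solve, sweeping the (alpha, sigma) control-parameter plane over millions of map iterations stays computationally cheap, which is what makes the model attractive for large-scale studies.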


Author(s):  
Jian Tao ◽  
Werner Benger ◽  
Kelin Hu ◽  
Edwin Mathews ◽  
Marcel Ritter ◽  
...  

Author(s):  
Stefano Vassanelli

Establishing direct communication with the brain through physical interfaces is a fundamental strategy to investigate brain function. Starting with the patch-clamp technique in the 1970s, neuroscience has moved from detailed characterization of ionic channels to the analysis of single neurons and, more recently, microcircuits in brain neuronal networks. Development of new biohybrid probes with electrodes for recording and stimulating neurons in the living animal is a natural consequence of this trend, as the recent introduction of optogenetic stimulation and of advanced high-resolution, large-scale electrical recording approaches demonstrates. Brain implants for real-time neurophysiology are also opening new avenues for neuroprosthetics to restore brain function after injury or in neurological disorders. This chapter provides an overview of existing and emergent neurophysiology technologies, with particular focus on those intended to interface neuronal microcircuits in vivo. Chemical, electrical, and optogenetic-based interfaces are presented, with an analysis of the advantages and disadvantages of the different technical approaches.


Author(s):  
Hugues Duffau

Investigating the neural and physiological basis of language is one of the most important challenges in neurosciences. Direct electrical stimulation (DES), usually performed in awake patients during surgery for cerebral lesions, is a reliable tool for detecting both cortical and subcortical (white matter and deep grey nuclei) regions crucial for cognitive functions, especially language. DES transiently interacts locally with a small cortical or axonal site, but also nonlocally, as the focal perturbation will disrupt the entire subnetwork sustaining a given function. Thus, in contrast to functional neuroimaging, DES represents a unique opportunity to identify with great accuracy and reproducibility, in vivo in humans, the structures that are actually indispensable to the function, by inducing a transient virtual lesion based on the inhibition of a subcircuit lasting a few seconds. Currently, this is the sole technique able to directly investigate the functional role of white matter tracts in humans. Thus, combining transient disturbances elicited by DES with the anatomical data provided by pre- and postoperative MRI makes it possible to achieve reliable anatomo-functional correlations, supporting a network organization of the brain and leading to the reappraisal of models of language representation. Finally, combining serial peri-operative functional neuroimaging and online intraoperative DES allows the study of mechanisms underlying neuroplasticity. This chapter critically reviews the basic principles of DES, its advantages and limitations, and what DES can reveal about the neural foundations of language, that is, the large-scale distribution of language areas in the brain, their connectivity, and their ability to reorganize.


Author(s):  
Pooja Prabhu ◽  
A. K. Karunakar ◽  
Sanjib Sinha ◽  
N. Mariyappa ◽  
G. K. Bhargava ◽  
...  

Abstract: Brain images acquired with magnetic resonance imaging (MRI) may be tilted, and this tilt can cause misalignment during image registration for medical applications. Manually correcting (or estimating) the tilt on a large scale is time-consuming, expensive, and requires expertise in brain anatomy. There is therefore a need for an automatic way of performing tilt correction in three orthogonal directions (X, Y, Z). The proposed work corrects the tilt automatically by measuring the pitch, yaw, and roll angles about the X-, Z-, and Y-axes, respectively. Tilt about the Z-axis (pointing in the superior direction) is corrected using image processing techniques, principal component analysis, and similarity measures; tilt about the X-axis (pointing in the right direction) using morphological operations; and tilt about the Y-axis (pointing in the anterior direction) using orthogonal regression. The proposed approach was applied to adjust the tilt observed in T1- and T2-weighted MR images. In a simulation study, the proposed algorithm yielded an error of 0.40 ± 0.09° and outperformed existing methods. The tilt angles obtained with the proposed algorithm were 6.2 ± 3.94°, 2.35 ± 2.61°, and 5 ± 4.36° about the X-, Z-, and Y-axes, respectively. The proposed work corrects tilt more accurately and robustly than existing studies.
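The PCA step for the in-plane (Z-axis) tilt can be sketched as follows: fit the principal axis of the foreground pixels of a binary brain mask and measure its angle to the image vertical. This is a minimal illustration of the idea, not the authors' implementation.

```python
import numpy as np

def yaw_angle_deg(mask):
    """Estimate the in-plane tilt of a binary brain mask as the angle
    between its first principal axis and the image vertical."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)
    # The dominant eigenvector of the covariance is the long axis of the head.
    w, v = np.linalg.eigh(np.cov(pts, rowvar=False))
    axis = v[:, np.argmax(w)]
    angle = np.degrees(np.arctan2(axis[0], axis[1]))
    # The axis has no sign, so fold the angle into (-90, 90].
    if angle > 90.0:
        angle -= 180.0
    elif angle <= -90.0:
        angle += 180.0
    return angle
```

Rotating the image by the negative of this angle (e.g. with a standard image-rotation routine) would then undo the estimated Z-axis tilt.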


2020 ◽  
Vol 31 (6) ◽  
pp. 681-689
Author(s):  
Jalal Mirakhorli ◽  
Hamidreza Amindavar ◽  
Mojgan Mirakhorli

Abstract: Functional magnetic resonance imaging (fMRI), a neuroimaging technique used to study brain disorders and dysfunction, has been improved in recent years by mapping the topology of brain connections, known as connectopic mapping. Because healthy and unhealthy brain regions and functions differ only slightly, studying the complex topology of the functional and structural networks of the human brain is complicated, given the growing number of evaluation measures. One application of deep learning on irregular graphs is the analysis of human cognitive functions related to gene expression and the associated distributed spatial patterns. Since a variety of brain states can be held dynamically in the neuronal networks of the brain, with different activity patterns and functional connectivity, both node-centric and graph-centric tasks are involved in this application. In this study, we used an individual generative model and high-order graph analysis to recognize regions of interest with abnormal connectivity during task performance and resting state, and to decompose irregular observations. Accordingly, we propose a high-order Variational Graph Autoencoder framework with a Gaussian distribution to analyze functional data in brain imaging studies, in which a Generative Adversarial Network is employed to optimize the latent space while learning strong non-rigid graphs from large-scale data. Furthermore, we distinguish the possible modes of correlation in abnormal brain connections. Our goal was to find the degree of correlation between the affected regions and their simultaneous occurrence over time. This can be exploited to diagnose brain diseases, or to demonstrate the ability of the nervous system to modify brain topology and exhibit plasticity in response to input stimuli. In this study, we focused in particular on Alzheimer’s disease.

