Computation of the electroencephalogram (EEG) from network models of point neurons

2020 ◽  
Author(s):  
Pablo Martínez-Cañada ◽  
Torbjørn V. Ness ◽  
Gaute T. Einevoll ◽  
Tommaso Fellin ◽  
Stefano Panzeri

The electroencephalogram (EEG) is one of the main tools for non-invasively studying brain function and dysfunction. To better interpret EEGs in terms of neural mechanisms, it is important to compare experimentally recorded EEGs with the output of neural network models. Most current neural network models use networks of simple point neurons. They capture important properties of cortical dynamics, and are numerically or analytically tractable. However, point-neuron networks cannot directly generate an EEG, since EEGs are generated by spatially separated transmembrane currents. Here, we explored how to compute an accurate approximation of the EEG from a combination of quantities defined in point-neuron network models. We constructed several candidate approximations (or proxies) of the EEG that can be computed from networks of leaky integrate-and-fire (LIF) point neurons, such as firing rates, membrane potentials, and specific combinations of synaptic currents. We then evaluated how well each proxy reconstructed a realistic ground-truth EEG obtained when the synaptic input currents of the LIF network were fed into a three-dimensional (3D) network model of multi-compartmental neurons with realistic cell morphologies. We found that a new class of proxies, based on an optimized linear combination of time-shifted AMPA and GABA currents, provided the most accurate estimate of the EEG over a wide range of network states of the LIF point-neuron network. The new linear proxies explained most of the variance (85-95%) of the ground-truth EEG for a wide range of cell morphologies, distributions of presynaptic inputs, and positions of the recording electrode. Non-linear proxies, obtained using a convolutional neural network (CNN) to predict the EEG from synaptic currents, increased proxy performance by a further 2-8%. Our proxies can be used to easily calculate a biologically realistic EEG signal directly from point-neuron simulations and thereby allow a quantitative comparison between computational models and experimental EEG recordings.

Author summary: Networks of point neurons are widely used to model neural dynamics. Their output, however, cannot be directly compared to the electroencephalogram (EEG), which is one of the most widely used tools to non-invasively measure brain activity. To allow a direct integration between neural network theory and empirical EEG data, here we derived a new mathematical expression, termed EEG proxy, which estimates the EEG with high accuracy based simply on the variables available from simulations of point-neuron network models. To compare and validate these EEG proxies, we computed a realistic ground-truth EEG produced by a network of simulated neurons with realistic 3D morphologies that receives the same spikes as the simpler network of point neurons. The newly obtained EEG proxies outperformed previous approaches and worked well under a wide range of simulated configurations of cell morphologies, distributions of presynaptic inputs, and positions of the recording electrode. The new proxies approximated well both EEG spectra and EEG evoked potentials. Our work provides important mathematical tools that allow a better interpretation of experimentally measured EEGs in terms of neural models of brain function.

2021 ◽  
Vol 17 (4) ◽  
pp. e1008893
Author(s):  
Pablo Martínez-Cañada ◽  
Torbjørn V. Ness ◽  
Gaute T. Einevoll ◽  
Tommaso Fellin ◽  
Stefano Panzeri

The electroencephalogram (EEG) is a major tool for non-invasively studying brain function and dysfunction. Comparing experimentally recorded EEGs with neural network models is important to better interpret EEGs in terms of neural mechanisms. Most current neural network models use networks of simple point neurons. They capture important properties of cortical dynamics, and are numerically or analytically tractable. However, point neurons cannot generate an EEG, as EEG generation requires spatially separated transmembrane currents. Here, we explored how to compute an accurate approximation of a rodent’s EEG with quantities defined in point-neuron network models. We constructed different approximations (or proxies) of the EEG signal that can be computed from networks of leaky integrate-and-fire (LIF) point neurons, such as firing rates, membrane potentials, and combinations of synaptic currents. We then evaluated how well each proxy reconstructed a ground-truth EEG obtained when the synaptic currents of the LIF model network were fed into a three-dimensional network model of multicompartmental neurons with realistic morphologies. Proxies based on linear combinations of AMPA and GABA currents performed better than proxies based on firing rates or membrane potentials. A new class of proxies, based on an optimized linear combination of time-shifted AMPA and GABA currents, provided the most accurate estimate of the EEG over a wide range of network states. The new linear proxies explained 85–95% of the variance of the ground-truth EEG for a wide range of network configurations including different cell morphologies, distributions of presynaptic inputs, positions of the recording electrode, and spatial extensions of the network. Non-linear EEG proxies using a convolutional neural network (CNN) on synaptic currents increased proxy performance by a further 2–8%. Our proxies can be used to easily calculate a biologically realistic EEG signal directly from point-neuron simulations thus facilitating a quantitative comparison between computational models and experimental EEG recordings.
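The best-performing linear proxy described above amounts, in essence, to a weighted sum of time-shifted AMPA and GABA currents fitted against a ground-truth signal. Below is a minimal Python sketch of that idea; the function names, the integer-sample delay grid search, and the least-squares fit are illustrative assumptions, not the authors' actual fitting procedure or parameter values.

```python
import numpy as np

def shift(x, k):
    """Delay a 1-D signal by k samples (k >= 0), padding with the first value."""
    out = np.empty_like(x)
    out[:k] = x[0]
    out[k:] = x[:len(x) - k]
    return out

def fit_linear_proxy(i_ampa, i_gaba, eeg, max_delay=40):
    """Grid-search integer-sample delays for each synaptic current, fit the two
    weights by least squares, and keep the combination that maximizes the
    variance of the ground-truth EEG explained by the proxy."""
    best = (0, 0, None, -np.inf)
    for d_a in range(max_delay):
        for d_g in range(max_delay):
            X = np.column_stack([shift(i_ampa, d_a), shift(i_gaba, d_g)])
            w, *_ = np.linalg.lstsq(X, eeg, rcond=None)
            ve = 1.0 - np.var(eeg - X @ w) / np.var(eeg)
            if ve > best[3]:
                best = (d_a, d_g, w, ve)
    return best  # (AMPA delay, GABA delay, weights, variance explained)

# The fitted proxy is then: w[0] * shift(i_ampa, d_a) + w[1] * shift(i_gaba, d_g)
```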


2017 ◽  
Author(s):  
Charlie W. Zhao ◽  
Mark J. Daley ◽  
J. Andrew Pruszynski

First-order tactile neurons have spatially complex receptive fields. Here we use machine learning tools to show that such complexity arises for a wide range of training sets and network architectures, and benefits network performance, especially on more difficult tasks and in the presence of noise. Our work suggests that spatially complex receptive fields are normatively good given the biological constraints of the tactile periphery.


2020 ◽  
Author(s):  
Yinghao Li ◽  
Robert Kim ◽  
Terrence J. Sejnowski

Recurrent neural network (RNN) models trained to perform cognitive tasks are useful computational tools for understanding how cortical circuits execute complex computations. However, these models are often composed of units that interact with one another using continuous signals and overlook parameters intrinsic to spiking neurons. Here, we developed a method to directly train not only synaptic-related variables but also membrane-related parameters of a spiking RNN model. Training our model on a wide range of cognitive tasks resulted in diverse yet task-specific synaptic and membrane parameters. We also show that fast membrane time constants and slow synaptic decay dynamics naturally emerge from our model when it is trained on tasks associated with working memory (WM). Further dissecting the optimized parameters revealed that fast membrane properties and slow synaptic dynamics are important for encoding stimuli and WM maintenance, respectively. This approach offers a unique window into how connectivity patterns and intrinsic neuronal properties contribute to complex dynamics in neural populations.
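A minimal PyTorch-style sketch of the core trick described in this summary: register the membrane and synaptic time constants as trainable parameters alongside the weights, and use a surrogate gradient so backpropagation can pass through the spike threshold. All class names, constants, and the particular surrogate are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()
    @staticmethod
    def backward(ctx, grad):
        v, = ctx.saved_tensors
        return grad / (1.0 + 10.0 * v.abs()) ** 2

class LIFRNN(nn.Module):
    def __init__(self, n_in, n_rec, dt=1.0):
        super().__init__()
        self.w_in = nn.Linear(n_in, n_rec, bias=False)
        self.w_rec = nn.Linear(n_rec, n_rec, bias=False)
        # Membrane and synaptic time constants are trainable, one per unit,
        # so the optimizer shapes intrinsic properties as well as weights.
        self.tau_m = nn.Parameter(torch.full((n_rec,), 20.0))
        self.tau_syn = nn.Parameter(torch.full((n_rec,), 30.0))
        self.dt = dt

    def forward(self, x):                      # x: (time, batch, n_in)
        T, B, _ = x.shape
        n = self.tau_m.numel()
        v = torch.zeros(B, n)
        i = torch.zeros(B, n)
        s = torch.zeros(B, n)
        spikes = []
        for t in range(T):
            alpha = torch.exp(-self.dt / self.tau_m.clamp(min=1.0))
            beta = torch.exp(-self.dt / self.tau_syn.clamp(min=1.0))
            i = beta * i + self.w_in(x[t]) + self.w_rec(s)
            v = alpha * v * (1 - s) + i        # (1 - s) resets v after a spike
            s = SpikeFn.apply(v - 1.0)         # unit firing threshold
            spikes.append(s)
        return torch.stack(spikes)
```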


1995 ◽  
Vol 38 (4) ◽  
pp. 483-495 ◽  
Author(s):  
William Sims Bainbridge

This paper applies neural network technology, a standard approach in computer science that has been unaccountably ignored by sociologists, to the problem of developing rigorous sociological theories. A simulation program, employing a “varimax” model of human learning and decision-making, models central elements of the Stark-Bainbridge theory of religion. Individuals in a micro-society of 24 simulated people learn which categories of potential exchange partners to seek for each of four material rewards that can in fact be provided by other actors in the society. However, when they seek eternal life, they are unable to find suitable human exchange partners who can provide it, so they postulate the existence of supernatural exchange partners as substitutes. The explanation of how this particular neural net works, including its use of modulo arithmetic, introduces some aspects of this new technology to sociology, and the paper invites readers to explore the wide range of other neural net techniques that may be of value to social scientists.


Symmetry ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 1756
Author(s):  
Zhe Li ◽  
Mieradilijiang Maimaiti ◽  
Jiabao Sheng ◽  
Zunwang Ke ◽  
Wushour Silamu ◽  
...  

The task of dialogue generation has attracted increasing attention due to its diverse downstream applications, such as question-answering systems and chatbots. Recently, deep neural network (DNN)-based dialogue generation models have achieved superior performance over conventional models based on statistical machine learning. However, although an enormous number of state-of-the-art DNN-based models have been proposed, there is no detailed empirical comparative analysis of them on open Chinese corpora, so relevant researchers and engineers may find it hard to get an intuitive understanding of the current research progress. To address this challenge, we conducted an empirical study of state-of-the-art DNN-based dialogue generation models on various Chinese corpora. Specifically, extensive experiments were performed on several well-known single-turn and multi-turn dialogue corpora, including KdConv, Weibo, and Douban, to evaluate a wide range of dialogue generation models based on the symmetrical architecture of Seq2Seq, on RNNSearch, the Transformer, generative adversarial nets, and reinforcement learning, respectively. Moreover, we paid special attention to the effect of prevalent pre-trained models on the quality of dialogue generation. Their performance was evaluated using four metrics widely used in this area: BLEU, pseudo, distinct, and ROUGE. Finally, we report a case study showing example responses generated by each of these models.
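Of the four metrics, distinct-n is simple enough to state in full; the sketch below is an illustrative implementation of its common definition (unique n-grams over total n-grams across all generated responses), while BLEU and ROUGE are better taken from established libraries such as sacrebleu or rouge-score.

```python
from collections import Counter

def distinct_n(responses, n=2):
    """Distinct-n: ratio of unique n-grams to total n-grams across all
    generated responses (higher = more diverse generations)."""
    ngrams = Counter()
    for tokens in responses:  # each response is a list of tokens
        ngrams.update(zip(*(tokens[i:] for i in range(n))))
    total = sum(ngrams.values())
    return len(ngrams) / total if total else 0.0

# Example: two identical generic replies score low on diversity.
print(distinct_n([["i", "do", "not", "know"],
                  ["i", "do", "not", "know"]]))  # 3 unique / 6 total = 0.5
```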


2019 ◽  
Author(s):  
Yue Liu ◽  
Marc W. Howard

Sequential neural activity has been observed in many parts of the brain and has been proposed as a neural mechanism for memory. The natural world expresses temporal relationships at a wide range of scales. Because we cannot know the relevant scales a priori, it is desirable that memory, and thus the generated sequences, be scale-invariant. Although recurrent neural network models have been proposed as a mechanism for generating sequences, the requirements for scale-invariant sequences are not known. This paper reports the constraints that enable a linear recurrent neural network model to generate scale-invariant sequential activity. A straightforward eigendecomposition analysis results in two independent conditions that are required for scale-invariance for connectivity matrices with real, distinct eigenvalues. First, the eigenvalues of the network must be geometrically spaced. Second, the eigenvectors must be related to one another via translation. These constraints are easily generalizable for matrices that have complex and distinct eigenvalues. Analogous albeit less compact constraints hold for matrices with degenerate eigenvalues. These constraints, along with considerations on initial conditions, provide a general recipe to build linear recurrent neural networks that support scale-invariant sequential activity.


The ICL Distributed Array Processor and the Meiko Computing Surface have been successfully applied to a wide range of scientific problems. I give an overview of selected applications from experimental data analysis, molecular dynamics and Monte Carlo simulation, cellular automata for fluid flow, neural network models, protein sequencing and NMR imaging. I expose the problems and advantages of implementations on the two architectures, and discuss the general conclusions which one can draw from experience so far.


2020 ◽  
Vol 32 (7) ◽  
pp. 1379-1407
Author(s):  
Yue Liu ◽  
Marc W. Howard

Sequential neural activity has been observed in many parts of the brain and has been proposed as a neural mechanism for memory. The natural world expresses temporal relationships at a wide range of scales. Because we cannot know the relevant scales a priori, it is desirable that memory, and thus the generated sequences, is scale invariant. Although recurrent neural network models have been proposed as a mechanism for generating sequences, the requirements for scale-invariant sequences are not known. This letter reports the constraints that enable a linear recurrent neural network model to generate scale-invariant sequential activity. A straightforward eigendecomposition analysis results in two independent conditions that are required for scale invariance for connectivity matrices with real, distinct eigenvalues. First, the eigenvalues of the network must be geometrically spaced. Second, the eigenvectors must be related to one another via translation. These constraints are easily generalizable for matrices that have complex and distinct eigenvalues. Analogous albeit less compact constraints hold for matrices with degenerate eigenvalues. These constraints, along with considerations on initial conditions, provide a general recipe to build linear recurrent neural networks that support scale-invariant sequential activity.
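The two constraints are easy to check numerically. The sketch below (all sizes and constants illustrative, and the circular shift is one convenient way to realize "translated" eigenvectors) builds a small connectivity matrix with geometrically spaced eigenvalues and translated eigenvectors, then verifies that unit j+1 replays unit j's activity with time rescaled by the eigenvalue ratio, which is the scale-invariance property in question.

```python
import numpy as np

n, lam0, r = 10, -1.0, 0.8                    # illustrative size, base rate, spacing
eigvals = lam0 * r ** np.arange(n)            # condition 1: geometric eigenvalues

# Condition 2: eigenvectors are translated (circularly shifted) copies
# of a single localized profile.
profile = np.exp(-0.5 * (np.arange(n) - n // 2) ** 2)
V = np.column_stack([np.roll(profile, k) for k in range(n)])

M = V @ np.diag(eigvals) @ np.linalg.inv(V)   # recurrent weights for dx/dt = M x

def x(ts):
    """Closed-form solution with equal weight on every eigenmode (c_k = 1)."""
    return np.stack([V @ np.exp(eigvals * ti) for ti in ts])

# Scale invariance: x_{j+1}(t) = x_j(r * t), up to a tiny wrap-around term
# from the circular shift, for units far from the profile's bump.
t = np.linspace(0.0, 20.0, 400)
j = n - 1
err = np.max(np.abs(x(t)[:, (j + 1) % n] - x(r * t)[:, j]))
print(f"max |x_(j+1)(t) - x_j(r t)| = {err:.1e}")   # ~1e-6: scale-invariant
```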

