Capacity of a Single Spiking Neuron Channel

2009 ◽  
Vol 21 (6) ◽  
pp. 1714-1748 ◽  
Author(s):  
Shiro Ikeda ◽  
Jonathan H. Manton

Information transfer through a single neuron is a fundamental component of information processing in the brain, and computing the information channel capacity is important for understanding this information processing. The problem is difficult since the capacity depends on the coding scheme, the characteristics of the communication channel, and the optimization over input distributions, among other issues. In this letter, we consider two models. The temporal coding model of a neuron as a communication channel assumes the output is a gamma-distributed random variable τ corresponding to the interspike interval, that is, the time it takes for the neuron to fire once. The rate coding model is similar; the output is the actual rate of firing over a fixed period of time. Theoretical studies prove that, under a reasonable assumption, the capacity-achieving input distribution is a discrete distribution with finitely many mass points for both temporal and rate coding. This allows us to compute the capacity of a neuron numerically. The numerical results are in a plausible range based on biological evidence to date.
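
As a sketch of how such a capacity can be computed numerically, the snippet below discretises a gamma-distributed interspike-interval channel and runs the Blahut-Arimoto algorithm. The shape parameter, rate range, and ISI binning are illustrative assumptions, not values from the letter.

```python
import numpy as np
from scipy.stats import gamma

# Hypothetical discretisation: inputs are mean firing rates, outputs are
# binned interspike intervals; the gamma shape parameter k is held fixed.
k = 2.0                                  # gamma shape (firing regularity), assumed
rates = np.linspace(5.0, 100.0, 40)      # candidate input rates (Hz), assumed range
tau_edges = np.linspace(0.0, 1.0, 201)   # ISI bins in seconds

# Channel matrix P[i, j] = Pr(tau in bin j | rate i); mean ISI = 1/rate = k*scale
P = np.array([np.diff(gamma.cdf(tau_edges, a=k, scale=1.0 / (k * r)))
              for r in rates])
P /= P.sum(axis=1, keepdims=True)        # renormalise over the truncated support

def blahut_arimoto(P, n_iter=500):
    """Return channel capacity (bits per use) and the optimal input distribution."""
    p = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(n_iter):
        q = p @ P                        # output marginal
        # KL divergence of each conditional row from the output marginal
        d = np.sum(np.where(P > 0, P * np.log2(P / q), 0.0), axis=1)
        p = p * np.exp2(d)
        p /= p.sum()
    q = p @ P
    d = np.sum(np.where(P > 0, P * np.log2(P / q), 0.0), axis=1)
    return float(p @ d), p

C, p_opt = blahut_arimoto(P)
print(f"capacity ~ {C:.2f} bits per interspike interval")
```

Consistent with the result quoted above, the optimal `p_opt` found this way tends to concentrate its mass on a few discrete rate values.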

2019 ◽  
Author(s):  
Mike Li ◽  
Yinuo Han ◽  
Matthew J. Aburn ◽  
Michael Breakspear ◽  
Russell A. Poldrack ◽  
...  

Abstract
A key component of the flexibility and complexity of the brain is its ability to dynamically adapt its functional network structure between integrated and segregated brain states depending on the demands of different cognitive tasks. Integrated states are prevalent when performing tasks of high complexity, such as maintaining items in working memory, consistent with models of a global workspace architecture. Recent work has suggested that the balance between integration and segregation is under the control of ascending neuromodulatory systems, such as the noradrenergic system. In a previous large-scale nonlinear oscillator model of neuronal network dynamics, we showed that manipulating neural gain led to a ‘critical’ transition in phase synchrony that was associated with a shift from segregated to integrated topology, thus confirming our original prediction. In this study, we advance these results by demonstrating that the gain-mediated phase transition is characterized by a shift in the underlying dynamics of neural information processing. Specifically, the dynamics of the subcritical (segregated) regime are dominated by information storage, whereas the supercritical (integrated) regime is associated with increased information transfer (measured via transfer entropy). Operating near the critical regime with respect to modulating neural gain would thus appear to provide computational advantages, offering flexibility in the information processing that can be performed with only subtle changes in gain control. Our results thus link studies of whole-brain network topology and the ascending arousal system with information processing dynamics, and suggest that the ascending arousal system constrains low-dimensional modes of information processing within the brain.

Author summary
Higher brain function relies on a dynamic balance between functional integration and segregation. 
Previous work has shown that this balance is mediated in part by alterations in neural gain, which are thought to relate to projections from ascending neuromodulatory nuclei, such as the locus coeruleus. Here, we extend this work by demonstrating that the modulation of neural gain alters the information processing dynamics of the neural components of a biophysical neural model. Specifically, we find that low levels of neural gain are characterized by high Active Information Storage, whereas higher levels of neural gain are associated with an increase in inter-regional Transfer Entropy. Our results suggest that the modulation of neural gain via the ascending arousal system may fundamentally alter the information processing mode of the brain, which in turn has important implications for understanding the biophysical basis of cognition.
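
The transfer entropy used to quantify inter-regional information transfer can be estimated with a simple plug-in estimator; the binary series and one-step histories below are a toy illustration, not the model's actual time series.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in estimate of TE(X -> Y) in bits, with one-step discrete histories:
    TE = sum p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    singles_y = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 10000)
y = np.roll(x, 1)                   # y copies x with a one-step lag
y[0] = 0
noise = rng.integers(0, 2, 10000)   # independent control series

te_drive = transfer_entropy(x, y)
te_null = transfer_entropy(x, noise)
print(f"TE to lagged copy: {te_drive:.3f} bits; TE to noise: {te_null:.3f} bits")
```

The lagged copy yields close to one bit per step, while the independent control is near zero, which is the distinction the analysis above relies on.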


Author(s):  
Ivan Mysin ◽  
Liubov Shubina

Brain rhythms are essential for information processing in neuronal networks. Oscillations recorded in different brain regions can be synchronized and maintain a constant phase difference, i.e., be coherent. Coherence between local field potential (LFP) signals from different brain regions may be correlated with the performance of cognitive tasks, from which it is concluded that these regions are involved in the task together. In this review, we discuss why coherence occurs and how it is coupled to information transfer between regions of the hippocampal formation. We describe coherence in the theta and gamma frequency ranges, since these rhythms are most pronounced during hippocampus-dependent attention and memory. We review in vivo studies of interactions between regions of the hippocampal formation in the theta and gamma frequency bands. The key provisions of the review are: 1) coherence emerges from synchronous postsynaptic currents in principal neurons, occurring as a result of synchronization of neuronal spike activity; 2) synchronization of neuronal spike patterns in two regions of the hippocampal formation can be realised through induction or resonance; 3) coherence at a specific time point reflects the transfer of information between regions of the hippocampal formation; in particular, gamma coherence reflects the coupling of active neuronal ensembles. Overall, coherence is not an epiphenomenon but an important physiological process with specific generation mechanisms and important functions in information processing and transmission across brain regions.
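
Spectral coherence of the kind discussed here can be computed with Welch's method. The two synthetic traces below, which share an 8 Hz theta component with a fixed phase lag, are stand-ins for LFP recordings from two hippocampal regions, not real data.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0                                  # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Shared 8 Hz theta rhythm with a constant phase difference, plus independent noise
lfp_a = np.sin(2 * np.pi * 8 * t) + 0.5 * rng.standard_normal(t.size)
lfp_b = np.sin(2 * np.pi * 8 * t + 0.8) + 0.5 * rng.standard_normal(t.size)

f, Cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=1024)
theta_band = (f >= 6) & (f <= 10)
gamma_band = (f >= 30) & (f <= 80)
print(f"peak theta coherence: {Cxy[theta_band].max():.2f}")
print(f"mean gamma coherence: {Cxy[gamma_band].mean():.2f}")
```

Coherence is high in the theta band, where the two signals share a rhythm with a constant phase difference, and near the chance floor in the gamma band, where only independent noise is present.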


Author(s):  
D. Van Dyck

An (electron) microscope can be considered as a communication channel that transfers structural information between an object and an observer. In electron microscopy this information is carried by electrons. According to Shannon's theory, the maximal information rate (or capacity) of a communication channel is given by C = B log2 (1 + S/N) bits/sec, where B is the bandwidth, and S and N are the average signal and noise power at the output, respectively. We will now apply this result to study the information transfer in an electron microscope. For simplicity we will assume the object and the image to be one-dimensional (the results can be generalized straightforwardly). An imaging device can be characterized by its transfer function, which describes the magnitude with which a spatial frequency g is transferred through the device; n is the noise. Usually, the resolution of the instrument ρ is defined from the cut-off 1/ρ beyond which no spatial information is transferred.
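
A direct computation of Shannon's formula, for a hypothetical detector bandwidth and signal-to-noise ratio (the numbers are illustrative, not taken from any microscope):

```python
import math

def channel_capacity(bandwidth_hz, snr):
    """Shannon capacity C = B * log2(1 + S/N), in bits per second."""
    return bandwidth_hz * math.log2(1.0 + snr)

# e.g. a 1 kHz bandwidth at an SNR of 100 (20 dB)
print(channel_capacity(1e3, 100.0))   # ~6658 bits/sec
```

Note that capacity grows linearly in B but only logarithmically in S/N, which is why bandwidth is usually the more valuable resource.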


Entropy ◽  
2021 ◽  
Vol 23 (2) ◽  
pp. 228
Author(s):  
Sze-Ying Lam ◽  
Alexandre Zénon

Previous investigations concluded that the human brain’s information processing rate remains fundamentally constant, irrespective of task demands. However, their conclusion rested on analyses of simple discrete-choice tasks. The present contribution recasts the question of human information rate within the context of visuomotor tasks, which provides a more ecologically relevant arena, albeit a more complex one. We argue that, while predictable aspects of inputs can be encoded virtually free of charge, real-time information transfer should be identified with the processing of surprises. We formalise this intuition by deriving from first principles a decomposition of the total information shared by inputs and outputs into a feedforward, predictive component and a feedback, error-correcting component. We find that the information measured by the feedback component, a proxy for the brain’s information processing rate, scales with the difficulty of the task at hand, in agreement with cost-benefit models of cognitive effort.
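
A toy, two-bit illustration of the chain-rule intuition behind such a decomposition (a hypothetical channel, not the paper's actual derivation): split the input into a predictable part P and a surprise part S, so that I((P,S); Y) = I(P; Y) + I(S; Y | P) separates a feedforward, predictive component from a feedback, error-correcting one.

```python
import math

# Assumed joint distribution: P is a fair bit, a surprise S=1 occurs with
# probability 0.25, and the output Y echoes P unless flipped by the surprise.
pr = {}
for p_bit in (0, 1):
    for s_bit in (0, 1):
        pr[(p_bit, s_bit, p_bit ^ s_bit)] = 0.5 * (0.25 if s_bit else 0.75)

def H(*idx):
    """Entropy (bits) of the marginal over the given variable indices (P=0, S=1, Y=2)."""
    marg = {}
    for key, q in pr.items():
        sub = tuple(key[i] for i in idx)
        marg[sub] = marg.get(sub, 0.0) + q
    return -sum(q * math.log2(q) for q in marg.values() if q > 0)

I_pred = H(0) + H(2) - H(0, 2)                    # I(P; Y): predictive component
I_fb = H(0, 1) + H(0, 2) - H(0, 1, 2) - H(0)      # I(S; Y|P): error-correcting component
I_tot = H(0, 1) + H(2) - H(0, 1, 2)               # I((P, S); Y): total shared information
print(I_pred, I_fb, I_tot)                        # the two components sum to the total
```

In this toy the error-correcting term carries most of the bits, matching the idea that real-time processing cost lives in the surprises.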


1983 ◽  
Vol 17 (4) ◽  
pp. 307-318 ◽  
Author(s):  
H. G. Stampfer

This article suggests that the potential usefulness of event-related potentials in psychiatry has not been fully explored because of the limitations of various approaches to research adopted to date, and because the field is still undergoing rapid development. Newer approaches to data acquisition and methods of analysis, combined with closer co-operation between medical and physical scientists, will help to establish the practical application of these signals in psychiatric disorders and assist our understanding of psychophysiological information processing in the brain. Finally, it is suggested that psychiatrists should seek to understand these techniques and the data they generate, since they provide more direct access to measures of complex cerebral processes than current clinical methods.


2013 ◽  
Vol 310 ◽  
pp. 660-664 ◽  
Author(s):  
Zi Guang Li ◽  
Guo Zhong Liu

As an emerging technology, the brain-computer interface (BCI) provides a novel communication channel that translates brain activity into command signals for devices such as computers, prostheses, and robots. The aim of brain-computer interface research is to improve the quality of life of patients suffering from severe neuromuscular diseases. This paper focuses on analyzing the characteristics of brainwaves recorded while a subject responds “yes” or “no” to auditory stimulation questions. The experiment uses auditory stimuli in the form of spoken questions. Features were extracted with common spatial patterns (CSP) and classified with a support vector machine (SVM). The classification accuracy for "yes" and "no" answers reached 80.2%. The results show the feasibility and effectiveness of this approach and provide a basis for further research.
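
A minimal sketch of the CSP-plus-SVM pipeline on synthetic trials (the variance-boosted channels stand in for real EEG; the 80.2% figure above comes from the actual recordings, not from this toy):

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_ch, n_samp = 60, 8, 256

def make_trials(var_boost_ch):
    """Synthetic trials with a class-specific variance pattern on one channel."""
    X = rng.standard_normal((n_trials, n_ch, n_samp))
    X[:, var_boost_ch, :] *= 3.0
    return X

X_yes, X_no = make_trials(0), make_trials(1)

def mean_cov(X):
    return np.mean([np.cov(trial) for trial in X], axis=0)

# CSP: jointly diagonalise the two class covariances and keep the two
# filters with the most extreme variance ratios between the classes.
C_yes, C_no = mean_cov(X_yes), mean_cov(X_no)
_, W = eigh(C_yes, C_yes + C_no)          # generalized eigenvectors, ascending
W = W[:, [0, -1]].T                       # (2, n_ch): most discriminative filters

def features(X):
    proj = np.einsum('fc,tcs->tfs', W, X)
    return np.log(proj.var(axis=2))       # log-variance of each spatial filter

X_feat = np.vstack([features(X_yes), features(X_no)])
y = np.array([1] * n_trials + [0] * n_trials)
clf = SVC(kernel='linear').fit(X_feat[::2], y[::2])   # train on alternate trials
acc = clf.score(X_feat[1::2], y[1::2])
print(f"held-out accuracy: {acc:.2f}")
```

Log-variance after CSP filtering is the standard feature choice because CSP maximises the variance of one class while minimising the other's in each filtered channel.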


2005 ◽  
Vol 17 (10) ◽  
pp. 2139-2175 ◽  
Author(s):  
Naoki Masuda ◽  
Brent Doiron ◽  
André Longtin ◽  
Kazuyuki Aihara

Oscillatory and synchronized neural activities are commonly found in the brain, and evidence suggests that many of them are caused by global feedback. Their mechanisms and roles in information processing have often been discussed using purely feedforward networks or recurrent networks with constant inputs. On the other hand, real recurrent neural networks are abundant and continually receive information-rich inputs from the outside environment or other parts of the brain. We examine how feedforward networks of spiking neurons with delayed global feedback process information about temporally changing inputs. We show that the network behavior is more synchronous as well as more correlated with and phase-locked to the stimulus when the stimulus frequency is resonant with the inherent frequency of the neuron or that of the network oscillation generated by the feedback architecture. The two eigenmodes have distinct dynamical characteristics, which are supported by numerical simulations and by analytical arguments based on frequency response and bifurcation theory. This distinction is similar to the class I versus class II classification of single neurons according to the bifurcation from quiescence to periodic firing, and the two modes depend differently on system parameters. These two mechanisms may be associated with different types of information processing.
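
A linear caricature of resonance with delayed global feedback (illustrative only, not the paper's spiking network): the loop x'(t) = -x(t) - g·x(t-d) + sin(2πft) has an inherent oscillation frequency set by the delay, and the driven response peaks when the stimulus is resonant with it.

```python
import numpy as np

def steady_amplitude(f, g=1.5, d=1.0, dt=0.005, t_end=80.0):
    """Euler-integrate the delayed-feedback loop and return the driven
    response amplitude after transients have decayed."""
    n = int(t_end / dt)
    lag = int(d / dt)
    x = np.zeros(n)
    for t in range(1, n):
        fb = x[t - 1 - lag] if t - 1 >= lag else 0.0
        drive = np.sin(2 * np.pi * f * (t - 1) * dt)
        x[t] = x[t - 1] + dt * (-x[t - 1] - g * fb + drive)
    return np.abs(x[n // 2:]).max()

freqs = np.linspace(0.05, 1.0, 20)        # drive frequencies swept, assumed range
amps = np.array([steady_amplitude(f) for f in freqs])
f_res = freqs[amps.argmax()]
print(f"resonant drive frequency ~ {f_res:.2f} cycles per unit time")
```

The amplitude curve is non-monotonic: the response is largest at an interior frequency determined by the feedback gain and delay, the same qualitative signature the abstract describes for stimuli resonant with the feedback-generated network oscillation.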

