Multidimensional Encoding Strategy of Spiking Neurons

2000 ◽  
Vol 12 (7) ◽  
pp. 1519-1529 ◽  
Author(s):  
Christian W. Eurich ◽  
Stefan D. Wilke

Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of narrow tuning in the dimension to be encoded, to increase the single-neuron Fisher information, and broad tuning in all other dimensions, to increase the number of active neurons. Extremely narrow tuning without sufficient receptive field overlap, however, severely degrades the encoding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will normally be accessible. In this case, relative encoding errors can be calculated that yield a criterion for the function of a neural population based on the measured tuning curves.
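
As a rough illustration of the trade-off described above, the following sketch numerically sums the Poisson Fisher information of a population of two-dimensional Gaussian tuning curves on a grid; the grid layout, peak rate, and tuning widths are arbitrary illustrative choices, not taken from the article.

```python
import numpy as np

def population_fisher_info(sigma_enc, sigma_other, n_per_dim=30, f_max=10.0, x=(0.0, 0.0)):
    """Fisher information about feature 1 at stimulus x, summed over independent
    Poisson neurons whose 2-D Gaussian tuning curves tile [-1, 1]^2."""
    centers = np.linspace(-1.0, 1.0, n_per_dim)
    c1, c2 = np.meshgrid(centers, centers)
    d1, d2 = x[0] - c1, x[1] - c2
    rate = f_max * np.exp(-d1**2 / (2 * sigma_enc**2) - d2**2 / (2 * sigma_other**2))
    drate = -rate * d1 / sigma_enc**2                    # derivative of the rate w.r.t. feature 1
    return np.sum(drate**2 / np.maximum(rate, 1e-12))    # Poisson: J = sum_i f_i'^2 / f_i

# Broader tuning in the non-encoded dimension recruits more active neurons, while
# narrower tuning in the encoded dimension raises each neuron's contribution.
for sigma_enc in (0.05, 0.1, 0.3):
    for sigma_other in (0.1, 0.3, 1.0):
        J = population_fisher_info(sigma_enc, sigma_other)
        print(f"sigma_enc={sigma_enc:.2f}  sigma_other={sigma_other:.2f}  J={J:10.1f}")
```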

2006 ◽  
Vol 18 (7) ◽  
pp. 1555-1576 ◽  
Author(s):  
Marcelo A. Montemurro ◽  
Stefano Panzeri

We study the relationship between the accuracy of a large neuronal population in encoding periodic sensory stimuli and the width of the tuning curves of individual neurons in the population. By using general simple models of population activity, we show that when considering one or two periodic stimulus features, a narrow tuning width provides better population encoding accuracy. When encoding more than two periodic stimulus features, the information conveyed by the population is instead maximal for finite values of the tuning width. These optimal values are only weakly dependent on model parameters and are similar to the width of tuning to orientation or motion direction of real visual cortical neurons. A very large tuning width leads to poor encoding accuracy, whatever the number of stimulus features encoded. Thus, optimal coding of periodic stimuli is different from that of nonperiodic stimuli, which, as shown in previous studies, would require infinitely large tuning widths when coding more than two stimulus features.
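
A back-of-the-envelope version of this effect can be written down for independent Poisson neurons with product von Mises tuning whose centers uniformly tile the D-dimensional stimulus torus; the closed-form averages below follow from standard Bessel-function integrals. This is only a sketch in the spirit of the abstract, with an arbitrary peak rate and κ range, not the authors' model.

```python
import numpy as np
from scipy.special import i0, i1

def mean_fisher_info(kappa, D, f_max=10.0):
    """Average Poisson Fisher information (per neuron) about one circular feature for
    product von Mises tuning f = f_max * prod_j exp(kappa * (cos(theta_j - c_j) - 1))
    with preferred values spread uniformly over the D-torus."""
    enc = f_max * kappa * np.exp(-kappa) * i1(kappa)   # encoded dimension: average of f'^2 / f
    other = (np.exp(-kappa) * i0(kappa)) ** (D - 1)    # mean activation factor per extra dimension
    return enc * other

kappas = np.linspace(0.1, 40.0, 400)
for D in (1, 2, 3):
    J = mean_fisher_info(kappas, D)
    print(f"D={D}: largest Fisher information at kappa = {kappas[np.argmax(J)]:.1f} (within the scanned range)")
```

In this toy calculation the maximum for D = 1 and D = 2 sits at the upper end of the scanned range (narrower tuning is better), whereas for D = 3 an interior optimum appears, in line with the behavior described above.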


2001 ◽  
Vol 13 (9) ◽  
pp. 2031-2047 ◽  
Author(s):  
Hiroyuki Nakahara ◽  
Si Wu ◽  
Shun-ichi Amari

This study investigates the influence of attention modulation on neural tuning functions. It has been shown in experiments that attention modulation alters neural tuning curves. Attention is thought, at the least, to help resolve limited processing capacity and to increase sensitivity to attended stimuli, although its exact functions are still under debate. Inspired by recent experimental results on attention modulation, we investigate the influence of changes in the height and base rate of the tuning curve on the encoding accuracy, using the Fisher information. Under an assumption of stimulus-conditional independence of neural responses, we derive explicit conditions that determine when the height and base rate should be increased or decreased to improve encoding accuracy. Notably, a decrease in the tuning height and base rate can improve the encoding accuracy in some cases. Our theoretical results can predict the effective size of attention modulation on the neural population with respect to encoding accuracy. We discuss how our method can be used quantitatively to evaluate different aspects of attention function.
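
For intuition, the sketch below evaluates the Fisher information of independent Poisson neurons with Gaussian tuning of adjustable gain (tuning height) and baseline (base rate); the independence mirrors the stimulus-conditional independence assumed above, but the specific numbers are illustrative and the code is not the authors' derivation.

```python
import numpy as np

def poisson_fisher_info(x, centers, gain, base, sigma=0.3):
    """Fisher information about x, summed over independent Poisson neurons with
    tuning f(x) = base + gain * exp(-(x - c)^2 / (2 * sigma^2))."""
    d = x - centers
    bump = np.exp(-d**2 / (2 * sigma**2))
    f = base + gain * bump
    df = -gain * d / sigma**2 * bump
    return np.sum(df**2 / f)

centers = np.linspace(-2.0, 2.0, 41)
x = 0.1
# Changing the tuning height (gain) or the base rate changes the encoding accuracy;
# the direction and size of the effect can be read off directly from J.
for gain, base in [(10.0, 1.0), (15.0, 1.0), (10.0, 0.2), (10.0, 3.0)]:
    J = poisson_fisher_info(x, centers, gain, base)
    print(f"gain={gain:5.1f}  base={base:4.1f}  J={J:8.1f}")
```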


2002 ◽  
Vol 14 (1) ◽  
pp. 155-189 ◽  
Author(s):  
Stefan D. Wilke ◽  
Christian W. Eurich

Fisher information is used to analyze the accuracy with which a neural population encodes D stimulus features. It turns out that the form of response variability has a major impact on the encoding capacity and therefore plays an important role in the selection of an appropriate neural model. In particular, in the presence of baseline firing, the reconstruction error rapidly increases with D in the case of Poissonian noise but not for additive noise. The existence of limited-range correlations of the type found in cortical tissue yields a saturation of the Fisher information content as a function of the population size only for an additive noise model. We also show that random variability in the correlation coefficient within a neural population, as found empirically, considerably improves the average encoding quality. Finally, the representational accuracy of populations with inhomogeneous tuning properties, either with variability in the tuning widths or fragmented into specialized subpopulations, is superior to the case of identical and radially symmetric tuning curves usually considered in the literature.
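
The saturation effect for additive noise with limited-range correlations can be illustrated with a small numerical sketch of J = f'^T Q^{-1} f' for Gaussian tuning curves and a noise covariance whose correlations decay exponentially with the distance between preferred stimuli; the tuning parameters, noise variance, and correlation length are arbitrary choices, not those of the article.

```python
import numpy as np

def fisher_info_additive(n_neurons, corr_length=0.0, sigma_noise=1.0,
                         tuning_width=0.3, f_max=10.0, x=0.0):
    """J = f'^T Q^{-1} f' for Gaussian tuning, additive Gaussian noise, and
    correlations that decay exponentially with distance between preferred stimuli."""
    centers = np.linspace(-1.0, 1.0, n_neurons)
    d = x - centers
    df = -f_max * d / tuning_width**2 * np.exp(-d**2 / (2 * tuning_width**2))
    if corr_length > 0:
        dist = np.abs(centers[:, None] - centers[None, :])
        Q = sigma_noise**2 * np.exp(-dist / corr_length)   # limited-range correlations
    else:
        Q = sigma_noise**2 * np.eye(n_neurons)             # independent noise
    return float(df @ np.linalg.solve(Q, df))

for n in (20, 80, 320):
    print(f"N={n:4d}  independent J={fisher_info_additive(n):10.1f}  "
          f"correlated J={fisher_info_additive(n, corr_length=0.2):10.1f}")
```

With independent additive noise, J keeps growing as the population gets denser, whereas with limited-range correlations it levels off, which is the saturation behavior described above.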


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jermyn Z. See ◽  
Natsumi Y. Homma ◽  
Craig A. Atencio ◽  
Vikaas S. Sohal ◽  
Christoph E. Schreiner

Neuronal activity in auditory cortex is often highly synchronous between neighboring neurons. Such coordinated activity is thought to be crucial for information processing. We determined the functional properties of coordinated neuronal ensembles (cNEs) within primary auditory cortical (AI) columns relative to the contributing neurons. Nearly half of AI cNEs showed robust spectro-temporal receptive fields, whereas the remaining cNEs showed little or no acoustic feature selectivity. cNEs can therefore either capture specific, time-locked information about spectro-temporal stimulus features or reflect stimulus-unspecific, less time-specific processing. By contrast, we show that individual neurons can represent both of these aspects through membership in multiple cNEs with either high or absent feature selectivity. These associations produce functionally heterogeneous spikes that are identifiable by their instantaneous association with different cNEs. This demonstrates that single-neuron spike trains can sequentially convey multiple aspects that contribute to cortical processing, including stimulus-specific and stimulus-unspecific information.
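
A standard way to quantify the spectro-temporal feature selectivity referred to here is a spectro-temporal receptive field (STRF); the minimal sketch below estimates one as a spike-triggered average of a stimulus spectrogram. It only illustrates the general quantity and does not reproduce the cNE detection or STRF estimation pipeline of the study.

```python
import numpy as np

def strf_spike_triggered_average(spec, spikes, n_lags=20):
    """Estimate an STRF as the spike-triggered average of a spectrogram.
    spec: (n_freq, n_time) stimulus spectrogram; spikes: binary array of length n_time."""
    n_freq, _ = spec.shape
    strf = np.zeros((n_freq, n_lags))
    count = 0
    for t in np.nonzero(spikes)[0]:
        if t >= n_lags:
            strf += spec[:, t - n_lags:t]   # stimulus window preceding each spike
            count += 1
    return strf / max(count, 1)

# Toy usage with random data, just to show the shapes involved.
rng = np.random.default_rng(0)
spec = rng.normal(size=(32, 5000))
spikes = rng.random(5000) < 0.02
print(strf_spike_triggered_average(spec, spikes).shape)   # (32, 20)
```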


2006 ◽  
Vol 18 (3) ◽  
pp. 660-682 ◽  
Author(s):  
Melchi M. Michel ◽  
Robert A. Jacobs

Investigators debate the extent to which neural populations use pairwise and higher-order statistical dependencies among neural responses to represent information about a visual stimulus. To study this issue, three statistical decoders were used to extract the information in the responses of model neurons about the binocular disparities present in simulated pairs of left-eye and right-eye images: (1) the full joint probability decoder considered all possible statistical relations among neural responses as potentially important; (2) the dependence tree decoder also considered all possible relations as potentially important, but it approximated high-order statistical correlations using a computationally tractable procedure; and (3) the independent response decoder assumed that neural responses are statistically independent, meaning that all correlations are zero and can be ignored. Simulation results indicate that high-order correlations among model neuron responses contain significant information about binocular disparities and that the amount of this high-order information increases rapidly as a function of neural population size. Furthermore, the results highlight the potential importance of the dependence tree decoder to neuroscientists as a powerful but still practical way of approximating high-order correlations among neural responses.
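
To make decoder (3) concrete, here is a minimal maximum-likelihood decoder that assumes conditionally independent Poisson responses; the stimulus set, firing rates, and Poisson assumption are illustrative stand-ins rather than the model neurons or disparity stimuli used in the study.

```python
import numpy as np

def independent_response_decoder(r, rates_by_stim):
    """Decoder in the spirit of (3): assumes conditionally independent Poisson responses.
    r: observed spike counts, shape (n_neurons,); rates_by_stim: mean rates, shape (n_stim, n_neurons).
    Returns the index of the stimulus with the highest log likelihood."""
    lam = np.maximum(rates_by_stim, 1e-12)
    # log P(r | s) = sum_i [ r_i * log(lam_si) - lam_si ]   (the r_i! term is constant in s)
    loglik = r @ np.log(lam).T - lam.sum(axis=1)
    return int(np.argmax(loglik))

# Toy usage: two stimuli (e.g. two disparities), four neurons.
rates = np.array([[2.0, 8.0, 5.0, 1.0],
                  [7.0, 3.0, 1.0, 6.0]])
rng = np.random.default_rng(1)
r_obs = rng.poisson(rates[1])                        # simulate a response to stimulus 1
print(independent_response_decoder(r_obs, rates))    # most likely prints 1
```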


Author(s):  
Jiaoyan Wang ◽  
Xiaoshan Zhao ◽  
Chao Lei

Inputs can change the timing of spikes in neurons, but it is still not clear how input parameters, such as the time at which an input is injected, affect spike timing. We consider HR neurons receiving both weak and strong inputs and study how pulse inputs affect them using the phase-resetting curve technique. For a single neuron, weak pulse inputs may advance or delay the next spike, while strong pulse inputs may induce subthreshold oscillations, depending on parameters such as the injection timing. The synchronization behavior of a network with or without coupling delays can be predicted from the single-neuron analysis. Our results can be used to predict the effects of inputs on other spiking neurons.
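
A phase-resetting curve of the kind used here can be measured numerically by perturbing a tonically firing model neuron at different phases of its cycle and recording how the next spike shifts. The sketch below does this for a leaky integrate-and-fire neuron as a simple stand-in; the HR model and the specific weak/strong input protocol of the paper are not implemented.

```python
import numpy as np

def next_spike_time(I=1.5, tau=1.0, v_th=1.0, v_reset=0.0, dt=1e-4,
                    kick_time=None, kick_size=0.0):
    """Time of the first spike of a leaky integrate-and-fire neuron started at reset,
    optionally receiving an instantaneous voltage kick at kick_time."""
    v, t = v_reset, 0.0
    while v < v_th:
        if kick_time is not None and abs(t - kick_time) < dt / 2:
            v += kick_size                        # brief pulse input
        v += dt * (-v + I) / tau                  # forward-Euler membrane update
        t += dt
        if t > 100.0:                             # safety stop if no spike occurs
            return np.inf
    return t

T0 = next_spike_time()                            # unperturbed inter-spike interval
for phase in np.linspace(0.1, 0.9, 5):
    T1 = next_spike_time(kick_time=phase * T0, kick_size=0.05)
    print(f"phase={phase:.2f}  normalized phase shift={(T0 - T1) / T0:+.3f}")
```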


2014 ◽  
Vol 112 (6) ◽  
pp. 1584-1598 ◽  
Author(s):  
Marino Pagan ◽  
Nicole C. Rust

The responses of high-level neurons tend to be mixtures of many different types of signals. While this diversity is thought to allow for flexible neural processing, it presents a challenge for understanding how neural responses relate to task performance and to neural computation. To address these challenges, we have developed a new method to parse the responses of individual neurons into weighted sums of intuitive signal components. Our method computes the weights by projecting a neuron's responses onto a predefined orthonormal basis. Once determined, these weights can be combined into measures of signal modulation; however, in their raw form these signal modulation measures are biased by noise. Here we introduce and evaluate two methods for correcting this bias, and we report that an analytically derived approach produces performance that is robust and superior to a bootstrap procedure. Using neural data recorded from inferotemporal cortex and perirhinal cortex as monkeys performed a delayed-match-to-sample target search task, we demonstrate how the method can be used to quantify the amounts of task-relevant signals in heterogeneous neural populations. We also demonstrate how these intuitive quantifications of signal modulation can be related to single-neuron measures of task performance (d′).
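
The projection step can be illustrated with a hypothetical 2 × 2 task design (two images × two task contexts): a neuron's four condition means are projected onto an orthonormal basis over the conditions, and the squared weights serve as raw, noise-biased signal-modulation measures. The basis and numbers below are invented for illustration; the article's basis construction and bias corrections are not reproduced.

```python
import numpy as np

# Hypothetical condition means, ordered (img1, ctx1), (img1, ctx2), (img2, ctx1), (img2, ctx2).
resp = np.array([5.0, 6.0, 9.0, 12.0])

# Orthonormal basis over the four conditions: mean, image contrast,
# context contrast, and image-by-context interaction (Hadamard-like vectors, unit norm).
B = np.array([[1,  1,  1,  1],
              [1,  1, -1, -1],
              [1, -1,  1, -1],
              [1, -1, -1,  1]]) / 2.0

weights = B @ resp                                # projections onto each basis vector
print(dict(zip(["mean", "image", "context", "interaction"], np.round(weights, 2))))
print(np.round(weights**2, 2))                    # squared weights: raw (noise-biased) modulation
```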


2021 ◽  
Author(s):  
Dean Pospisil ◽  
Wyeth A Bair

The squared Pearson correlation coefficient, r², is often used in the analysis of neural data to estimate the relationship between neural tuning curves. Yet this metric is biased by trial-to-trial variability: as trial-to-trial variability increases, the measured correlation decreases. Major lines of research are confounded by this bias, including the study of invariance of neural tuning across conditions and of the similarity of tuning across neurons. To address this, we extend the estimator r̂²_ER, developed for estimating model-to-neuron correlation, to the neuron-to-neuron case. We compare this estimator to a prior method developed by Spearman, commonly used in other fields but widely overlooked in neuroscience, and find that our method has less bias. We then apply our estimator to the study of two forms of invariance and demonstrate how it avoids drastic confounds introduced by trial-to-trial variability.
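
For reference, here is a rough sketch of the classical Spearman attenuation correction that the abstract compares against (not the r̂²_ER estimator itself): split-half tuning-curve estimates supply the reliabilities, and the raw squared correlation is divided by their product. The variable names and toy data are invented for illustration.

```python
import numpy as np

def spearman_corrected_r2(x_trials, y_trials, seed=0):
    """Spearman's attenuation correction for the squared correlation between two noisy
    tuning curves. x_trials, y_trials: (n_trials, n_stimuli) single-trial responses."""
    rng = np.random.default_rng(seed)

    def halves(trials):
        idx = rng.permutation(trials.shape[0])
        return trials[idx[::2]].mean(axis=0), trials[idx[1::2]].mean(axis=0)

    x1, x2 = halves(x_trials)
    y1, y2 = halves(y_trials)
    r_xy = np.corrcoef(x1, y1)[0, 1]    # raw (attenuated) correlation at half-data noise level
    r_xx = np.corrcoef(x1, x2)[0, 1]    # split-half reliability of each tuning curve
    r_yy = np.corrcoef(y1, y2)[0, 1]
    return r_xy**2 / (r_xx * r_yy)

# Toy check: two neurons share the same underlying tuning plus independent noise.
rng = np.random.default_rng(1)
tuning = np.sin(np.linspace(0, 2 * np.pi, 16))
x = tuning + 0.5 * rng.normal(size=(20, 16))
y = tuning + 0.5 * rng.normal(size=(20, 16))
print(np.corrcoef(x.mean(0), y.mean(0))[0, 1]**2, spearman_corrected_r2(x, y))
```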


2016 ◽  
Vol 115 (1) ◽  
pp. 457-469 ◽  
Author(s):  
Mahmood S. Hoseini ◽  
Ralf Wessel

Local field potential (LFP) recordings from spatially distant cortical circuits reveal episodes of coherent gamma oscillations that are intermittent, and of variable peak frequency and duration. Concurrently, single neuron spiking remains largely irregular and of low rate. The underlying potential mechanisms of this emergent network activity have long been debated. Here we reproduce such intermittent ensemble oscillations in a model network, consisting of excitatory and inhibitory model neurons with the characteristics of regular-spiking (RS) pyramidal neurons, and fast-spiking (FS) and low-threshold spiking (LTS) interneurons. We find that fluctuations in the external inputs drive the reciprocally connected, irregularly spiking RS and FS neurons into episodes of ensemble oscillations, which are terminated by the recruitment of the LTS population with concurrent accumulation of inhibitory conductance in both RS and FS neurons. The model qualitatively reproduces experimentally observed phase drift, oscillation episode duration distributions, variation in the peak frequency, and the concurrent irregular single-neuron spiking at low rate. Furthermore, consistent with previous experimental studies using optogenetic manipulation, periodic activation of FS, but not RS, model neurons causes enhancement of gamma oscillations. In addition, increasing the coupling between two model networks from low to high reveals a transition from independent intermittent oscillations to coherent intermittent oscillations. In conclusion, the model network suggests biologically plausible mechanisms for the generation of episodes of coherent intermittent ensemble oscillations with irregularly spiking neurons in cortical circuits.
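
The reciprocal excitatory-inhibitory loop at the core of such rhythms can be caricatured at the rate level with a classic Wilson-Cowan model, which is far simpler than the spiking RS/FS/LTS network of the study but shows how mutually coupled excitatory and inhibitory populations can settle onto rhythmic activity for suitable drive. The parameter values below are a commonly quoted set, not fitted to the data discussed here.

```python
import numpy as np

def sigmoid(x, a, theta):
    # Wilson-Cowan response function, shifted so that S(0) = 0.
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def wilson_cowan(P=1.25, Q=0.0, T=100.0, dt=0.01):
    """Forward-Euler integration of the classic Wilson-Cowan E-I rate equations."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0        # E->E, I->E, E->I, I->I coupling
    a_e, th_e, a_i, th_i = 1.3, 4.0, 2.0, 3.7
    E, I = 0.1, 0.05
    n_steps = int(T / dt)
    trace = np.empty((n_steps, 2))
    for k in range(n_steps):
        dE = -E + (1 - E) * sigmoid(c1 * E - c2 * I + P, a_e, th_e)
        dI = -I + (1 - I) * sigmoid(c3 * E - c4 * I + Q, a_i, th_i)
        E, I = E + dt * dE, I + dt * dI
        trace[k] = E, I
    return trace

trace = wilson_cowan()
late = trace[len(trace) // 2:]                    # discard the initial transient
print("E activity range in the second half of the run:",
      round(late[:, 0].min(), 3), "to", round(late[:, 0].max(), 3))
```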


1998 ◽  
Vol 10 (6) ◽  
pp. 1567-1586 ◽  
Author(s):  
Terence David Sanger

This article proposes a new method for interpreting computations performed by populations of spiking neurons. Neural firing is modeled as a rate-modulated random process for which the behavior of a neuron in response to external input can be completely described by its tuning function. I show that under certain conditions, cells with any desired tuning functions can be approximated using only spike coincidence detectors and linear operations on the spike output of existing cells. I show examples of adaptive algorithms based on only spike data that cause the underlying cell-tuning curves to converge according to standard supervised and unsupervised learning algorithms. Unsupervised learning based on principal components analysis leads to independent cell spike trains. These results suggest a duality relationship between the random discrete behavior of spiking cells and the deterministic smooth behavior of their tuning functions. Classical neural network approximation methods and learning algorithms based on continuous variables can thus be implemented within networks of spiking neurons without the need to make numerical estimates of the intermediate cell firing rates.
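
The two elementary operations this construction builds on, coincidence detection (which effectively multiplies firing rates) and merging of spike trains (which approximately adds them), can be sanity-checked with a toy simulation of rate-modulated Poisson spike trains; the rates, bin size, and duration below are arbitrary, and the full approximation and learning scheme of the article is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T = 0.001, 200.0                 # 1 ms bins, 200 s of simulated time
n_bins = int(T / dt)

def poisson_spikes(rate):
    """Binary spike train of a rate-modulated Poisson cell (at most one spike per bin)."""
    return rng.random(n_bins) < rate * dt

r1, r2 = 20.0, 30.0                  # constant rates (Hz), for simplicity
s1, s2 = poisson_spikes(r1), poisson_spikes(r2)

coincidences = s1 & s2               # coincidence detector: fires when both cells spike in a bin
merged = s1 | s2                     # merged spike train: approximate linear (additive) combination

print("coincidence rate:", coincidences.sum() / T, "Hz  (expected ~ r1*r2*dt =", r1 * r2 * dt, "Hz)")
print("merged rate     :", merged.sum() / T, "Hz  (expected ~ r1+r2 =", r1 + r2, "Hz)")
```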

