Optimal Tuning Widths in Population Coding of Periodic Variables

2006 ◽  
Vol 18 (7) ◽  
pp. 1555-1576 ◽  
Author(s):  
Marcelo A. Montemurro ◽  
Stefano Panzeri

We study the relationship between the accuracy of a large neuronal population in encoding periodic sensory stimuli and the width of the tuning curves of individual neurons in the population. Using simple, general models of population activity, we show that when one or two periodic stimulus features are considered, a narrow tuning width provides better population encoding accuracy. When more than two periodic stimulus features are encoded, the information conveyed by the population is instead maximal for finite values of the tuning width. These optimal values are only weakly dependent on model parameters and are similar to the width of tuning to orientation or motion direction of real visual cortical neurons. A very large tuning width leads to poor encoding accuracy, whatever the number of stimulus features encoded. Thus, optimal coding of periodic stimuli differs from that of nonperiodic stimuli, which, as shown in previous studies, would require infinitely large tuning widths when coding more than two stimulus features.
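
The abstract's central quantity can be illustrated with a small numerical sketch. The following is a hypothetical calculation, not the authors' model specification: it assumes a large, homogeneous population of independent Poisson neurons whose tuning over D periodic features is a product of von Mises curves with concentration kappa (narrow tuning corresponds to large kappa), and evaluates the Fisher information about one feature per unit neuron density.

```python
import numpy as np

# Hedged sketch: Fisher information about one periodic feature, per unit
# density of neurons, for independent Poisson neurons with product-of-von-Mises
# tuning over D periodic features. In this continuum limit the information
# factorizes as I(kappa) ∝ kappa^2 * A(kappa) * B(kappa)**(D - 1), with
#   A = ∫ sin^2(u) exp(kappa*(cos(u) - 1)) du   (the encoded dimension)
#   B = ∫ exp(kappa*(cos(u) - 1)) du            (each remaining dimension)

u = np.linspace(-np.pi, np.pi, 4001)
du = u[1] - u[0]

def fisher_info(kappa, n_features):
    w = np.exp(kappa * (np.cos(u) - 1.0))
    A = np.sum(np.sin(u) ** 2 * w) * du
    B = np.sum(w) * du
    return kappa ** 2 * A * B ** (n_features - 1)

kappas = np.logspace(-1, 2, 300)   # concentration; tuning width shrinks as kappa grows
for D in (1, 2, 3, 4):
    info = np.array([fisher_info(k, D) for k in kappas])
    best = kappas[np.argmax(info)]
    kind = ("keeps growing as tuning narrows" if best == kappas[-1]
            else f"peaks at kappa ≈ {best:.1f}")
    print(f"D = {D}: information {kind}")
```

Under these assumptions the printed optimum moves to a finite concentration only once more than two periodic features are encoded, mirroring the result stated above.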

eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Evan H Lyall ◽  
Daniel P Mossing ◽  
Scott R Pluta ◽  
Yun Wen Chu ◽  
Amir Dudai ◽  
...  

How cortical circuits build representations of complex objects is poorly understood. Individual neurons must integrate broadly over space, yet simultaneously obtain sharp tuning to specific global stimulus features. Groups of neurons identifying different global features must then assemble into a population that forms a comprehensive code for these global stimulus properties. Although the logic for how single neurons summate over their spatial inputs has been well-explored in anesthetized animals, how large groups of neurons compose a flexible population code of higher order features in awake animals is not known. To address this question, we probed the integration and population coding of higher order stimuli in the somatosensory and visual cortices of awake mice using two-photon calcium imaging across cortical layers. We developed a novel tactile stimulator that allowed the precise measurement of spatial summation even in actively whisking mice. Using this system, we found a sparse but comprehensive population code for higher order tactile features that depends on a heterogeneous and neuron-specific logic of spatial summation beyond the receptive field. Different somatosensory cortical neurons summed specific combinations of sensory inputs supra-linearly, but integrated other inputs sub-linearly, leading to selective responses to higher order features. Visual cortical populations employed a nearly identical scheme to generate a comprehensive population code for contextual stimuli. These results suggest that a heterogeneous logic of input-specific supra-linear summation may represent a widespread cortical mechanism for the synthesis of sparse higher order feature codes in neural populations. This may explain how the brain exploits the thalamocortical expansion of dimensionality to encode arbitrary complex features of sensory stimuli.
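
As a concrete illustration of the summation analysis described above, here is a small, hypothetical sketch (the index definition, variable names, and toy data are illustrative choices, not the authors' pipeline): it classifies each simulated neuron as supra- or sub-linear by comparing its response to a paired stimulus with the sum of its responses to the components.

```python
import numpy as np

# Hypothetical sketch: classify each neuron's integration of a paired stimulus
# as supra- or sub-linear by comparing its combined-stimulus response with the
# sum of its single-component responses. Toy data, illustrative index.

rng = np.random.default_rng(0)
n_neurons = 200
r_a = rng.gamma(2.0, 1.0, n_neurons)        # mean response to component A alone
r_b = rng.gamma(2.0, 1.0, n_neurons)        # mean response to component B alone
# toy population: a sparse subset combines A+B supra-linearly, the rest sub-linearly
gain = np.where(rng.random(n_neurons) < 0.15, 1.8, 0.6)
r_ab = gain * (r_a + r_b)                   # mean response to the combined stimulus

# linearity index: 0 = exactly linear, > 0 = supra-linear, < 0 = sub-linear
li = (r_ab - (r_a + r_b)) / (r_ab + r_a + r_b)
print(f"supra-linear neurons: {np.mean(li > 0):.0%}, "
      f"sub-linear neurons: {np.mean(li < 0):.0%}")
```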


2020 ◽  
Author(s):  
Evan H. Lyall ◽  
Daniel P. Mossing ◽  
Scott R. Pluta ◽  
Amir Dudai ◽  
Hillel Adesnik

Abstract: How cortical circuits build representations of complex objects is poorly understood. The massive dimensional expansion from the thalamus to the primary sensory cortex may enable sparse, comprehensive representations of higher order features to facilitate object identification. To generate such a code, cortical neurons must integrate broadly over space, yet simultaneously obtain sharp tuning to specific stimulus features. The logic of cortical integration that may synthesize such a sparse, high dimensional code for complex features is not known. To address this question, we probed the integration and population coding of higher order stimuli in the somatosensory and visual cortices of awake mice using two-photon calcium imaging across cortical layers. We found that somatosensory and visual cortical neurons sum highly specific combinations of sensory inputs supra-linearly, but integrate other inputs sub-linearly, leading to selective responses to higher order features. This integrative process generates a sparse, but comprehensive code for complex stimuli from the earliest stages of cortical processing. These results from multiple sensory modalities imply that input-specific supra-linear summation may represent a widespread cortical mechanism for the synthesis of higher order feature codes. This new mechanism may explain how the brain exploits the thalamocortical expansion of dimensionality to encode arbitrary complex features of sensory stimuli.


2021 ◽  
Vol 44 (1) ◽  
Author(s):  
Rainer W. Friedrich ◽  
Adrian A. Wanner

The dense reconstruction of neuronal wiring diagrams from volumetric electron microscopy data has the potential to generate fundamentally new insights into mechanisms of information processing and storage in neuronal circuits. Zebrafish provide unique opportunities for dynamical connectomics approaches that combine reconstructions of wiring diagrams with measurements of neuronal population activity and behavior. Such approaches have the power to reveal higher-order structure in wiring diagrams that cannot be detected by sparse sampling of connectivity and that is essential for neuronal computations. In the brain stem, recurrently connected neuronal modules were identified that can account for slow, low-dimensional dynamics in an integrator circuit. In the spinal cord, connectivity specifies functional differences between premotor interneurons. In the olfactory bulb, tuning-dependent connectivity implements a whitening transformation that is based on the selective suppression of responses to overrepresented stimulus features. These findings illustrate the potential of dynamical connectomics in zebrafish to analyze the circuit mechanisms underlying higher-order neuronal computations.
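
For the olfactory-bulb result, the computation named above can be illustrated abstractly. The sketch below is a conceptual stand-in (ZCA whitening of a toy response matrix; it is not the reconstructed circuit mechanism): it shows how whitening suppresses an overrepresented shared component and decorrelates the population's stimulus response patterns.

```python
import numpy as np

# Minimal sketch of a whitening transformation of population activity, as a
# conceptual analogue of the circuit computation described above (ZCA
# whitening is an illustrative stand-in, not the reconstructed circuit).

rng = np.random.default_rng(1)
n_neurons, n_stimuli = 100, 8
# correlated "raw" responses: all stimuli share an overrepresented feature
shared = rng.normal(size=n_neurons)
raw = 2.0 * np.outer(shared, np.ones(n_stimuli)) + rng.normal(size=(n_neurons, n_stimuli))

def pattern_correlation(x):
    c = np.corrcoef(x, rowvar=False)        # stimulus-by-stimulus correlations
    return c[np.triu_indices_from(c, k=1)].mean()

# ZCA whitening across neurons: decorrelates and equalizes response patterns
cov = np.cov(raw)
vals, vecs = np.linalg.eigh(cov)
zca = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-6)) @ vecs.T
white = zca @ (raw - raw.mean(axis=1, keepdims=True))

print(f"mean pattern correlation: raw {pattern_correlation(raw):.2f}, "
      f"whitened {pattern_correlation(white):.2f}")
```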


2017 ◽  
Author(s):  
Amy M. Ni ◽  
Douglas A. Ruff ◽  
Joshua J. Alberts ◽  
Jen Symmonds ◽  
Marlene R. Cohen

The trial-to-trial response variability that is shared between pairs of neurons (termed spike count correlations [1], or r_SC) has been the subject of many recent studies, largely because it might limit the amount of information that can be encoded by neuronal populations. Spike count correlations are flexible and change depending on task demands [2-7]. However, the relationship between correlated variability and information coding is a matter of current debate [2-14]. This debate has been difficult to resolve because testing the theoretical predictions would require simultaneous recordings from an experimentally unfeasible number of neurons. We hypothesized that if correlated variability limits population coding, then spike count correlations in visual cortex should (a) covary with subjects’ performance on visually guided tasks and (b) lie along the dimensions in neuronal population space that contain information that is used to guide behavior. We focused on two processes that are known to improve visual performance: visual attention, which allows observers to focus on important parts of a visual scene [15-17], and perceptual learning, which slowly improves observers’ ability to discriminate specific, well-practiced stimuli [18-20]. Both attention and learning improve performance on visually guided tasks, but the two processes operate on very different timescales and are typically studied using different perceptual tasks. Here, by manipulating attention and learning in the same task, subjects, trials, and neuronal populations, we show that there is a single, robust relationship between correlated variability in populations of visual neurons and performance on a change-detection task. We also propose an explanation for the mystery of how correlated variability might affect performance: it is oriented along the dimensions of population space used by the animal to make perceptual decisions. Our results suggest that attention and learning affect the same aspects of the neuronal population activity in visual cortex, which may be responsible for learning- and attention-related improvements in behavioral performance. More generally, our study provides a framework for leveraging the activity of simultaneously recorded populations of neurons, cognitive factors, and perceptual decisions to understand the neuronal underpinnings of behavior.
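
The hypothesized link between correlated variability and behavior can be made concrete with a toy simulation (entirely illustrative; it is not the study's data or analysis code): shared variability degrades a linear readout only when it lies along the population dimension that distinguishes the stimuli.

```python
import numpy as np

# Illustrative simulation: correlated variability only limits a linear readout
# when it lies along the dimension of population space that distinguishes the
# stimuli. All names and parameters are toy choices.

rng = np.random.default_rng(2)
n_neurons, n_trials = 50, 2000
signal_axis = rng.normal(size=n_neurons)
signal_axis /= np.linalg.norm(signal_axis)          # stimulus-coding dimension
mu_a, mu_b = 10.0, 10.0 + signal_axis               # mean responses to stimuli A and B

def discriminability(noise_axis):
    """Private noise plus shared noise along a chosen population dimension."""
    shared = rng.normal(scale=2.0, size=(n_trials, 1)) * noise_axis
    a = mu_a + rng.normal(size=(n_trials, n_neurons)) + shared
    b = mu_b + rng.normal(size=(n_trials, n_neurons)) + shared
    # linear readout along the coding dimension; d' measures discriminability
    pa, pb = a @ signal_axis, b @ signal_axis
    return (pb.mean() - pa.mean()) / np.sqrt(0.5 * (pa.var() + pb.var()))

orthogonal = rng.normal(size=n_neurons)
orthogonal -= (orthogonal @ signal_axis) * signal_axis
orthogonal /= np.linalg.norm(orthogonal)

print(f"d' with shared noise along the coding dimension: {discriminability(signal_axis):.2f}")
print(f"d' with shared noise orthogonal to it:           {discriminability(orthogonal):.2f}")
```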


2010 ◽  
Vol 103 (6) ◽  
pp. 3123-3138 ◽  
Author(s):  
James M. G. Tsui ◽  
J. Nicholas Hunter ◽  
Richard T. Born ◽  
Christopher C. Pack

Neurons in the primate extrastriate cortex are highly selective for complex stimulus features such as faces, objects, and motion patterns. One explanation for this selectivity is that neurons in these areas carry out sophisticated computations on the outputs of lower-level areas such as primary visual cortex (V1), where neuronal selectivity is often modeled in terms of linear spatiotemporal filters. However, it has long been known that such simple V1 models are incomplete because they fail to capture important nonlinearities that can substantially alter neuronal selectivity for specific stimulus features. Thus a key step in understanding the function of higher cortical areas is the development of realistic models of their V1 inputs. We have addressed this issue by constructing a computational model of the V1 neurons that provide the strongest input to extrastriate cortical middle temporal (MT) area. We find that a modest elaboration to the standard model of V1 direction selectivity generates model neurons with strong end-stopping, a property that is also found in the V1 layers that provide input to MT. With this computational feature in place, the seemingly complex properties of MT neurons can be simulated by assuming that they perform a simple nonlinear summation of their inputs. The resulting model, which has a very small number of free parameters, can simulate many of the diverse properties of MT neurons. In particular, we simulate the invariance of MT tuning curves to the orientation and length of tilted bar stimuli, as well as the accompanying temporal dynamics. We also show how this property relates to the continuum from component to pattern selectivity observed when MT neurons are tested with plaids. Finally, we confirm several key predictions of the model by recording from MT neurons in the alert macaque monkey. Overall our results demonstrate that many of the seemingly complex computations carried out by high-level cortical neurons can in principle be understood by examining the properties of their inputs.
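
The component-to-pattern continuum mentioned above is commonly quantified with a partial-correlation analysis of plaid responses; the sketch below assumes that standard metric (the abstract does not specify the authors' exact procedure) and applies it to two toy model cells.

```python
import numpy as np

# Hypothetical sketch of the standard pattern/component partial-correlation
# analysis for plaid responses, applied to two toy cells. Illustrative only;
# not the model or analysis described in the abstract.

directions = np.arange(0, 360, 15)                   # tested motion directions (deg)

def von_mises(theta_deg, pref=0.0, kappa=3.0):
    return np.exp(kappa * (np.cos(np.deg2rad(theta_deg - pref)) - 1.0))

grating = von_mises(directions)                                    # grating tuning
component_pred = 0.5 * (von_mises(directions, -60) + von_mises(directions, 60))
pattern_pred = grating                                             # plaid ~ gratings

def partial_corr(x, y, z):
    rxy, rxz, ryz = (np.corrcoef(a, b)[0, 1] for a, b in ((x, y), (x, z), (y, z)))
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

rng = np.random.default_rng(3)
noise = rng.normal(0.0, 0.05, directions.size)       # breaks exact degeneracy
for label, plaid in (("component-like cell", component_pred + noise),
                     ("pattern-like cell", pattern_pred + noise)):
    rp = partial_corr(plaid, pattern_pred, component_pred)
    rc = partial_corr(plaid, component_pred, pattern_pred)
    print(f"{label}: partial r_pattern = {rp:.2f}, partial r_component = {rc:.2f}")
```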


2006 ◽  
Vol 96 (3) ◽  
pp. 1602-1614 ◽  
Author(s):  
K. Karmeier ◽  
J. H. van Hateren ◽  
R. Kern ◽  
M. Egelhaaf

In sensory systems, information is encoded by the activity of populations of neurons. To analyze the coding properties of neuronal populations, stimuli have usually been used that are much simpler than those encountered in real life. Only recently has it become possible to stimulate visual interneurons of the blowfly with naturalistic visual stimuli reconstructed from eye movements measured during free flight. Here we therefore use naturalistic optic flow to investigate the coding properties of a small population of identified visual interneurons in the blowfly, the so-called VS and HS neurons. These neurons are motion sensitive and directionally selective and are assumed to extract information about the animal's self-motion from optic flow. We show that the responses of VS and HS neurons are mainly shaped by the characteristic dynamical properties of the fly's saccadic flight and gaze strategy. Individual neurons encode information about both the rotational and the translational components of the animal's self-motion, so the information carried by any single neuron is ambiguous. These ambiguities can be reduced by considering neuronal population activity: the joint responses of different subpopulations of VS and HS neurons can provide unambiguous information about the three rotational and the three translational components of the animal's self-motion and also, indirectly, about the three-dimensional layout of the environment.
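
The claim that single-cell responses are ambiguous while the population is not can be illustrated with a toy linear mixing model (an assumption for illustration, not the authors' encoding model):

```python
import numpy as np

# Toy illustration: each neuron mixes rotational and translational self-motion
# components, so single-cell responses are ambiguous, but the joint activity of
# a population lets a linear decoder recover all six components.

rng = np.random.default_rng(4)
n_neurons, n_samples = 20, 500
mixing = rng.normal(size=(n_neurons, 6))             # sensitivity to 3 rot + 3 trans
self_motion = rng.normal(size=(6, n_samples))        # time course of the six components
responses = mixing @ self_motion + 0.1 * rng.normal(size=(n_neurons, n_samples))

# least-squares decoder fit on the full population
decoder, *_ = np.linalg.lstsq(responses.T, self_motion.T, rcond=None)
err_pop = np.mean(((responses.T @ decoder).T - self_motion) ** 2)

# decoding from a single neuron leaves most of the variance unexplained
single, *_ = np.linalg.lstsq(responses[:1].T, self_motion.T, rcond=None)
err_single = np.mean(((responses[:1].T @ single).T - self_motion) ** 2)

print(f"mean squared decoding error: population {err_pop:.3f}, single neuron {err_single:.3f}")
```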


2000 ◽  
Vol 12 (7) ◽  
pp. 1519-1529 ◽  
Author(s):  
Christian W. Eurich ◽  
Stefan D. Wilke

Neural responses in sensory systems are typically triggered by a multitude of stimulus features. Using information theory, we study the encoding accuracy of a population of stochastically spiking neurons characterized by different tuning widths for the different features. The optimal encoding strategy for representing one feature most accurately consists of narrow tuning in the dimension to be encoded, to increase the single-neuron Fisher information, and broad tuning in all other dimensions, to increase the number of active neurons. Extremely narrow tuning without sufficient receptive field overlap will severely worsen the coding. This implies the existence of an optimal tuning width for the feature to be encoded. Empirically, only a subset of all stimulus features will normally be accessible. In this case, relative encoding errors can be calculated that yield a criterion for the function of a neural population based on the measured tuning curves.
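
A small numerical sketch can make the stated trade-off concrete. It assumes Poisson spiking and Gaussian tuning on a fixed two-dimensional grid of preferred features (a finite-population stand-in for the paper's analytical treatment): the worst-case Fisher information about the encoded feature peaks at an intermediate width sigma1, collapses when tuning is too narrow to overlap, and grows when tuning along the unencoded feature (sigma2) is broadened.

```python
import numpy as np

# Numerical sketch under stated assumptions (Poisson spiking, Gaussian tuning
# on a fixed 2-D grid of preferred features): worst-case Fisher information
# about feature 1 as a function of the tuning widths sigma1 and sigma2.

r_max = 10.0
centers = np.linspace(0.0, 1.0, 21)
c1, c2 = np.meshgrid(centers, centers, indexing="ij")       # preferred features
stim = np.linspace(0.35, 0.65, 31)                          # interior test stimuli
s1, s2 = np.meshgrid(stim, stim, indexing="ij")

def worst_case_fisher(sigma1, sigma2):
    info = np.empty(s1.shape)
    for i in np.ndindex(s1.shape):
        d1, d2 = s1[i] - c1, s2[i] - c2
        rate = r_max * np.exp(-0.5 * (d1 / sigma1) ** 2 - 0.5 * (d2 / sigma2) ** 2)
        info[i] = np.sum(rate * (d1 / sigma1 ** 2) ** 2)    # Poisson Fisher info, feature 1
    return info.min()                                       # worst case over stimuli

sigma1s = np.linspace(0.005, 0.3, 60)
for sigma2 in (0.05, 0.3):
    j = np.array([worst_case_fisher(s, sigma2) for s in sigma1s])
    print(f"sigma2 = {sigma2}: worst-case info peaks at sigma1 ≈ "
          f"{sigma1s[np.argmax(j)]:.3f} (value {j.max():.1f})")
```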


2017 ◽  
Author(s):  
Sebastián A. Romano ◽  
Verónica Pérez-Schuster ◽  
Adrien Jouary ◽  
Alessia Candeo ◽  
Jonathan Boulanger-Weill ◽  
...  

The development of new imaging and optogenetics techniques to study the dynamics of large neuronal circuits is generating datasets of unprecedented volume and complexity, demanding the development of appropriate analysis tools. We present a tutorial for the use of a comprehensive computational toolbox for the analysis of neuronal population activity imaging data. It consists of tools for image pre-processing and segmentation, estimation of significant single-neuron single-trial signals, mapping of event-related neuronal responses, detection of activity-correlated neuronal clusters, exploration of population dynamics, and analysis of clusters’ features against surrogate control datasets. These tools are integrated into a modular and versatile processing pipeline that can be adapted to different needs. The clustering module is capable of detecting flexible, dynamically activated neuronal assemblies, consistent with distributed population coding in the brain. We demonstrate the suitability of the toolbox for a variety of calcium imaging datasets and provide a case study to explain its implementation.
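
One stage of such a pipeline, detecting activity-correlated clusters and checking them against a surrogate control, can be sketched as follows (a simplified, hypothetical illustration assuming NumPy and SciPy; it is not the toolbox's implementation):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical, simplified sketch of one pipeline stage: group neurons into
# activity-correlated clusters and compare the result against a time-shuffled
# surrogate control. Toy data and parameters throughout.

rng = np.random.default_rng(5)
n_neurons, n_frames = 60, 1000
assembly = rng.random((3, n_frames)) < 0.02               # three latent assemblies
traces = np.zeros((n_neurons, n_frames))
for k in range(3):                                        # 20 neurons per assembly
    traces[k * 20:(k + 1) * 20] = 3.0 * assembly[k] + 0.5 * rng.normal(size=(20, n_frames))

def cluster_count(x, threshold=0.7):
    corr = np.corrcoef(x)
    dist = 1.0 - corr[np.triu_indices(len(x), k=1)]       # condensed distance matrix
    labels = fcluster(linkage(dist, method="average"), t=threshold, criterion="distance")
    return np.sum(np.bincount(labels) >= 5)               # clusters with >= 5 neurons

shuffled = np.array([rng.permutation(t) for t in traces]) # surrogate control
print(f"clusters in data: {cluster_count(traces)}, "
      f"in shuffled surrogate: {cluster_count(shuffled)}")
```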


2019 ◽  
Author(s):  
Paul C. Bressloff

Abstract: We use stochastic neural field theory to analyze the stimulus-dependent tuning of neural variability in ring attractor networks. We apply perturbation methods to show how the neural field equations can be reduced to a pair of stochastic nonlinear phase equations describing the stochastic wandering of spontaneously formed tuning curves or bump solutions. These equations are analyzed using a modified version of the bivariate von Mises distribution, which is well known in the theory of circular statistics. We first consider a single ring network and derive a simple mathematical expression that accounts for the experimentally observed bimodal (or M-shaped) tuning of neural variability. We then explore the effects of inter-network coupling on stimulus-dependent variability in a pair of ring networks. These could represent populations of cells in two different layers of a cortical hypercolumn linked via vertical synaptic connections, or two different cortical hypercolumns linked by horizontal patchy connections within the same layer. We find that neural variability can be suppressed or facilitated, depending on whether the inter-network coupling is excitatory or inhibitory, and on the relative strengths and biases of the external stimuli to the two networks. These results are consistent with the general observation that increasing the mean firing rate via external stimuli or modulatory drives tends to reduce neural variability.

Author Summary: A topic of considerable current interest concerns the neural mechanisms underlying the suppression of cortical variability following the onset of a stimulus. Since trial-by-trial variability and noise correlations are known to affect the information capacity of neurons, such suppression could improve the accuracy of population codes. One of the main candidate mechanisms is the suppression of noise-induced transitions between multiple attractors, as exemplified by ring attractor networks. The latter have been used to model experimentally measured stochastic tuning curves of directionally selective middle temporal (MT) neurons. In this paper we show how the stimulus-dependent tuning of neural variability in ring attractor networks can be analyzed in terms of the stochastic wandering of spontaneously formed tuning curves or bumps in a continuum neural field model. The advantage of neural fields is that one can derive explicit mathematical expressions for the second-order statistics of neural activity and explore how these depend on important model parameters, such as the level of noise, the strength of recurrent connections, and the input contrast.
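
The mechanism can be caricatured with a short Monte Carlo sketch (a phenomenological stand-in, not the neural-field derivation): if a bump-shaped tuning curve wanders stochastically from trial to trial, response variance is small at the peak and in the far flanks but large on the slopes, producing the M-shaped profile described above.

```python
import numpy as np

# Toy Monte Carlo sketch of the mechanism analyzed above (a phenomenological
# stand-in, not the neural field model): trial-to-trial wandering of a bump's
# position yields variability that is largest on the flanks of the tuning
# curve and smaller at its peak, i.e. an M-shaped variance profile.

rng = np.random.default_rng(6)
theta = np.linspace(-np.pi, np.pi, 181)               # neuron's position on the ring

def bump(phase):                                      # bump of activity centred at `phase`
    return np.exp(3.0 * (np.cos(theta - phase) - 1.0))

phases = rng.vonmises(mu=0.0, kappa=20.0, size=5000)  # stochastic wandering across trials
trials = np.array([bump(p) for p in phases])
variance = trials.var(axis=0)

peak = theta[np.argmax(variance)]
print(f"variance is maximal at |theta| ≈ {abs(peak):.2f} rad, not at theta = 0 "
      f"(variance there: {variance[90]:.4f} vs maximum {variance.max():.4f})")
```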

