Efficient Coding for Natural Images Based on the Sparseness of Neural Coding in V1 across the Stimuli

Author(s):  
Lingzhi Liao ◽  
Siwei Luo ◽  
Lianwei Zhao ◽  
Mei Tan
2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Sidney R. Lehky ◽  
Keiji Tanaka ◽  
Anne B. Sereno

Abstract
When measuring sparseness in neural populations as an indicator of efficient coding, an implicit assumption is that each stimulus activates a different random set of neurons. In other words, population responses to different stimuli are, on average, uncorrelated. Here we examine neurophysiological data from four lobes of macaque monkey cortex, including V1, V2, MT, anterior inferotemporal cortex, lateral intraparietal cortex, the frontal eye fields, and perirhinal cortex, to determine how correlated population responses are. We call the mean correlation the pseudosparseness index, because high pseudosparseness can mimic statistical properties of sparseness without being authentically sparse. In every data set we find high levels of pseudosparseness, ranging from 0.59 to 0.98, substantially greater than the value of 0.00 for authentic sparseness. This was true for synthetic and natural stimuli, as well as for single-electrode and multielectrode data. A model indicates that a key variable producing high pseudosparseness is the standard deviation of spontaneous activity across the population. Consistently high values of pseudosparseness in the data demand reconsideration of the sparse coding literature as well as consideration of the degree to which authentic sparseness provides a useful framework for understanding neural coding in the cortex.
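The pseudosparseness index described above, the mean pairwise correlation between population response vectors to different stimuli, can be sketched as follows. The function name and the synthetic data (a shared spread of baseline activity versus independently drawn activation patterns) are illustrative assumptions, not the paper's code:

```python
import numpy as np

def pseudosparseness_index(responses):
    """Mean pairwise Pearson correlation between population response
    vectors (one row per stimulus, one column per neuron)."""
    r = np.corrcoef(responses)            # stimulus-by-stimulus correlations
    iu = np.triu_indices_from(r, k=1)     # upper triangle, off-diagonal pairs
    return r[iu].mean()

rng = np.random.default_rng(0)

# A large spread of baseline (spontaneous) activity shared across stimuli
# yields highly correlated population responses: high pseudosparseness.
baseline = np.array([0.1, 1.0, 5.0, 20.0])          # per-neuron baselines
correlated = baseline + 0.1 * rng.random((6, 4))    # 6 stimuli x 4 neurons

# Authentic sparseness: each stimulus activates an independent random set
# of neurons, so population responses are uncorrelated on average.
independent = (rng.random((6, 200)) < 0.1).astype(float)
```

The shared baseline dominates the per-stimulus variation, so `pseudosparseness_index(correlated)` is close to 1 even though each stimulus adds only a small, independent response on top of it.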


2003 ◽  
Vol 13 (02) ◽  
pp. 87-91
Author(s):  
Allan Kardec Barros ◽  
Andrzej Cichocki ◽  
Noboru Ohnishi

Redundancy reduction as a form of neural coding has been a topic of broad research interest since the early sixties. A number of strategies have been proposed, but the one attracting the most attention recently assumes that this coding is carried out so that the output signals are mutually independent. In this work we go one step further and suggest a strategy that also deals with non-orthogonal (i.e., "dependent") signals. Moreover, instead of working with the usual squared error, we design a neuron in which the non-linearity operates on the error. It is computationally more economical and, importantly, avoids the permutation/scaling problem [10]. The framework is given a biological background, as we advocate throughout the manuscript that the algorithm fits well with the single-neuron and redundancy-reduction doctrine [5]. Moreover, we show that wavelet-like receptive fields emerge from natural images processed by this algorithm.
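The idea of a nonlinearity acting on the error rather than on its square can be illustrated with a robust, Oja-style single-neuron rule. This is a minimal sketch under assumed details (tanh nonlinearity, synthetic correlated inputs, learning rate), not the paper's exact algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
# Correlated 2-D inputs (columns are "pixels" of a tiny image patch).
X = rng.standard_normal((500, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])

w = np.array([1.0, 0.0])               # single neuron's receptive field
eta = 0.02
for x in X:
    y = w @ x                          # neuron output
    e = x - y * w                      # residual after the neuron's prediction
    # The nonlinearity (tanh) acts on the error itself rather than on its
    # square: one cheap saturating call per component, robust to outliers.
    w += eta * y * np.tanh(e)
    w /= np.linalg.norm(w)             # keep the receptive field unit-norm
```

With the squared-error version (`e` in place of `np.tanh(e)`) this is Oja's rule; saturating the residual bounds the influence of any single sample.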


2021 ◽  
Vol 118 (39) ◽  
pp. e2105115118
Author(s):  
Na Young Jun ◽  
Greg D. Field ◽  
John Pearson

Many sensory systems utilize parallel ON and OFF pathways that signal stimulus increments and decrements, respectively. These pathways consist of ensembles or grids of ON and OFF detectors spanning sensory space. Yet, encoding by opponent pathways raises a question: How should grids of ON and OFF detectors be arranged to optimally encode natural stimuli? We investigated this question using a model of the retina guided by efficient coding theory. Specifically, we optimized spatial receptive fields and contrast response functions to encode natural images given noise and constrained firing rates. We find that the optimal arrangement of ON and OFF receptive fields exhibits a transition between aligned and antialigned grids. The preferred phase depends on detector noise and the statistical structure of the natural stimuli. These results reveal that noise and stimulus statistics produce qualitative shifts in neural coding strategies and provide theoretical predictions for the configuration of opponent pathways in the nervous system.
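The opponent-pathway split of increments and decrements can be illustrated with standard half-wave rectification of a signed contrast signal. The threshold-free rectifier here is a simplifying assumption; the paper optimizes full contrast response functions under noise and firing-rate constraints:

```python
import numpy as np

def on_off_encode(contrast):
    """Split a signed contrast signal into ON (increments) and OFF
    (decrements) channels via half-wave rectification."""
    on = np.maximum(contrast, 0.0)
    off = np.maximum(-contrast, 0.0)
    return on, off

c = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
on, off = on_off_encode(c)
# The signed signal is recoverable as on - off, at the cost of two
# rectified (non-negative firing rate) channels instead of one.
```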


2019 ◽  
Author(s):  
Carlos R. Ponce ◽  
Will Xiao ◽  
Peter F. Schade ◽  
Till S. Hartmann ◽  
Gabriel Kreiman ◽  
...  

Abstract
Finding the best stimulus for a neuron is challenging because it is impossible to test all possible stimuli. Here we used a vast, unbiased, and diverse hypothesis space encoded by a generative deep neural network model to investigate neuronal selectivity in inferotemporal cortex without making any assumptions about natural features or categories. A genetic algorithm, guided by neuronal responses, searched this space for optimal stimuli. Evolved synthetic images evoked higher firing rates than even the best natural images and revealed diagnostic features, independently of category or feature selection. This approach provides a way to investigate neural selectivity in any modality that can be represented by a neural network and challenges our understanding of neural coding in visual cortex.
Highlights
A generative deep neural network interacted with a genetic algorithm to evolve stimuli that maximized the firing of neurons in alert macaque inferotemporal and primary visual cortex.
The evolved images activated neurons more strongly than did thousands of natural images.
Distance in image space from the evolved images predicted responses of neurons to novel images.
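The evolutionary search can be sketched as a plain genetic algorithm over generator latent codes. Here `generator` and `neuron_response` are stand-ins for the deep generative network and the recorded firing rate; they, along with the population size, elitism scheme, and mutation scale, are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = rng.standard_normal(16)              # stand-in "preferred feature"

def generator(z):
    """Placeholder for the generative network: latent code -> image."""
    return np.tanh(z)

def neuron_response(image):
    """Placeholder for a recorded firing rate; prefers images along TARGET."""
    return float(image @ TARGET)

pop = rng.standard_normal((32, 16))           # initial latent-code population
initial_best = max(neuron_response(generator(z)) for z in pop)

for _ in range(50):
    fitness = np.array([neuron_response(generator(z)) for z in pop])
    parents = pop[np.argsort(fitness)[-8:]]   # elitist selection: keep top 8
    children = parents[rng.integers(0, 8, size=24)] + \
        0.1 * rng.standard_normal((24, 16))   # mutated copies of parents
    pop = np.vstack([parents, children])

final_best = max(neuron_response(generator(z)) for z in pop)
```

Because the top codes survive each generation unchanged, the best response never decreases; mutation supplies the variation the neuron's responses then select among.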


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Olivia Rose ◽  
James Johnson ◽  
Binxu Wang ◽  
Carlos R. Ponce

Abstract
Early theories of efficient coding suggested the visual system could compress the world by learning to represent features where information was concentrated, such as contours. This view was validated by the discovery that neurons in posterior visual cortex respond to edges and curvature. Still, it remains unclear what other information-rich features are encoded by neurons in more anterior cortical regions (e.g., inferotemporal cortex). Here, we use a generative deep neural network to synthesize images guided by neuronal responses from across the visuocortical hierarchy, using floating microelectrode arrays in areas V1, V4 and inferotemporal cortex of two macaque monkeys. We hypothesize these images (“prototypes”) represent such predicted information-rich features. Prototypes vary across areas, show moderate complexity, and resemble salient visual attributes and semantic content of natural images, as indicated by the animals’ gaze behavior. This suggests the code for object recognition represents compressed features of behavioral relevance, an underexplored aspect of efficient coding.


2010 ◽  
Vol 22 (7) ◽  
pp. 1812-1836 ◽  
Author(s):  
Laurent U. Perrinet

Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, where a relatively small number of neurons are simultaneously active. Indeed, different models of sparse coding, coupled with Hebbian learning and homeostasis, have been proposed that successfully match the observed emergent response. However, the specific role of homeostasis in learning such sparse representations is still largely unknown. By quantitatively assessing the efficiency of the neural representation during learning, we derive a cooperative homeostasis mechanism that optimally tunes the competition between neurons within the sparse coding algorithm. We apply this homeostasis while learning small patches taken from natural images and compare its efficiency with state-of-the-art algorithms. Results show that while different sparse coding algorithms give similar coding results, the homeostasis provides an optimal balance for the representation of natural images within the population of neurons. Competition in sparse coding is optimized when it is fair. By contributing to optimizing statistical competition across neurons, homeostasis is crucial in providing a more efficient solution to the emergence of independent components.
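The role of homeostasis in keeping the sparse-coding competition fair can be illustrated with a matching-pursuit-style selection step plus a gain that equalizes how often each atom wins. This is a minimal sketch under assumed details (random dictionary, exponential gain update, running selection frequencies), not the paper's exact mechanism:

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.standard_normal((8, 16))      # dictionary: 8-dim inputs, 16 atoms
D /= np.linalg.norm(D, axis=0)        # unit-norm atoms

gain = np.ones(16)                    # homeostatic gain, one per atom
rate = np.zeros(16)                   # running selection frequency per atom
eta = 0.05

for _ in range(2000):
    x = rng.standard_normal(8)
    # Matching-pursuit-style selection: gains rescale the competition.
    k = np.argmax(np.abs(gain * (D.T @ x)))
    rate += eta * (np.eye(16)[k] - rate)      # update selection frequencies
    # Homeostasis: atoms selected more often than average lose gain,
    # so no atom can monopolize the representation.
    gain *= np.exp(-0.5 * eta * (rate - rate.mean()))
```

The gain update pushes all selection frequencies toward their mean, which is the "fair competition" the abstract argues is optimal.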


2016 ◽  
Vol 28 (2) ◽  
pp. 305-326 ◽  
Author(s):  
Xue-Xin Wei ◽  
Alan A. Stocker

Fisher information is generally believed to represent a lower bound on mutual information (Brunel & Nadal, 1998), a result that is frequently used in the assessment of neural coding efficiency. However, we demonstrate that the relation between these two quantities is more nuanced than previously thought. For example, we find that in the small noise regime, Fisher information actually provides an upper bound on mutual information. Generally our results show that it is more appropriate to consider Fisher information as an approximation rather than a bound on mutual information. We analytically derive the correspondence between the two quantities and the conditions under which the approximation is good. Our results have implications for neural coding theories and the link between neural population coding and psychophysically measurable behavior. Specifically, they allow us to formulate the efficient coding problem of maximizing mutual information between a stimulus variable and the response of a neural population in terms of Fisher information. We derive a signature of efficient coding expressed as the correspondence between the population Fisher information and the distribution of the stimulus variable. The signature is more general than previously proposed solutions that rely on specific assumptions about the neural tuning characteristics. We demonstrate that it can explain measured tuning characteristics of cortical neural populations that do not agree with previous models of efficient coding.
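The correspondence the abstract refers to can be written out explicitly; the notation here (stimulus $s$ with prior $p(s)$, population response $\mathbf{r}$, population Fisher information $J(s)$) is a standard rendering of these results, not a quotation from the paper:

```latex
% Fisher-information approximation to mutual information (Brunel & Nadal, 1998):
I(s;\mathbf{r}) \;\approx\; H(s) \;-\; \frac{1}{2}\int p(s)\,
    \log\!\frac{2\pi e}{J(s)}\, ds
% Maximizing this quantity under a constraint on the total Fisher
% information yields the efficient-coding signature relating the
% population Fisher information to the stimulus distribution:
\sqrt{J(s)} \;\propto\; p(s)
```

In the small-noise regime the approximation is tight; outside it, treating the right-hand side as a bound in either direction can mislead, which is the paper's central caution.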


2019 ◽  
Author(s):  
Kai Lu ◽  
Wanyi Liu ◽  
Kelsey Dutta ◽  
Jonathan B. Fritz ◽  
Shihab A. Shamma

Abstract
Natural sounds such as vocalizations often have co-varying acoustic attributes, where one acoustic feature can be predicted from another, resulting in redundancy in neural coding. It has been proposed that sensory systems are able to detect such covariation and adapt to reduce redundancy, leading to more efficient neural coding. Results of recent psychoacoustic studies suggest that, following passive exposure to sounds in which temporal and spectral attributes covaried in a correlated fashion, the auditory system adapts to efficiently encode the two co-varying dimensions as a single dimension, at the cost of lost sensitivity to the orthogonal dimension. Here we explore the neural basis of this psychophysical phenomenon by recording single-unit responses from primary auditory cortex (A1) in awake ferrets exposed passively to stimuli with two correlated attributes in the temporal and spectral domains, similar to those utilized in the psychoacoustic experiments. We found that: (1) the signal-to-noise ratio (SNR) of spike-rate coding of cortical responses driven by sounds with correlated attributes was reduced along the orthogonal dimension, while the SNR remained intact along the exposure dimension; (2) the mutual information of spike temporal coding increased only along the exposure dimension; (3) the correlation between neurons tuned to the two covarying attributes decreased after exposure; and (4) these exposure effects still occurred if sounds were correlated along two acoustic dimensions but varied randomly along a third dimension. These neurophysiological results are consistent with the Efficient Learning Hypothesis and may deepen our understanding of how the auditory system represents acoustic regularities and covariance.
Significance
In the Efficient Coding (EC) hypothesis, proposed by Barlow in 1961, the neural code in sensory systems efficiently encodes natural stimuli by minimizing the number of spikes needed to transmit a sensory signal. Results of recent psychoacoustic studies are consistent with the EC hypothesis, showing that following passive exposure to stimuli with correlated attributes, the auditory system adapts so as to more efficiently encode the two co-varying dimensions as a single dimension. In the current neurophysiological experiments, using a stimulus design and experimental paradigm similar to the psychoacoustic studies of Stilp and colleagues (2010, 2011, 2012, 2016), we recorded responses from single neurons in the auditory cortex of the awake ferret, demonstrating adaptive, efficient neural coding of correlated acoustic properties.
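A standard way to quantify the spike-rate-coding SNR along one stimulus dimension is the variance of the mean response across stimulus values divided by the mean across-trial variance. This is an illustrative definition with synthetic Poisson spike counts; the paper's exact computation may differ:

```python
import numpy as np

def rate_coding_snr(counts):
    """SNR of spike-rate coding along one stimulus dimension.

    counts: array of spike counts, one row per stimulus value along the
    dimension, one column per repeated trial.
    """
    signal = np.var(counts.mean(axis=1))        # spread of the tuning curve
    noise = counts.var(axis=1, ddof=1).mean()   # average trial-to-trial noise
    return signal / noise

rng = np.random.default_rng(3)
# A dimension the neuron is tuned to: firing rate varies with the stimulus.
tuned = rng.poisson(lam=np.array([[2.0], [8.0], [20.0]]), size=(3, 100))
# A dimension the neuron ignores: same rate at every stimulus value.
flat = rng.poisson(lam=10.0, size=(3, 100))
```

A reduced SNR along the orthogonal dimension after exposure corresponds to the tuning curve flattening along that dimension while trial-to-trial variability persists.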

