Coding strategy for surface luminance switches in the primary visual cortex of the awake monkey

2022, Vol 13 (1)
Author(s): Yi Yang, Tian Wang, Yang Li, Weifeng Dai, Guanzhong Yang, ...

Abstract Both surface luminance and edge contrast of an object are essential features for object identification. However, cortical processing of surface luminance remains unclear. In this study, we aim to understand how the primary visual cortex (V1) processes surface luminance information across its different layers. We report that edge-driven responses are stronger than surface-driven responses in V1 input layers, but luminance information is coded more accurately by surface responses. In V1 output layers, the advantage of edge over surface responses increases eightfold, and luminance information is coded more accurately at edges. Further analysis of neural dynamics shows that these substantial changes in neural responses and luminance coding are mainly due to non-local cortical inhibition in V1’s output layers. Our results suggest that non-local cortical inhibition modulates the responses elicited by the surfaces and edges of objects, and that switching the coding strategy in V1 promotes efficient coding for luminance.

2003, Vol 20 (1), pp. 77-84
Author(s): An Cao, Peter H. Schiller

Relative motion information, especially relative speed between different input patterns, is required for solving many complex tasks of the visual system, such as depth perception by motion parallax and motion-induced figure/ground segmentation. However, little is known about the neural substrate for processing relative speed information. To explore the neural mechanisms for relative speed, we recorded single-unit responses to relative motion in the primary visual cortex (area V1) of rhesus monkeys while presenting sets of random-dot arrays moving at different speeds. We found that most V1 neurons were sensitive to the existence of a discontinuity in speed; that is, they showed higher responses when relative motion was presented compared with homogeneous field motion. Seventy percent of the neurons in our sample responded predominantly to relative rather than to absolute speed. Relative speed tuning curves were similar at different center–surround velocity combinations. These relative motion-sensitive neurons in macaque area V1 probably contribute to figure/ground segmentation and motion discontinuity detection.


2020
Author(s): Ali Almasi, Hamish Meffin, Shaun L. Cloherty, Yan Wong, Molis Yunzab, ...

Abstract Visual object identification requires both selectivity for specific visual features that are important to the object’s identity and invariance to feature manipulations. For example, a hand can be shifted in position, rotated, or contracted but still be recognised as a hand. How are the competing requirements of selectivity and invariance built into the early stages of visual processing? Typically, cells in the primary visual cortex are classified as either simple or complex. Both show selectivity for edge orientation, but complex cells develop invariance to edge position within the receptive field (spatial phase). Using a data-driven model that extracts the spatial structures and nonlinearities associated with neuronal computation, we show that the balance between selectivity and invariance in complex cells is more diverse than previously thought. Phase invariance is frequently partial, thus retaining sensitivity to brightness polarity, while invariance to orientation and spatial frequency is more extensive than expected. The invariance arises due to two independent factors: (1) the structure and number of filters and (2) the form of nonlinearities that act upon the filter outputs. Both vary more than previously considered, so primary visual cortex forms an elaborate set of generic feature sensitivities, providing the foundation for more sophisticated object processing.
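The simple/complex distinction described here is classically captured by the energy model, in which a complex cell sums the squared outputs of a quadrature pair of filters, cancelling the stimulus phase. The sketch below illustrates that textbook mechanism (it is not the paper's data-driven model): a half-rectified Gabor filter is phase-selective, while the quadrature-pair energy is nearly flat across phase.

```python
import numpy as np

def gabor(size, freq, theta, phase, sigma):
    """2D Gabor filter: an oriented sinusoid under a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr + phase)

def simple_response(stimulus, filt):
    """Half-rectified linear filtering: selective for spatial phase."""
    return max(float(np.sum(stimulus * filt)), 0.0)

def complex_response(stimulus, freq, theta, sigma, size=21):
    """Energy model: squared outputs of a quadrature (90-degree shifted)
    filter pair are summed, cancelling the stimulus phase."""
    f0 = gabor(size, freq, theta, 0.0, sigma)
    f90 = gabor(size, freq, theta, np.pi / 2, sigma)
    return float(np.sum(stimulus * f0))**2 + float(np.sum(stimulus * f90))**2

# Probe both cell types with a grating at the preferred orientation,
# sliding its spatial phase.
size, freq, theta, sigma = 21, 0.1, 0.0, 5.0
half = size // 2
_, x = np.mgrid[-half:half + 1, -half:half + 1]
phases = np.linspace(0, np.pi, 5)
f0 = gabor(size, freq, theta, 0.0, sigma)
simple = [simple_response(np.cos(2 * np.pi * freq * x + p), f0) for p in phases]
complex_ = [complex_response(np.cos(2 * np.pi * freq * x + p), freq, theta, sigma)
            for p in phases]
# The simple cell's response collapses as the phase shifts; the complex
# cell's response is nearly constant (phase invariance).
print(np.ptp(complex_) / np.mean(complex_) < 0.05, min(simple) < 0.5 * max(simple))
```

The abstract's point is that real complex cells often sit between these two idealized extremes, with invariance that is only partial in phase yet broader than expected in orientation and spatial frequency.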


2017
Author(s): Santiago A. Cadena, George H. Denfield, Edgar Y. Walker, Leon A. Gatys, Andreas S. Tolias, ...

Abstract Despite great efforts over several decades, our best models of primary visual cortex (V1) still predict spiking activity quite poorly when probed with natural stimuli, highlighting our limited understanding of the nonlinear computations in V1. Recently, two approaches based on deep learning have been successfully applied to neural data: On the one hand, transfer learning from networks trained on object recognition worked remarkably well for predicting neural responses in higher areas of the primate ventral stream, but has not yet been used to model spiking activity in early stages such as V1. On the other hand, data-driven models have been used to predict neural responses in the early visual system (retina and V1) of mice, but not primates. Here, we test the ability of both approaches to predict spiking activity in response to natural images in V1 of awake monkeys. Even though V1 sits at an early-to-intermediate stage of the visual system, we found that the transfer learning approach performed similarly well to the data-driven approach, and both outperformed classical linear-nonlinear and wavelet-based feature representations that build on existing theories of V1. Notably, transfer learning using a pre-trained feature space required substantially less experimental time to achieve the same performance. In conclusion, multi-layer convolutional neural networks (CNNs) set the new state of the art for predicting neural responses to natural images in primate V1, and deep features learned for object recognition are better explanations for V1 computation than all previous filter bank theories. This finding strengthens the necessity of V1 models that are multiple nonlinearities away from the image domain, and supports the idea of explaining early visual cortex based on high-level functional goals.

Author summary Predicting the responses of sensory neurons to arbitrary natural stimuli is of major importance for understanding their function. Arguably the most studied cortical area is primary visual cortex (V1), where many models have been developed to explain its function. However, the most successful models built on neurophysiologists’ intuitions still fail to account for spiking responses to natural images. Here, we model spiking activity in primary visual cortex (V1) of monkeys using deep convolutional neural networks (CNNs), which have been successful in computer vision. We both trained CNNs directly to fit the data and used CNNs trained to solve a high-level task (object categorization). With these approaches, we are able to outperform previous models and improve the state of the art in predicting the responses of early visual neurons to natural images. Our results have two important implications. First, since V1 is the result of several nonlinear stages, it should be modeled as such. Second, functional models of entire visual pathways, of which V1 is an early stage, account not only for higher areas of those pathways but also provide useful representations for predicting V1 responses.
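The transfer-learning recipe the abstract describes amounts to freezing a feature space and fitting only a regularized linear readout per neuron. The sketch below illustrates that pattern on synthetic data; the fixed random filters are a hypothetical stand-in for a pretrained CNN's features (the paper uses features from a network trained on object recognition), and all names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained" feature bank: fixed random filters play the role
# of frozen CNN features; only the linear readout below is fit to data.
def features(images, filters):
    """Rectified 'valid' correlations with each fixed filter, mean-pooled."""
    n, h, w = images.shape
    k = filters.shape[-1]
    out = np.empty((n, len(filters)))
    for i, img in enumerate(images):
        for j, f in enumerate(filters):
            resp = [np.sum(img[r:r + k, c:c + k] * f)
                    for r in range(h - k + 1) for c in range(w - k + 1)]
            out[i, j] = np.maximum(resp, 0).mean()  # pooled ReLU output
    return out

def fit_readout(F, rates, lam=1.0):
    """Ridge-regression readout from frozen features to firing rates."""
    d = F.shape[1]
    return np.linalg.solve(F.T @ F + lam * np.eye(d), F.T @ rates)

# Synthetic experiment: a "neuron" whose rate is a noisy linear
# combination of the frozen features.
images = rng.standard_normal((100, 12, 12))
filters = rng.standard_normal((8, 5, 5))
F = features(images, filters)
w_true = rng.standard_normal(8)
rates = F @ w_true + 0.1 * rng.standard_normal(100)

w = fit_readout(F, rates)
corr = np.corrcoef(F @ w, rates)[0, 1]
print(corr > 0.8)  # the readout recovers the synthetic tuning
```

Because only the readout weights are estimated, far fewer stimulus-response pairs are needed than when fitting the whole network, which matches the abstract's observation that the pretrained feature space required substantially less experimental time.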


2020, Vol 30 (9), pp. 5067-5087
Author(s): Ali Almasi, Hamish Meffin, Shaun L. Cloherty, Yan Wong, Molis Yunzab, ...

Abstract Visual object identification requires both selectivity for specific visual features that are important to the object’s identity and invariance to feature manipulations. For example, a hand can be shifted in position, rotated, or contracted but still be recognized as a hand. How are the competing requirements of selectivity and invariance built into the early stages of visual processing? Typically, cells in the primary visual cortex are classified as either simple or complex. Both show selectivity for edge orientation, but complex cells develop invariance to edge position within the receptive field (spatial phase). Using a data-driven model that extracts the spatial structures and nonlinearities associated with neuronal computation, we quantitatively describe the balance between selectivity and invariance in complex cells. Phase invariance is frequently partial, while invariance to orientation and spatial frequency is more extensive than expected. The invariance arises due to two independent factors: (1) the structure and number of filters and (2) the form of nonlinearities that act upon the filter outputs. Both vary more than previously considered, so primary visual cortex forms an elaborate set of generic feature sensitivities, providing the foundation for more sophisticated object processing.


2008, Vol 28 (39), pp. 9890-9894
Author(s): M. A. Williams, T. A. W. Visser, R. Cunnington, J. B. Mattingley

2020
Author(s): Jan C. Frankowski, Andrzej T. Foik, Jiana R. Machhor, David C. Lyon, Robert F. Hunt

Summary Primary sensory areas of the mammalian neocortex have a remarkable degree of plasticity, allowing neural circuits to adapt to dynamic environments. However, little is known about the effect of traumatic brain injury on visual system function. Here we applied a mild focal contusion injury to primary visual cortex (V1) in adult mice. We found that, although V1 was largely intact in brain-injured mice, there was a reduction in the number of inhibitory interneurons that extended into deep cortical layers. In general, we found a preferential reduction of interneurons located in superficial layers, near the impact site, while interneurons positioned in deeper layers were better preserved. Three months after injury, V1 neurons showed dramatically reduced responses to visual stimuli and weaker orientation selectivity and tuning, consistent with the loss of cortical inhibition. Our results demonstrate that V1 neurons no longer robustly and stably encode visual input following a mild traumatic injury.

Highlights
- Inhibitory neurons are lost throughout the injured visual cortex
- Visually evoked potentials are severely degraded after injury
- Injured V1 neurons show weaker selectivity and tuning, consistent with reduced interneurons


2020 ◽  
Author(s):  
Stewart Heitmann ◽  
G. Bard Ermentrout

Abstract The majority of neurons in primary visual cortex respond selectively to bars of light that have a specific orientation and move in a specific direction. The spatial and temporal responses of such neurons are non-separable. How neurons accomplish that computational feat without resorting to explicit time delays is unknown. We propose a novel neural mechanism whereby visual cortex computes non-separable responses by generating endogenous traveling waves of neural activity that resonate with the space-time signature of the visual stimulus. The spatiotemporal characteristics of the response are defined by the local topology of excitatory and inhibitory lateral connections in the cortex. We simulated the interaction between endogenous traveling waves and the visual stimulus using spatially distributed populations of excitatory and inhibitory neurons with Wilson-Cowan dynamics and inhibitory-surround coupling. Our model reliably detected visual gratings that moved with a given speed and direction, provided that we incorporated neural competition to suppress false motion signals in the opposite direction. The findings suggest that endogenous traveling waves in visual cortex can impart direction-selectivity on neural responses without explicit time delays. They also suggest a functional role for motion opponency in eliminating false motion signals.

Author summary It is well established that the so-called ‘simple cells’ of the primary visual cortex respond preferentially to oriented bars of light that move across the visual field with a particular speed and direction. The spatiotemporal responses of such neurons are said to be non-separable because they cannot be constructed from independent spatial and temporal neural mechanisms. Contemporary theories of how neurons compute non-separable responses typically rely on finely tuned transmission delays between signals from disparate regions of the visual field. However, the existence of such delays is controversial.
We propose an alternative neural mechanism for computing non-separable responses that does not require transmission delays. It instead relies on the predisposition of the cortical tissue to spontaneously generate spatiotemporal waves of neural activity that travel with a particular speed and direction. We propose that the endogenous wave activity resonates with the visual stimulus to elicit direction-selective neural responses to visual motion. We demonstrate the principle in computer models and show that competition between opposing neurons robustly enhances their ability to discriminate between visual gratings that move in opposite directions.
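The spatially extended wave model is beyond a short sketch, but the Wilson-Cowan dynamics it builds on can be illustrated with a single excitatory-inhibitory pair. The parameters below are the classic oscillatory regime from Wilson & Cowan (1972), not values from this paper: a constant drive to the excitatory population switches the pair from a near-silent rest state into sustained E-I oscillation, the local mechanism from which the model's traveling waves arise.

```python
import numpy as np

def sigmoid(x, a, theta):
    """Logistic gain shifted so that S(0) = 0, as in Wilson & Cowan (1972)."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def wilson_cowan(P, steps=4000, dt=0.05):
    """Euler integration of one excitatory/inhibitory population pair.
    P is a constant external drive to the excitatory population."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0   # E->E, I->E, E->I, I->I coupling
    aE, thE, aI, thI = 1.3, 4.0, 2.0, 3.7    # gains and firing thresholds
    E, I = 0.1, 0.05
    trace = np.empty(steps)
    for t in range(steps):
        dE = -E + (1.0 - E) * sigmoid(c1 * E - c2 * I + P, aE, thE)
        dI = -I + (1.0 - I) * sigmoid(c3 * E - c4 * I, aI, thI)
        E += dt * dE
        I += dt * dI
        trace[t] = E
    return trace

driven = wilson_cowan(P=1.25)  # stimulated: sustained E-I activity
rest = wilson_cowan(P=0.0)     # unstimulated: activity stays near zero
print(driven[2000:].mean() > rest[2000:].mean() + 0.01)
```

In the paper's full model, many such pairs are coupled across space with an inhibitory surround, so this local oscillation organizes into traveling waves whose speed and direction set the preferred stimulus.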

