Visual processing mode switching regulated by VIP cells

2016
Author(s):
Jung Hoon Lee
Stefan Mihalas

The responses of neurons in mouse primary visual cortex (V1) to visual stimuli depend on behavioral states. Specifically, surround suppression is reduced during locomotion. Although locomotion-induced depolarization of vasoactive intestinal polypeptide-positive (VIP) interneurons can account for the reduction of surround suppression, the functions of VIP cell depolarization are not fully understood. Here we utilize a firing rate model and a computational model to elucidate the potential functions of VIP cell depolarization during locomotion. Our analyses suggest (1) that surround suppression sharpens the visual responses in V1 to a stationary scene, (2) that depolarized VIP cells enhance V1 responses to moving objects by reducing self-induced surround suppression, and (3) that during locomotion V1 neuron responses to some features of moving objects can be selectively enhanced. Thus, VIP cells regulate surround suppression to allow pyramidal neurons to optimally encode visual information independent of behavioral state.
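The disinhibitory logic summarized here (locomotion depolarizes VIP cells, which reduces the surround suppression acting on pyramidal neurons) can be captured in a toy firing-rate sketch. The VIP-to-SST-to-pyramidal motif used below, and every parameter value, are illustrative assumptions and not the authors' model.

```python
# Toy rate-model sketch (assumed, not the authors' implementation) of the
# VIP -> SST -> pyramidal disinhibitory motif behind reduced surround suppression.
import numpy as np

def pyr_rate(center_drive, surround_drive, vip_drive,
             w_sst_pyr=0.8, w_vip_sst=1.5):
    """Steady-state pyramidal rate with subtractive surround suppression.

    SST cells are driven by the surround but inhibited by VIP cells;
    whatever SST activity remains suppresses the pyramidal cell.
    """
    relu = lambda x: np.maximum(x, 0.0)
    sst = relu(surround_drive - w_vip_sst * vip_drive)   # VIP disinhibition
    return relu(center_drive - w_sst_pyr * sst)          # suppressed Pyr rate

# Same stimulus, different behavioral state: with VIP cells depolarized
# ("locomotion"), the surround signal is removed and the response grows.
print(pyr_rate(center_drive=10, surround_drive=6, vip_drive=0))  # stationary
print(pyr_rate(center_drive=10, surround_drive=6, vip_drive=4))  # locomotion
```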

Science
2019
Vol 363 (6422)
pp. 64-69
Author(s):
Riccardo Beltramo
Massimo Scanziani

Visual responses in the cerebral cortex are believed to rely on the geniculate input to the primary visual cortex (V1). Indeed, V1 lesions substantially reduce visual responses throughout the cortex. Visual information also enters the cortex through the superior colliculus (SC), but the contribution of this input to cortical visual responses is less clear. SC lesions affect cortical visual responses less than V1 lesions do, and no visual cortical area appears to rely entirely on SC input. We show that visual responses in a mouse lateral visual cortical area, the postrhinal cortex, are independent of V1 and are abolished upon silencing of the SC. This area outperforms V1 in discriminating moving objects. We thus identify a collicular primary visual cortex that is independent of the geniculo-cortical pathway and is capable of motion discrimination.


1993
Vol 10 (1)
pp. 21-30
Author(s):
Yong-Chang Wang
Shiying Jiang
Barrie J. Frost

The responses of single cells to luminance, color, and computer-generated spots, bars, kinematograms, and motion-in-depth stimuli were studied in the nucleus rotundus of pigeons. Systematic electrode penetrations revealed that there are several functionally distinct subdivisions within rotundus where six classes of visual-selective cells cluster. Cells in the dorsal-posterior zone of the nucleus respond selectively to motion in depth (i.e. an expanding or contracting figure in the visual field). Most cells recorded from the dorsal-anterior region responded selectively to the color of the stimulus. The firing rate of the cells in the anterior-central zone, however, is dramatically modulated by changing the level of illumination over the whole visual field. Cells in the ventral subdivision strongly respond to moving occlusion edges and very small moving objects, with either excitatory or inhibitory responses. These results indicate that visual information processing of color, ambient illumination, and motion in depth are segregated into different subdivisions at the level of nucleus rotundus in the avian brain.


2015
Vol 113 (7)
pp. 2605-2617
Author(s):
Henry J. Alitto
W. Martin Usrey

Extraclassical surround suppression strongly modulates the responses of neurons in the retina, lateral geniculate nucleus (LGN), and primary visual cortex. Although a great deal is known about the spatial properties of extraclassical suppression and the role it serves in stimulus size tuning, relatively little is known about how extraclassical suppression shapes visual processing in the temporal domain. We recorded the spiking activity of retinal ganglion cells and LGN neurons in the cat to test the hypothesis that extraclassical suppression influences temporal features of visual responses in the early visual system. Our results demonstrate that extraclassical suppression not only shifts the distribution of interspike intervals in a manner that decreases the efficacy of neuronal communication, but also decreases the reliability of neuronal responses to visual stimuli and shortens the duration of visual responses, an effect that underlies a rightward shift in the temporal frequency tuning of LGN neurons. Taken together, these results reveal a dynamic relationship between extraclassical suppression and the temporal features of neuronal responses.
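Two of the temporal measures mentioned here, the interspike-interval distribution and trial-to-trial reliability, are straightforward to compute from spike data. The sketch below is an illustrative stand-in rather than the authors' analysis code, and the simulated "classical receptive field only" vs. "with surround" responses are hypothetical.

```python
# Illustrative sketch (not the published analysis) of ISI distributions and
# trial-to-trial reliability, two temporal measures affected by suppression.
import numpy as np

def isi_distribution(spike_times_ms):
    """Interspike intervals (ms) from a 1-D array of spike times (ms)."""
    return np.diff(np.sort(spike_times_ms))

def reliability(trial_responses):
    """Mean pairwise correlation across single-trial response vectors
    (array of shape trials x time bins); higher means more reliable."""
    c = np.corrcoef(trial_responses)
    return np.nanmean(c[np.triu_indices_from(c, k=1)])

print(isi_distribution(np.array([3.0, 10.0, 12.5, 30.0])))  # [7. 2.5 17.5]

# Hypothetical data: a stimulus confined to the classical receptive field
# (stronger, stimulus-locked) vs. a large stimulus engaging the surround.
rng = np.random.default_rng(0)
signal = rng.poisson(6.0, size=50).astype(float)              # shared time course
crf_only = rng.poisson(signal + 1.0, size=(20, 50))           # 20 trials
with_surround = rng.poisson(0.4 * signal + 1.0, size=(20, 50))
print(reliability(crf_only), reliability(with_surround))
```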


Author(s):
Alice Mado Proverbio
Alberto Zani

A hemispheric asymmetry is known for the processing of global vs. local visual information. In this study, we investigated the existence of a hemispheric asymmetry in the visual processing of low vs. high spatial frequency gratings. Event-related potentials (ERPs) were recorded from 30 scalp sites in a group of healthy right-handed volunteers. Six types of stimuli (1.5, 3, and 6 c/deg gratings) were randomly flashed 180 times in the left and right upper hemifields. Stimulus duration was 80 ms, and the ISI ranged from 850 to 1000 ms. Participants either attended and responded to targets on the basis of their spatial frequency and location, or passively viewed the stimuli. The C1 and P1 visual responses, as well as a later selection negativity and a P300 component, were quantified and subjected to repeated-measures ANOVAs. Overall, responses were faster for stimuli in the right visual field (RVF), suggesting a left-hemispheric advantage for attentional selection of local elements. Similarly, analysis of the mean area amplitude of the C1 (60-110 ms) sensory response showed a stronger attentional effect (F+L+ vs. F-L+) over left occipital areas, suggesting that this hemispheric asymmetry is sensory in nature.
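As a concrete illustration of the component-quantification step, the sketch below computes the mean amplitude of an ERP within a fixed latency window (here 60-110 ms for C1); values like these would then enter the repeated-measures ANOVAs mentioned above. The data shapes and numbers are hypothetical assumptions, not the authors' pipeline.

```python
# Minimal sketch (assumed, not the authors' pipeline) of quantifying an ERP
# component as the mean amplitude within a latency window, e.g. C1 at 60-110 ms.
import numpy as np

def mean_window_amplitude(erp, times_ms, window=(60, 110)):
    """Mean amplitude of the last axis of `erp` within the latency window."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return erp[..., mask].mean(axis=-1)

# Hypothetical single-subject averages: conditions x 30 scalp sites x time,
# sampled at 1 kHz from -100 to +500 ms around stimulus onset.
times = np.arange(-100, 500)
erps = np.random.randn(4, 30, times.size)      # e.g. F+L+, F+L-, F-L+, F-L-
c1 = mean_window_amplitude(erps, times, window=(60, 110))
print(c1.shape)                                # (4, 30): condition x site values
```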


2020
Author(s):
Liang Liang
Alex Fratzl
Omar El Mansour
Jasmine D.S. Reggiani
Chinfei Chen
...

How sensory information is processed by the brain can depend on behavioral state. In the visual thalamus and cortex, arousal/locomotion is associated with changes in the magnitude of responses to visual stimuli. Here, we asked whether such modulation of visual responses might already occur at an earlier stage in this visual pathway. We measured neural activity of retinal axons using wide-field and two-photon calcium imaging in awake mouse thalamus across arousal states associated with different pupil sizes. Surprisingly, visual responses to drifting gratings in retinal axonal boutons were robustly modulated by arousal level, in a manner that varied across stimulus dimensions and across functionally distinct subsets of boutons. At low and intermediate spatial frequencies, the majority of boutons were suppressed by arousal. In contrast, at high spatial frequencies, the proportions of boutons showing enhancement or suppression were more similar, particularly for boutons tuned to regions of visual space ahead of the mouse. Arousal-related modulation also varied with a bouton's sensitivity to luminance changes and direction of motion, with greater response suppression in boutons tuned to luminance decrements vs. increments, and in boutons preferring motion along directions or axes of optic flow. Together, our results suggest that differential filtering of distinct visual information channels by arousal state occurs at very early stages of visual processing, before the information is transmitted to neurons in the visual thalamus. Such early filtering may provide an efficient means of optimizing central visual processing and perception of state-relevant visual stimuli.
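A common way to summarize this kind of per-bouton state dependence is a modulation index comparing responses in high- vs. low-arousal (large- vs. small-pupil) epochs. The sketch below is an assumed illustration, not the published analysis, and the simulated response values are hypothetical.

```python
# Illustrative sketch (not the published analysis) of a per-bouton arousal
# modulation index: (high - low) / (high + low); negative = suppressed.
import numpy as np

def modulation_index(resp_high, resp_low):
    num, den = resp_high - resp_low, resp_high + resp_low
    return np.where(den > 0, num / den, 0.0)

# Hypothetical mean grating responses for 1000 boutons in the two pupil states.
rng = np.random.default_rng(1)
low_arousal = rng.gamma(2.0, 1.0, 1000)
high_arousal = low_arousal * rng.normal(0.8, 0.2, 1000).clip(min=0)
mi = modulation_index(high_arousal, low_arousal)
print(f"{(mi < 0).mean():.2f} of boutons suppressed by arousal")
```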


1983
Vol 27 (5)
pp. 354-354
Author(s):
Bruce W. Hamill
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ. Because the visual process of conjoining separable features is additive, this difference is reflected in search time as a function of array size: feature conditions yield flat curves associated with parallel search (no increase in search time across array sizes), whereas conjunction conditions yield linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions. However, for each of the two stimulus sets we studied, one configuration stood apart from the others in its set in that it yielded significantly faster response times, and in that conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects observed with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of the distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
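The parallel-vs-serial distinction described above is conventionally read off the slope of the response-time-by-array-size function. A minimal sketch of that slope analysis follows; the RT values are hypothetical and are not data from this study.

```python
# Sketch (hypothetical numbers, not the study's data) of the standard analysis:
# fit RT against array size; near-zero slopes suggest parallel ("pop-out")
# search, while large positive slopes suggest serial search.
import numpy as np

def search_slope_ms_per_item(array_sizes, mean_rts_ms):
    """Least-squares slope of the RT-by-set-size function, in ms per item."""
    slope, _intercept = np.polyfit(array_sizes, mean_rts_ms, 1)
    return slope

sizes = np.array([4, 8, 16, 32])
feature_rts = np.array([520, 525, 530, 528])        # flat curve: parallel search
conjunction_rts = np.array([560, 680, 910, 1380])   # rising curve: serial search
print(search_slope_ms_per_item(sizes, feature_rts))       # ~0.2 ms/item
print(search_slope_ms_per_item(sizes, conjunction_rts))   # ~29 ms/item
```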


1999
Vol 11 (3)
pp. 300-311
Author(s):
Edmund T. Rolls
Martin J. Tovée
Stefano Panzeri

Backward masking can provide evidence about the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been used extensively in psychophysics, there is little direct evidence for the effects of visual masking on neuronal responses. Investigating the effects of a backward masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the response of the neurons is interrupted by the mask. Under conditions in which humans can just identify the stimulus, with stimulus onset asynchronies (SOA) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward masking conditions when two to six faces were shown. We show that the information available decreases greatly as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate because it is the selective part of the firing that is especially attenuated by the mask, not the spontaneous firing, and also because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares to 0.3 bits with only the 16-msec target stimulus shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward masking conditions that allow the neurons to have their main response in 30 msec. This provides evidence for how rapid the processing of visual information is in a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
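The "bits of information" reported here can be estimated, in the simplest case, as the mutual information between stimulus identity and a neuron's spike count. The sketch below uses a plug-in estimator on hypothetical trials; it is a simplified stand-in, not the bias-corrected procedure typically applied to real neuronal data.

```python
# Simplified sketch (assumed, not the authors' estimator) of the mutual
# information between stimulus identity and spike count, reported in bits.
import numpy as np

def mutual_information(stimuli, spike_counts):
    """Plug-in estimate of I(stimulus; response) in bits from paired samples."""
    stimuli = np.asarray(stimuli)
    spike_counts = np.asarray(spike_counts)
    joint = np.zeros((stimuli.max() + 1, spike_counts.max() + 1))
    for s, r in zip(stimuli, spike_counts):
        joint[s, r] += 1
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)      # stimulus marginal
    pr = joint.sum(axis=0, keepdims=True)      # response marginal
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])).sum())

# Hypothetical trials: 6 faces, spike counts higher for the neuron's best face.
rng = np.random.default_rng(2)
stims = rng.integers(0, 6, 600)
counts = rng.poisson(np.where(stims == 0, 8, 2))
print(mutual_information(stims, counts))   # bits; biased upward for small samples
```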


2021
Author(s):
Ning Mei
Roberto Santana
David Soto

Despite advances in the neuroscience of visual consciousness over the last decades, we still lack a framework for understanding the scope of unconscious processing and how it relates to conscious experience. Previous research observed brain signatures of unconscious contents in visual cortex, but these have not been identified in a reliable manner: low trial numbers and signal-detection-theoretic constraints did not allow conscious perception to be decisively ruled out. Critically, the extent to which unconscious content is represented in high-level processing stages along the ventral visual stream and linked prefrontal areas remains unknown. Using a within-subject, high-precision, highly-sampled fMRI approach, we show that unconscious contents, even those associated with null sensitivity, can be reliably decoded from multivoxel patterns that are highly distributed along the ventral visual pathway and also involve prefrontal substrates. Notably, the neural representation in these areas generalised across conscious and unconscious visual processing states, placing constraints on prior findings that fronto-parietal substrates support the representation of conscious contents and suggesting revisions to models of consciousness such as the neuronal global workspace. We then provide a computational model simulation of visual information processing and representation in the absence of perceptual sensitivity, using feedforward convolutional neural networks trained to perform a visual task similar to that of the human observers. The work provides a novel framework for pinpointing the neural representation of unconscious knowledge across different task domains.
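One way to make the claim of generalisation across conscious and unconscious states concrete is a cross-decoding scheme: train a classifier on multivoxel patterns from conscious trials and test it on unconscious trials. The sketch below uses scikit-learn on synthetic patterns; the data, dimensions, and effect sizes are assumptions for illustration, not the published pipeline.

```python
# Sketch (assumed setup, not the published pipeline) of cross-state decoding:
# fit on conscious-trial patterns, evaluate on unconscious-trial patterns.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_voxels = 200
# Hypothetical patterns: two stimulus classes, separate conscious/unconscious sets.
X_conscious = rng.normal(0, 1, (200, n_voxels))
y_conscious = rng.integers(0, 2, 200)
X_conscious[y_conscious == 1, :10] += 0.5          # weak class signal
X_unconscious = rng.normal(0, 1, (100, n_voxels))
y_unconscious = rng.integers(0, 2, 100)
X_unconscious[y_unconscious == 1, :10] += 0.2      # weaker signal when unseen

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_conscious, y_conscious)
print(clf.score(X_unconscious, y_unconscious))     # cross-state decoding accuracy
```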


F1000Research
2013
Vol 2
pp. 58
Author(s):
J Daniel McCarthy
Colin Kupitz
Gideon P Caplovitz

Our perception of an object’s size arises from the integration of multiple sources of visual information, including retinal size, perceived distance, and its size relative to other objects in the visual field. This constructive process is revealed through a number of classic size illusions such as the Delboeuf Illusion, the Ebbinghaus Illusion, and others illustrating size constancy. Here we present a novel variant of the Delboeuf and Ebbinghaus size illusions that we have named the Binding Ring Illusion. The illusion is such that the perceived size of a circular array of elements is underestimated when a circular contour (a binding ring) is superimposed on the array, and overestimated when the binding ring slightly exceeds the overall size of the array. Here we characterize the stimulus conditions that lead to the illusion and the perceptual principles that underlie it. Our findings indicate that the perceived size of an array is susceptible to assimilation toward an explicitly defined superimposed contour. Our results also indicate that the assimilation process takes place at a relatively high level in the visual processing stream, after different spatial frequencies have been integrated and global shape has been constructed. We hypothesize that the Binding Ring Illusion arises because the size of an array of elements is not explicitly defined and can therefore be influenced (through a process of assimilation) by the presence of a superimposed object that does have an explicit size.

