Most Superficial Sublamina of Rat Superior Colliculus: Neuronal Response Properties and Correlates With Perceptual Figure–Ground Segregation

2007 · Vol 98 (1) · pp. 161-177 · S. V. Girman, R. D. Lund

The uppermost layer (stratum griseum superficiale, SGS) of the superior colliculus (SC) provides an important gateway from the retina to the extrastriate visual and visuomotor systems. Most attention has been given to the role of this “visual” SC in saccade generation and target selection, and it is generally considered less important in visual perception. We have found, however, that in the rat SGS1, the most superficial division of the SGS, neurons perform a surprisingly sophisticated analysis of visual information. First, in studying their responses to a variety of flashing stimuli, we found that the neurons respond not to brightness changes per se but to the appearance and/or disappearance of visual shapes in their receptive fields (RFs). Unlike the conventional RFs of neurons at early stages of visual processing, the RFs of SGS1 neurons cannot be described in terms of a fixed spatial distribution of excitatory and inhibitory inputs. Second, SGS1 neurons showed robust orientation tuning to drifting gratings and orientation-specific modulation of the center response by the surround. These features were previously seen only in visual cortical neurons and are considered to be involved in “contour” perception and figure–ground segregation. Third, responses of SGS1 neurons showed complex dynamics; typically, orientation tuning sharpened progressively over successive grating cycles. We conclude that SGS1 neurons carry out a considerably more complex analysis of retinal input than was previously thought. SGS1 may participate in early stages of figure–ground segregation and may have a role in the low-resolution, nonconscious vision encountered after visual decortication.
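As an illustration of the kind of analysis described here, the sketch below assumes hypothetical spike counts per grating orientation over successive grating cycles and estimates the half-width at half-maximum of the tuning curve for each cycle; a shrinking width would correspond to the progressive sharpening reported above. All data and names (`counts`, `half_width_half_max`) are invented for illustration, not taken from the study.

```python
import numpy as np

def half_width_half_max(orientations_deg, rates):
    """Crude half-width at half-maximum (deg) of an orientation tuning curve,
    estimated by counting evenly spaced samples above half of the peak rate."""
    rates = np.asarray(rates, dtype=float)
    step = orientations_deg[1] - orientations_deg[0]
    return 0.5 * np.sum(rates >= rates.max() / 2.0) * step

# Hypothetical responses to 12 grating orientations over 5 successive cycles,
# constructed so that tuning narrows on later cycles (purely illustrative).
orientations = np.arange(0, 180, 15)            # degrees
rng = np.random.default_rng(0)
counts = []
for cycle in range(5):
    sigma = 40 - 5 * cycle                      # narrower tuning on later cycles
    mean_rate = 2 + 10 * np.exp(-0.5 * ((orientations - 90) / sigma) ** 2)
    counts.append(rng.poisson(mean_rate))

widths = [half_width_half_max(orientations, c) for c in counts]
print("HWHM per grating cycle (deg):", np.round(widths, 1))
```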

1999 · Vol 11 (3) · pp. 300-311 · Edmund T. Rolls, Martin J. Tovée, Stefano Panzeri

Backward masking can potentially provide evidence of the time needed for visual processing, a fundamental constraint that must be incorporated into computational models of vision. Although backward masking has been used extensively in psychophysics, there is little direct evidence for the effects of visual masking on neuronal responses. Investigating the effects of a backward-masking paradigm on the responses of neurons in the temporal visual cortex, we have shown that the response of the neurons is interrupted by the mask. Under conditions in which humans can just identify the stimulus, with stimulus onset asynchronies (SOAs) of 20 msec, neurons in macaques respond to their best stimulus for approximately 30 msec. We now quantify the information that is available from the responses of single neurons under backward-masking conditions when two to six faces were shown. We show that the available information decreases greatly as the mask is brought closer to the stimulus. The decrease is more marked than the decrease in firing rate, both because it is the selective part of the firing, not the spontaneous firing, that is especially attenuated by the mask, and because the neuronal response is more variable at short SOAs. However, even at the shortest SOA of 20 msec, the information available is on average 0.1 bits. This compares with 0.3 bits when only the 16-msec target stimulus is shown and a typical value for such neurons of 0.4 to 0.5 bits with a 500-msec stimulus. The results thus show that considerable information is available from neuronal responses even under backward-masking conditions in which the neurons' main response is limited to approximately 30 msec. This demonstrates how rapid visual information processing is within a cortical area and provides a fundamental constraint for understanding how cortical information processing operates.
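The values quoted (0.1 to 0.5 bits) are Shannon information between the identity of the face shown and the neuronal response. A minimal sketch of such a calculation is given below, using a plug-in estimate on hypothetical spike-count data and none of the bias corrections a real analysis would require; it is an illustration of the general quantity, not the authors' procedure.

```python
import numpy as np

def mutual_information_bits(stimulus_ids, spike_counts):
    """Plug-in estimate of I(stimulus; response) in bits from paired trial data."""
    stimulus_ids = np.asarray(stimulus_ids)
    spike_counts = np.asarray(spike_counts)
    stimuli = np.unique(stimulus_ids)
    responses = np.unique(spike_counts)
    # Joint probability table p(s, r) built from trial counts
    joint = np.zeros((stimuli.size, responses.size))
    for i, s in enumerate(stimuli):
        for j, r in enumerate(responses):
            joint[i, j] = np.sum((stimulus_ids == s) & (spike_counts == r))
    joint /= joint.sum()
    ps = joint.sum(axis=1, keepdims=True)   # p(s)
    pr = joint.sum(axis=0, keepdims=True)   # p(r)
    nonzero = joint > 0
    return float(np.sum(joint[nonzero] * np.log2(joint[nonzero] / (ps @ pr)[nonzero])))

# Hypothetical example: 4 face stimuli, 20 trials each, responses as spike counts
rng = np.random.default_rng(1)
stimulus_ids = np.repeat(np.arange(4), 20)
spike_counts = rng.poisson(lam=2 + 3 * stimulus_ids)   # rate depends on the face shown
print(f"I(stimulus; response) ~ {mutual_information_bits(stimulus_ids, spike_counts):.2f} bits")
```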


2005 · Vol 94 (2) · pp. 1336-1345 · Bartlett D. Moore, Henry J. Alitto, W. Martin Usrey

The activity of neurons in primary visual cortex is influenced by the orientation, contrast, and temporal frequency of a visual stimulus. This raises the question of how these stimulus properties interact to shape neuronal responses. While past studies have shown that the bandwidth of orientation tuning is invariant to stimulus contrast, the influence of temporal frequency on orientation-tuning bandwidth is unknown. Here, we investigate the influence of temporal frequency on orientation tuning and direction selectivity in area 17 of ferret visual cortex. For both simple cells and complex cells, measures of orientation-tuning bandwidth (half-width at half-maximum response) are ∼20–25° across a wide range of temporal frequencies. Thus cortical neurons display temporal-frequency-invariant orientation tuning. In contrast, direction selectivity is typically reduced, and occasionally reverses, at nonpreferred temporal frequencies. These results show that the mechanisms contributing to the generation of orientation tuning and direction selectivity are differentially affected by the temporal frequency of a visual stimulus and support the notion that stability of orientation tuning is an important aspect of visual processing.
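The two measures at the core of this study, the half-width at half-maximum of the orientation tuning curve and a direction selectivity index, can be sketched as follows on hypothetical tuning data; the responses, temporal frequencies, and helper names are assumptions for illustration only.

```python
import numpy as np

def hwhm_deg(orientations_deg, rates):
    """Half-width at half-maximum of an orientation tuning curve (degrees),
    estimated from evenly spaced orientation samples."""
    rates = np.asarray(rates, dtype=float)
    step = orientations_deg[1] - orientations_deg[0]
    return 0.5 * np.sum(rates >= rates.max() / 2.0) * step

def dsi(r_pref, r_null):
    """Direction selectivity index: (Rpref - Rnull) / (Rpref + Rnull)."""
    return (r_pref - r_null) / (r_pref + r_null)

# Hypothetical responses of one cell at three temporal frequencies:
# tuning width stays constant while preferred/null responses converge at high TF.
orientations = np.arange(0, 180, 10)                       # degrees
for tf_hz, (pref, null) in {1: (30.0, 6.0), 4: (40.0, 8.0), 12: (18.0, 14.0)}.items():
    rates = 2 + pref * np.exp(-0.5 * ((orientations - 90) / 22.0) ** 2)
    print(f"TF {tf_hz:>2} Hz: HWHM ~ {hwhm_deg(orientations, rates):.0f} deg, "
          f"DSI = {dsi(pref, null):.2f}")
```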


2015 · Vol 113 (7) · pp. 2859-2870 · Carolyn J. Perry, Lauren E. Sergio, J. Douglas Crawford, Mazyar Fallah

Often, the brain receives more sensory input than it can process simultaneously. Spatial attention helps overcome this limitation by preferentially processing input from a behaviorally relevant location. Recent neuropsychological and psychophysical studies suggest that attention is deployed to near-hand space much as the oculomotor system deploys attention to an upcoming gaze position. Here we provide the first neuronal evidence that the presence of a nearby hand enhances orientation selectivity in the early visual processing area V2. When the hand was placed outside the receptive field, responses to the preferred orientation were significantly enhanced without a corresponding significant increase at the orthogonal orientation. Consequently, orientation tuning was also significantly sharpened. In addition, the presence of the hand reduced neuronal response variability. These results indicate that attention is automatically deployed to the space around a hand, improving orientation selectivity. Importantly, this appears to be optimal for motor control of the hand, in contrast to oculomotor mechanisms, which enhance responses without sharpening orientation selectivity. Effector-based mechanisms for visual enhancement thus support not only the spatiotemporal dissociation of gaze and reach but also the optimization of vision for their separate requirements in guiding movements.
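Two quantities implied by these results, the sharpness of orientation tuning and the trial-to-trial response variability, might be quantified as in the sketch below, which compares simulated "hand near" and "hand far" conditions using circular variance and the Fano factor. The simulation and all names are hypothetical and only illustrate the metrics, not the recorded data or the authors' analysis.

```python
import numpy as np

def circular_variance_orientation(orientations_deg, rates):
    """1 - |sum r * exp(2i*theta)| / sum r: 0 = sharply tuned, 1 = untuned
    (orientation treated as periodic over 180 deg)."""
    theta = np.deg2rad(orientations_deg) * 2.0
    vector = np.sum(rates * np.exp(1j * theta))
    return 1.0 - np.abs(vector) / np.sum(rates)

def fano_factor(trial_counts):
    """Spike-count variance divided by mean, a standard variability measure."""
    trial_counts = np.asarray(trial_counts, dtype=float)
    return trial_counts.var(ddof=1) / trial_counts.mean()

# Hypothetical V2 data: trials x orientations spike counts for two hand positions
rng = np.random.default_rng(2)
orientations = np.arange(0, 180, 22.5)

def simulate(gain_at_pref):
    mean = 3 + gain_at_pref * np.exp(-0.5 * ((orientations - 90) / 30.0) ** 2)
    return rng.poisson(mean, size=(40, orientations.size))

for label, gain in [("hand far", 8.0), ("hand near", 14.0)]:
    counts = simulate(gain)
    cv = circular_variance_orientation(orientations, counts.mean(axis=0))
    ff = fano_factor(counts[:, np.argmax(counts.mean(axis=0))])
    print(f"{label}: circular variance = {cv:.2f}, Fano factor at preferred = {ff:.2f}")
```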


1993 · Vol 10 (5) · pp. 811-825 · Simona Celebrini, Simon Thorpe, Yves Trotter, Michel Imbert

To investigate the importance of feedback loops in visual information processing, we analyzed the dynamic aspects of neuronal responses to oriented gratings in cortical area V1 of the awake primate. If recurrent feedback is important in generating orientation selectivity, the initial part of the neuronal response should be relatively poorly selective, and full orientation selectivity should appear only after a delay. Thus, by examining the dynamics of the neuronal responses it should be possible to assess the importance of feedback processes in the development of orientation selectivity. The results were based on a sample of 259 cells recorded in two monkeys, of which 89% were visually responsive. Of these, approximately two-thirds were orientation selective. Response latency varied considerably between neurons, ranging from a minimum of 41 ms to over 150 ms, although most had latencies of 50–70 ms. Orientation tuning (defined as the bandwidth at half-height) ranged from 16 deg to over 90 deg, with a mean value of around 55 deg. By examining the selectivity of these neurons in 10-ms time slices, starting at the onset of the neuronal response, we found that the orientation selectivity of virtually every neuron was fully developed at the very start of the response. Indeed, many neurons showed a marked tendency to respond at somewhat longer latencies to nonoptimally oriented stimuli, with the result that orientation selectivity was highest at the very start of the response. Furthermore, there was no evidence that the neurons with the shortest onset latencies were less selective. Such evidence is inconsistent with the hypothesis that recurrent intracortical feedback plays an important role in the generation of orientation selectivity. Instead, we suggest that orientation selectivity is generated primarily by feedforward mechanisms, including feedforward inhibition. Such a strategy has the advantage of allowing orientation to be computed rapidly and avoids the initially poorly selective responses that characterize processing involving recurrent loops.
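A hedged sketch of the kind of time-sliced analysis described (selectivity computed in successive 10-ms windows from response onset) is given below, using a vector-strength measure of orientation selectivity on a hypothetical dictionary of spike times; the function, data layout, and values are assumptions, not the authors' code. Selectivity that is already high in the earliest slice, as reported here, is the signature argued to favor feedforward mechanisms.

```python
import numpy as np

def selectivity_per_slice(spike_times_by_orientation, onset_ms, n_slices=10, slice_ms=10.0):
    """Orientation selectivity (vector strength, 0 = untuned, 1 = sharply tuned) in
    successive time slices after response onset. The input maps orientation (deg)
    to a list of spike-time arrays (ms), one per trial."""
    orientations = np.array(sorted(spike_times_by_orientation))
    theta = np.deg2rad(orientations) * 2.0      # orientation is periodic over 180 deg
    selectivity = []
    for k in range(n_slices):
        t0, t1 = onset_ms + k * slice_ms, onset_ms + (k + 1) * slice_ms
        rates = []
        for ori in orientations:
            trials = spike_times_by_orientation[ori]
            counts = [np.sum((t >= t0) & (t < t1)) for t in trials]
            rates.append(np.mean(counts))
        rates = np.array(rates, dtype=float)
        if rates.sum() == 0:
            selectivity.append(0.0)
            continue
        vector = np.sum(rates * np.exp(1j * theta))
        selectivity.append(np.abs(vector) / rates.sum())
    return np.array(selectivity)

# Hypothetical usage: two orientations, three trials each, spike times in ms
data = {
    0:  [np.array([62.0, 71.0]), np.array([65.0]), np.array([80.0])],
    90: [np.array([55.0, 58.0, 61.0, 64.0]), np.array([57.0, 60.0, 63.0]),
         np.array([56.0, 59.0, 70.0])],
}
print(np.round(selectivity_per_slice(data, onset_ms=50.0, n_slices=4), 2))
```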


2002 · Vol 357 (1424) · pp. 1063-1072 · John H. R. Maunsell, Erik P. Cook

Attention to a visual stimulus typically increases the responses of cortical neurons to that stimulus. Because many studies have shown a close relationship between the performance of individual neurons and behavioural performance of animal subjects, it is important to consider how attention affects this relationship. Measurements of behavioural and neuronal performance taken from rhesus monkeys while they performed a motion detection task with two attentional states show that attention alters the relationship between behaviour and neuronal response. Notably, attention affects the relationship differently in different cortical visual areas. This indicates that a close relationship between neuronal and behavioural performance on a given task persists over changes in attentional state only within limited regions of visual cortex.
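One common way to relate single-neuron responses to behavioral performance in a detection task is an ROC analysis comparing responses on detected versus missed trials. The sketch below illustrates that general idea on invented firing rates in two attentional states; it is not the specific analysis used in the work reviewed here, and all numbers are hypothetical.

```python
import numpy as np

def roc_auc(responses_detected, responses_missed):
    """Area under the ROC curve: probability that a response from a detected trial
    exceeds one from a missed trial (0.5 = no relationship, 1.0 = perfect)."""
    detected = np.asarray(responses_detected, dtype=float)
    missed = np.asarray(responses_missed, dtype=float)
    greater = (detected[:, None] > missed[None, :]).sum()
    ties = (detected[:, None] == missed[None, :]).sum()
    return (greater + 0.5 * ties) / (detected.size * missed.size)

# Hypothetical firing rates on detected vs. missed trials, in two attentional states
rng = np.random.default_rng(3)
for state, shift in [("attended", 6.0), ("unattended", 1.5)]:
    detected = rng.normal(20 + shift, 5, size=200)
    missed = rng.normal(20, 5, size=200)
    print(f"{state}: neuron-behaviour AUC = {roc_auc(detected, missed):.2f}")
```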


2021 · Vol 11 (8) · pp. 3397 · Gustavo Assunção, Nuno Gonçalves, Paulo Menezes

Human beings have developed a remarkable ability to integrate information from various sensory sources by exploiting their inherent complementarity. Perceptual capabilities are thereby heightened, enabling, for instance, the well-known "cocktail party" and McGurk effects, i.e., the disambiguation of speech from a panoply of sound signals. This fusion ability is also key in refining the perception of sound source location, as in distinguishing whose voice is being heard in a group conversation. Furthermore, neuroscience has identified the superior colliculus as the brain region responsible for this modality fusion, and a handful of biological models have been proposed to capture its underlying neurophysiological process. Drawing inspiration from one of these models, this paper presents a methodology for effectively fusing correlated auditory and visual information for active speaker detection. Such an ability can have a wide range of applications, from teleconferencing systems to social robotics. The detection approach initially routes auditory and visual information through two specialized neural network structures. The resulting embeddings are fused via a novel layer based on the superior colliculus, whose topological structure emulates the spatial cross-mapping of neurons across unimodal perceptual fields. The validation process employed two publicly available datasets, with the achieved results confirming and greatly surpassing initial expectations.
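A minimal sketch of the kind of fusion layer described, written in PyTorch with invented dimensions and an outer-product cross-mapping standing in for the superior-colliculus-inspired topology, is shown below. It is an assumption-laden illustration of the general architecture (two unimodal embeddings fused into a speaker score), not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossMappingFusion(nn.Module):
    """Illustrative fusion layer: audio and visual embeddings are projected onto a
    shared map, combined via an outer product (crudely emulating cross-mapping of
    unimodal fields), then read out as an active-speaker score. Hypothetical sketch."""

    def __init__(self, audio_dim=128, visual_dim=128, map_size=16):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, map_size)
        self.visual_proj = nn.Linear(visual_dim, map_size)
        self.readout = nn.Linear(map_size * map_size, 1)

    def forward(self, audio_emb, visual_emb):
        a = torch.relu(self.audio_proj(audio_emb))        # (batch, map_size)
        v = torch.relu(self.visual_proj(visual_emb))      # (batch, map_size)
        fused = torch.einsum("bi,bj->bij", a, v)          # pairwise audio-visual map
        return torch.sigmoid(self.readout(fused.flatten(1)))  # speaker probability

# Usage with random embeddings standing in for the two specialized networks' outputs
model = CrossMappingFusion()
audio_emb = torch.randn(4, 128)
visual_emb = torch.randn(4, 128)
print(model(audio_emb, visual_emb).shape)   # torch.Size([4, 1])
```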


1983 · Vol 27 (5) · pp. 354-354 · Bruce W. Hamill, Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ: Since the visual process of conjoining separable features is additive, this fact is reflected in search time as a function of array size, with feature conditions yielding flat curves associated with parallel search (no increase in search time across array sizes) and conjunction conditions yielding linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all of the conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions. However, for each of the two sets of stimuli we studied, there was one configuration that stood apart from the others in its set in that it yielded significantly faster response times, and in that conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these major effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We have found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We have also been able to characterize the effects we have found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
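The parallel-versus-serial distinction used above rests on the slope of response time as a function of array size: roughly flat for feature (parallel) search, linearly increasing for conjunction (serial) search. The sketch below fits those slopes on invented response times; the numbers and the 5 ms/item cutoff are assumptions for illustration only.

```python
import numpy as np

# Hypothetical mean response times (ms) at each array size for one target condition
array_sizes = np.array([4, 8, 16, 32])
rt_feature = np.array([520, 524, 521, 526], dtype=float)       # roughly flat
rt_conjunction = np.array([540, 610, 750, 1030], dtype=float)  # grows with set size

for label, rt in [("feature", rt_feature), ("conjunction", rt_conjunction)]:
    slope, intercept = np.polyfit(array_sizes, rt, 1)
    kind = "parallel-like (flat)" if slope < 5 else "serial-like (increasing)"
    print(f"{label}: {slope:.1f} ms/item, intercept {intercept:.0f} ms -> {kind}")
```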


2012 · Vol 2012 · pp. 1-17 · Aurel Vasile Martiniuc, Alois Knoll

Information about a visual stimulus is encoded in spike trains at the output of the retina by retinal ganglion cells (RGCs). Among these, directionally selective cells (DSRGCs) signal the direction of stimulus motion. DSRGC spike trains show pronounced periods of short interspike intervals (ISIs) framed by periods of isolated spikes. Here we use two types of visual stimulus, white noise and drifting bars, and show that short-ISI spikes in DSRGC spike trains are more often correlated with the preferred stimulus feature (that is, the direction of stimulus motion) and carry more information than longer-ISI spikes. First, our results show that the correlation between the stimulus and the recorded neuronal response is strongest for short-ISI spiking activity and decreases as the ISI becomes larger. Using the drifting-bar stimulus, we then found that as the ISI becomes shorter, directional selectivity improves and information rates increase. Interestingly, for the less commonly encountered type of DSRGC, the ON-DSRGC, the short-ISI distribution and information rates revealed consistent differences compared with the other directionally selective cell type, the ON-OFF DSRGC. Taken together, these findings suggest that ISI-based temporal filtering constitutes a mechanism for visual information processing at the output of the retina, feeding higher stages of the early visual system.
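A hedged sketch of the ISI-based split described here, classifying spikes as short-ISI or isolated and computing a direction selectivity index for each group, is given below; the 10-ms threshold, the spike trains, and the helper names are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

def split_by_isi(spike_times, threshold_ms=10.0):
    """Split a spike train into 'short-ISI' spikes (preceded or followed by another
    spike within threshold_ms) and isolated spikes."""
    t = np.sort(np.asarray(spike_times, dtype=float))
    isi_prev = np.diff(t, prepend=-np.inf)
    isi_next = np.diff(t, append=np.inf)
    short = (isi_prev <= threshold_ms) | (isi_next <= threshold_ms)
    return t[short], t[~short]

def dsi(r_pref, r_null):
    """Direction selectivity index: (Rpref - Rnull) / (Rpref + Rnull)."""
    return (r_pref - r_null) / (r_pref + r_null)

# Hypothetical spike trains (ms) for one cell: preferred vs. null motion direction
rng = np.random.default_rng(4)
pref_train = rng.uniform(0, 1000, size=120)   # denser train in the preferred direction
null_train = rng.uniform(0, 1000, size=40)

pref_short, pref_iso = split_by_isi(pref_train)
null_short, null_iso = split_by_isi(null_train)
print("DSI from short-ISI spikes:", round(dsi(pref_short.size, null_short.size), 2))
print("DSI from isolated spikes: ", round(dsi(pref_iso.size, null_iso.size), 2))
```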

