Spatially tuned normalization explains attention modulation variance within neurons

2017 ◽  
Vol 118 (3) ◽  
pp. 1903-1913 ◽  
Author(s):  
Amy M. Ni ◽  
John H. R. Maunsell

Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803–813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375–1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism.

NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical area can be largely explained by between-neuron differences in normalization strength. Here we demonstrate that attention modulation size varies within neurons as well and that this variance is largely explained by within-neuron differences in normalization strength. We provide a new spatially tuned normalization model that explains this broad range of observed normalization and attention effects.
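
A minimal Python sketch of the general tuned-normalization family referenced above (in the spirit of Lee and Maunsell 2009 and Ni et al. 2012), not the paper's exact equal-maximum-suppression formulation; the excitatory drives, attentional gains, and the single tuning parameter alpha are illustrative assumptions.

```python
import numpy as np

def tuned_normalization_response(L1, L2, c1=1.0, c2=1.0,
                                 alpha=0.5, beta1=1.0, beta2=1.0,
                                 sigma=0.1):
    """Response of a model neuron to two stimuli in its receptive field.

    L1, L2 : excitatory drives of stimulus 1 and 2 (e.g., preferred / null)
    c1, c2 : stimulus contrasts
    alpha  : tuned-normalization weight -- how strongly stimulus 2
             suppresses stimulus 1's drive (1.0 = untuned, <1 = tuned)
    beta1, beta2 : attentional gains applied to each stimulus
    sigma  : semi-saturation constant
    """
    excitation = beta1 * c1 * L1 + beta2 * c2 * L2
    suppression = beta1 * c1 + alpha * beta2 * c2 + sigma
    return excitation / suppression

# Attention modulation index for attending stimulus 1 vs. stimulus 2:
# weaker tuned normalization (larger alpha) yields larger modulation.
r_att1 = tuned_normalization_response(L1=60, L2=10, beta1=1.5, beta2=1.0, alpha=0.3)
r_att2 = tuned_normalization_response(L1=60, L2=10, beta1=1.0, beta2=1.5, alpha=0.3)
print((r_att1 - r_att2) / (r_att1 + r_att2))
```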

2017 ◽  
Vol 118 (1) ◽  
pp. 203-218 ◽  
Author(s):  
Erin Goddard ◽  
Samuel G. Solomon ◽  
Thomas A. Carlson

The middle-temporal area (MT) of primate visual cortex is critical in the analysis of visual motion. Single-unit studies suggest that the response dynamics of neurons within area MT depend on stimulus features, but how these dynamics emerge at the population level, and how feature representations interact, is not clear. Here, we used multivariate classification analysis to study how stimulus features are represented in the spiking activity of populations of neurons in area MT of the marmoset monkey. Using representational similarity analysis we distinguished the emerging representations of moving grating and dot field stimuli. We show that representations of stimulus orientation, spatial frequency, and speed are evident near the onset of the population response, while the representation of stimulus direction is slower to emerge and sustained throughout the stimulus-evoked response. We further found a spatiotemporal asymmetry in the emergence of direction representations. Representations for high spatial frequencies and low temporal frequencies are initially orientation dependent, while those for high temporal frequencies and low spatial frequencies are more sensitive to motion direction. Our analyses reveal a complex interplay of feature representations in the area MT population response that may explain the stimulus-dependent dynamics of motion vision.

NEW & NOTEWORTHY Simultaneous multielectrode recordings can measure population-level codes that previously were only inferred from single-electrode recordings. However, many multielectrode recordings are analyzed using univariate single-electrode analysis approaches, which fail to fully utilize the population-level information. Here, we overcome these limitations by applying multivariate pattern classification analysis and representational similarity analysis to large-scale recordings from the middle-temporal area (MT) in marmoset monkeys. Our analyses reveal a dynamic interplay of feature representations in the area MT population response.
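
A minimal sketch of the representational-similarity step described above, assuming trial-averaged spike counts per stimulus condition; the data shapes, the correlation-distance dissimilarity matrix, and the angular-distance model are illustrative choices, not the authors' exact pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(population_responses):
    """Representational dissimilarity matrix (condensed form).

    population_responses : array, shape (n_conditions, n_neurons)
        Trial-averaged spike counts per stimulus condition.
    Returns 1 - Pearson correlation for every pair of conditions.
    """
    return pdist(population_responses, metric="correlation")

def compare_to_model(data_rdm, model_rdm):
    """Spearman rank correlation between a data RDM and a model RDM."""
    rho, _ = spearmanr(data_rdm, model_rdm)
    return rho

# Example with random data: 16 motion directions x 96 recorded units,
# compared against a model RDM built from angular distance between directions.
rng = np.random.default_rng(0)
responses = rng.poisson(5.0, size=(16, 96)).astype(float)
directions = np.deg2rad(np.arange(16) * 22.5)
ang = np.abs(directions[:, None] - directions[None, :])
model = np.minimum(ang, 2 * np.pi - ang)[np.triu_indices(16, k=1)]
print(compare_to_model(rdm(responses), model))
```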


2017 ◽  
Author(s):  
Tristan A. Chaplin ◽  
Benjamin J. Allitt ◽  
Maureen A. Hagan ◽  
Nicholas S. Price ◽  
Ramesh Rajan ◽  
...  

Neurons in the Middle Temporal area (MT) of the primate cerebral cortex respond to moving visual stimuli. The sensitivity of MT neurons to motion signals can be characterized by using random-dot stimuli, in which the strength of the motion signal is manipulated by adding different levels of noise (elements that move in random directions). In macaques, this has allowed the calculation of “neurometric” thresholds. We characterized the responses of MT neurons in sufentanil/nitrous oxide anesthetized marmoset monkeys, a species that has attracted considerable recent interest as an animal model for vision research. We found that MT neurons show a wide range of neurometric thresholds, and that the responses of the most sensitive neurons could account for the behavioral performance of macaques and humans. We also investigated factors that contributed to the wide range of observed thresholds. The difference in firing rate between responses to motion in the preferred and null directions was the most effective predictor of neurometric threshold, whereas the direction tuning bandwidth had no correlation with the threshold. We also showed that it is possible to obtain reliable estimates of neurometric thresholds using stimuli that were not highly optimized for each neuron, as is often necessary when recording from large populations of neurons with different receptive fields concurrently, as was the case in this study. These results demonstrate that marmoset MT shows an essential physiological similarity to macaque MT, and suggest that its neurons are capable of representing motion signals that allow for comparable motion-in-noise judgments.

New and Noteworthy We report the activity of neurons in marmoset MT in response to random-dot motion stimuli of varying coherence. The information carried by individual MT neurons was comparable to that of the macaque, and the maximum firing rates were a strong predictor of sensitivity. Our study provides key information regarding the neural basis of motion perception in the marmoset, a small primate species that is becoming increasingly popular as an experimental model.
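
A hedged sketch of the standard ROC-based neurometric analysis the abstract refers to: an ROC area per coherence level, then a Weibull fit whose alpha parameter gives the coherence supporting roughly 82% correct. The function names, synthetic data, starting values, and bounds are assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def roc_area(pref_counts, null_counts):
    """Probability that a random preferred-direction spike count exceeds
    a random null-direction count (ROC area, Mann-Whitney formulation)."""
    pref = np.asarray(pref_counts)[:, None]
    null = np.asarray(null_counts)[None, :]
    return np.mean((pref > null) + 0.5 * (pref == null))

def weibull(c, alpha, beta):
    """Weibull neurometric function rising from 0.5 toward 1.0."""
    return 1.0 - 0.5 * np.exp(-(c / alpha) ** beta)

def neurometric_threshold(coherences, pref_trials, null_trials):
    """Fit ROC areas across coherence levels; alpha is the threshold
    (coherence supporting ~82% correct discrimination)."""
    areas = [roc_area(p, n) for p, n in zip(pref_trials, null_trials)]
    popt, _ = curve_fit(weibull, coherences, areas, p0=[20.0, 1.5],
                        bounds=([0.1, 0.1], [100.0, 10.0]))
    return popt[0]

# Example with synthetic Poisson spike counts at three coherence levels.
rng = np.random.default_rng(2)
coh = np.array([5.0, 15.0, 45.0])
pref = [rng.poisson(10 + 0.5 * c, 50) for c in coh]
null = [rng.poisson(10, 50) for _ in coh]
print(neurometric_threshold(coh, pref, null))
```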


2010 ◽  
Vol 104 (2) ◽  
pp. 960-971 ◽  
Author(s):  
Joonyeol Lee ◽  
John H. R. Maunsell

It remains unclear how attention affects the tuning of individual neurons in visual cerebral cortex. Some observations suggest that attention preferentially enhances responses to low contrast stimuli, whereas others suggest that attention proportionally affects responses to all stimuli. Resolving how attention affects responses to different stimuli is essential for understanding the mechanism by which it acts. To explore the effects of attention on stimuli of different contrasts, we recorded from individual neurons in the middle temporal visual area (MT) of rhesus monkeys while shifting their attention between preferred and nonpreferred stimuli within their receptive fields. This configuration results in robust attentional modulation that makes it possible to readily distinguish whether attention acts preferentially on low contrast stimuli. We found no evidence for greater enhancement of low contrast stimuli. Instead, the strong attentional modulations were well explained by a model in which attention proportionally enhances responses to stimuli of all contrasts. These data, together with observations on the effects of attention on responses to other stimulus dimensions, suggest that the primary effect of attention in visual cortex may be to simply increase the strength of responses to all stimuli by the same proportion.
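
A brief sketch contrasting the two accounts the abstract weighs, using a generic Naka-Rushton contrast-response function; the gain factors and parameter values are illustrative assumptions rather than fitted values from the paper.

```python
import numpy as np

def naka_rushton(c, r_max=60.0, c50=0.2, n=2.0):
    """Contrast-response function: R(c) = r_max * c^n / (c^n + c50^n)."""
    return r_max * c**n / (c**n + c50**n)

contrasts = np.logspace(-2, 0, 7)            # 1% to 100% contrast
unattended = naka_rushton(contrasts)

# Response-gain account: attention multiplies responses at every contrast
# by the same factor -- the proportional enhancement the data favor.
response_gain = 1.2 * unattended

# Contrast-gain account: attention lowers the semi-saturation contrast,
# which boosts low-contrast responses much more than high-contrast ones.
contrast_gain = naka_rushton(contrasts, c50=0.2 / 1.5)

print(np.round(response_gain / unattended, 2))   # constant ratio of 1.2
print(np.round(contrast_gain / unattended, 2))   # ratio shrinks toward 1 at high contrast
```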


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Tao Yao ◽  
Madhura Ketkar ◽  
Stefan Treue ◽  
B Suresh Krishna

Maintaining attention at a task-relevant spatial location while making eye-movements necessitates a rapid, saccade-synchronized shift of attentional modulation from the neuronal population representing the task-relevant location before the saccade to the one representing it after the saccade. Currently, the precise time at which spatial attention becomes fully allocated to the task-relevant location after the saccade remains unclear. Using a fine-grained temporal analysis of human peri-saccadic detection performance in an attention task, we show that spatial attention is fully available at the task-relevant location within 30 milliseconds after the saccade. Subjects tracked the attentional target veridically throughout our task: i.e. they almost never responded to non-target stimuli. Spatial attention and saccadic processing therefore co-ordinate well to ensure that relevant locations are attentionally enhanced soon after the beginning of each eye fixation.
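
An illustrative sketch of the kind of fine-grained temporal binning such an analysis implies: probe onsets aligned to saccade offset and hit rate computed per time bin. The window, bin width, and function name are assumptions, not the authors' exact procedure.

```python
import numpy as np

def perisaccadic_hit_rate(probe_times, saccade_end_times, hits,
                          window=(-100, 150), bin_ms=10):
    """Hit rate as a function of probe onset relative to saccade offset.

    probe_times, saccade_end_times : times in ms (one per trial)
    hits : boolean array, True if the probe was detected
    Returns bin centers (ms re: saccade end) and the hit rate in each bin.
    """
    rel = np.asarray(probe_times) - np.asarray(saccade_end_times)
    edges = np.arange(window[0], window[1] + bin_ms, bin_ms)
    centers = edges[:-1] + bin_ms / 2
    rates = np.full(len(centers), np.nan)
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        in_bin = (rel >= lo) & (rel < hi)
        if in_bin.any():
            rates[i] = np.mean(np.asarray(hits)[in_bin])
    return centers, rates

# Example with five hypothetical trials.
centers, rates = perisaccadic_hit_rate([120, 80, 210, 95, 300],
                                        [100, 90, 180, 60, 310],
                                        [True, False, True, True, False])
```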


1992 ◽  
Vol 68 (1) ◽  
pp. 164-181 ◽  
Author(s):  
J. F. Olavarria ◽  
E. A. DeYoe ◽  
J. J. Knierim ◽  
J. M. Fox ◽  
D. C. van Essen

1. We studied how neurons in the middle temporal visual area (MT) of anesthetized macaque monkeys responded to textured and nontextured visual stimuli. Stimuli contained a central rectangular “figure” that was either uniform in luminance or consisted of an array of oriented line segments. The figure moved at constant velocity in one of four orthogonal directions. The region surrounding the figure was either uniform in luminance or contained a texture array (whose elements were identical or orthogonal in orientation to those of the figure), and it either was stationary or moved along with the figure. 2. A textured figure moving across a stationary textured background (“texture bar” stimulus) often elicited vigorous neural responses, but, on average, the responses to texture bars were significantly smaller than to solid (uniform luminance) bars. 3. Many cells showed direction selectivity that was similar for both texture bars and solid bars. However, on average, the direction selectivity measured when texture bars were used was significantly smaller than that for solid bars, and many cells lost significant direction selectivity altogether. The reduction in direction selectivity for texture bars generally reflected a combination of decreased responsiveness in the preferred direction and increased responsiveness in the null (opposite to preferred) direction. 4. Responses to a texture bar in the absence of a texture background (“texture bar alone”) were very similar to the responses to solid bars both in the magnitude of response and in the degree of direction selectivity. Conversely, adding a static texture surround to a moving solid bar reduced direction selectivity on average without a reduction in response magnitude. These results indicate that the static surround is largely responsible for the differences in direction selectivity for texture bars versus solid bars. 5. In the majority of MT cells studied, responses to a moving texture bar were largely independent of whether the elements in the bar were of the same orientation as the background elements or of the orthogonal orientation. Thus, for the class of stimuli we used, orientation contrast does not markedly affect the responses of MT neurons to moving texture patterns. 6. The optimum figure length and the shapes of the length tuning curves determined with the use of solid bars and texture bars differed significantly in most of the cells examined. Thus neurons in MT are not simply selective for a particular figure shape independent of whatever cues are used to delineate the figure.
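
For reference, one common way to quantify the direction selectivity discussed above; the exact index and the example firing rates below are illustrative, not taken from the paper.

```python
def direction_index(r_pref, r_null, spontaneous=0.0):
    """One common direction-selectivity index:
    DI = 1 - (Rnull - S) / (Rpref - S), where S is the spontaneous rate.
    DI is near 1 for strongly direction-selective cells and near 0 when
    preferred and null responses are equal."""
    return 1.0 - (r_null - spontaneous) / (r_pref - spontaneous)

# Hypothetical rates (spikes/s): a texture bar on a static textured
# background often yields a weaker preferred response and a stronger
# null response than a solid bar, reducing the index.
print(direction_index(r_pref=50.0, r_null=5.0, spontaneous=2.0))   # solid bar
print(direction_index(r_pref=30.0, r_null=12.0, spontaneous=2.0))  # texture bar
```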


2010 ◽  
Vol 22 (2) ◽  
pp. 347-361 ◽  
Author(s):  
David V. Smith ◽  
Ben Davis ◽  
Kathy Niu ◽  
Eric W. Healy ◽  
Leonardo Bonilha ◽  
...  

Neuroimaging studies suggest that a fronto-parietal network is activated when we expect visual information to appear at a specific spatial location. Here we examined whether a similar network is involved for auditory stimuli. We used sparse fMRI to infer brain activation while participants performed analogous visual and auditory tasks. On some trials, participants were asked to discriminate the elevation of a peripheral target. On other trials, participants made a nonspatial judgment. We contrasted trials where the participants expected a peripheral spatial target to those where they were cued to expect a central target. Crucially, our statistical analyses were based on trials where stimuli were anticipated but not presented, allowing us to directly infer perceptual orienting independent of perceptual processing. This is the first neuroimaging study to use an orthogonal-cuing paradigm (with cues predicting azimuth and responses involving elevation discrimination). This aspect of our paradigm is important, as behavioral cueing effects in audition are classically only observed when participants are asked to make spatial judgments. We observed similar fronto-parietal activation for both vision and audition. In a second experiment that controlled for stimulus properties and task difficulty, participants made spatial and temporal discriminations about musical instruments. We found that the pattern of brain activation for spatial selection of auditory stimuli was remarkably similar to what we found in our first experiment. Collectively, these results suggest that the neural mechanisms supporting spatial attention are largely similar across both visual and auditory modalities.
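
A minimal sketch of the kind of GLM contrast implied by analyzing only trials where stimuli were anticipated but not presented; the regressor names and design are hypothetical, not the authors' actual model.

```python
import numpy as np

# Hypothetical design: one regressor per cue condition on catch trials
# (stimulus anticipated but not presented), plus other task regressors.
conditions = ["expect_peripheral_catch", "expect_central_catch",
              "stimulus_present", "motor_response"]

# Contrast isolating spatial orienting: anticipated-peripheral minus
# anticipated-central, restricted to trials with no stimulus delivered.
contrast = np.array([1.0, -1.0, 0.0, 0.0])

def contrast_estimate(beta_maps, contrast):
    """Voxelwise contrast value from fitted GLM betas.

    beta_maps : array, shape (n_regressors, n_voxels)
    """
    return contrast @ beta_maps

# Example with random betas for 1,000 voxels.
rng = np.random.default_rng(1)
betas = rng.normal(size=(len(conditions), 1000))
print(contrast_estimate(betas, contrast).shape)   # (1000,)
```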

