Flow stimuli reveal ecologically appropriate responses in mouse visual cortex

2018 ◽  
Vol 115 (44) ◽  
pp. 11304-11309 ◽  
Author(s):  
Luciano Dyballa ◽  
Mahmood S. Hoseini ◽  
Maria C. Dadarlat ◽  
Steven W. Zucker ◽  
Michael P. Stryker

Assessments of the mouse visual system based on spatial-frequency analysis imply that its visual capacity is low, with few neurons responding to spatial frequencies greater than 0.5 cycles per degree. However, visually mediated behaviors, such as prey capture, suggest that the mouse visual system is more precise. We introduce a stimulus class—visual flow patterns—that is more like what the mouse would encounter in the natural world than are sine-wave gratings but is more tractable for analysis than are natural images. We used 128-site silicon microelectrodes to measure the simultaneous responses of single neurons in the primary visual cortex (V1) of alert mice. While holding temporal-frequency content fixed, we explored a class of drifting patterns of black or white dots that have energy only at higher spatial frequencies. These flow stimuli evoke strong visually mediated responses well beyond those predicted by spatial-frequency analysis. Flow responses predominate in higher spatial-frequency ranges (0.15–1.6 cycles per degree), many are orientation or direction selective, and flow responses of many neurons depend strongly on sign of contrast. Many cells exhibit distributed responses across our stimulus ensemble. Together, these results challenge conventional linear approaches to visual processing and expand our understanding of the mouse’s visual capacity to behaviorally relevant ranges.
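The abstract does not spell out how the flow patterns were constructed, but the idea — a field of small black or white dots drifting with a single shared velocity, so that temporal-frequency content is fixed while spatial energy sits only at fine scales — can be sketched roughly as follows. All parameter values here (dot count, dot radius, speed) are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def flow_frame(n_dots=200, size=256, dot_radius=2, t=0.0,
               direction=0.0, speed=20.0, seed=0):
    """Render one frame of a drifting-dot 'flow' pattern.

    Dots are small (concentrating energy at high spatial frequencies)
    and all drift with the same velocity, so the temporal-frequency
    content of the movie stays fixed across frames.
    """
    rng = np.random.default_rng(seed)
    xy = rng.uniform(0, size, (n_dots, 2))        # random dot centres
    polarity = rng.choice([-1.0, 1.0], n_dots)    # black or white dots
    shift = t * speed * np.array([np.cos(direction), np.sin(direction)])
    xy = (xy + shift) % size                      # coherent, wrap-around drift
    frame = np.zeros((size, size))
    ys, xs = np.mgrid[0:size, 0:size]
    for (cx, cy), p in zip(xy, polarity):
        mask = (xs - cx) ** 2 + (ys - cy) ** 2 <= dot_radius ** 2
        frame[mask] = p
    return frame
```

Rendering successive frames with increasing `t` yields the drifting pattern; contrast sign is carried per dot, matching the black/white distinction the abstract emphasizes.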

2018 ◽  
Author(s):  
Luciano Dyballa ◽  
Mahmood S. Hoseini ◽  
Maria C. Dadarlat ◽  
Steven W. Zucker ◽  
Michael P. Stryker

Significance Statement
The visual system of the mouse is now widely studied as a model for development and disease in humans. Studies of its primary visual cortex (V1) using conventional grating stimuli to construct linear-nonlinear receptive fields suggest that the mouse must have very poor vision. Using novel stimuli resembling the flow of images across the retina as the mouse moves through the grass, we find that most V1 neurons respond reliably to very much finer details of the visual scene than previously believed. Our findings suggest that the conventional notion of a unique receptive field does not capture the operation of the neural network in mouse V1.


2021 ◽  
Author(s):  
Felix Bartsch ◽  
Bruce G Cumming ◽  
Daniel A Butts

To understand the complexity of stimulus selectivity in primary visual cortex (V1), models constructed to match observed responses to complex time-varying stimuli, rather than to explain responses to simple parametric stimuli, are increasingly used. While such models often more accurately reflect the computations performed by V1 neurons in natural visual environments, they do not by themselves provide insight into established measures of V1 neural selectivity such as receptive field size, spatial frequency tuning, and phase invariance. Here, we suggest a series of analyses that can be applied directly to encoding models to link them to more interpretable aspects of stimulus selectivity, and apply them to nonlinear models of V1 neurons recorded in awake macaque in response to random bar stimuli. By linking model properties to more classical measurements, we demonstrate several novel aspects of V1 selectivity not available to simpler experimental measurements. For example, we find that individual spatiotemporal elements of the V1 models often have a smaller spatial scale than the overall neuron sensitivity, and that this results in nontrivial tuning to spatial frequencies. Additionally, our proposed measures of nonlinear integration suggest that more classical classifications of V1 neurons into simple versus complex cells are spatial-frequency dependent. In total, rather than obfuscating classical characterizations of V1 neurons, model-based characterizations offer a means to more fully understand their selectivity and to link their classical tuning properties to their roles in more complex, natural visual processing.
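One generic example of this kind of linking analysis (a sketch, not the authors' specific method): the spatial-frequency tuning implied by a model's linear spatial filter can be read directly off its amplitude spectrum, recovering a classical "preferred SF" from a fitted filter. The Gabor filter in the usage lines is invented for illustration:

```python
import numpy as np

def sf_tuning_from_filter(filt):
    """Estimate spatial-frequency tuning of a 1-D spatial filter.

    The amplitude spectrum of a linear filter gives its sensitivity to
    each spatial frequency; the argmax is the classical preferred SF.
    """
    amp = np.abs(np.fft.rfft(filt))
    freqs = np.fft.rfftfreq(len(filt))   # cycles per sample
    return freqs, amp, freqs[np.argmax(amp)]

# Usage: a toy Gabor with carrier frequency 0.125 cycles/sample.
x = np.arange(64)
gabor = (np.exp(-(x - 32) ** 2 / (2 * 6.0 ** 2))
         * np.cos(2 * np.pi * 0.125 * (x - 32)))
freqs, amp, peak_sf = sf_tuning_from_filter(gabor)
```

Applying this to each spatiotemporal element of a fitted model separately, versus to the model's pooled sensitivity, is one way to expose the scale mismatch the abstract describes.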


i-Perception ◽  
2021 ◽  
Vol 12 (3) ◽  
pp. 204166952110179
Author(s):  
Andrea Pavan ◽  
Adriano Contillo ◽  
Filippo Ghin ◽  
Rita Donato ◽  
Matthew J. Foxwell ◽  
...  

Glass patterns (GPs) have been widely employed to investigate the mechanisms underlying processing of global form from locally oriented cues. The current study aimed to psychophysically investigate the level at which global orientation is extracted from translational GPs using the tilt after-effect (TAE) and manipulating the spatiotemporal properties of the adapting pattern. We adapted participants to translational GPs and tested with sinewave gratings. In Experiment 1, we investigated whether orientation-selective units are sensitive to the temporal frequency of the adapting GP. We used static and dynamic translational GPs, with dynamic GPs refreshed at different temporal frequencies. In Experiment 2, we investigated the spatial frequency selectivity of orientation-selective units by manipulating the spatial frequency content of the adapting GPs. The results showed that the TAE peaked at a temporal frequency of ∼30 Hz, suggesting that orientation-selective units responding to translational GPs are sensitive to high temporal frequencies. In addition, TAE from translational GPs peaked at lower spatial frequencies than the dipoles’ spatial constant. These effects are consistent with form-motion integration at low and intermediate levels of visual processing.
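For concreteness, a translational Glass pattern is simply a field of dot pairs ("dipoles") that all share one offset vector; the global orientation signal is carried only by this local pairing. A minimal sketch, with arbitrary illustrative values for dipole count and separation (the dipoles' spatial constant mentioned above):

```python
import numpy as np

def translational_glass_pattern(n_dipoles=300, size=1.0,
                                offset=0.02, angle=0.0, seed=0):
    """Generate dot coordinates for a translational Glass pattern.

    Each dipole is an anchor dot plus a partner displaced by a fixed
    offset along a common orientation `angle` (radians).
    """
    rng = np.random.default_rng(seed)
    anchors = rng.uniform(0, size, (n_dipoles, 2))
    d = offset * np.array([np.cos(angle), np.sin(angle)])
    partners = anchors + d
    return np.vstack([anchors, partners])   # (2*n_dipoles, 2) positions
```

A dynamic GP of the kind used in Experiment 1 would be obtained by regenerating the anchor positions (with a new seed) at the desired refresh rate while keeping `offset` and `angle` fixed.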


2005 ◽  
Vol 94 (1) ◽  
pp. 775-787 ◽  
Author(s):  
Tanya I. Baker ◽  
Naoum P. Issa

In the earliest cortical stages of visual processing, a scene is represented in different functional domains selective for specific features. Maps of orientation and spatial frequency preference have been described in the primary visual cortex using simple sinusoidal grating stimuli. However, recent imaging experiments suggest that the maps of these two spatial parameters are not sufficient to describe patterns of activity in different orientation domains generated in response to complex, moving stimuli. A model of cortical organization is presented in which cortical temporal frequency tuning is superimposed on the maps of orientation and spatial frequency tuning. The maps of these three tuning properties are sufficient to describe the activity in orientation domains that have been measured in response to drifting complex images. The model also makes specific predictions about how moving images are represented in different spatial frequency domains. These results suggest that the tangential organization of primary visual cortex can be described by a set of maps of separable neuronal receptive field features including maps of orientation, spatial frequency, and temporal frequency tuning properties.
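The separable-maps idea can be made concrete with a toy unit whose response factors into independent orientation, spatial-frequency, and temporal-frequency tuning curves. Gaussian tuning (circular for orientation, on a log axis for SF and TF) is a common parameterisation; all preferred values and bandwidths below are invented for illustration, not taken from the model in the paper:

```python
import numpy as np

def separable_response(ori, sf, tf,
                       pref_ori=90.0, ori_bw=30.0,
                       pref_sf=0.5, sf_bw=1.0,
                       pref_tf=4.0, tf_bw=1.0):
    """Response of a unit with separable ORI x SF x TF tuning.

    Orientation tuning is a circular Gaussian (180-degree periodic);
    SF and TF tuning are Gaussians on a log2 axis (octave bandwidths).
    """
    d_ori = (ori - pref_ori + 90.0) % 180.0 - 90.0   # wrapped difference
    g_ori = np.exp(-d_ori ** 2 / (2 * ori_bw ** 2))
    g_sf = np.exp(-np.log2(sf / pref_sf) ** 2 / (2 * sf_bw ** 2))
    g_tf = np.exp(-np.log2(tf / pref_tf) ** 2 / (2 * tf_bw ** 2))
    return g_ori * g_sf * g_tf
```

Tiling cortex with such units, with preferred values drawn from three superimposed maps, is the sense in which three separable maps suffice to predict activity patterns for drifting complex images.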


2017 ◽  
Vol 117 (1) ◽  
pp. 388-402 ◽  
Author(s):  
Michael A. Cohen ◽  
George A. Alvarez ◽  
Ken Nakayama ◽  
Talia Konkle

Visual search is a ubiquitous visual behavior, and efficient search is essential for survival. Different cognitive models have explained the speed and accuracy of search based either on the dynamics of attention or on similarity of item representations. Here, we examined the extent to which performance on a visual search task can be predicted from the stable representational architecture of the visual system, independent of attentional dynamics. Participants performed a visual search task with 28 conditions reflecting different pairs of categories (e.g., searching for a face among cars, body among hammers, etc.). The time it took participants to find the target item varied as a function of category combination. In a separate group of participants, we measured the neural responses to these object categories when items were presented in isolation. Using representational similarity analysis, we then examined whether the similarity of neural responses across different subdivisions of the visual system had the requisite structure needed to predict visual search performance. Overall, we found strong brain/behavior correlations across most of the higher-level visual system, including both the ventral and dorsal pathways when considering both macroscale sectors as well as smaller mesoscale regions. These results suggest that visual search for real-world object categories is well predicted by the stable, task-independent architecture of the visual system.

NEW & NOTEWORTHY Here, we ask which neural regions have neural response patterns that correlate with behavioral performance in a visual processing task. We found that the representational structure across all of high-level visual cortex has the requisite structure to predict behavior. Furthermore, when directly comparing different neural regions, we found that they all had highly similar category-level representational structures. These results point to a ubiquitous and uniform representational structure in high-level visual cortex underlying visual object processing.
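The core representational-similarity computation can be sketched in a few lines: build a dissimilarity matrix (RDM) from each region's response patterns, then rank-correlate its off-diagonal entries with a behavioral dissimilarity matrix (here, pairwise search times). This is a generic RSA sketch under standard conventions, not the study's exact pipeline:

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns (rows) of each pair of conditions."""
    return 1.0 - np.corrcoef(patterns)

def rsa_correlation(neural_patterns, behavior_rdm):
    """Spearman correlation between the upper-triangle entries of a
    neural RDM and a behavioral RDM (e.g. pairwise search times)."""
    n = behavior_rdm.shape[0]
    iu = np.triu_indices(n, k=1)
    a, b = rdm(neural_patterns)[iu], behavior_rdm[iu]
    ra = np.argsort(np.argsort(a)).astype(float)   # ranks of each entry
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]
```

Running `rsa_correlation` once per region (macroscale sector or mesoscale parcel) against the same behavioral RDM yields the kind of brain/behavior correlation map the abstract describes.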

