Faculty Opinions recommendation of Spatial structure and symmetry of simple-cell receptive fields in macaque primary visual cortex.

Recommended by:  
Bruno Olshausen

2002 ◽  
Vol 88 (1) ◽  
pp. 455-463 ◽  
Author(s):  
Dario L. Ringach

I present measurements of the spatial structure of simple-cell receptive fields in macaque primary visual cortex (area V1). Similar to previous findings in cat area 17, the spatial profile of simple-cell receptive fields in the macaque is well described by two-dimensional Gabor functions. A population analysis reveals that the distribution of spatial profiles in primary visual cortex lies approximately on a one-parameter family of filter shapes. Surprisingly, the receptive fields cluster into even- and odd-symmetry classes with a tendency for neurons that are well tuned in orientation and spatial frequency to have odd-symmetric receptive fields. The filter shapes predicted by two recent theories of simple-cell receptive field function, independent component analysis and sparse coding, are compared with the data. Both theories predict receptive fields with a larger number of subfields than observed in the experimental data. In addition, these theories do not generate receptive fields that are broadly tuned in orientation and low-pass in spatial frequency, which are commonly seen in monkey V1. The implications of these results for our understanding of image coding and representation in primary visual cortex are discussed.
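The two-dimensional Gabor form and the even/odd symmetry classes discussed in this abstract can be sketched directly. A minimal sketch, assuming illustrative parameter values (size, wavelength, envelope width) rather than values fitted to the macaque data:

```python
import numpy as np

def gabor_2d(size, wavelength, theta, phase, sigma):
    """2D Gabor: a sinusoidal carrier under a Gaussian envelope.

    phase = 0 gives an even-symmetric (cosine) profile;
    phase = pi/2 gives an odd-symmetric (sine) profile --
    the two symmetry classes the population clusters into.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the carrier runs along orientation theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.cos(2.0 * np.pi * xr / wavelength + phase)
    return envelope * carrier

even = gabor_2d(size=21, wavelength=8.0, theta=0.0, phase=0.0, sigma=4.0)
odd = gabor_2d(size=21, wavelength=8.0, theta=0.0, phase=np.pi / 2, sigma=4.0)
```

The even profile is unchanged under point reflection about the receptive-field center, while the odd profile changes sign; the one-parameter family described in the abstract corresponds roughly to varying the phase and the ratio of envelope width to wavelength.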


1997 ◽  
Vol 9 (5) ◽  
pp. 959-970 ◽  
Author(s):  
Christian Piepenbrock ◽  
Helge Ritter ◽  
Klaus Obermayer

Correlation-based learning (CBL) has been suggested as the mechanism that underlies the development of simple-cell receptive fields in the primary visual cortex of cats, including orientation preference (OR) and ocular dominance (OD) (Linsker, 1986; Miller, Keller, & Stryker, 1989). CBL has been applied successfully to the development of OR and OD individually (Miller, Keller, & Stryker, 1989; Miller, 1994; Miyashita & Tanaka, 1991; Erwin, Obermayer, & Schulten, 1995), but the conditions for their joint development have not been studied (but see Erwin & Miller, 1995, for independent work on the same question) in contrast to competitive Hebbian models (Obermayer, Blasdel, & Schulten, 1992). In this article, we provide insight into why this has been the case: OR and OD decouple in symmetric CBL models, and a joint development of OR and OD is possible only in a parameter regime that depends on nonlinear mechanisms.
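At their linear core, correlation-based rules of this kind drive the weight vector toward the principal eigenvector of the input correlation matrix, which is why symmetry properties of that matrix matter so much for joint OR/OD development. A toy sketch of the generic linear Hebbian dynamics with normalization (not the full OR/OD model of the article; the correlation matrix below is an illustrative stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input correlation matrix for a small set of afferents:
# nearby inputs are strongly correlated, distant ones weakly.
n = 8
positions = np.arange(n)
C = np.exp(-0.5 * (positions[:, None] - positions[None, :])**2 / 2.0**2)

# Linear correlation-based (Hebbian) update: dw/dt proportional to C @ w,
# with normalization keeping the weight vector bounded.
w = rng.normal(size=n)
for _ in range(200):
    w += 0.1 * C @ w
    w /= np.linalg.norm(w)

# The dynamics are a power iteration: w converges (up to sign)
# to the principal eigenvector of C.
principal = np.linalg.eigh(C)[1][:, -1]
```

In the full models, which eigenvector "wins" is determined by the structure of the within-eye and between-eye correlations; the decoupling result in this article follows from that eigenstructure in the symmetric case.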


2012 ◽  
Vol 24 (10) ◽  
pp. 2700-2725 ◽  
Author(s):  
Takuma Tanaka ◽  
Toshio Aoyagi ◽  
Takeshi Kaneko

We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
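The two sparseness constraints can be illustrated with a much simpler stand-in rule than the one derived in the article: population sparseness via a k-winners-take-all output with near-maximum (binary) firing, and temporal sparseness via a homeostatic threshold that tracks each neuron's firing rate. A minimal sketch with random inputs in place of natural image patches; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

n_in, n_out, k = 64, 16, 2     # k active outputs per step -> population sparseness
W = rng.normal(scale=0.1, size=(n_out, n_in))
theta = np.zeros(n_out)        # adaptive thresholds -> temporal sparseness
target_rate = k / n_out
lr, lr_theta = 0.05, 0.01

for _ in range(500):
    x = rng.normal(size=n_in)            # stand-in for an image patch
    drive = W @ x - theta
    # Population sparseness: only the k most strongly driven neurons fire,
    # and they fire at a near-maximum (here binary) level.
    y = np.zeros(n_out)
    y[np.argsort(drive)[-k:]] = 1.0
    # Local Hebbian update with row-wise weight normalization.
    W += lr * np.outer(y, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True)
    # Temporal sparseness: each threshold rises when its neuron fires
    # above the target rate and falls when it fires below it.
    theta += lr_theta * (y - target_rate)
```

The article's rule is derived from an explicit objective and is applied to natural scenes; this sketch only shows how locally computable updates can enforce both sparseness constraints at once.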


2019 ◽  
Author(s):  
Jun Zhuang ◽  
Rylan S Larsen ◽  
Kevin T Takasaki ◽  
Naveen D Ouellette ◽  
Tanya L Daigle ◽  
...  

Location-sensitive and motion-sensitive units are the two major functional types of feedforward projections from the lateral geniculate nucleus (LGN) to primary visual cortex (V1) in the mouse. The distribution of these inputs across cortical depth remains under debate. By measuring the calcium activity of LGN axons in V1 of awake mice, we systematically mapped their functional and structural properties. Although both types were distributed evenly across cortical depth, we found that they differ significantly across multiple modalities. Compared to the location-sensitive axons, which possessed confined spatial receptive fields, the motion-sensitive axons lacked spatial receptive fields, preferred lower temporal and higher spatial frequencies, and had wider horizontal bouton spread. Furthermore, the motion-sensitive axons showed a strong depth-dependent motion direction bias, while the location-sensitive axons showed a depth-independent OFF dominance. Overall, our results suggest a new model of the receptive field biases and laminar structure of thalamic inputs to V1.


2021 ◽  
Author(s):  
Dylan Barbera ◽  
Nicholas J. Priebe ◽  
Lindsey L. Glickfeld

Sensory neurons not only encode stimuli that align with their receptive fields but are also modulated by context. For example, the responses of neurons in mouse primary visual cortex (V1) to gratings of their preferred orientation are modulated by the presence of superimposed orthogonal gratings ("plaids"). The effects of this modulation can be diverse: some neurons exhibit cross-orientation suppression, while other neurons have larger responses to a plaid than to its components. We investigated whether these diverse forms of masking could be explained by a unified circuit mechanism. We report that suppression of cortical activity does not alter the effects of masking, ruling out cortical mechanisms. Instead, we demonstrate that the heterogeneity of plaid responses is explained by an interaction between stimulus geometry and orientation tuning. Highly selective neurons uniformly exhibit cross-orientation suppression, whereas in weakly selective neurons, masking depends on the spatial configuration of the stimulus, with effects transitioning systematically between suppression and facilitation. Thus, the diverse responses of mouse V1 neurons emerge as a consequence of the spatial structure of the afferent input to V1, with no need to invoke cortical interactions.


2000 ◽  
Vol 84 (4) ◽  
pp. 2048-2062 ◽  
Author(s):  
Mitesh K. Kapadia ◽  
Gerald Westheimer ◽  
Charles D. Gilbert

To examine the role of primary visual cortex in visuospatial integration, we studied the spatial arrangement of contextual interactions in the response properties of neurons in primary visual cortex of alert monkeys and in human perception. We found a spatial segregation of opposing contextual interactions. At the level of cortical neurons, excitatory interactions were located along the ends of receptive fields, while inhibitory interactions were strongest along the orthogonal axis. Parallel psychophysical studies in human observers showed opposing contextual interactions surrounding a target line with a similar spatial distribution. The results suggest that V1 neurons can participate in multiple perceptual processes via spatially segregated and functionally distinct components of their receptive fields.


2005 ◽  
Vol 94 (1) ◽  
pp. 788-798 ◽  
Author(s):  
Valerio Mante ◽  
Matteo Carandini

A recent optical imaging study of primary visual cortex (V1) by Basole, White, and Fitzpatrick demonstrated that maps of preferred orientation depend on the choice of stimuli used to measure them. These authors measured population responses expressed as a function of the optimal orientation of long drifting bars. They then varied bar length, direction, and speed and found that stimuli of the same orientation can elicit different population responses, and stimuli with different orientations can elicit similar population responses. We asked whether these results can be explained from known properties of V1 receptive fields. We implemented an "energy model" in which a receptive field integrates stimulus energy over a region of three-dimensional frequency space. The population of receptive fields defines a volume of visibility, which covers all orientations and a plausible range of spatial and temporal frequencies. This energy model correctly predicts the population response to bars of different length, direction, and speed and explains the observations made with optical imaging. The model also readily explains a related phenomenon, the appearance of motion streaks for fast-moving dots. We conclude that the energy model can be applied to activation maps of V1 and predicts phenomena that may otherwise appear surprising. These results indicate that maps obtained with optical imaging reflect the layout of neurons selective for stimulus energy, not for isolated stimulus features such as orientation, direction, and speed.
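The core of any energy model is a quadrature pair of linear filters whose squared outputs are summed, making the response depend on stimulus energy rather than phase. A minimal one-dimensional sketch (the full model in the article integrates energy over three-dimensional spatiotemporal frequency space; filter parameters here are illustrative):

```python
import numpy as np

def gabor_pair(n, freq, sigma):
    """Quadrature pair of linear filters: even and odd Gabors
    with the same envelope and carrier frequency."""
    x = np.arange(n) - n // 2
    env = np.exp(-x**2 / (2.0 * sigma**2))
    return env * np.cos(2 * np.pi * freq * x), env * np.sin(2 * np.pi * freq * x)

def energy(stimulus, freq=0.125, sigma=4.0):
    """Energy response: squared quadrature outputs summed.
    Squaring and summing removes sensitivity to stimulus phase."""
    f_even, f_odd = gabor_pair(len(stimulus), freq, sigma)
    return (f_even @ stimulus)**2 + (f_odd @ stimulus)**2

# The energy response to a matched grating is (nearly) the same
# regardless of the grating's spatial phase.
x = np.arange(33) - 16
resp = [energy(np.cos(2 * np.pi * 0.125 * x + p)) for p in np.linspace(0, np.pi, 5)]
```

Extending each filter into a 3D (x, y, t) frequency volume is what lets the population response depend jointly on bar length, direction, and speed rather than on orientation alone.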


2018 ◽  
Author(s):  
Adam P. Morris ◽  
Bart Krekelberg

Humans and other primates rely on eye movements to explore visual scenes and to track moving objects. As a result, the image that is projected onto the retina – and propagated throughout the visual cortical hierarchy – is almost constantly changing and makes little sense without taking into account the momentary direction of gaze. How is this achieved in the visual system? Here we show that in primary visual cortex (V1), the earliest stage of cortical vision, neural representations carry an embedded "eye tracker" that signals the direction of gaze associated with each image. Using chronically implanted multi-electrode arrays, we recorded the activity of neurons in V1 during tasks requiring fast (exploratory) and slow (pursuit) eye movements. Neurons were stimulated with flickering, full-field luminance noise at all times. As in previous studies [1–4], we observed neurons that were sensitive to gaze direction during fixation, despite comparable stimulation of their receptive fields. We trained a decoder to translate neural activity into metric estimates of (stationary) gaze direction. This decoded signal not only tracked the eye accurately during fixation, but also during fast and slow eye movements, even though the decoder had not been exposed to data from these behavioural states. Moreover, this signal lagged the real eye by approximately the time it took for new visual information to travel from the retina to cortex. Using simulations, we show that this V1 eye position signal could be used to take into account the sensory consequences of eye movements and map the fleeting positions of objects on the retina onto their stable positions in the world.
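The decoding step described here can be illustrated with a linear readout trained by least squares on simulated, gaze-modulated population activity. A toy sketch only: the Gaussian "gain field" tuning, neuron counts, and noise level below are hypothetical stand-ins for the recorded data, not the authors' decoder:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated training data: each neuron's firing rate varies smoothly
# with horizontal gaze direction (a toy gain field), plus noise.
n_neurons, n_trials = 40, 300
pref = rng.uniform(-20, 20, n_neurons)      # hypothetical preferred gaze (deg)
gaze = rng.uniform(-15, 15, n_trials)       # true gaze direction per trial
rates = np.exp(-0.5 * ((gaze[:, None] - pref) / 10.0)**2)
rates += rng.normal(scale=0.05, size=rates.shape)

# Linear decoder: least-squares map from population activity to gaze,
# with a constant bias column appended to the rate matrix.
X = np.column_stack([rates, np.ones(n_trials)])
w, *_ = np.linalg.lstsq(X, gaze, rcond=None)

# Apply the same fixed mapping to held-out gaze directions.
gaze_test = np.array([-10.0, 0.0, 10.0])
rates_test = np.exp(-0.5 * ((gaze_test[:, None] - pref) / 10.0)**2)
estimate = np.column_stack([rates_test, np.ones(3)]) @ w
```

The key property the article exploits is the one shown here: a decoder fitted in one regime (fixation) generalizes to activity patterns it was never trained on, because the gaze signal is embedded in the population code itself.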

