Information redundancy across spatial scales modulates early visual cortical processing

2021 ◽  
Author(s):  
Kirsten Petras ◽  
Sanne Ten Oever ◽  
Sarang S. Dalal ◽  
Valerie Goffaux

Visual images contain redundant information across spatial scales: low spatial frequency contrast is informative about the location and likely content of high spatial frequency detail. Previous research suggests that the visual system exploits these redundancies to facilitate efficient processing. In this framework, a fast initial analysis of low spatial frequency (LSF) information guides the slower, later processing of high spatial frequency (HSF) detail. Here, we used multivariate classification as well as time-frequency analysis of MEG responses to the viewing of intact and phase-scrambled images of human faces to demonstrate that the availability of redundant LSF information, as found in broadband intact images, is associated with a reduction in HSF representational dominance in both early and higher-level visual areas, as well as with a reduction of gamma-band power in early visual cortex. Our results indicate that the cross-spatial-frequency information redundancy found in all natural images may be a driving factor in the efficient integration of fine image detail.
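The phase-scrambled control images mentioned above preserve an image's amplitude (and hence spatial frequency) spectrum while destroying the phase structure that carries recognizable content. A minimal sketch of one standard way to generate such controls in Python/NumPy, for illustration only (this is not the authors' stimulus pipeline):

```python
# Minimal sketch of Fourier phase scrambling for a grayscale image
# (a standard technique; not the authors' exact stimulus pipeline).
import numpy as np

def phase_scramble(image, rng=None):
    """Randomize the Fourier phase of `image` while keeping its amplitude spectrum."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(image)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)
    # Phases taken from the FFT of white noise are conjugate-symmetric,
    # so the inverse transform stays (numerically) real.
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(image.shape)))
    scrambled = np.fft.ifft2(amplitude * np.exp(1j * (phase + noise_phase)))
    return np.real(scrambled)

# Example: scramble a synthetic 256 x 256 test pattern.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
test_image = np.sin(2 * np.pi * 8 * x) + 0.5 * np.sin(2 * np.pi * 32 * y)
scrambled = phase_scramble(test_image)
```

Because only the phase is randomized, intact and scrambled images are matched in spatial frequency content, so response differences can be attributed to the presence or absence of cross-scale structure rather than to spectral differences.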

NeuroImage ◽  
2021 ◽  
Vol 244 ◽  
pp. 118613
Author(s):  
Kirsten Petras ◽  
Sanne ten Oever ◽  
Sarang S. Dalal ◽  
Valerie Goffaux

Perception ◽  
1992 ◽  
Vol 21 (2) ◽  
pp. 185-193 ◽  
Author(s):  
Geoffrey W Stuart ◽  
Terence R J Bossomaier

It has recently been reported that pairs of visual cortical cells engaged in the cooperative coding of global stimulus features display synchrony in their firing when both are stimulated. Alternative models identify global stimulus features with the coarse spatial scales of the image. Versions of the Munsterberg or Café Wall illusions that differ in their low spatial frequency content were used to show that, in all cases, it was the high spatial frequencies in the image that determined the strength and direction of these illusions. Since cells responsive to high spatial frequencies have small receptive fields, cooperative coding must be involved in the representation of long borders in the image.


2021 ◽  
Vol 2 ◽  
Author(s):  
Arthur Shapiro

Shapiro and Hedjar (2019) proposed a shift in the definition of illusion, from ‘differences between perception and reality’ to ‘conflicts between possible constructions of reality’. This paper builds on this idea by presenting a series of motion hybrid images that juxtapose fine scale contrast (high spatial frequency content) with coarse scale contrast-generated motion (low spatial frequency content). As is the case for static hybrid images, under normal viewing conditions the fine scale contrast determines the perception of motion hybrid images; however, if the motion hybrid image is blurred or viewed from a distance, the perception is determined by the coarse scale contrast. The fine scale contrast therefore masks the perception of motion (and sometimes depth) produced by the coarser scale contrast. Since the unblurred movies contain both fine and coarse scale contrast information, but the blurred movies contain only coarse scale contrast information, cells in the brain that respond to low spatial frequencies should respond equally to both blurred and unblurred movies. Since people undoubtedly differ in the optics of their eyes and most likely in the neural processes that resolve conflict across scales, the paper suggests that motion hybrid images illustrate trade-offs between spatial scales that are important for understanding individual differences in perceptions of the natural world.
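The scale juxtaposition behind hybrid images can be illustrated with a static toy example: take the coarse (low-pass) content of one source and the fine (high-pass) content of another, then sum. The sketch below assumes Gaussian filtering with an arbitrary cutoff and synthetic gratings; it illustrates the principle rather than reproducing the motion hybrid stimuli described in the paper.

```python
# Minimal sketch of a *static* hybrid image: low-pass content from one image
# combined with high-pass content from another. The blur radius (sigma) is an
# illustrative choice, not a parameter taken from the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_image(coarse_src, fine_src, sigma=6.0):
    """Combine the coarse scales of `coarse_src` with the fine scales of `fine_src`."""
    low_pass = gaussian_filter(coarse_src, sigma)             # keep only coarse structure
    high_pass = fine_src - gaussian_filter(fine_src, sigma)   # keep only fine structure
    return low_pass + high_pass

# Example with two synthetic gratings: a coarse vertical one and a fine horizontal one.
x, y = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256))
coarse = np.sin(2 * np.pi * 4 * x)    # low spatial frequency
fine = np.sin(2 * np.pi * 40 * y)     # high spatial frequency
hybrid = hybrid_image(coarse, fine)
# Viewed up close the fine grating dominates; blurred or viewed from afar,
# only the coarse grating remains visible.
```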


2016 ◽  
Vol 3 (1) ◽  
pp. 150523 ◽  
Author(s):  
Roger W. Li ◽  
Truyet T. Tran ◽  
Ashley P. Craven ◽  
Tsz-Wing Leung ◽  
Sandy W. Chat ◽  
...  

Neurons in the early visual cortex are finely tuned to different low-level visual features, forming a multi-channel system that analyses the visual image formed on the retina in parallel. However, little is known about the potential ‘cross-talk’ among these channels. Here, we systematically investigated whether stereoacuity, over a large range of target spatial frequencies, can be enhanced by perceptual learning. Using narrow-band visual stimuli, we found that practice with coarse (low spatial frequency) targets substantially improves performance, and that the improvement spreads from coarse to fine (high spatial frequency) three-dimensional perception, generalizing broadly across untrained spatial frequencies and orientations. Notably, we observed an asymmetric transfer of learning across the spatial frequency spectrum: the bandwidth of transfer was broader when training was at a high spatial frequency than at a low spatial frequency, so stereoacuity training is most beneficial with fine targets. This broad transfer of stereoacuity learning contrasts with the highly specific learning reported for other basic visual functions. We also revealed strategies to boost learning outcomes ‘beyond-the-plateau’. Our investigations contribute to understanding the functional properties of the network subserving stereovision. The ability to generalize may provide a key principle for restoring impaired binocular vision in clinical situations.


2021 ◽  
Vol 21 (9) ◽  
pp. 2526
Author(s):  
Kirsten Petras ◽  
Sanne Ten Oever ◽  
Sarang S. Dalal ◽  
Valerie Goffaux

2019 ◽  
Vol 31 (1) ◽  
pp. 49-63 ◽  
Author(s):  
Maryam Vaziri-Pashkam ◽  
JohnMark Taylor ◽  
Yaoda Xu

Primate ventral and dorsal visual pathways both contain visual object representations. Dorsal regions receive more input from the magnocellular system, whereas ventral regions receive input from both the magnocellular and parvocellular systems. Because of potential differences in the spatial sensitivities of the magnocellular and parvocellular systems, object representations in ventral and dorsal regions may differ in how they represent visual input from different spatial scales. To test this prediction, we asked observers to view blocks of images from six object categories, shown in full spectrum, high spatial frequency (SF), or low SF. We found robust object category decoding in all SF conditions, as well as SF decoding, in nearly all of the early visual, ventral, and dorsal regions examined. Cross-SF decoding further revealed that object category representations in all regions exhibited substantial tolerance across the SF components. No difference between ventral and dorsal regions was found in their preference for the different SF components. Further comparisons revealed that, whereas differences in the SF component separated object category representations in early visual areas, such a separation was much smaller in downstream ventral and dorsal regions. In those regions, variations among the object categories played a more significant role in shaping the visual representational structures. Our findings show that ventral and dorsal regions are similar in how they represent visual input from different spatial scales and argue against a dissociation of these regions based on differential sensitivity to different SFs.
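The cross-SF decoding analysis referred to here amounts to training a classifier on response patterns from one spatial frequency condition and testing it on another; above-chance transfer indicates SF-tolerant category information. Below is a toy sketch with simulated response patterns and a generic linear classifier; the array shapes, noise level, and classifier choice are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of cross-SF decoding: train a category classifier on response
# patterns from one spatial frequency condition and test it on another.
# The data here are simulated; shapes and labels are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_voxels, n_categories = 120, 200, 6

# Simulated response patterns: a shared category signal plus condition-specific noise.
category = rng.integers(0, n_categories, n_trials)
signal = rng.standard_normal((n_categories, n_voxels))
high_sf = signal[category] + 0.8 * rng.standard_normal((n_trials, n_voxels))
low_sf = signal[category] + 0.8 * rng.standard_normal((n_trials, n_voxels))

clf = LogisticRegression(max_iter=1000)           # generic linear classifier
clf.fit(high_sf, category)                        # train on high-SF trials
cross_sf_accuracy = clf.score(low_sf, category)   # test on low-SF trials
print(f"cross-SF decoding accuracy: {cross_sf_accuracy:.2f}")
```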


2006 ◽  
Vol 23 (5) ◽  
pp. 729-739 ◽  
Author(s):  
Christophe Lalanne ◽  
Jean Lorenceau

We report the results of psychophysical experiments with the so-called barber pole stimulus, providing new insights into the neuronal processes underlying the analysis of moving features such as terminators or line-endings. In experiment 1, we show that the perceived direction of a barber pole stimulus, induced by line-ending motion, is highly dependent on the spatial frequency and contrast of the grating stimulus: perceived direction is shifted away from the barber pole illusion at high spatial frequency in a contrast-dependent way, suggesting that line-ends are not processed at high spatial scales. In subsequent experiments, we use a contrast adaptation paradigm and a masking paradigm in an attempt to assess the spatial structure and location of the receptive fields that process line-endings. We show that the adapting stimulus that most weakens the barber pole illusion is localized within the barber pole stimulus and not at the line-endings' locations. Current models of line-ending motion processing are discussed in the light of these psychophysical results.
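For reference, a barber pole display is an obliquely drifting grating viewed through an elongated aperture; the line-endings at the aperture borders bias perceived motion along the aperture's long axis. A minimal sketch of such a frame generator, with arbitrary parameter values rather than those used in these experiments:

```python
# Minimal sketch of a barber pole stimulus: an obliquely drifting sinusoidal
# grating viewed through an elongated rectangular aperture. Parameters
# (spatial frequency, contrast, aperture shape) are illustrative choices.
import numpy as np

def barber_pole_frame(t, size=256, cycles=12, contrast=0.5, speed=0.05):
    """One frame of a 45-degree grating drifting behind a tall, narrow aperture."""
    x, y = np.meshgrid(np.linspace(0, 1, size), np.linspace(0, 1, size))
    phase = 2 * np.pi * (cycles * (x + y) / np.sqrt(2) - speed * t)
    grating = 0.5 + 0.5 * contrast * np.sin(phase)                 # oblique drifting grating
    aperture = (np.abs(x - 0.5) < 0.1) & (np.abs(y - 0.5) < 0.4)   # tall, narrow window
    return np.where(aperture, grating, 0.5)                        # mean gray outside the aperture

frames = [barber_pole_frame(t) for t in range(60)]  # a one-second clip at 60 frames
# Despite the oblique grating motion, the visible line-endings at the aperture
# edges bias perceived direction along the aperture's long axis.
```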


2012 ◽  
Vol 108 (5) ◽  
pp. 1228-1243 ◽  
Author(s):  
Amol Gharat ◽  
Curtis L. Baker

From our daily experience, it is clear that relative motion cues can contribute to correctly identifying object boundaries and perceiving depth. Motion-defined contours are generated not only by the motion of objects in a scene but also by the movement of an observer's head and body (motion parallax). However, the neural mechanism involved in detecting these contours is still unknown. To explore this mechanism, we extracellularly recorded visual responses of area 18 neurons in anesthetized and paralyzed cats. The goal of this study was to determine whether motion-defined contours could be detected by neurons previously shown to detect luminance-, texture-, and contrast-defined contours in a cue-invariant manner. Motion-defined contour stimuli were generated by modulating the velocity of high spatial frequency sinusoidal luminance gratings (carrier gratings) with a moving square-wave envelope. The carrier gratings were outside the luminance passband of a neuron, such that the presence of the carrier alone within the receptive field did not elicit a response. Most neurons that responded to contrast-defined contours also responded to motion-defined contours. The orientation and direction selectivity of these neurons for motion-defined contours was similar to that for luminance gratings. A given neuron also exhibited similar selectivity for the spatial frequency of the carrier gratings of contrast- and motion-defined contours. These results suggest that different second-order contours are detected in a form-cue-invariant manner, through a common neural mechanism in area 18.
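The second-order stimulus construction described here, a fine carrier whose drift is modulated by a coarse envelope, can be sketched as follows; the parameter values and the specific form of the velocity modulation are illustrative assumptions, not the settings used in the recordings.

```python
# Minimal sketch of a motion-defined contour: a high spatial frequency carrier
# grating whose drift direction alternates across a coarse, slowly moving
# square-wave envelope, so the boundary is defined by relative motion rather
# than by luminance. Parameter values are illustrative only.
import numpy as np

def motion_defined_frame(t, size=256, carrier_cycles=40, envelope_cycles=2,
                         carrier_speed=0.02, envelope_speed=0.002):
    x, y = np.meshgrid(np.linspace(0, 1, size), np.linspace(0, 1, size))
    # Coarse, slowly drifting square-wave envelope (values +1 or -1).
    env = np.where(np.sin(2 * np.pi * (envelope_cycles * y - envelope_speed * t)) >= 0,
                   1.0, -1.0)
    # The fine carrier drifts in opposite directions on either side of the envelope boundary.
    phase = 2 * np.pi * (carrier_cycles * x - env * carrier_speed * t)
    return 0.5 + 0.5 * np.sin(phase)

frames = [motion_defined_frame(t) for t in range(120)]
# The envelope contributes no luminance modulation of its own; the contour
# emerges only from the opposed drift of the fine carrier across frames.
```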


Perception ◽  
1997 ◽  
Vol 26 (9) ◽  
pp. 1169-1180 ◽  
Author(s):  
Denis M Parker ◽  
J Roly Lishman ◽  
Jim Hughes

In two experiments, low-pass and high-pass spatially filtered versions of a base image were prepared and the effect of the order of delivery of sequences of filtered and base images was investigated. The task required subjects to discriminate 120 ms presentations of a full-bandwidth base image from degraded sequences containing sets of three different spatially filtered versions, or mixtures of spatially filtered and full-bandwidth versions, of the image. Each set of images used in the degraded sequences was presented so that, within the 120 ms presentation window, the spatial content swept either from low to high spatial frequencies or from high to low. In experiment 1, twenty subjects discriminated between a base image and degraded sequences of an urban scene. Results showed both a significant overall effect of image order, with low-to-high spatial-frequency delivery being mistaken for the full-bandwidth presentation more often than high-to-low, and that different sets of degraded image sequences varied significantly in how often they were mistaken for the full-bandwidth presentation. In experiment 2, a base image and filtered versions of a human face were used in an identical task with twenty different subjects, and a very similar pattern of significant results was obtained, although with a lower overall error frequency than in experiment 1. It was concluded that the results of both experiments provide evidence for an anisotropic temporospatial integration mechanism in which spatial information delivered in a low-to-high spatial-frequency sequence is integrated more efficiently than in a high-to-low sequence.
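One way to build ordered filtered sequences of the kind described above is to decompose the base image into spatial frequency bands (here via differences of Gaussians) and present them either coarse-to-fine or fine-to-coarse. The filters and cutoffs below are illustrative assumptions; the experiments used specific low-pass and high-pass filtered versions of the base image.

```python
# Minimal sketch of ordered spatially filtered sequences: the base image is split
# into coarse-to-fine band-pass layers (differences of Gaussians) that can be
# presented in either order. Sigmas are illustrative choices only.
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_bands(base, sigmas=(16.0, 8.0, 4.0, 2.0)):
    """Return band-pass layers of `base`, ordered from coarse to fine."""
    blurred = [gaussian_filter(base, s) for s in sigmas]   # decreasing sigma = finer detail kept
    return [blurred[i + 1] - blurred[i] for i in range(len(blurred) - 1)]

rng = np.random.default_rng(1)
base_image = rng.standard_normal((256, 256))     # stand-in for an urban scene or face
coarse_to_fine = spatial_bands(base_image)       # low -> high spatial frequency order
fine_to_coarse = coarse_to_fine[::-1]            # high -> low spatial frequency order
# In the experiments, each three-image sequence filled a 120 ms window and was
# compared against a single full-bandwidth presentation of the base image.
```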

