Asymmetries around the visual field: From retina to cortex to behavior

2022 ◽  
Vol 18 (1) ◽  
pp. e1009771
Author(s):  
Eline R. Kupers ◽  
Noah C. Benson ◽  
Marisa Carrasco ◽  
Jonathan Winawer

Visual performance varies around the visual field. It is best near the fovea compared to the periphery, and at iso-eccentric locations it is best on the horizontal, intermediate on the lower, and poorest on the upper meridian. The fovea-to-periphery performance decline is linked to the decreases in cone density, retinal ganglion cell (RGC) density, and V1 cortical magnification factor (CMF) as eccentricity increases. The origins of polar angle asymmetries are not well understood. Optical quality and cone density vary across the retina, but recent computational modeling has shown that these factors can account for only a small fraction of the behavioral differences. Here, we investigate how visual processing beyond the cone photon absorptions contributes to polar angle asymmetries in performance. First, we quantify the extent of asymmetries in cone density, midget RGC density, and V1 CMF. We find that both polar angle asymmetries and eccentricity gradients increase from cones to mRGCs, and from mRGCs to cortex. Second, we extend our previously published computational observer model to quantify the contribution of phototransduction by the cones and spatial filtering by mRGCs to behavioral asymmetries. Starting with photons emitted by a visual display, the model simulates the effect of human optics, cone isomerizations, phototransduction, and mRGC spatial filtering. The model performs a forced-choice orientation discrimination task on mRGC responses using a linear support vector machine classifier. The model shows that asymmetries in a decision maker's performance across polar angle are greater when assessing the photocurrents than when assessing isomerizations, and greater still when assessing mRGC signals. Nonetheless, the polar angle asymmetries of the mRGC outputs are still considerably smaller than those observed in human performance. We conclude that cone isomerizations, phototransduction, and the spatial filtering properties of mRGCs contribute to polar angle performance differences, but that a full account of these differences will entail additional contributions from cortical representations.
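
The decision stage of this model is simple enough to sketch. The Python snippet below is a minimal illustration, not the authors' ISETBio implementation: it simulates Poisson-noisy responses to two Gabor orientations and scores a cross-validated linear SVM, in the spirit of the forced-choice classifier described above. The ±15° task angle, the noise model, and every parameter are illustrative assumptions.

```python
# Minimal sketch of a computational-observer decision stage:
# classify two Gabor orientations from Poisson-noisy responses
# with a linear SVM. All parameters are illustrative, not the
# values used in the paper.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def gabor(theta_deg, size=32, sf=4.0, sigma=0.15):
    """Oriented Gabor patch on a [-0.5, 0.5] grid (assumed parameters)."""
    x = np.linspace(-0.5, 0.5, size)
    xx, yy = np.meshgrid(x, x)
    t = np.deg2rad(theta_deg)
    xr = xx * np.cos(t) + yy * np.sin(t)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * sf * xr)

def noisy_response(stim, mean_rate=50.0):
    """Poisson 'isomerization-like' noise around a baseline rate."""
    return rng.poisson(mean_rate * (1 + 0.5 * stim)).astype(float)

# Two-class orientation discrimination (+/-15 deg is an assumed task angle)
X, y = [], []
for label, theta in enumerate((-15, 15)):
    stim = gabor(theta)
    for _ in range(200):
        X.append(noisy_response(stim).ravel())
        y.append(label)

clf = make_pipeline(StandardScaler(), LinearSVC(dual=False))
acc = cross_val_score(clf, np.array(X), np.array(y), cv=10).mean()
print(f"cross-validated accuracy: {acc:.2f}")
```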

2020 ◽  
Author(s):  
Eline R. Kupers ◽  
Noah C. Benson ◽  
Marisa Carrasco ◽  
Jonathan Winawer

Visual performance varies around the visual field. It is best near the fovea compared to the periphery, and at iso-eccentric locations it is best on the horizontal, intermediate on the lower, and poorest on the upper meridian. The fovea-to-periphery performance decline is linked to the decreases in cone density, retinal ganglion cell (RGC) density, and V1 cortical magnification factor (CMF) as eccentricity increases. The origins of radial asymmetries are not well understood. Optical quality and cone density vary across the retina, but recent computational modeling has shown that these factors can account for only a small fraction of the behavioral differences. Here, we investigate how visual processing beyond the cones contributes to radial asymmetries in performance. First, we quantify the extent of asymmetries in cone density, midget RGC density, and V1-V2 CMF. We find that both radial asymmetries and eccentricity gradients are amplified from cones to mRGCs, and from mRGCs to cortex. Second, we extend our previously published computational observer model to quantify the contribution of spatial filtering by mRGCs to behavioral asymmetries. Starting with photons emitted by a visual display, the model simulates the effect of human optics, fixational eye movements, cone isomerizations, and mRGC spatial filtering. The model performs a forced-choice orientation discrimination task on mRGC responses using a linear support vector machine classifier. The model shows radial asymmetries in performance that are larger than those from a model operating on the cone outputs, but considerably smaller than those observed in human performance. We conclude that the spatial filtering properties of mRGCs contribute to radial performance differences, but that a full account of these differences will entail a large contribution from cortical representations.
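
The stage new to this version of the model, fixational eye movements, can also be sketched compactly. Below is a minimal, assumption-laden illustration (not the authors' ISETBio code) that models fixational drift as a 2-D random walk and applies it as frame-by-frame shifts of the stimulus; the step size, frame count, and units are made up for the example.

```python
# Hedged sketch of fixational drift as a 2-D random walk, one common
# way to model small eye movements in observer models like the one
# above. Step size and frame count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def drift_path(n_frames=28, step_sd=0.3):
    """Cumulative-sum random walk (units: cone spacings per frame)."""
    steps = rng.normal(0.0, step_sd, size=(n_frames, 2))
    return np.cumsum(steps, axis=0)

def apply_drift(stimulus, path):
    """Shift the stimulus by the (rounded) eye position on each frame."""
    frames = []
    for dx, dy in np.round(path).astype(int):
        frames.append(np.roll(stimulus, shift=(dy, dx), axis=(0, 1)))
    return np.stack(frames)  # (n_frames, H, W) retinal image sequence

stim = rng.random((64, 64))       # stand-in for a stimulus image
movie = apply_drift(stim, drift_path())
print(movie.shape)                # (28, 64, 64)
```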


2018 ◽  
Author(s):  
Eline R. Kupers ◽  
Marisa Carrasco ◽  
Jonathan Winawer

Visual performance depends on polar angle, even when eccentricity is held constant; on many psychophysical tasks observers perform best when stimuli are presented on the horizontal meridian, worst on the upper vertical, and intermediate on the lower vertical meridian. This variation in performance 'around' the visual field can be as pronounced as that of doubling the stimulus eccentricity. The causes of these asymmetries in performance are largely unknown. Some factors in the eye, e.g. cone density, are positively correlated with the reported variations in visual performance with polar angle. However, the question remains whether such correlations can quantitatively explain the perceptual differences observed 'around' the visual field. To investigate the extent to which the earliest stages of vision (optical quality and cone density) contribute to performance differences with polar angle, we created a computational observer model. The model uses the open-source software package ISETBIO to simulate an orientation discrimination task for which visual performance differs with polar angle. The model starts from the photons emitted by a display, which pass through simulated human optics with fixational eye movements, followed by cone isomerizations in the retina. Finally, we train a linear support vector machine classifier on the photon absorptions to classify stimulus orientation. To account for the 30% increase in contrast thresholds for the upper vertical compared to the horizontal meridian, as observed psychophysically on the same task, our computational observer model would require either an increase of ~7 diopters of defocus or a five-fold reduction in cone density. These values far exceed the actual variations as a function of polar angle observed in human eyes. Therefore, we conclude that these factors in the eye account for only a small fraction of differences in visual performance with polar angle. Substantial additional asymmetries must arise in later retinal and/or cortical processing.

Author Summary: A fundamental goal in computational neuroscience is to link known facts from biology with behavior. Here, we considered visual behavior, specifically the fact that people are better at visual tasks performed to the left or right of the center of gaze, compared to above or below at the same distance from gaze. We sought to understand what aspects of biology govern this fundamental pattern in visual behavior. To do so, we implemented a computational observer model that incorporates known facts about the front end of the human visual system, including optics, eye movements, and the photoreceptor array in the retina. We found that even though some of these properties are correlated with performance, they fall far short of quantitatively explaining it. We conclude that later stages of processing in the nervous system greatly amplify small differences in the way the eye samples the visual world, resulting in strikingly different performance around the visual field.
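
To put the ~7 diopters figure in context, a standard geometric-optics approximation says the angular diameter of the defocus blur disc is roughly the pupil diameter (in meters) times the defocus (in diopters). The short sketch below applies that approximation; the 3-mm pupil is an assumed value for illustration.

```python
# Hedged sketch: geometric-optics approximation of defocus blur.
# Blur-disc angular diameter (radians) ~= pupil diameter (m) * defocus (D).
# The 3-mm pupil is an assumption for illustration.
import numpy as np

def blur_disc_arcmin(defocus_diopters, pupil_mm=3.0):
    """Angular blur-disc diameter (arcmin) for a given defocus."""
    radians = (pupil_mm * 1e-3) * defocus_diopters
    return np.degrees(radians) * 60

for d in (0.5, 1.0, 7.0):
    print(f"{d:4.1f} D -> {blur_disc_arcmin(d):5.1f} arcmin blur disc")
# ~7 D with a 3-mm pupil yields a ~72-arcmin blur disc, far larger
# than the sub-diopter optical variations measured across polar angle.
```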


2020 ◽  
Author(s):  
Antoine Barbot ◽  
Shutian Xue ◽  
Marisa Carrasco

Human vision is heterogeneous around the visual field. At a fixed eccentricity, performance is better along the horizontal than the vertical meridian, and along the lower than the upper vertical meridian. These asymmetric patterns, termed performance fields, have been found in numerous visual tasks, including those mediated by contrast sensitivity and spatial resolution. However, it is unknown whether spatial resolution asymmetries are confined to the cardinal meridians or whether, and how far, they extend into the upper and lower hemifields. Here, we measured visual acuity at isoeccentric peripheral locations (10° eccentricity), every 15° of polar angle. On each trial, observers judged the orientation (±45°) of one out of four equidistant, suprathreshold grating stimuli varying in spatial frequency (SF). On each block, we measured performance as a function of stimulus SF at 4 out of 24 isoeccentric locations. We estimated the 75%-correct SF threshold, the SF cutoff (i.e., the SF at which performance fell to chance), and the slope of the psychometric function for each location. We found higher SF estimates (i.e., better acuity) for the horizontal than the vertical meridian, and for the lower than the upper vertical meridian. These asymmetries were most pronounced at the cardinal meridians and decreased gradually as the angular distance from the vertical meridian increased. This gradual change in acuity with polar angle reflected a shift of the psychometric function without changes in slope. The same pattern was found under binocular and monocular viewing conditions. These findings advance our understanding of visual processing around the visual field and help constrain models of visual perception.
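
As a concrete illustration of the psychometric quantities reported here, the sketch below fits a descending logistic (one of several reasonable choices; the authors' exact fitting procedure may differ) to made-up percent-correct data at one location and reads off the 75%-correct SF threshold and slope.

```python
# Hedged sketch: fit a descending psychometric function to
# percent-correct vs. spatial frequency, then read off the
# 75%-correct SF threshold. Data values are made up.
import numpy as np
from scipy.optimize import curve_fit

def pf(log_sf, thresh, slope):
    """Logistic falling from 1.0 to 0.5 (chance) as SF increases;
    pf(thresh) = 0.75 by construction."""
    return 0.5 + 0.5 / (1.0 + np.exp(slope * (log_sf - thresh)))

sf = np.array([1, 2, 4, 8, 12, 16], dtype=float)      # cycles/deg
pc = np.array([0.99, 0.98, 0.92, 0.71, 0.58, 0.52])   # proportion correct

(thresh, slope), _ = curve_fit(pf, np.log(sf), pc, p0=(np.log(8), 2.0))
sf75 = np.exp(thresh)
print(f"75%-correct SF threshold: {sf75:.1f} c/deg, slope: {slope:.1f}")
```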


2020 ◽  
Author(s):  
Simran Purokayastha ◽  
Mariel Roberts ◽  
Marisa Carrasco

Performance as a function of polar angle at isoeccentric locations across the visual field is known as a performance field (PF) and is characterized by two asymmetries: the horizontal-vertical anisotropy (HVA) and the vertical meridian asymmetry (VMA). Exogenous (involuntary) spatial attention does not affect the shape of the PF, improving performance similarly across polar angle. Here we investigated whether endogenous (voluntary) spatial attention, a more flexible mechanism, can attenuate these perceptual asymmetries. Twenty participants performed an orientation discrimination task while their endogenous attention was either directed to the target location or distributed across all possible locations. The effects of attention were assessed either using the same stimulus contrast across locations, or by equating difficulty across locations using individually titrated contrast thresholds. In both experiments, endogenous attention similarly improved performance at all locations, maintaining the canonical PF shape. Thus, despite its voluntary nature, endogenous attention, like exogenous attention, cannot alleviate perceptual asymmetries at isoeccentric locations.
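
For reference, the two asymmetries can be quantified with a simple index. The sketch below uses a (difference / mean) form, which is one common convention, though definitions vary across studies; the accuracy values are made up.

```python
# Hedged sketch: quantify performance-field asymmetries from accuracy
# (or d') at the four cardinal locations. The (difference / mean) index
# is one common convention; others exist.
def asymmetry_index(a, b):
    """Percent asymmetry: positive when a exceeds b."""
    return 100 * (a - b) / ((a + b) / 2)

perf = {"left": 0.88, "right": 0.90, "upper": 0.74, "lower": 0.82}  # made-up

horizontal = (perf["left"] + perf["right"]) / 2
vertical = (perf["upper"] + perf["lower"]) / 2
print(f"HVA: {asymmetry_index(horizontal, vertical):.1f}%")
print(f"VMA: {asymmetry_index(perf['lower'], perf['upper']):.1f}%")
```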


2020 ◽  
Author(s):  
Luiza Kirasirova ◽  
Vladimir Bulanov ◽  
Alexei Ossadtchi ◽  
Alexander Kolsanov ◽  
Vasily Pyatin ◽  
...  

A P300 brain-computer interface (BCI) is a paradigm in which text characters are decoded from visual evoked potentials (VEPs). In a popular implementation, called the P300 speller, a subject looks at a display where characters are flashing and selects one character by attending to it. The selection is identified as the character whose flashes evoke the strongest VEP. The speller performs well when cortical responses to target and non-target stimuli are sufficiently different. Although many strategies have been proposed for improving spelling performance, a relatively simple one has received insufficient attention in the literature: restricting the visual field to diminish the contribution from non-target stimuli. Previously, this idea was implemented in a single-stimulus switch that issued an urgent command. To explore this approach further, we ran a pilot experiment in which ten subjects first operated a traditional P300 speller and then wore a binocular aperture that confined their sight to the central visual field. Visual field restriction resulted in a reduction of non-target responses in all subjects. Moreover, in four subjects, target-related VEPs became more distinct. We suggest that this approach could speed up BCI operations and reduce user fatigue. Additionally, instead of wearing an aperture, non-targets could be removed algorithmically or with a hybrid interface that utilizes an eye tracker. We further discuss how a P300 speller could be improved by taking advantage of the different physiological properties of the central and peripheral vision. Finally, we suggest that the proposed experimental approach could be used in basic research on the mechanisms of visual processing.
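
The core selection rule of a P300 speller reduces to a few lines: average the EEG epochs following each character's flashes and pick the character with the largest response in the P300 window. The sketch below runs on synthetic data; the matrix size, repetition count, and scoring window are illustrative assumptions.

```python
# Hedged sketch of the core P300-speller decision on synthetic data:
# average the epochs following each character's flashes and pick the
# character with the largest P300-like deflection. Shapes and the
# scoring window are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)

n_chars, n_reps, n_samples = 36, 10, 200   # 6x6 matrix, 10 flash repetitions
epochs = rng.normal(0, 1, (n_chars, n_reps, n_samples))
target = 17                                 # index of the attended character
epochs[target, :, 60:90] += 1.0             # injected P300-like bump (~300 ms)

erp = epochs.mean(axis=1)                   # average over repetitions
scores = erp[:, 60:90].mean(axis=1)         # mean amplitude in the P300 window
print("decoded character index:", int(np.argmax(scores)))
```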


Perception ◽  
10.1068/p3393 ◽  
2003 ◽  
Vol 32 (4) ◽  
pp. 395-414 ◽  
Author(s):  
Marina V Danilova ◽  
John D Mollon

The visual system is known to contain hard-wired mechanisms that compare the values of a given stimulus attribute at adjacent positions in the visual field; but how are comparisons performed when the stimuli are not adjacent? We ask empirically how well a human observer can compare two stimuli that are separated in the visual field. For the stimulus attributes of spatial frequency, contrast, and orientation, we have measured discrimination thresholds as a function of the spatial separation of the discriminanda. The three attributes were studied in separate experiments, but in all cases the target stimuli were briefly presented Gabor patches. The Gabor patches lay on an imaginary circle, which was centred on the fixation point and had a radius of 5 deg of visual angle. Our psychophysical procedures were designed to ensure that the subject actively compared the two stimuli on each presentation, rather than referring just one stimulus to a stored template or criterion. For the cases of spatial frequency and contrast, there was no systematic effect of spatial separation up to 10 deg. We conclude that the subject's judgment does not depend on discontinuity detectors in the early visual system but on more central codes that represent the two stimuli individually. In the case of orientation discrimination, two naïve subjects performed as in the cases of spatial frequency and contrast; but two highly trained subjects showed a systematic increase of threshold with spatial separation, suggesting that they were exploiting a distal mechanism designed to detect the parallelism or non-parallelism of contours.


2019 ◽  
Vol 11 (12) ◽  
pp. 1405 ◽  
Author(s):  
Razika Bazine ◽  
Huayi Wu ◽  
Kamel Boukhechba

In this article, we propose two effective frameworks for hyperspectral imagery classification based on spatial filtering in the Discrete Cosine Transform (DCT) domain. In the proposed approaches, a spectral DCT is performed on the hyperspectral image to obtain a spectral profile representation, where the most significant information in the transform domain is concentrated in a few low-frequency components. The high-frequency components, which generally represent noisy data, are further processed using a spatial filter to extract the remaining useful information. For the spatial filtering step, both the two-dimensional DCT (2D-DCT) and the two-dimensional adaptive Wiener filter (2D-AWF) are explored. After the spatial filtering, an inverse spectral DCT is applied to all transformed bands, including the filtered bands, to obtain the final preprocessed hyperspectral data, which are subsequently fed into a linear Support Vector Machine (SVM) classifier. Experimental results on three hyperspectral datasets show that the proposed Cascade Spectral DCT Spatial Wiener Filter (CDCT-WF_SVM) framework outperforms several state-of-the-art methods in terms of classification accuracy, robustness to different training-sample sizes, and computational time.
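
A minimal sketch of the Wiener-filter variant of this cascade is given below, under assumed shapes and an assumed low/high-frequency split; it mirrors the described pipeline (spectral DCT, spatial Wiener filtering of the high-frequency planes, inverse DCT, linear SVM) but is not the authors' implementation.

```python
# Hedged sketch of the described cascade (CDCT-WF_SVM flavor): a 1-D DCT
# along the spectral axis, spatial Wiener filtering of the high-frequency
# component planes, an inverse DCT, then a linear SVM on per-pixel spectra.
# The low/high split point and all shapes are illustrative assumptions.
import numpy as np
from scipy.fft import dct, idct
from scipy.signal import wiener
from sklearn.svm import SVC

def cdct_wf(cube, n_low=10):
    """cube: (H, W, bands) hyperspectral image."""
    coeffs = dct(cube, axis=2, norm="ortho")            # spectral DCT
    for b in range(n_low, coeffs.shape[2]):             # spatially filter the
        coeffs[:, :, b] = wiener(coeffs[:, :, b], (3, 3))  # noisy high-freq planes
    return idct(coeffs, axis=2, norm="ortho")           # back to spectral domain

cube = np.random.rand(32, 32, 100)            # stand-in for real data
filtered = cdct_wf(cube)
X = filtered.reshape(-1, filtered.shape[2])   # one spectrum per pixel
y = np.random.randint(0, 5, len(X))           # stand-in labels; use real
clf = SVC(kernel="linear").fit(X[:500], y[:500])  # labels + a proper split
print("train accuracy:", clf.score(X[:500], y[:500]))
```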


2020 ◽  
Vol 287 (1930) ◽  
pp. 20200825
Author(s):  
Zixuan Wang ◽  
Yuki Murai ◽  
David Whitney

Perceiving the positions of objects is a prerequisite for most other visual and visuomotor functions, but human perception of object position varies from one individual to the next. The source of these individual differences in perceived position and their perceptual consequences are unknown. Here, we tested whether idiosyncratic biases in the underlying representation of visual space propagate across different levels of visual processing. In Experiment 1, using a position matching task, we found stable, observer-specific compressions and expansions within local regions throughout the visual field. We then measured Vernier acuity (Experiment 2) and perceived size of objects (Experiment 3) across the visual field and found that individualized spatial distortions were closely associated with variations in both visual acuity and apparent object size. Our results reveal idiosyncratic biases in perceived position and size, originating from a heterogeneous spatial resolution that carries across the visual hierarchy.

