Visual space curves before eye movements

2021 ◽  
Author(s):  
Ifedayo-Emmanuel Adeyefa-Olasupo

Despite the incessant retinal disruptions that necessarily accompany eye movements, our percept of the visual world remains continuous and stable—a phenomenon referred to as spatial constancy. How the visual system achieves spatial constancy remains unclear despite almost four centuries' worth of experimentation. Here I measured visual sensitivity at geometrically symmetric locations, observing transient sensitivity differences between them where none should be observed if the cells that support spatial constancy indeed faithfully translate or converge. These differences, recapitulated by a novel neurobiological mechanical model, reflect an overriding influence of putative visually transient error signals that curve visual space. Intermediate eccentric locations likely to contain retinal disruptions are uniquely affected by curved visual space, suggesting that visual processing at these locations is transiently turned off before an eye movement and, with the gating off of these error signals, turned back on after an eye movement—a possible mechanism underlying spatial constancy.

2018 ◽  
Vol 119 (6) ◽  
pp. 2059-2067 ◽  
Author(s):  
Chris Scholes ◽  
Paul V. McGraw ◽  
Neil W. Roach

During periods of steady fixation, we make small-amplitude ocular movements, termed microsaccades, at a rate of 1–2 every second. Early studies provided evidence that visual sensitivity is reduced during microsaccades—akin to the well-established suppression associated with larger saccades. However, the results of more recent work suggest that microsaccades may alter retinal input in a manner that enhances visual sensitivity to some stimuli. Here we parametrically varied the spatial frequency of a stimulus during a detection task and tracked contrast sensitivity as a function of time relative to microsaccades. Our data reveal two distinct modulations of sensitivity: suppression during the eye movement itself and facilitation after the eye has stopped moving. The magnitude of suppression and facilitation of visual sensitivity is related to the spatial content of the stimulus: suppression is greatest for low spatial frequencies, while sensitivity is enhanced most for stimuli of 1–2 cycles/°, spatial frequencies at which we are already most sensitive in the absence of eye movements. We present a model in which the tuning of suppression and facilitation is explained by delayed lateral inhibition between spatial frequency channels. Our data show that eye movements actively modulate visual sensitivity even during fixation: the detectability of images at different spatial scales can be increased or decreased depending on when the image occurs relative to a microsaccade. NEW & NOTEWORTHY Given the frequency with which we make microsaccades during periods of fixation, it is vital that we understand how they affect visual processing. We demonstrate two selective modulations of contrast sensitivity that are time-locked to the occurrence of a microsaccade: suppression of low spatial frequencies during each eye movement and enhancement of higher spatial frequencies after the eye has stopped moving. These complementary changes may arise naturally because of sluggish gain control between spatial channels.
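
The gain-control account lends itself to a small simulation. The following is a minimal, illustrative sketch, not the authors' fitted model: two spatial-frequency channels in which the retinal transient of a microsaccade drives and then adapts the low-frequency channel, while a delayed, low-passed copy of that activity divisively inhibits a higher-frequency channel. All time constants, weights, and response shapes below are assumptions chosen only to reproduce the qualitative pattern described above.

```python
import numpy as np

# Toy sketch of "sluggish" lateral inhibition between two spatial-frequency
# channels around a microsaccade. Every parameter here is an illustrative
# assumption; the goal is only the qualitative pattern (suppression of low
# spatial frequencies during the movement, facilitation of higher ones after).

dt = 1.0                                    # ms per sample
t = np.arange(-200.0, 400.0, dt)            # time relative to microsaccade onset

# Low-SF channel response to the retinal smear of a ~25 ms microsaccade:
# a transient increase during the movement, then a slower adaptive undershoot.
low_sf = (1.0
          + 1.5 * np.exp(-((t - 12.0) / 10.0) ** 2)
          - 0.6 * np.exp(-((t - 80.0) / 60.0) ** 2))

def delayed_lowpass(x, delay_ms=30.0, tau_ms=60.0):
    """Delay a signal and smooth it with a first-order low-pass filter."""
    d = int(delay_ms / dt)
    y = np.concatenate([np.full(d, x[0]), x[:-d]])
    out = np.empty_like(y)
    out[0] = y[0]
    a = dt / tau_ms
    for i in range(1, len(y)):
        out[i] = out[i - 1] + a * (y[i] - out[i - 1])
    return out

# The higher-SF channel is divisively inhibited by a delayed, low-passed copy
# of low-SF activity; low-SF probes are masked by the concurrent smear itself.
inhibition = delayed_lowpass(low_sf)
sens_low = 1.0 / (1.0 + (low_sf - 1.0).clip(min=0.0))     # 1.0 = baseline
sens_high = (1.0 + 0.8) / (1.0 + 0.8 * inhibition)        # 1.0 = baseline

print(f"low-SF suppression peaks at {t[np.argmin(sens_low)]:+.0f} ms "
      f"(sensitivity ratio {sens_low.min():.2f})")
print(f"high-SF facilitation peaks at {t[np.argmax(sens_high)]:+.0f} ms "
      f"(sensitivity ratio {sens_high.max():.2f})")
```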


2019 ◽  
Author(s):  
Saad Idrees ◽  
Matthias P. Baumann ◽  
Felix Franke ◽  
Thomas A. Münch ◽  
Ziad M. Hafed

Visual sensitivity, probed through perceptual detectability of very brief visual stimuli, is strongly impaired around the time of rapid eye movements. This robust perceptual phenomenon, called saccadic suppression, is frequently attributed to active suppressive signals that are directly derived from eye movement commands. Here we show instead that visual-only mechanisms, activated by saccade-induced image shifts, can account for all perceptual properties of saccadic suppression that we have investigated. Such mechanisms start at, but are not necessarily exclusive to, the very first stage of visual processing in the brain, the retina. Critically, neural suppression originating in the retina outlasts perceptual suppression around the time of saccades, suggesting that extra-retinal movement-related signals, rather than causing suppression, may instead act to shorten it. Our results demonstrate a far-reaching contribution of visual processing mechanisms to perceptual saccadic suppression, starting in the retina, without the need to invoke explicit motor-based suppression commands.


2019 ◽  
Vol 121 (2) ◽  
pp. 646-661 ◽  
Author(s):  
Marie E. Bellet ◽  
Joachim Bellet ◽  
Hendrikje Nienborg ◽  
Ziad M. Hafed ◽  
Philipp Berens

Saccades are ballistic eye movements that rapidly shift gaze from one location of visual space to another. Detecting saccades in eye movement recordings is important not only for studying the neural mechanisms underlying sensory, motor, and cognitive processes, but also as a clinical and diagnostic tool. However, automatically detecting saccades can be difficult, particularly when such saccades are generated in coordination with other tracking eye movements, like smooth pursuits, or when the saccade amplitude is close to eye tracker noise levels, like with microsaccades. In such cases, labeling by human experts is required, but this is a tedious task prone to variability and error. We developed a convolutional neural network to automatically detect saccades at human-level accuracy and with minimal training examples. Our algorithm surpasses state of the art according to common performance metrics and could facilitate studies of neurophysiological processes underlying saccade generation and visual processing. NEW & NOTEWORTHY Detecting saccades in eye movement recordings can be a difficult task, but it is a necessary first step in many applications. We present a convolutional neural network that can automatically identify saccades with human-level accuracy and with minimal training examples. We show that our algorithm performs better than other available algorithms, by comparing performance on a wide range of data sets. We offer an open-source implementation of the algorithm as well as a web service.
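
To make the approach concrete, here is a minimal sketch of a per-sample saccade labeller: a small 1D convolutional network applied to horizontal and vertical eye traces. The layer sizes, kernel widths, and training step are illustrative assumptions and do not reproduce the authors' published architecture or training procedure.

```python
import torch
import torch.nn as nn

# Minimal sketch of a 1-D convolutional saccade labeller in the spirit of the
# approach described above (per-time-bin classification of eye traces). The
# architecture and the training step below are illustrative assumptions only.

class SaccadeNet(nn.Module):
    def __init__(self, in_channels: int = 2, hidden: int = 16):
        super().__init__()
        # Input: horizontal and vertical eye velocity, shape (batch, 2, time).
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=9, padding=4),
            nn.ReLU(),
            nn.Conv1d(hidden, 1, kernel_size=1),   # per-time-bin saccade logit
        )

    def forward(self, xy_velocity: torch.Tensor) -> torch.Tensor:
        # Returns per-sample saccade probabilities, shape (batch, time).
        return torch.sigmoid(self.net(xy_velocity)).squeeze(1)


if __name__ == "__main__":
    model = SaccadeNet()
    # Fake batch: 4 traces of 1000 samples of horizontal/vertical velocity.
    traces = torch.randn(4, 2, 1000)
    labels = (torch.rand(4, 1000) > 0.95).float()   # hand-labelled saccade bins
    probs = model(traces)
    loss = nn.functional.binary_cross_entropy(probs, labels)
    loss.backward()                                  # one illustrative training step
    print(probs.shape, float(loss))
```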


2019 ◽  
Vol 116 (6) ◽  
pp. 2027-2032 ◽  
Author(s):  
Jasper H. Fabius ◽  
Alessio Fracasso ◽  
Tanja C. W. Nijboer ◽  
Stefan Van der Stigchel

Humans move their eyes several times per second, yet we perceive the outside world as continuous despite the sudden disruptions created by each eye movement. To date, the mechanism that the brain employs to achieve visual continuity across eye movements remains unclear. While it has been proposed that the oculomotor system quickly updates and informs the visual system about the upcoming eye movement, behavioral studies investigating the time course of this updating suggest the involvement of a slow mechanism, estimated to take more than 500 ms to operate effectively. This is a surprisingly slow estimate, because both the visual system and the oculomotor system process information faster. If spatiotopic updating is indeed this slow, it cannot contribute to perceptual continuity, because it is outside the temporal regime of typical oculomotor behavior. Here, we argue that the behavioral paradigms that have been used previously are suboptimal to measure the speed of spatiotopic updating. In this study, we used a fast gaze-contingent paradigm, using high phi as a continuous stimulus across eye movements. We observed fast spatiotopic updating within 150 ms after stimulus onset. The results suggest the involvement of a fast updating mechanism that predictively influences visual perception after an eye movement. The temporal characteristics of this mechanism are compatible with the rate at which saccadic eye movements are typically observed in natural viewing.
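
For readers unfamiliar with gaze-contingent paradigms, the sketch below illustrates the basic control loop: monitor gaze at a high sampling rate, detect the saccade online with a velocity criterion, and change the display as soon as possible. Here `read_gaze` is a fake generator standing in for a real eye tracker and `draw_stimulus` for a real display call; the threshold, sampling rate, and simulated saccade are illustrative assumptions rather than the study's actual hardware or parameters.

```python
import math
import random

SAMPLE_RATE_HZ = 1000
SACCADE_VEL_DEG_S = 30.0          # simple online velocity criterion

def read_gaze():
    """Fake 1 kHz gaze stream: ~300 ms of fixation, then a rapid 8-deg saccade."""
    x = 0.0
    for i in range(1000):
        t = i / SAMPLE_RATE_HZ
        if 0.300 <= t < 0.330:                      # ~30 ms saccade
            x += 8.0 / 30.0                         # deg of displacement per sample
        yield x + random.gauss(0.0, 0.003), 0.0, t  # (x_deg, y_deg, time_s)

def draw_stimulus(phase):
    """Stand-in for a real display update."""
    print(f"display switched to {phase!r}")

def run_trial():
    samples = read_gaze()
    x0, y0, t0 = next(samples)
    for x1, y1, t1 in samples:
        velocity = math.hypot(x1 - x0, y1 - y0) / max(t1 - t0, 1e-6)   # deg/s
        if velocity > SACCADE_VEL_DEG_S:
            draw_stimulus("post-saccadic stimulus")    # swap as soon as detected
            return t1
        x0, y0, t0 = x1, y1, t1
    return None

print("gaze-contingent swap issued at", run_trial(), "s")
```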


2017 ◽  
Vol 117 (2) ◽  
pp. 492-508 ◽  
Author(s):  
James E. Niemeyer ◽  
Michael A. Paradiso

Contrast sensitivity is fundamental to natural visual processing and an important tool for characterizing both visual function and clinical disorders. We simultaneously measured contrast sensitivity and neural contrast response functions and compared measurements in common laboratory conditions with naturalistic conditions. In typical experiments, a subject holds fixation and a stimulus is flashed on, whereas in natural vision, saccades bring stimuli into view. Motivated by our previous V1 findings, we tested the hypothesis that perceptual contrast sensitivity is lower in natural vision and that this effect is associated with corresponding changes in V1 activity. We found that contrast sensitivity and V1 activity are correlated and that the relationship is similar in laboratory and naturalistic paradigms. However, in the more natural situation, contrast sensitivity is reduced up to 25% compared with that in a standard fixation paradigm, particularly at lower spatial frequencies, and this effect correlates with significant reductions in V1 responses. Our data suggest that these reductions in natural vision result from fast adaptation on one fixation that lowers the response on a subsequent fixation. This is the first demonstration of rapid, natural-image adaptation that carries across saccades, a process that appears to constantly influence visual sensitivity in natural vision. NEW & NOTEWORTHY Visual sensitivity and activity in brain area V1 were studied in a paradigm that included saccadic eye movements and natural visual input. V1 responses and contrast sensitivity were significantly reduced compared with results in common laboratory paradigms. The parallel neural and perceptual effects of eye movements and stimulus complexity appear to be due to a form of rapid adaptation that carries across saccades.
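
Contrast response functions of the kind measured here are commonly summarized with a Naka-Rushton (hyperbolic-ratio) fit, R(c) = baseline + Rmax * c^n / (c^n + c50^n). The sketch below fits that function to made-up example values with SciPy; the numbers, including the roughly 25% scaling between conditions, are purely illustrative and are not the authors' data or analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic Naka-Rushton fit to hypothetical contrast response data. The values
# below are invented for illustration; only the direction of the effect
# (reduced responses in the more naturalistic condition) echoes the abstract.

def naka_rushton(c, r_max, c50, n, baseline):
    """Response as a function of stimulus contrast c (0-1)."""
    return baseline + r_max * c**n / (c**n + c50**n)

contrast = np.array([0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.0])
rate_fixation = np.array([6, 9, 15, 25, 36, 44, 47], dtype=float)   # spikes/s
rate_saccade = rate_fixation * 0.75                                 # illustrative

for label, rate in [("fixation", rate_fixation), ("saccade", rate_saccade)]:
    params, _ = curve_fit(naka_rushton, contrast, rate,
                          p0=[40.0, 0.2, 2.0, 5.0], maxfev=10000)
    r_max, c50, n, baseline = params
    print(f"{label}: Rmax={r_max:.1f}, c50={c50:.2f}, n={n:.1f}, base={baseline:.1f}")
```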


2020 ◽  
Author(s):  
Doris Voina ◽  
Stefano Recanatesi ◽  
Brian Hu ◽  
Eric Shea-Brown ◽  
Stefan Mihalas

As animals adapt to their environments, their brains are tasked with processing stimuli in different sensory contexts. Whether these computations are context dependent or independent, they are all implemented in the same neural tissue. A crucial question is what neural architectures can respond flexibly to a range of stimulus conditions and switch between them. This is a particular case of flexible architecture that permits multiple related computations within a single circuit. Here, we address this question in the specific case of the visual system circuitry, focusing on context integration, defined as the integration of feedforward and surround information across visual space. We show that a biologically inspired microcircuit with multiple inhibitory cell types can switch between visual processing of the static context and the moving context. In our model, the VIP population acts as the switch and modulates the visual circuit through a disinhibitory motif. Moreover, the VIP population is efficient, requiring only a relatively small number of neurons to switch contexts. This circuit eliminates noise in videos by using appropriate lateral connections for contextual spatio-temporal surround modulation, yielding superior denoising performance compared to circuits in which only one context is learned. Our findings shed light on a minimally complex architecture that is capable of switching between two naturalistic contexts using few switching units.

Author Summary: The brain processes information at all times, and much of that information is context dependent. The visual system presents an important example: processing is ongoing, but the context changes dramatically when an animal is still vs. running. How is context-dependent information processing achieved? We take inspiration from recent neurophysiology studies on the role of distinct cell types in primary visual cortex (V1). We find that relatively few “switching units” — akin to the VIP neuron type in V1 in that they turn on and off in the running vs. still context and have connections to and from the main population — are sufficient to drive context-dependent image processing. We demonstrate this in a model of feature integration and in a test of image denoising. The underlying circuit architecture illustrates a concrete computational role for the multiple cell types under increasing study across the brain, and may inspire more flexible, neurally inspired computing architectures.
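
As a concrete illustration of the disinhibitory switch, the sketch below implements a toy rate circuit in which a VIP-like unit silences an SST-like population, and SST activity selects which of two sets of lateral (contextual) weights shapes the excitatory units. The sizes, weights, and dynamics are placeholder assumptions, not the circuit fitted in the paper.

```python
import numpy as np

# Toy disinhibitory "context switch": VIP inhibits SST, and SST activity gates
# which set of lateral (contextual) weights the excitatory population expresses.
# All weights and dynamics are placeholders chosen for illustration only.

rng = np.random.default_rng(0)
n_exc = 20

# Two learned lateral-interaction profiles: one for the static context, one for
# the moving context (random placeholders here).
W_static = 0.1 * rng.standard_normal((n_exc, n_exc))
W_moving = 0.1 * rng.standard_normal((n_exc, n_exc))

def circuit_response(feedforward, vip_on, steps=200, dt=0.1):
    """Relax excitatory rates under one contextual weight set, selected via VIP."""
    relu = lambda x: np.maximum(x, 0.0)
    vip = 1.0 if vip_on else 0.0           # e.g. a locomotion / context signal
    sst = relu(1.0 - 2.0 * vip)            # VIP silences SST (disinhibitory motif)
    # With SST active, the static-context weights dominate; with SST silenced
    # by VIP, the moving-context weights take over.
    W = sst * W_static + (1.0 - sst) * W_moving
    r = np.zeros(n_exc)
    for _ in range(steps):
        r = r + dt * (-r + relu(feedforward + W @ r))
    return r

stimulus = rng.random(n_exc)
r_still = circuit_response(stimulus, vip_on=False)
r_running = circuit_response(stimulus, vip_on=True)
print("mean rate, still context:  ", round(float(r_still.mean()), 3))
print("mean rate, running context:", round(float(r_running.mean()), 3))
```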


Author(s):  
Fiona Mulvey

This chapter introduces the basics of eye anatomy, eye movements and vision. It will explain the concepts behind human vision sufficiently for the reader to understand later chapters in the book on human perception and attention, and their relationship to (and potential measurement with) eye movements. We will first describe the path of light from the environment through the structures of the eye and on to the brain, as an introduction to the physiology of vision. We will then describe the image registered by the eye, and the types of movements the eye makes in order to perceive the environment as a cogent whole. This chapter explains how eye movements can be thought of as the interface between the visual world and the brain, and why eye movement data can be analysed not only in terms of the environment, or what is looked at, but also in terms of the brain, or subjective cognitive and emotional states. These two aspects broadly define the scope and applicability of eye movement technology in research and in human-computer interaction, as covered in later sections of the book.


2007 ◽  
Vol 98 (5) ◽  
pp. 2765-2778 ◽  
Author(s):  
S.F.W. Neggers ◽  
W. Huijbers ◽  
C. M. Vrijlandt ◽  
B.N.S. Vlaskamp ◽  
D.J.L.G. Schutter ◽  
...  

While preparing a saccadic eye movement, visual processing of the saccade goal is prioritized. Here, we provide evidence that the frontal eye fields (FEFs) are responsible for this coupling between eye movements and shifts of visuospatial attention. Functional magnetic resonance imaging (fMRI)–guided transcranial magnetic stimulation (TMS) was applied to the FEFs 30 ms before a discrimination target was presented at or next to the target of a saccade in preparation. Results showed that the well-known enhancement of discrimination performance on locations to which eye movements are being prepared was diminished by TMS contralateral to eye movement direction. Based on the present and other reports, we propose that saccade preparatory processes in the FEF affect selective visual processing within the visual cortex through feedback projections, in that way coupling saccade preparation and visuospatial attention.


2018 ◽  
Author(s):  
Marie E. Bellet ◽  
Joachim Bellet ◽  
Hendrikje Nienborg ◽  
Ziad M. Hafed ◽  
Philipp Berens

Saccades are ballistic eye movements that rapidly shift gaze from one location of visual space to another. Detecting saccades in eye movement recordings is important not only for studying the neural mechanisms underlying sensory, motor, and cognitive processes, but also as a clinical and diagnostic tool. However, automatically detecting saccades can be difficult, particularly when such saccades are generated in coordination with other tracking eye movements, like smooth pursuits, or when the saccade amplitude is close to eye tracker noise levels, like with microsaccades. In such cases, labeling by human experts is required, but this is a tedious task prone to variability and error. We developed a convolutional neural network (CNN) to automatically detect saccades at human-level accuracy. Our algorithm surpasses state of the art according to common performance metrics, and will facilitate studies of neurophysiological processes underlying saccade generation and visual processing.


2021 ◽  
Author(s):  
Sunwoo Kwon ◽  
Krystel R. Huxlin ◽  
Jude F. Mitchell

Visual pathways that guide actions do not necessarily mediate conscious perception. Patients with primary visual cortex (V1) damage lose conscious perception but often retain unconscious abilities (e.g. blindsight). Here, we asked if saccade accuracy and post-saccadic following responses (PFRs) that automatically track target motion upon saccade landing are retained when conscious perception is lost. We contrasted these behaviors in the blind and intact fields of 8 chronic V1-stroke patients, and in 8 visually intact controls. Saccade accuracy was relatively normal in all cases. Stroke patients also had normal PFR in their intact fields, but no PFR in their blind fields. Thus, V1 damage did not spare the unconscious visual processing necessary for automatic, post-saccadic smooth eye movements. Importantly, visual training that recovered motion perception in the blind field did not restore the PFR, suggesting a clear dissociation between pathways mediating perceptual restoration and automatic actions in the V1-damaged visual system.
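
One common way to quantify a post-saccadic following response is to project eye velocity in a brief open-loop window after saccade offset onto the direction of target motion and express it as a gain. The sketch below does this on fake data; the window, sampling rate, and example numbers are illustrative assumptions rather than the study's exact analysis.

```python
import numpy as np

# Illustrative PFR metric: mean post-saccadic eye velocity along the target
# motion direction, expressed as a gain relative to target speed. Window,
# sampling rate, and the fake trace are assumptions for demonstration only.

def pfr_gain(eye_xy_deg, t_ms, saccade_offset_ms, target_dir_deg,
             target_speed_deg_s, window_ms=(20, 80)):
    """Gain of following in an open-loop window after saccade offset."""
    vel = np.diff(eye_xy_deg, axis=0) / (np.diff(t_ms)[:, None] / 1000.0)  # deg/s
    t_mid = (t_ms[1:] + t_ms[:-1]) / 2.0 - saccade_offset_ms
    in_window = (t_mid >= window_ms[0]) & (t_mid < window_ms[1])
    direction = np.array([np.cos(np.deg2rad(target_dir_deg)),
                          np.sin(np.deg2rad(target_dir_deg))])
    along_target = vel[in_window] @ direction          # deg/s along target motion
    return along_target.mean() / target_speed_deg_s    # 1.0 = perfect following

# Fake example: 1 kHz samples, target moving rightward at 10 deg/s, eye drifting
# rightward at ~2 deg/s after the saccade lands at t = 0 ms.
t_ms = np.arange(-50.0, 150.0, 1.0)
eye = np.zeros((t_ms.size, 2))
eye[:, 0] = np.where(t_ms > 0, 0.002 * t_ms, 0.0)      # 2 deg/s rightward drift
print("PFR gain:", round(pfr_gain(eye, t_ms, 0.0, 0.0, 10.0), 2))
```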

