Presaccadic motion integration between current and future retinotopic locations of attended objects

2016, Vol 116 (4), pp. 1592-1602
Author(s): Martin Szinte, Donatas Jonikaitis, Martin Rolfs, Patrick Cavanagh, Heiner Deubel

Object tracking across eye movements is thought to rely on presaccadic updating of attention between the object's current and its “remapped” location (i.e., the postsaccadic retinotopic location). We report evidence for a bifocal, presaccadic sampling between these two positions. While preparing a saccade, participants viewed four spatially separated random dot kinematograms, one of which was cued by a colored flash. They reported the direction of a coherent motion signal at the cued location while a second signal occurred simultaneously either at the cue's remapped location or at one of several control locations. Motion integration between the signals occurred only when the two motion signals were congruent and were shown at the cue and at its remapped location. This shows that the visual system integrates features between both the current and the future retinotopic locations of an attended object and that such presaccadic sampling is feature specific.

2019, Vol 121 (5), pp. 1787-1797
Author(s): David Souto, Jayesha Chudasama, Dirk Kerzel, Alan Johnston

Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object's movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and the stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world's global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction as versus opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to the asymmetric retinal motion patterns experienced during pursuit eye movements.

NEW & NOTEWORTHY This study provides a new understanding of how the visual system achieves coherent perception of an object's motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. An analysis of perceptual judgments and reflexive eye movements to a brief change in an object's global motion confirms that the visual and oculomotor systems pick fewer samples to extract global motion opposite to the eye movement.
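The aperture problem described above has a compact geometric formulation: a 1D grating viewed through an aperture constrains only the velocity component along its normal, so the global 2D velocity lies at the intersection of the constraint lines contributed by differently oriented apertures. The sketch below illustrates this intersection-of-constraints idea with a least-squares solve; it is an illustrative toy, not the authors' model, and the function name and setup are assumptions for illustration.

```python
import numpy as np

def recover_global_motion(normals, speeds):
    """Intersection-of-constraints sketch of the aperture problem.

    Each 1D grating seen through an aperture measures only the speed
    component along its normal: n_i . v = s_i. Stacking constraints
    from differently oriented apertures and solving in the
    least-squares sense recovers the global 2D velocity v.
    """
    N = np.asarray(normals, dtype=float)  # (k, 2) unit normal vectors
    s = np.asarray(speeds, dtype=float)   # (k,) measured normal speeds
    v, *_ = np.linalg.lstsq(N, s, rcond=None)
    return v

# A rightward global motion v = (2, 0) sampled through two apertures
# whose grating normals differ by 45 degrees: each aperture alone
# reports only its (ambiguous) normal component.
v_true = np.array([2.0, 0.0])
normals = np.array([[1.0, 0.0],
                    [np.sqrt(0.5), np.sqrt(0.5)]])
speeds = normals @ v_true  # what each aperture actually measures

print(recover_global_motion(normals, speeds))  # ≈ [2., 0.]
```

With only one aperture the system is underdetermined (any velocity on the constraint line fits), which is exactly the ambiguity the visual system must resolve by pooling across apertures.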


2015, Vol 113 (7), pp. 2220-2231
Author(s): Martin Szinte, Marisa Carrasco, Patrick Cavanagh, Martin Rolfs

In many situations, like playing sports or driving a car, we keep track of moving objects despite the frequent eye movements that drastically interrupt their retinal motion trajectories. Here we report evidence that transsaccadic tracking relies on trade-offs of attentional resources from a tracked object's motion path to its remapped location. While participants covertly tracked a moving object, we presented pulses of coherent motion at different locations to probe the allocation of spatial attention along the object's entire motion path. Changes in the sensitivity for these pulses showed that during fixation attention shifted smoothly in anticipation of the tracked object's displacement. However, just before a saccade, attentional resources were withdrawn from the object's current motion path and reflexively drawn to the retinal location the object would have after the saccade. This finding demonstrates the predictive choice the visual system makes to maintain the tracking of moving objects across saccades.


Author(s): Christian Wolf, Markus Lappe

Abstract: Humans and other primates are equipped with a foveated visual system. As a consequence, we reorient our fovea to objects and targets in the visual field that are conspicuous or that we consider relevant or worth looking at. These reorientations are achieved by means of saccadic eye movements. Where we saccade to depends on various low-level factors, such as a target's luminance, but also crucially on high-level factors, like the expected reward or a target's relevance for perception and subsequent behavior. Here, we review recent findings on how the control of saccadic eye movements is influenced by higher-level cognitive processes. We first describe the pathways by which cognitive contributions can influence the neural oculomotor circuit. Second, we summarize what saccade parameters reveal about cognitive mechanisms, particularly saccade latencies, saccade kinematics, and changes in saccade gain. Finally, we review findings on what renders a saccade target valuable, as reflected in oculomotor behavior. We emphasize that foveal vision of the target after the saccade can constitute an internal reward for the visual system and that this is reflected in oculomotor dynamics that serve to quickly and accurately provide detailed foveal vision of relevant targets in the visual field.


2013, Vol 368 (1628), pp. 20130056
Author(s): Matteo Toscani, Matteo Valsecchi, Karl R. Gegenfurtner

When judging the lightness of objects, the visual system has to take into account many factors, such as shading, scene geometry, occlusions, or transparency. The problem then is to estimate global lightness based on a number of local samples that differ in luminance. Here, we show that eye fixations play a prominent role in this selection process. We explored a special case of transparency for which the visual system separates surface reflectance from interfering conditions to generate a layered image representation. Eye movements were recorded while the observers matched the lightness of the layered stimulus. We found that observers did focus their fixations on the target layer, and this sampling strategy affected their lightness perception. The effect of image segmentation on perceived lightness was highly correlated with the fixation strategy and was strongly affected when we manipulated it using a gaze-contingent display. Finally, we disrupted the segmentation process, showing that it causally drives the selection strategy. Selection through eye fixations can thus serve as a simple heuristic to estimate the target reflectance.


2006, Vol 96 (6), pp. 3545-3550
Author(s): Anna Montagnini, Miriam Spering, Guillaume S. Masson

Smooth pursuit eye movements reflect the temporal dynamics of bidimensional (2D) visual motion integration. When tracking a single, tilted line, initial pursuit direction is biased toward unidimensional (1D) edge motion signals, which are orthogonal to the line orientation. Over 200 ms, tracking direction is slowly corrected to finally match the 2D object motion during steady-state pursuit. We now show that repetition of line orientation and/or motion direction neither eliminates the transient tracking direction error nor changes the time course of pursuit correction. Nonetheless, multiple successive presentations of a single orientation/direction condition elicit robust anticipatory pursuit eye movements that always go in the 2D object motion direction, not the 1D edge motion direction. These results demonstrate that predictive signals about target motion cannot be used for an efficient integration of ambiguous velocity signals at pursuit initiation.


2010, Vol 7 (9), pp. 999-999
Author(s): H. M. Fehd, A. E. Seiffert

2010, Vol 18 (9), pp. 1368-1391
Author(s): Markus Huff, Frank Papenmeier, Georg Jahn, Friedrich W. Hesse

Author(s): Elisabeth Hein

The Ternus effect refers to an ambiguous apparent motion display in which two or three elements, presented in succession and shifted horizontally by one position, can be perceived either as a group of elements moving together or as one element jumping across the other(s). This chapter introduces the phenomenon and describes observations made by Pikler and Ternus at the beginning of the twentieth century. Next, reasons for continued interest in the Ternus effect are discussed and an overview of factors that influence it is offered, including low-level image-based factors, for example luminance, as well as higher-level scene-based factors, for example perceptual grouping. The chapter ends with a discussion of theories regarding the mechanisms underlying the Ternus effect, providing insight into how the visual system is able to perceive coherent objects in the world despite discontinuities in the input (e.g., as a consequence of eye movements or object occlusion).


2019, Vol 116 (6), pp. 2027-2032
Author(s): Jasper H. Fabius, Alessio Fracasso, Tanja C. W. Nijboer, Stefan Van der Stigchel

Humans move their eyes several times per second, yet we perceive the outside world as continuous despite the sudden disruptions created by each eye movement. To date, the mechanism that the brain employs to achieve visual continuity across eye movements remains unclear. While it has been proposed that the oculomotor system quickly updates and informs the visual system about the upcoming eye movement, behavioral studies investigating the time course of this updating suggest the involvement of a slow mechanism, estimated to take more than 500 ms to operate effectively. This is a surprisingly slow estimate, because both the visual system and the oculomotor system process information faster. If spatiotopic updating is indeed this slow, it cannot contribute to perceptual continuity, because it lies outside the temporal regime of typical oculomotor behavior. Here, we argue that the behavioral paradigms used previously are suboptimal for measuring the speed of spatiotopic updating. In this study, we used a fast gaze-contingent paradigm with high phi as a continuous stimulus across eye movements. We observed fast spatiotopic updating within 150 ms after stimulus onset. The results suggest the involvement of a fast updating mechanism that predictively influences visual perception after an eye movement. The temporal characteristics of this mechanism are compatible with the rate at which saccadic eye movements are typically observed in natural viewing.

