Attentional trade-offs maintain the tracking of moving objects across saccades

2015 · Vol. 113(7) · pp. 2220–2231
Authors: Martin Szinte, Marisa Carrasco, Patrick Cavanagh, Martin Rolfs

In many situations, like playing sports or driving a car, we keep track of moving objects, despite the frequent eye movements that drastically interrupt their retinal motion trajectory. Here we report evidence that transsaccadic tracking relies on trade-offs of attentional resources between a tracked object's motion path and its remapped location. While participants covertly tracked a moving object, we presented pulses of coherent motion at different locations to probe the allocation of spatial attention along the object's entire motion path. Changes in the sensitivity for these pulses showed that during fixation, attention shifted smoothly in anticipation of the tracked object's displacement. However, just before a saccade, attentional resources were withdrawn from the object's current motion path and reflexively drawn to the retinal location the object would have after the saccade. This finding demonstrates the predictive choice the visual system makes to maintain the tracking of moving objects across saccades.

2013 · Vol. 26(3) · pp. 241–265
Authors: Takehiro Nagai, Hiroto Kimura, Shigeki Nakauchi

In contrast to the classical view that fundamental visual attributes such as color and motion are independently processed in the visual system (e.g. Livingstone and Hubel, 1987; Marr, 1982), recent studies have revealed various forms of cross-attribute interactions, such as averaging of color appearance along the motion trajectory of an object (Nishida et al., 2007). In this study, we investigated whether such color signal integration along a motion trajectory can be induced only by motion mechanisms having large receptive fields, without simple integration within direction-selective neurons with small receptive fields, like those in V1. The stimulus consisted of discs with long-range apparent motion along a circular trajectory. The stimulus onset asynchrony (SOA) between disc presentations controlled the strength of the apparent motion perception. We measured observers’ sensitivity in detecting color modulation on the discs. The results showed that the measured sensitivity was lowest at SOAs corresponding to the strongest motion perception. This can be interpreted as follows: color signals were integrated along an apparent motion path, and this integration reduced chromatic sensitivity by averaging color signals. Another experiment that controlled apparent motion perception in a different way also supported this idea. However, this integration effect seemed to be linked to responses of motion detectors for the apparent motion stimuli, not directly to perceptual motion representation in the visual system. These results suggest that the human visual system handles color information from retinal inputs regarding moving objects based not only on a retinotopic coordinate but also on object-based coordinates, even when the moving object yields only long-range apparent motion.
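The sensitivity loss the authors attribute to averaging can be illustrated with a toy computation (our own sketch, not the authors' model): if a chromatic modulation alternates in sign across successive discs on the apparent-motion path, averaging neighboring discs along the trajectory cancels the signal available for detection.

```python
import numpy as np

# Toy sketch (not the authors' model): a chromatic modulation of amplitude m
# alternates across successive discs on the apparent-motion path.
m = 1.0
disc_colors = np.array([+m, -m, +m, -m])

# Without integration, the full modulation is available for detection.
no_integration = np.abs(disc_colors).mean()                              # 1.0

# If color signals are averaged between neighboring discs along the motion
# trajectory, the alternating modulation cancels, lowering sensitivity.
path_averaged = np.abs((disc_colors[:-1] + disc_colors[1:]) / 2).mean()  # 0.0
```

Real chromatic integration would of course be partial rather than complete; the point is only that any averaging along the path attenuates the modulation a detector can use.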


2016 · Vol. 116(4) · pp. 1592–1602
Authors: Martin Szinte, Donatas Jonikaitis, Martin Rolfs, Patrick Cavanagh, Heiner Deubel

Object tracking across eye movements is thought to rely on presaccadic updating of attention between the object's current and its “remapped” location (i.e., the postsaccadic retinotopic location). We report evidence for a bifocal, presaccadic sampling between these two positions. While preparing a saccade, participants viewed four spatially separated random dot kinematograms, one of which was cued by a colored flash. They reported the direction of a coherent motion signal at the cued location while a second signal occurred simultaneously either at the cue's remapped location or at one of several control locations. Motion integration between the signals occurred only when the two motion signals were congruent and were shown at the cue and at its remapped location. This shows that the visual system integrates features between both the current and the future retinotopic locations of an attended object and that such presaccadic sampling is feature specific.


2019 · Vol. 82(2) · pp. 533–549
Authors: Josephine Reuther, Ramakrishna Chakravarthi, Amelia R. Hunt

Feature integration theory proposes that visual features, such as shape and color, can only be combined into a unified object when spatial attention is directed to their location in retinotopic maps. Eye movements cause dramatic changes on our retinae, and are associated with obligatory shifts in spatial attention. In two experiments, we measured the prevalence of conjunction errors (that is, reporting an object as having an attribute that belonged to another object) for brief stimulus presentations before, during, and after a saccade. Planning and executing a saccade did not itself disrupt feature integration. Motion did disrupt feature integration, leading to an increase in conjunction errors. However, retinal motion of an equal extent caused by saccadic eye movements was spared this disruption and showed similar rates of conjunction errors as a condition with static stimuli presented to a static eye. The results suggest that extra-retinal signals are able to compensate for the motion caused by saccadic eye movements, thereby preserving the integrity of objects across saccades and preventing their features from mixing or mis-binding.


2019 · Vol. 121(5) · pp. 1787–1797
Authors: David Souto, Jayesha Chudasama, Dirk Kerzel, Alan Johnston

Smooth pursuit eye movements (pursuit) are used to minimize the retinal motion of moving objects. During pursuit, the pattern of motion on the retina carries not only information about the object movement but also reafferent information about the eye movement itself. The latter arises from the retinal flow of the stationary world in the direction opposite to the eye movement. To extract the global direction of motion of the tracked object and stationary world, the visual system needs to integrate ambiguous local motion measurements (i.e., the aperture problem). Unlike the tracked object, the stationary world’s global motion is entirely determined by the eye movement and thus can be approximately derived from motor commands sent to the eye (i.e., from an efference copy). Because retinal motion opposite to the eye movement is dominant during pursuit, different motion integration mechanisms might be used for retinal motion in the same direction and opposite to pursuit. To investigate motion integration during pursuit, we tested direction discrimination of a brief change in global object motion. The global motion stimulus was a circular array of small static apertures within which one-dimensional gratings moved. We found increased coherence thresholds and a qualitatively different reflexive ocular tracking for global motion opposite to pursuit. Both effects suggest reduced sampling of motion opposite to pursuit, which results in an impaired ability to extract coherence in motion signals in the reafferent direction. We suggest that anisotropic motion integration is an adaptation to asymmetric retinal motion patterns experienced during pursuit eye movements.

NEW & NOTEWORTHY: This study provides a new understanding of how the visual system achieves coherent perception of an object’s motion while the eyes themselves are moving. The visual system integrates local motion measurements to create a coherent percept of object motion. An analysis of perceptual judgments and reflexive eye movements to a brief change in an object’s global motion confirms that the visual and oculomotor systems pick fewer samples to extract global motion opposite to the eye movement.
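The aperture problem described here can be illustrated with a toy computation (ours, not the authors' analysis): a 1D grating viewed through a small aperture only reveals the component of the global velocity along the grating's normal, but pooling measurements across many apertures recovers the full vector.

```python
import numpy as np

# Toy sketch of the aperture problem (illustrative numbers, not the authors'
# analysis): each aperture i constrains only n_i . v = c_i, where n_i is the
# grating's normal direction and c_i its measured normal speed.

rng = np.random.default_rng(0)
v_true = np.array([2.0, -1.0])               # hypothetical global velocity (deg/s)

thetas = rng.uniform(0.0, np.pi, 40)         # grating normal orientations
normals = np.stack([np.cos(thetas), np.sin(thetas)], axis=1)
c = normals @ v_true                         # normal-speed measurements
c += rng.normal(0.0, 0.05, c.shape)          # measurement noise

# Least-squares pooling across apertures, one simple model of integration.
v_est, *_ = np.linalg.lstsq(normals, c, rcond=None)
```

Under this reading, "reduced sampling" of motion opposite to pursuit corresponds to solving the same system with fewer (or noisier) rows, which degrades the global estimate.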


2018 · Vol. 115(9) · pp. 2240–2245
Authors: Alexander Goettker, Doris I. Braun, Alexander C. Schütz, Karl R. Gegenfurtner

Due to the foveal organization of our visual system we have to constantly move our eyes to gain precise information about our environment. Doing so massively alters the retinal input. This is problematic for the perception of moving objects, because physical motion and retinal motion become decoupled and the brain has to discount the eye movements to recover the speed of moving objects. Two different types of eye movements, pursuit and saccades, are combined for tracking. We investigated how the way we track moving targets can affect the perceived target speed. We found that the execution of corrective saccades during pursuit initiation modifies how fast the target is perceived compared with pure pursuit. When participants executed a forward (catch-up) saccade they perceived the target to be moving faster. When they executed a backward saccade they perceived the target to be moving more slowly. Variations in pursuit velocity without corrective saccades did not affect perceptual judgments. We present a model for these effects, assuming that the eye velocity signal for small corrective saccades gets integrated with the retinal velocity signal during pursuit. In our model, the execution of corrective saccades modulates the integration of these two signals by giving less weight to the retinal information around the time of corrective saccades.
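The proposed integration can be caricatured as a weighted sum (a minimal sketch with our own illustrative numbers and weighting, not the authors' fitted model): perceived target speed combines retinal slip with an eye-velocity copy, and the weight on retinal information drops around corrective saccades.

```python
# Minimal sketch of the cue combination described above (illustrative numbers
# and weighting of our own choosing, not the authors' fitted model).

def perceived_speed(retinal_slip, eye_velocity, w_retinal=1.0):
    """Weighted integration of retinal and extraretinal velocity signals."""
    return w_retinal * retinal_slip + eye_velocity

# Steady pursuit of a 10 deg/s target with the eye at 9 deg/s leaves
# 1 deg/s of retinal slip; full retinal weighting recovers 10 deg/s.
pure_pursuit = perceived_speed(1.0, 9.0, w_retinal=1.0)    # 10.0

# During a forward (catch-up) saccade the eye briefly moves much faster.
# With full retinal weighting, the large opposite slip cancels it exactly,
# but if retinal information is downweighted at that moment, the saccadic
# eye velocity is undercancelled and the target appears to move faster.
veridical = perceived_speed(-40.0, 50.0, w_retinal=1.0)    # 10.0
catch_up = perceived_speed(-40.0, 50.0, w_retinal=0.5)     # 30.0
```

A backward saccade works symmetrically: the saccadic eye velocity then opposes target motion, so downweighting the retinal signal lowers the perceived speed.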


2005 · Vol. 17(7) · pp. 1011–1017
Authors: A. Z. Zivotofsky, M. E. Goldberg, K. D. Powell

The visual system uses the pattern of motion on the retina to analyze the motion of objects in the world, and the motion of the observer him/herself. Distinguishing between retinal motion evoked by movement of the retina in space and retinal motion evoked by movement of objects in the environment is computationally difficult, and the human visual system frequently misinterprets the meaning of retinal motion. In this study, we demonstrate that the visual system of the Rhesus monkey also misinterprets retinal motion. We show that monkeys erroneously report the trajectories of pursuit targets or their own pursuit eye movements during an epoch of smooth pursuit across an orthogonally moving background. Furthermore, when they make saccades to the spatial location of stimuli that flashed early in an epoch of smooth pursuit or fixation, they make large errors that appear to take into account the erroneous smooth eye movement that they report in the first experiment, and not the eye movement that they actually make.


2009
Authors: Khara Croswaite, Mei-Ching Lien, Eric Ruthruff, Min-Ju Liao

2019 · Vol. 31(1) · pp. 88–96
Authors: Wladimir Kirsch, Roland Pfister, Wilfried Kunde

An object appears smaller in the periphery than in the center of the visual field. In two experiments (N = 24), we demonstrated that visuospatial attention contributes substantially to this perceptual distortion. Participants judged the size of central and peripheral target objects after a transient, exogenous cue directed their attention to either the central or the peripheral location. Peripheral target objects were judged to be smaller following a central cue, whereas this effect disappeared completely when the peripheral target was cued. This outcome suggests that objects appear smaller in the visual periphery not only because of the structural properties of the visual system but also because of a lack of spatial attention.


Authors: Christian Wolf, Markus Lappe

Humans and other primates are equipped with a foveated visual system. As a consequence, we reorient our fovea to objects and targets in the visual field that are conspicuous or that we consider relevant or worth looking at. These reorientations are achieved by means of saccadic eye movements. Where we saccade to depends on various low-level factors, such as a target's luminance, but also crucially on high-level factors like the expected reward or a target's relevance for perception and subsequent behavior. Here, we review recent findings on how the control of saccadic eye movements is influenced by higher-level cognitive processes. We first describe the pathways by which cognitive contributions can influence the neural oculomotor circuit. Second, we summarize what saccade parameters reveal about cognitive mechanisms, particularly saccade latencies, saccade kinematics, and changes in saccade gain. Finally, we review findings on what renders a saccade target valuable, as reflected in oculomotor behavior. We emphasize that foveal vision of the target after the saccade can constitute an internal reward for the visual system and that this is reflected in oculomotor dynamics that serve to quickly and accurately provide detailed foveal vision of relevant targets in the visual field.


2014 · Vol. 112(6) · pp. 1307–1316
Authors: Isabel Dombrowe, Claus C. Hilgetag

The voluntary, top-down allocation of visual spatial attention has been linked to changes in the alpha-band of the electroencephalogram (EEG) signal measured over occipital and parietal lobes. In the present study, we investigated how occipitoparietal alpha-band activity changes when people allocate their attentional resources in a graded fashion across the visual field. We asked participants to either completely shift their attention into one hemifield, to balance their attention equally across the entire visual field, or to attribute more attention to one-half of the visual field than to the other. As expected, we found that alpha-band amplitudes decreased more strongly contralaterally than ipsilaterally to the attended side when attention was shifted completely. Alpha-band amplitudes decreased bilaterally when attention was balanced equally across the visual field. However, when participants allocated more attentional resources to one-half of the visual field, this was not reflected in the alpha-band amplitudes, which merely decreased bilaterally. We found that the performance of the participants was more strongly reflected in the coherence between frontal and occipitoparietal brain regions. We conclude that low alpha-band amplitudes seem to be necessary for stimulus detection. Furthermore, complete shifts of attention are directly reflected in the lateralization of alpha-band amplitudes. In the present study, a gradual allocation of visual attention across the visual field was only indirectly reflected in the alpha-band activity over occipital and parietal cortices.
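The lateralization pattern described here is often quantified with a normalized index (a standard measure of this kind, shown as a hypothetical helper; not necessarily the exact measure used in the study): positive values indicate lower alpha amplitude contralateral to the attended side.

```python
# Hypothetical helper illustrating a common alpha lateralization index,
# not necessarily the authors' exact measure.

def alpha_lateralization_index(ipsi_amp, contra_amp):
    """Positive when alpha amplitude is lower contralateral to attention."""
    return (ipsi_amp - contra_amp) / (ipsi_amp + contra_amp)

# Complete attention shift: contralateral alpha drops more -> index > 0.
full_shift = alpha_lateralization_index(1.0, 0.6)

# Attention balanced across the field: bilateral decrease -> index near 0.
balanced = alpha_lateralization_index(0.7, 0.7)
```

On this reading, the study's graded-attention condition would predict intermediate index values, which is precisely what the amplitude data did not show.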

