Intrasaccadic motion streaks jump-start gaze correction

2021 ◽  
Vol 7 (30) ◽  
pp. eabf2218 ◽  
Author(s):  
Richard Schweitzer ◽  
Martin Rolfs

Rapid eye movements (saccades) incessantly shift objects across the retina. To establish object correspondence, the visual system is thought to match surface features of objects across saccades. Here, we show that an object’s intrasaccadic retinal trace—a signal previously considered unavailable to visual processing—facilitates this matchmaking. Human observers made saccades to a cued target in a circular stimulus array. Using high-speed visual projection, we swiftly rotated this array during the eyes’ flight, displaying continuous intrasaccadic target motion. Observers’ saccades landed between the target and a distractor, prompting secondary saccades. Independently of the availability of object features, which we controlled tightly, target motion increased the rate and reduced the latency of gaze-correcting saccades to the initial presaccadic target, in particular when the target’s stimulus features incidentally gave rise to efficient motion streaks. These results suggest that intrasaccadic visual information informs the establishment of object correspondence and jump-starts gaze correction.


2020 ◽  
Author(s):  
Han Zhang ◽  
Nicola C Anderson ◽  
Kevin Miller

Recent studies have shown that mind-wandering (MW) is associated with changes in eye movement parameters, but have not explored how MW affects the sequential pattern of eye movements involved in making sense of complex visual information. Eye movements naturally unfold over time, and this process may reveal novel information about cognitive processing during MW. The current study used Recurrence Quantification Analysis (RQA; Anderson, Bischof, Laidlaw, Risko, & Kingstone, 2013) to describe the pattern of refixations (fixations directed to previously inspected regions) during MW. Participants completed a real-world scene encoding task and responded to thought probes assessing intentional and unintentional MW. Both types of MW were associated with worse memory of the scenes. Importantly, RQA showed that scanpaths during unintentional MW were more repetitive than during on-task episodes, as indicated by a higher recurrence rate and more stereotypical fixation sequences. This increased repetitiveness suggests an adaptive response to processing failures through re-examining previous locations. Moreover, it led fixations to concentrate on a smaller spatial scale of the stimuli. Finally, we were also able to validate several traditional measures: both intentional and unintentional MW were associated with fewer and longer fixations; eye blinking increased numerically during both types of MW, but the difference was only significant for unintentional MW. Overall, the results advance our understanding of how visual processing is affected during MW by highlighting the sequential aspect of eye movements.
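As a minimal illustration (not the authors' implementation), the recurrence rate used in fixation-based RQA can be sketched in Python: two fixations count as recurrent when they fall within a fixed radius of each other, and the recurrence rate is the share of fixation pairs that do so. The radius and the example coordinates below are hypothetical.

```python
import math

def recurrence_rate(fixations, radius=64.0):
    """Fixation-based RQA recurrence rate: the share of fixation pairs
    (i, j), i < j, that land within `radius` pixels of each other.
    More repetitive scanpaths (refixations) yield higher values."""
    n = len(fixations)
    if n < 2:
        return 0.0
    recurrent = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if math.dist(fixations[i], fixations[j]) <= radius
    )
    return recurrent / (n * (n - 1) / 2)

# A scanpath that returns to previously inspected regions scores higher
# than one that keeps moving to new locations.
repetitive = [(100, 100), (500, 300), (110, 105), (505, 310), (95, 98)]
wandering = [(100, 100), (500, 300), (900, 120), (300, 700), (700, 500)]
```

With these toy scanpaths, the repetitive one has four recurrent pairs out of ten, while the wandering one has none.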


Author(s):  
Angie M. Michaiel ◽  
Elliott T.T. Abe ◽  
Cristopher M. Niell

Many studies of visual processing are conducted in unnatural conditions, such as head- and gaze-fixation. As this radically limits natural exploration of the visual environment, there is much less known about how animals actively use their sensory systems to acquire visual information in natural, goal-directed contexts. Recently, prey capture has emerged as an ethologically relevant behavior that mice perform without training, and that engages vision for accurate orienting and pursuit. However, it is unclear how mice target their gaze during such natural behaviors, particularly since, in contrast to many predatory species, mice have a narrow binocular field and lack foveate vision that would entail fixing their gaze on a specific point in the visual field. Here we measured head and bilateral eye movements in freely moving mice performing prey capture. We find that the majority of eye movements are compensatory for head movements, thereby acting to stabilize the visual scene. During head turns, however, these periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Analysis of eye movements relative to the cricket position shows that the saccades do not preferentially select a specific point in the visual scene. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings help relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.


2019 ◽  
Author(s):  
Lina Teichmann ◽  
Genevieve L. Quek ◽  
Amanda K. Robinson ◽  
Tijl Grootswagers ◽  
Thomas A. Carlson ◽  
...  

The ability to rapidly and accurately recognise complex objects is a crucial function of the human visual system. To recognise an object, we need to bind incoming visual features such as colour and form together into cohesive neural representations and integrate these with our pre-existing knowledge about the world. For some objects, typical colour is a central feature for recognition; for example, a banana is typically yellow. Here, we applied multivariate pattern analysis on time-resolved neuroimaging (magnetoencephalography) data to examine how object-colour knowledge affects emerging object representations over time. Our results from 20 participants (11 female) show that the typicality of object-colour combinations influences object representations, although not at the initial stages of object and colour processing. We find evidence that colour decoding peaks later for atypical object-colour combinations in comparison to typical object-colour combinations, illustrating the interplay between processing incoming object features and stored object-knowledge. Taken together, these results provide new insights into the integration of incoming visual information with existing conceptual object knowledge.

Significance Statement: To recognise objects, we have to be able to bind object features such as colour and shape into one coherent representation and compare it to stored object knowledge. The magnetoencephalography data presented here provide novel insights about the integration of incoming visual information with our knowledge about the world. Using colour as a model to understand the interaction between seeing and knowing, we show that there is a unique pattern of brain activity for congruently coloured objects (e.g., a yellow banana) relative to incongruently coloured objects (e.g., a red banana). This effect of object-colour knowledge only occurs after single object features are processed, demonstrating that conceptual knowledge is accessed relatively late in the visual processing hierarchy.


2021 ◽  
Vol 125 (5) ◽  
pp. 1552-1576
Author(s):  
David Souto ◽  
Dirk Kerzel

People’s eyes are directed at objects of interest with the aim of acquiring visual information. However, processing this information is constrained in capacity, requiring task-driven and salience-driven attentional mechanisms to select a few among the many available objects. A wealth of behavioral and neurophysiological evidence has demonstrated that visual selection and the motor selection of saccade targets rely on shared mechanisms. This coupling supports the premotor theory of visual attention put forth more than 30 years ago, which postulates visual selection as a necessary stage in motor selection. In this review, we examine to what extent the coupling of visual and motor selection observed with saccades is replicated during ocular tracking. Ocular tracking combines catch-up saccades and smooth pursuit to foveate a moving object. We find evidence that ocular tracking requires visual selection of the speed and direction of the moving target, but the position of the motion signal may not coincide with the position of the pursuit target. Further, visual and motor selection can be spatially decoupled when pursuit is initiated (open-loop pursuit). We propose that a main function of coupled visual and motor selection is to serve the coordination of catch-up saccades and pursuit eye movements. A simple race-to-threshold model is proposed to explain the variable coupling of visual selection during pursuit, catch-up saccades, and regular saccades, while generating testable predictions. We discuss pending issues, such as disentangling visual selection from preattentive visual processing and response selection, and the pinpointing of visual selection mechanisms, which have begun to be addressed in the neurophysiological literature.
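The general logic of a race-to-threshold model can be caricatured in a few lines of Python. This is a deterministic sketch under invented parameters (linear accumulation, arbitrary threshold and time step), not the model proposed in the review: each candidate response accumulates evidence at its own rate, and the first accumulator to reach threshold determines which movement is selected and its latency.

```python
def race_to_threshold(rates, threshold=1.0, dt=0.001, max_t=1.0):
    """Minimal race model: each accumulator rises linearly at its own
    rate (evidence units per second); the first to reach `threshold`
    wins. Returns (winner_index, latency_in_seconds), or (None, max_t)
    if no accumulator reaches threshold in time."""
    evidence = [0.0] * len(rates)
    t = 0.0
    while t < max_t:
        t += dt
        for i, rate in enumerate(rates):
            evidence[i] += rate * dt
            if evidence[i] >= threshold:
                return i, t
    return None, max_t

# A stronger selection signal (higher accumulation rate) wins the race
# and yields a shorter latency, e.g. triggering a catch-up saccade
# earlier; a weaker signal loses or finishes later.
```

In this toy form, an accumulator with rate 5.0 reaches a threshold of 1.0 in about 200 ms, so variation in rate maps directly onto variation in response latency.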


1992 ◽  
Vol 67 (1) ◽  
pp. 164-179 ◽  
Author(s):  
K. L. Grasse ◽  
S. G. Lisberger

1. We have investigated the mechanism of a directional deficit in vertical pursuit eye movements in a monkey that was unable to match upward eye speed to target speed but that had pursuit within the normal range for downward or horizontal target motion. Except for a difference in the axis of deficient pursuit, the symptoms in this monkey were similar to those seen with lesions in the frontal or parietal lobes of the cerebral cortex in humans or monkeys. Our evaluation of vertical pursuit in this monkey suggests a new interpretation for the role of the frontal and parietal lobes in pursuit. 2. The up/down asymmetry was most pronounced for target motion at speeds greater than or equal to 2 degrees/s. For target motion at 15 or 30 degrees/s, upward step-ramp target motion evoked a brief upward smooth eye acceleration, followed by tracking that consisted largely of saccades. Downward step-ramp target motion evoked a prolonged smooth eye acceleration, followed by smooth, accurate tracking. 3. Varying the amplitude of the target step revealed that the deficit was similar for targets moving across all locations of the visual field. Eye acceleration in the interval 0-20 ms after the onset of pursuit was independent of initial target position and was symmetrical for upward and downward target motion. Eye acceleration in the interval 60-80 ms after the onset of pursuit showed a large asymmetry. For upward target motion, eye acceleration in this interval was small and did not depend on initial target position. For downward target motion, eye acceleration depended strongly on initial target position and was large when the target started close to the position of fixation. 4. We next attempted to understand the mechanism of the up/down asymmetry by evaluating the monkey’s vertical motion processing and vertical eye movements under a variety of tracking conditions. For spot targets, the response to upward image motion was similar to that in normal monkeys if the image motion was presented during downward pursuit. In addition, the monkey with deficient upward pursuit was able to use upward image motion to make accurate saccades to moving targets. We conclude that the visual processing of upward image motion was normal in this monkey and that an asymmetry in visual motion processing could not account for the deficit in his upward pursuit. 5. Upward smooth eye acceleration was normal when the spot target was moved together with a large textured pattern. (Abstract truncated at 400 words.)
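The windowed eye-acceleration measure described above (mean acceleration 0-20 ms and 60-80 ms after pursuit onset) can be sketched as follows. The sampling rate and the example velocity trace are hypothetical, not taken from the study:

```python
def mean_acceleration(velocity, t0_ms, t1_ms, fs=1000):
    """Mean eye acceleration (deg/s^2) over the window [t0_ms, t1_ms)
    of an eye-velocity trace (deg/s) sampled at `fs` Hz, with t = 0
    taken as pursuit onset. Acceleration is the mean slope of the
    velocity trace over the window."""
    i0 = int(t0_ms * fs / 1000)          # first sample in the window
    i1 = int(t1_ms * fs / 1000)          # one past the last sample
    dv = velocity[i1 - 1] - velocity[i0]  # change in eye velocity
    dt = (i1 - 1 - i0) / fs               # window duration in seconds
    return dv / dt

# A toy trace: eye velocity ramping up at 100 deg/s^2, sampled at 1 kHz.
velocity = [0.1 * i for i in range(100)]
```

Applied to the toy ramp, both the 0-20 ms and the 60-80 ms window recover the same 100 deg/s^2; in the asymmetric monkey, the point of the measure is precisely that the later window diverges for upward motion while the earlier one does not.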


In Chapter 3 we compare how verbal and non-verbal visual information is processed. The questions we address are: How do readers integrate text-figure information when reading and understanding verbal and non-verbal patterns, namely one and the same text in verbal format and as infographics? How does the way humans perceive visual information determine the way they express it in natural language? How does verbalization affect oculomotor behavior in visual processing? Our results support the assumption of the Cognitive Theory of Multimedia Learning that integrating verbal and pictorial information with each other (a polycode text) helps learners to understand and memorize the text and makes comprehension easier. We demonstrate the advantages and disadvantages of infographics (graphical visual representations of complex information) and verbal text. We also discuss the relationship between visual processing of images and their verbalization. On the one hand, the characteristics of eye movements when looking at an image determine its subsequent verbal description: the more fixations are made and the longer the gaze is directed to a certain area of the image, the more words are dedicated to this area in the following description. On the other hand, verbalization of a previously seen image affects the parameters of eye movements when re-viewing the same image, resulting in an ambient processing pattern (short fixations and long saccades), whereas re-viewing without verbalization results in a focal processing pattern (longer fixations and shorter saccades). The results obtained open up prospects for further research on visual perception and can also be used in computer vision models.
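The ambient/focal distinction drawn here (short fixations with long saccades versus long fixations with short saccades) lends itself to a crude classification sketch. The cut-off values below are purely illustrative assumptions, not thresholds from the chapter:

```python
def viewing_mode(fix_durations_ms, sacc_amplitudes_deg,
                 dur_cut=250.0, amp_cut=5.0):
    """Crude ambient/focal classification of a viewing episode.
    Ambient processing pairs short fixations with long saccades;
    focal processing pairs long fixations with short saccades.
    Cut-offs (ms, degrees) are illustrative only."""
    mean_dur = sum(fix_durations_ms) / len(fix_durations_ms)
    mean_amp = sum(sacc_amplitudes_deg) / len(sacc_amplitudes_deg)
    if mean_dur < dur_cut and mean_amp > amp_cut:
        return "ambient"
    if mean_dur >= dur_cut and mean_amp <= amp_cut:
        return "focal"
    return "mixed"
```

In this scheme, re-viewing after verbalization (many short fixations, large saccades) would come out "ambient", while re-viewing without verbalization (longer fixations, smaller saccades) would come out "focal".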


2015 ◽  
Vol 32 ◽  
Author(s):  
DAVID MELCHER ◽  
MARIA CONCETTA MORRONE

A basic principle in visual neuroscience is the retinotopic organization of neural receptive fields. Here, we review behavioral, neurophysiological, and neuroimaging evidence for nonretinotopic processing of visual stimuli. A number of behavioral studies have shown perception depending on object or external-space coordinate systems, in addition to retinal coordinates. Both single-cell neurophysiology and neuroimaging have provided evidence for the modulation of neural firing by gaze position and processing of visual information based on craniotopic or spatiotopic coordinates. Transient remapping of the spatial and temporal properties of neurons contingent on saccadic eye movements has been demonstrated in visual cortex, as well as frontal and parietal areas involved in saliency/priority maps, and is a good candidate to mediate some of the spatial invariance demonstrated by perception. Recent studies suggest that spatiotopic selectivity depends on a low spatial resolution system of maps that operates over a longer time frame than retinotopic processing and is strongly modulated by high-level cognitive factors such as attention. The interaction of an initial and rapid retinotopic processing stage, tied to new fixations, and a longer lasting but less precise nonretinotopic level of visual representation could underlie the perception of both a detailed and a stable visual world across saccadic eye movements.
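The core coordinate transformation behind gaze-modulated, spatiotopic coding reduces to a simple identity: a world-centred (spatiotopic) position can be recovered by combining the eye-centred (retinotopic) position with current gaze position. A minimal sketch, with 2-D positions in degrees as an assumed simplification:

```python
def to_spatiotopic(retinal_pos, gaze_pos):
    """Convert a retinotopic (eye-centred) stimulus position to a
    spatiotopic (world-centred) one by adding the current gaze
    position. Under this scheme, the spatiotopic position stays
    invariant when a saccade shifts gaze and retinal position by
    equal and opposite amounts."""
    return (retinal_pos[0] + gaze_pos[0], retinal_pos[1] + gaze_pos[1])

# Before a 10-degree rightward saccade: stimulus at (10, 0) on the
# retina, gaze at (0, 0). After the saccade: stimulus at (0, 0) on the
# retina, gaze at (10, 0). The spatiotopic position is unchanged.
```

This invariance is what the remapping and gaze-modulation evidence reviewed above is proposed to implement neurally, albeit at coarser spatial resolution and over longer time frames than retinotopic processing.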


eLife ◽  
2020 ◽  
Vol 9 ◽  
Author(s):  
Angie M Michaiel ◽  
Elliott TT Abe ◽  
Cristopher M Niell

Many studies of visual processing are conducted in constrained conditions such as head- and gaze-fixation, and therefore less is known about how animals actively acquire visual information in natural contexts. To determine how mice target their gaze during natural behavior, we measured head and bilateral eye movements in mice performing prey capture, an ethological behavior that engages vision. We found that the majority of eye movements are compensatory for head movements, thereby serving to stabilize the visual scene. During movement, however, periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Notably, these saccades do not preferentially target the prey location. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings relate eye movements in the mouse to other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.
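The compensatory-versus-saccadic distinction in this analysis can be sketched per velocity sample: gaze velocity is the sum of head and eye velocity, an eye movement that roughly cancels head velocity is compensatory (stabilizing), and a large residual gaze velocity marks a non-compensatory saccade. The tolerance and velocity threshold below are hypothetical, not the study's criteria:

```python
def classify_gaze(head_vel, eye_vel, gain_tol=0.3, sacc_thresh=150.0):
    """Classify each sample of paired head/eye velocity traces (deg/s).
    Gaze velocity = head + eye. Samples where gaze velocity exceeds
    `sacc_thresh` are saccades; samples where the eye cancels most of
    the head velocity (residual gaze within `gain_tol` of head speed)
    are compensatory; everything else is labelled "other"."""
    labels = []
    for head, eye in zip(head_vel, eye_vel):
        gaze = head + eye
        if abs(gaze) > sacc_thresh:
            labels.append("saccade")
        elif head != 0 and abs(gaze) <= gain_tol * abs(head):
            labels.append("compensatory")
        else:
            labels.append("other")
    return labels
```

For example, an eye movement of -95 deg/s against a 100 deg/s head turn leaves only 5 deg/s of gaze velocity and is labelled compensatory, whereas an eye movement in the same direction as the head turn produces a fast gaze shift and is labelled a saccade.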


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.

