Cortical dynamics of saccade-target selection during free-viewing of natural scenes

2016
Author(s):  
Linda Henriksson ◽  
Kaisu Ölander ◽  
Riitta Hari

ABSTRACT
Natural visual behaviour entails explorative eye movements, saccades, that bring different parts of a visual scene into central vision. The neural processes guiding the selection of saccade targets are still largely unknown. In this study, we therefore used magnetoencephalography (MEG) to track the cortical dynamics of viewers who were freely exploring novel natural scenes. Overall, the viewers were largely consistent in their gaze behaviour, especially when the scene contained persons. We took a fresh approach to relating the eye-gaze data to the MEG signals by characterizing dynamic cortical representations by means of representational distance matrices. Specifically, we compared the representational distances between the stimuli in the evoked MEG responses with predictions based (1) on the low-level visual similarity of the stimuli (as visually more similar stimuli evoke more similar responses in early visual areas) and (2) on the eye-gaze data. At 50–75 ms after scene onset, the similarity of the occipital MEG patterns correlated with the low-level visual similarity of the scenes, and already at 75–100 ms the visual features attracting the first saccades predicted the similarity of the right parieto-occipital MEG responses. Thereafter, at 100–125 ms, the landing positions of the upcoming saccades explained the MEG responses. These results indicate that MEG signals contain signatures of the rapid processing of natural visual scenes as well as of the initiation of the first saccades, with the processing of the saccade target preceding the processing of the landing position of the upcoming saccade.

SIGNIFICANCE STATEMENT
Humans naturally make eye movements to bring different parts of a visual scene to the fovea, where visual acuity is best. Tracking of eye gaze can reveal how we make inferences about the content of a scene by looking at different objects, or which visual cues automatically attract our attention and gaze. The brain dynamics governing natural gaze behaviour are still largely unknown. Here we suggest a novel approach to relate eye-tracking results to brain activity measured with magnetoencephalography (MEG), and demonstrate signatures of natural gaze behaviour in the MEG data even before the eye movements occur.
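The representational-similarity logic used here can be sketched in a few lines of Python. The snippet below is a minimal illustration of comparing a MEG representational distance matrix (RDM) with a model-based one, not the authors' actual pipeline; the array shapes, the distance metrics, and the use of Spearman rank correlation are conventional assumptions.

```python
# Minimal sketch of representational similarity analysis (RSA), assuming
# one MEG pattern per scene per time window; not the authors' actual pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_scenes, n_sensors = 20, 60
meg_patterns = rng.normal(size=(n_scenes, n_sensors))   # evoked responses, one row per scene
model_features = rng.normal(size=(n_scenes, 10))        # e.g. low-level visual features or gaze maps

# Representational distance matrices: pairwise distances between stimuli.
meg_rdm = pdist(meg_patterns, metric='correlation')     # condensed upper triangle
model_rdm = pdist(model_features, metric='euclidean')

# Compare the MEG geometry with the model prediction (rank correlation is
# standard in RSA because distance units differ across spaces).
rho, p = spearmanr(meg_rdm, model_rdm)
print(f"RDM correlation: rho = {rho:.3f}, p = {p:.3f}")
```

Repeating this comparison in sliding time windows yields the time course of when each model (low-level similarity, gaze-based predictions) explains the MEG response geometry.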

2020
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in situ gaze behaviour and the difficulty of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye-movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye-tracking research, as well as outlining practical considerations related to hardware, software, and data analysis, we hope to guide researchers and practitioners in the use of this approach.


Author(s):  
Christian Wolf ◽  
Markus Lappe

Abstract
Humans and other primates are equipped with a foveated visual system. As a consequence, we reorient our fovea to objects and targets in the visual field that are conspicuous or that we consider relevant or worth looking at. These reorientations are achieved by means of saccadic eye movements. Where we saccade to depends on various low-level factors, such as a target's luminance, but also crucially on high-level factors, such as the expected reward or a target's relevance for perception and subsequent behavior. Here, we review recent findings on how the control of saccadic eye movements is influenced by higher-level cognitive processes. We first describe the pathways by which cognitive contributions can influence the neural oculomotor circuit. Second, we summarize what saccade parameters reveal about cognitive mechanisms, particularly saccade latencies, saccade kinematics, and changes in saccade gain. Finally, we review findings on what renders a saccade target valuable, as reflected in oculomotor behavior. We emphasize that foveal vision of the target after the saccade can constitute an internal reward for the visual system and that this is reflected in oculomotor dynamics that serve to quickly and accurately provide detailed foveal vision of relevant targets in the visual field.


Author(s):  
Ding Ding ◽  
Mark A Neerincx ◽  
Willem-Paul Brinkman

Abstract
Virtual cognitions (VCs) are a stream of simulated thoughts people hear while immersed in a virtual environment, e.g. by hearing a simulated inner voice presented as a voice-over. As previous studies have shown, they can enhance people's self-efficacy and knowledge about, for example, social interactions. Ownership and plausibility of these VCs are regarded as important for their effect, and enhancing both might therefore be beneficial. A potential strategy for achieving this is to synchronize the VCs with people's eye fixations using eye-tracking technology embedded in a head-mounted display. This paper tests this idea in the context of a pre-therapy for spider and snake phobia, examining the ability to guide people's eye fixations. An experiment with 24 participants was conducted using a within-subjects design. Each participant was exposed to two conditions: one in which the VCs were adapted to the participant's eye gaze, and a control condition in which they were not. The findings of a Bayesian analysis suggest that credibly more ownership was reported, and more eye-gaze shift behaviour was observed, in the eye-gaze-adapted condition than in the control condition. Compared with the alternative of no or negative mediation, the findings also lend some credibility to the hypothesis that ownership, at least partly, positively mediates the effect that eye-gaze-adapted VCs have on eye-gaze shift behaviour. Only weak support was found for plausibility as a mediator. These findings help improve insight into how VCs affect people.
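The gaze-adaptation mechanism can be pictured as a gaze-contingent trigger: once the participant's fixation has dwelt long enough inside a region tied to a virtual cognition, the corresponding voice-over plays. The sketch below is hypothetical; the AOI coordinates, the `play_voiceover` callback, and the dwell threshold are illustrative assumptions, not the study's implementation.

```python
# Hypothetical sketch of gaze-contingent triggering of virtual cognitions (VCs):
# fire a voice-over once gaze has dwelt long enough inside its area of interest.
from dataclasses import dataclass

@dataclass
class GazeTriggeredVC:
    aoi: tuple             # (x_min, y_min, x_max, y_max) in normalized HMD coordinates
    audio: str             # identifier of the voice-over clip (assumed)
    dwell_ms: float = 300  # dwell time required before triggering (assumed)
    _accum: float = 0.0
    fired: bool = False

    def update(self, gaze_xy, dt_ms, play_voiceover):
        x, y = gaze_xy
        x0, y0, x1, y1 = self.aoi
        inside = x0 <= x <= x1 and y0 <= y <= y1
        self._accum = self._accum + dt_ms if inside else 0.0  # reset when gaze leaves
        if inside and not self.fired and self._accum >= self.dwell_ms:
            self.fired = True
            play_voiceover(self.audio)

# Example usage with a stand-in audio callback:
vc = GazeTriggeredVC(aoi=(0.4, 0.4, 0.6, 0.6), audio="vc_spider_01")
for sample in [(0.5, 0.5)] * 40:  # 40 gaze samples inside the AOI at 10 ms each
    vc.update(sample, dt_ms=10, play_voiceover=lambda clip: print("play", clip))
```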


2009
Vol 101 (6)
pp. 2889-2897
Author(s):  
Andre Kaminiarz ◽  
Kerstin Königs ◽  
Frank Bremmer

Different types of fast eye movements, including saccades and the fast phases of optokinetic nystagmus (OKN) and optokinetic afternystagmus (OKAN), are coded by only partially overlapping neural networks. This is a likely cause of the differences that have been reported in the dynamic parameters of fast eye movements. The dependence of two of these parameters—peak velocity and duration—on saccadic amplitude has been termed the "main sequence." The main sequence of OKAN fast phases has not yet been analyzed. These eye movements are unique in that they are generated by purely subcortical control mechanisms and occur in complete darkness. In this study, we recorded fast phases of OKAN and OKN as well as visually guided and spontaneous saccades under identical background conditions, because background characteristics have been reported to influence the main sequence of saccades. Our data clearly show that fast phases of OKAN and OKN differ with respect to their main sequence: OKAN fast phases were characterized by lower peak velocities and longer durations than OKN fast phases. Furthermore, we found that the main sequence of spontaneous saccades depends heavily on background characteristics, with saccades in darkness being slower and lasting longer. In contrast, the main sequence of visually guided saccades depended only very slightly on background characteristics. This implies that the existence of a visual saccade target largely cancels out the effect of background luminance. Our data underline the critical role of environmental conditions (light vs. darkness), behavioral tasks (e.g., spontaneous vs. visually guided), and the underlying neural networks for the exact spatiotemporal characteristics of fast eye movements.
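A common way to quantify the main sequence is to fit peak velocity as a saturating exponential function of amplitude, V_peak = V_max(1 − e^(−A/C)); slower classes of fast eye movements then show up as a lower fitted V_max for the same amplitudes. The sketch below fits that curve to synthetic data; the functional form and parameter values are standard conventions in the saccade literature, not values from this study.

```python
# Sketch of a main-sequence fit: peak velocity as a saturating function of
# saccade amplitude, V_peak = V_max * (1 - exp(-A / C)). Synthetic data only;
# the parameters are illustrative, not values from the study.
import numpy as np
from scipy.optimize import curve_fit

def main_sequence(amplitude_deg, v_max, c):
    return v_max * (1.0 - np.exp(-amplitude_deg / c))

rng = np.random.default_rng(1)
amp = rng.uniform(1, 30, size=200)                                 # amplitudes in degrees
v_peak = main_sequence(amp, 500.0, 8.0) + rng.normal(0, 20, 200)   # deg/s plus noise

(v_max_hat, c_hat), _ = curve_fit(main_sequence, amp, v_peak, p0=(400.0, 5.0))
print(f"fitted V_max = {v_max_hat:.0f} deg/s, C = {c_hat:.1f} deg")
# Slower eye-movement classes (e.g. OKAN fast phases, saccades in darkness)
# would yield a lower fitted V_max and/or longer durations per amplitude.
```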


2018
Vol 71 (9)
pp. 1860-1872
Author(s):  
Stephen RH Langton ◽  
Alex H McIntyre ◽  
Peter JB Hancock ◽  
Helmut Leder

Research has established that a perceived eye gaze produces a concomitant shift in a viewer's spatial attention in the direction of that gaze. The two experiments reported here investigate the extent to which the nature of the eye movement made by the gazer contributes to this orienting effect. On each trial, participants were asked to make a speeded response to a target that could appear in a location toward which a centrally presented face had just gazed (a cued target) or in a location that was not the recipient of a gaze (an uncued target). The gaze cues consisted of either fast saccadic eye movements or slower smooth-pursuit movements. Cued targets were responded to faster than uncued targets, and this gaze-cued orienting effect was found to be equivalent for each type of gaze shift, both when the gaze cues were unpredictive of target location (Experiment 1) and when they were counterpredictive (Experiment 2). The results offer no support for the hypothesis that motion speed modulates gaze-cued orienting. However, they do suggest that motion of the eyes per se, regardless of the type of movement, may be sufficient to trigger an orienting effect.
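In such designs the orienting effect reduces to a simple difference score: mean reaction time to uncued targets minus mean reaction time to cued targets, computed per cue type. A minimal sketch with fabricated numbers:

```python
# Minimal sketch of a gaze-cueing effect: mean reaction time (RT) to uncued
# targets minus cued targets, per cue type. Data are fabricated for illustration.
import numpy as np

rng = np.random.default_rng(2)
rts = {
    ("saccade", "cued"): rng.normal(350, 30, 100),    # RTs in ms
    ("saccade", "uncued"): rng.normal(370, 30, 100),
    ("pursuit", "cued"): rng.normal(352, 30, 100),
    ("pursuit", "uncued"): rng.normal(371, 30, 100),
}
for cue_type in ("saccade", "pursuit"):
    effect = rts[(cue_type, "uncued")].mean() - rts[(cue_type, "cued")].mean()
    print(f"{cue_type} cue: orienting effect = {effect:.1f} ms")
# Equivalent positive effects for both cue types would mirror the reported result.
```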


2021
Vol 4 (1)
pp. 71-95
Author(s):  
Juha Lång ◽  
Hana Vrzakova ◽  
Lauri Mehtätalo

One of the main rules of subtitling states that subtitles should be formatted and timed so that viewers have enough time to read and understand the text but also to follow the picture. In this paper we examine the factors that influence the time viewers spend looking at subtitles, concentrating on the lexical and structural properties of the subtitles. The participant group (N = 14) watched a television documentary with Russian narration and Finnish subtitles (the participants' native language) while their eye movements were tracked. Using a linear mixed-effects model, we identified significant effects of subtitle duration and character count on the time participants spent looking at the subtitles. The model also revealed significant inter-individual differences, despite the fact that the participant group was seemingly homogeneous. The findings underline the complexity of subtitled audiovisual material as a stimulus of cognitive processing. We provide a starting point for more comprehensive modelling of the factors involved in gaze behaviour when watching subtitled content.

Lay summary
Subtitles have become a popular method for watching foreign series and films even in countries that have traditionally used dubbing. Because subtitles are visible to the viewer for only a short, limited time, they should be composed so that they are easy to read and the viewer also has time to follow the image. Nevertheless, the factors that affect the time it takes to read a subtitle are not well known. We wanted to find out what makes people who are watching subtitled television shows spend more time gazing at the subtitles. To answer this question, we recorded the eye movements of 14 participants while they watched a short, subtitled television documentary. We built a statistical model of gaze behaviour from the eye-movement data and found that both the length of the subtitle and the time the subtitle is visible are separate contributing factors. We also discovered large differences between individual viewers. Our conclusion is that people process subtitled content in very different ways, but there are some common tendencies. Our model can be seen as a solid starting point for more comprehensive modelling of the gaze behaviour of people watching subtitled audiovisual material.
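A linear mixed-effects model of this kind, with subtitle duration and character count as fixed effects and a random intercept per participant, can be sketched with statsmodels. The column names and synthetic data below are illustrative assumptions, not the study's dataset.

```python
# Sketch of a linear mixed-effects model of dwell time on subtitles, assuming
# fixed effects for subtitle duration and character count and a random
# intercept per participant. Data and column names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 14 * 40                                        # 14 participants, 40 subtitles each
df = pd.DataFrame({
    "participant": np.repeat(np.arange(14), 40),
    "duration_s": rng.uniform(1.0, 6.0, n),        # time the subtitle is visible
    "char_count": rng.integers(10, 80, n),         # length of the subtitle
})
df["dwell_s"] = (0.3 * df["duration_s"] + 0.02 * df["char_count"]
                 + rng.normal(0, 0.3, n))          # synthetic response variable

model = smf.mixedlm("dwell_s ~ duration_s + char_count", df,
                    groups=df["participant"])      # random intercept per viewer
print(model.fit().summary())
```

The random intercept is what captures the inter-individual differences the abstract mentions: each viewer gets their own baseline dwell time while the fixed effects are shared across viewers.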


2021
Author(s):  
Marek A. Pedziwiatr ◽  
Elisabeth von dem Hagen ◽  
Christoph Teufel

Humans constantly move their eyes to explore the environment and obtain information. Competing theories of gaze guidance consider the factors driving eye movements within a dichotomy between low-level visual features and high-level object representations. However, recent developments in object perception indicate a complex and intricate relationship between features and objects. Specifically, image-independent object knowledge can generate objecthood by dynamically reconfiguring how feature space is carved up by the visual system. Here, we adopt this emerging perspective of object perception, moving away from the simplifying dichotomy between features and objects in explanations of gaze guidance. We recorded eye movements in response to stimuli that appear as meaningless patches on initial viewing but are experienced as coherent objects once relevant object knowledge has been acquired. We demonstrate that gaze guidance differs substantially depending on whether observers experienced the same stimuli as meaningless patches or organised them into object representations. In particular, fixations on identical images became object-centred, less dispersed, and more consistent across observers once they had been exposed to relevant prior object knowledge. Observers' gaze behaviour also indicated a shift from exploratory information sampling to a strategy of extracting information mainly from selected, object-related image areas. These effects were evident from the first fixations on the image. Importantly, however, eye movements were not fully determined by object representations but were best explained by a simple model that integrates image-computable features and high-level, knowledge-dependent object representations. Overall, the results show how information sampling via eye movements in humans is guided by a dynamic interaction between image-computable features and knowledge-driven perceptual organisation.
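The integration described in the final sentences can be sketched as a convex combination of an image-computable feature map and a knowledge-dependent object map, scored against observed fixations. The weighting scheme and the normalized scanpath saliency (NSS) metric below are generic modelling choices for illustration, not the authors' exact model.

```python
# Sketch of combining an image-computable saliency map with a knowledge-driven
# object map to predict fixations. The convex weighting and the normalized
# scanpath saliency (NSS) score are generic choices, not the paper's exact model.
import numpy as np

rng = np.random.default_rng(4)
h, w = 48, 64
feature_map = rng.random((h, w))            # stand-in for low-level saliency
object_map = np.zeros((h, w))
object_map[15:30, 20:45] = 1.0              # stand-in for a known object region

def nss(pred, fixations):
    """Normalized scanpath saliency: mean z-scored prediction at fixated pixels."""
    z = (pred - pred.mean()) / pred.std()
    return np.mean([z[y, x] for y, x in fixations])

fixations = [(20, 30), (25, 40), (18, 25)]  # (row, col) of observed fixations
for w_obj in (0.0, 0.5, 1.0):               # pure features ... pure object knowledge
    combined = (1 - w_obj) * feature_map + w_obj * object_map
    print(f"object weight {w_obj:.1f}: NSS = {nss(combined, fixations):.2f}")
```

Fitting the object weight per condition would express the paper's claim quantitatively: the weight should rise once observers have acquired the relevant object knowledge, without ever fully displacing the feature term.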


Author(s):  
Gavindya Jayawardena ◽  
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye-gaze data. While there are tools to delineate AOIs for extracting eye-movement data, they may require users to manually draw AOI boundaries on the eye-tracking stimuli or to use markers that define the AOIs. This paper introduces two novel techniques to dynamically filter eye-movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object-instance segmentation models for offline detection of dynamic AOIs in video streams. This research presents the implementation and evaluation of object detectors and object-instance segmentation models to find the best model to integrate into a real-time eye-movement analysis pipeline. The authors filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply object detectors to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while object-instance segmentation models capture 30% of eye movements.
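Filtering gaze samples by the polygonal boundary of a detected AOI reduces to a point-in-polygon test. The sketch below uses matplotlib's Path for that test; the polygon stands in for a segmentation-mask outline, and the detector output format is an assumption.

```python
# Sketch of filtering gaze samples that fall inside a detected dynamic AOI.
# The polygon stands in for an instance-segmentation mask outline; the
# detector output format is an assumption.
import numpy as np
from matplotlib.path import Path

# Polygonal boundary of one detected object, as (x, y) pixel vertices.
aoi_polygon = Path([(100, 100), (300, 110), (320, 260), (90, 240)])

gaze = np.array([[150, 150], [500, 50], [200, 200], [310, 255], [10, 10]])
inside = aoi_polygon.contains_points(gaze)

print(f"{inside.sum()} of {len(gaze)} gaze samples fall inside the AOI")
print(gaze[inside])   # only these samples enter the per-AOI eye-metric analysis
```

For video, the same test is repeated per frame against that frame's detected polygon, which is what makes the AOIs "dynamic."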


Author(s):  
Angie M. Michaiel ◽  
Elliott T.T. Abe ◽  
Cristopher M. Niell

ABSTRACT
Many studies of visual processing are conducted in unnatural conditions, such as head- and gaze-fixation. As this radically limits natural exploration of the visual environment, much less is known about how animals actively use their sensory systems to acquire visual information in natural, goal-directed contexts. Recently, prey capture has emerged as an ethologically relevant behavior that mice perform without training and that engages vision for accurate orienting and pursuit. However, it is unclear how mice target their gaze during such natural behaviors, particularly since, in contrast to many predatory species, mice have a narrow binocular field and lack the foveate vision that would entail fixing their gaze on a specific point in the visual field. Here we measured head and bilateral eye movements in freely moving mice performing prey capture. We find that the majority of eye movements are compensatory for head movements, thereby acting to stabilize the visual scene. During head turns, however, these periods of stabilization are interspersed with non-compensatory saccades that abruptly shift gaze position. Analysis of eye movements relative to the cricket position shows that the saccades do not preferentially select a specific point in the visual scene. Rather, orienting movements are driven by the head, with the eyes following in coordination to sequentially stabilize and recenter the gaze. These findings help relate eye movements in the mouse to those of other species, and provide a foundation for studying active vision during ethological behaviors in the mouse.
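Separating compensatory from non-compensatory (saccadic) eye movements in head-free recordings is often done via gaze velocity, i.e. the sum of eye and head velocity: during compensation the two cancel and gaze velocity stays near zero, while saccades appear as brief high-velocity gaze shifts. The signals and threshold below are illustrative assumptions, not the paper's analysis parameters.

```python
# Sketch of separating compensatory eye movements from saccades in head-free
# data: gaze velocity = eye velocity + head velocity. During compensation the
# two cancel; saccades appear as high gaze-velocity events. Threshold is assumed.
import numpy as np

rng = np.random.default_rng(5)
fs = 200.0                                    # sampling rate (Hz), assumed
head_vel = 80 * np.sin(2 * np.pi * 0.5 * np.arange(1000) / fs)  # deg/s head turns
eye_vel = -head_vel + rng.normal(0, 5, 1000)  # mostly compensatory eye movements
eye_vel[400:410] += 300.0                     # inject one non-compensatory saccade

gaze_vel = eye_vel + head_vel                 # velocity of gaze in the world
is_saccade = np.abs(gaze_vel) > 100.0         # deg/s threshold, illustrative

print(f"samples flagged as saccadic: {is_saccade.sum()} of {len(gaze_vel)}")
print(f"median |gaze velocity| outside saccades: "
      f"{np.median(np.abs(gaze_vel[~is_saccade])):.1f} deg/s")
```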

