Do you see what I see? Differences in eye movements and gaze behavior in conservatives versus liberals

Author(s):  
Michael D. Dodd ◽  
John R. Hibbing ◽  
Kevin B. Smith
Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movements and visual stimuli have been used to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. During these analyses, the eye movement data and the saliency map are presented to analysts either as separate views or as merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues that represent visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret the visual attention. We analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
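As an illustration of the idea (not the authors' implementation), the sketch below annotates each fixation with whichever saliency feature is strongest at that point, so the scanpath itself carries the saliency information instead of requiring a full saliency map drawn underneath it. The feature maps, fixation list, and color coding are all hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical per-pixel saliency feature maps (H x W) for one visual stimulus.
feature_maps = {
    "intensity":   rng.random((600, 800)),
    "color":       rng.random((600, 800)),
    "orientation": rng.random((600, 800)),
}
# Hypothetical fixations: (x, y) in image coordinates, duration in seconds.
fixations = [(120, 340, 0.25), (410, 220, 0.40), (560, 300, 0.18)]
colors = {"intensity": "tab:blue", "color": "tab:red", "orientation": "tab:green"}

fig, ax = plt.subplots()
for x, y, dur in fixations:
    # Color each fixation by the saliency feature that dominates at that point,
    # so the scanpath carries the saliency information as a visual clue.
    dominant = max(feature_maps, key=lambda name: feature_maps[name][y, x])
    ax.scatter(x, y, s=2000 * dur, c=colors[dominant], alpha=0.6)
ax.set_xlim(0, 800)
ax.set_ylim(600, 0)  # image coordinates: origin at top-left
plt.show()
```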


2021 ◽  
Vol 4 (1) ◽  
pp. 71-95
Author(s):  
Juha Lång ◽  
Hana Vrzakova ◽  
Lauri Mehtätalo

One of the main rules of subtitling states that subtitles should be formatted and timed so that viewers have enough time to read and understand the text while still being able to follow the picture. In this paper we examine the factors that influence the time viewers spend looking at subtitles, concentrating on the lexical and structural properties of the subtitles. The participant group (N = 14) watched a television documentary with Russian narration and Finnish subtitles (the participants' native language) while their eye movements were tracked. Using a linear mixed-effects model, we identified significant effects of subtitle duration and character count on the time participants spent looking at the subtitles. The model also revealed significant inter-individual differences, despite the fact that the participant group was seemingly homogeneous. The findings underline the complexity of subtitled audiovisual material as a stimulus of cognitive processing. We provide a starting point for more comprehensive modelling of the factors involved in gaze behaviour when watching subtitled content. Lay summary: Subtitles have become a popular way of watching foreign series and films, even in countries that have traditionally relied on dubbing. Because subtitles are visible to the viewer for only a short, limited time, they should be composed so that they are easy to read and leave the viewer time to follow the image as well. Nevertheless, the factors that affect the time it takes to read a subtitle are not well known. We wanted to find out what makes people who are watching subtitled television shows spend more time gazing at the subtitles. To answer this question, we recorded the eye movements of 14 participants while they watched a short, subtitled television documentary. We created a statistical model of gaze behavior from the eye movement data and found that both the length of a subtitle and the time it is visible are separate contributing factors. We also discovered large differences between individual viewers. Our conclusion is that people process subtitled content in very different ways, but there are some common tendencies. Our model can be seen as a solid starting point for more comprehensive modelling of the gaze behavior of people watching subtitled audiovisual material.
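A minimal sketch of this kind of analysis, assuming statsmodels and synthetic stand-in data (the column names and effect sizes below are invented, not the study's): a linear mixed-effects model of subtitle dwell time with fixed effects for duration and character count and a random intercept per participant, mirroring the inter-individual differences the paper reports.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per subtitle viewing.
rng = np.random.default_rng(1)
n = 14 * 40  # 14 participants, 40 subtitles each
df = pd.DataFrame({
    "participant": np.repeat(np.arange(14), 40),
    "duration":    rng.uniform(1.5, 6.0, n),   # seconds the subtitle is on screen
    "char_count":  rng.integers(10, 75, n),    # subtitle length in characters
})
# Invented generative model for illustration only.
df["dwell_time"] = (0.3 * df["duration"] + 0.02 * df["char_count"]
                    + rng.normal(0, 0.3, n))  # total gaze time on the subtitle

# Random intercept per participant captures inter-individual differences.
model = smf.mixedlm("dwell_time ~ duration + char_count", df,
                    groups=df["participant"])
print(model.fit().summary())
```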


2018 ◽  
Vol 120 (4) ◽  
pp. 1602-1615 ◽  
Author(s):  
Anouk J. de Brouwer ◽  
Mohammed Albaghdadi ◽  
J. Randall Flanagan ◽  
Jason P. Gallivan

Successful motor performance relies on our ability to adapt to changes in the environment by learning novel mappings between motor commands and sensory outcomes. Such adaptation is thought to involve two distinct mechanisms: an implicit, error-based component linked to slow learning and an explicit, strategic component linked to fast learning and savings (i.e., faster relearning). Because behavior, at any given moment, is the resultant combination of these two processes, it has remained a challenge to parcellate their relative contributions to performance. The explicit component of visuomotor rotation (VMR) learning has recently been measured by having participants verbally report the aiming strategy used to counteract the rotation. However, this procedure has been shown to magnify the explicit component. Here we tested whether task-specific eye movements, a natural component of reach planning but poorly studied in motor learning tasks, can provide a direct readout of the state of the explicit component during VMR learning. We show, by placing targets on a visible ring and including a delay between target presentation and reach onset, that individual differences in gaze patterns during sensorimotor learning are linked to participants' rates of learning and their expression of savings. Specifically, we find that participants who, during reach planning, naturally fixate an aimpoint rotated away from the target location show faster initial adaptation and readaptation 24 h later. Our results demonstrate that gaze behavior can not only uniquely identify individuals who implement cognitive strategies during learning but also reveal how their implementation is linked to differences in learning. NEW & NOTEWORTHY Although it is increasingly well appreciated that sensorimotor learning is driven by two separate components, an error-based process and a strategic process, it has remained a challenge to identify their relative contributions to performance. Here we demonstrate that task-specific eye movements provide a direct readout of explicit strategies during sensorimotor learning in the presence of visual landmarks. We further show that individual differences in gaze behavior are linked to learning rate and savings.
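A minimal sketch of the readout idea, under our own assumptions rather than the authors' code: the angle of the fixated aimpoint relative to the target approximates the explicit (strategic) component, and subtracting it from the overall hand compensation leaves the implicit component. The coordinates below are invented.

```python
import numpy as np

def angle_deg(x, y):
    """Direction of the point (x, y) from the origin, in degrees."""
    return np.degrees(np.arctan2(y, x))

# Invented coordinates: target on the visible ring, fixated aimpoint during
# the delay between target presentation and reach onset, and initial reach.
target_angle   = angle_deg(10.0, 0.0)
fixation_angle = angle_deg(9.4, 3.4)   # gaze readout of the aiming strategy
hand_angle     = angle_deg(8.9, 4.5)

explicit = fixation_angle - target_angle  # strategic re-aiming, read from gaze
total    = hand_angle - target_angle      # overall compensation for the rotation
implicit = total - explicit               # residual error-based adaptation
print(f"explicit = {explicit:.1f} deg, implicit = {implicit:.1f} deg")
```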


Author(s):  
Jennifer Smith ◽  
Geoff Long ◽  
Peter Dawes ◽  
Oliver Runswick ◽  
Michael Tipton

Surveillance is key to the lifesaving capability of lifeguards. Experienced personnel consistently display enhanced hazard detection capabilities compared to less experienced counterparts. However, the mechanisms that underpin this effect and the time it takes to develop these skills are not understood. We hypothesized that, after one season of experience, the number of hazards detected by, and the eye movements of, less experienced lifeguards (LEL) would more closely approximate those of experienced lifeguards (EL). The LEL watched 'beach scene' videos at the beginning and end of their first season. The number of hazards detected and eye-movement data were collected and compared to the EL group. The LEL perceived fewer hazards than the EL, and their detection rate did not increase over the season. There was no difference in eye movements between the groups. The findings suggest that one season is not enough for lifeguards to develop enhanced hazard detection skills, and that skill-level differences are not underpinned by differences in gaze behavior.


2016 ◽  
Vol 28 (10) ◽  
pp. 1613-1624 ◽  
Author(s):  
Basil C. Preisig ◽  
Noëmi Eggenberger ◽  
Giuseppe Zito ◽  
Tim Vanbellingen ◽  
Rahel Schumacher ◽  
...  

The human turn-taking system regulates the smooth and precise exchange of speaking turns during face-to-face interaction. Recent studies investigated the processing of ongoing turns during conversation by measuring the eye movements of noninvolved observers. The findings suggest that humans shift their gaze to the upcoming speaker in anticipation, before the start of the next turn. Moreover, there is evidence that the ability to detect turn transitions in time relies mainly on the lexico-syntactic content of the conversation. Consequently, patients with aphasia, who often experience deficits in both semantic and syntactic processing, might have difficulty detecting turn transitions and shifting their gaze at them in time. To test this assumption, we presented video vignettes of natural conversations to aphasic patients and healthy controls while their eye movements were measured. The frequency and latency of event-related gaze shifts, with respect to the end of the current turn in the videos, were compared between the two groups. Our results suggest that, compared with healthy controls, aphasic patients have a reduced probability of shifting their gaze at turn transitions but do not show significantly increased gaze shift latencies. In healthy controls, but not in aphasic patients, the probability of a gaze shift at a turn transition increased when the video content of the current turn had higher lexico-syntactic complexity. Furthermore, the results from voxel-based lesion symptom mapping indicate that the association between lexico-syntactic complexity and gaze shift latency in aphasic patients is predicted by brain lesions located in the posterior branch of the left arcuate fasciculus. Higher lexico-syntactic processing demands seem to lead to a reduced gaze shift probability in aphasic patients. This finding may represent missed opportunities for patients to place their contributions during everyday conversation.
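The two measures compared between groups can be sketched as follows (a simplified illustration with invented event times, not the study's analysis pipeline): for each turn end, find the first gaze shift inside a window around it, then report the proportion of transitions with a shift and the shift latencies.

```python
import numpy as np

def shifts_at_transitions(shift_times, turn_ends, window=(-0.5, 1.5)):
    """For each turn end, take the first gaze shift inside the window
    (seconds relative to the turn end); return hit probability and latencies.
    Negative latencies are anticipatory shifts before the turn actually ends."""
    latencies = []
    for end in turn_ends:
        rel = shift_times - end
        inside = rel[(rel >= window[0]) & (rel <= window[1])]
        if inside.size:
            latencies.append(inside.min())
    latencies = np.array(latencies)
    return latencies.size / len(turn_ends), latencies

# Invented event times (seconds) for one observer watching one video vignette.
shifts    = np.array([2.1, 5.8, 9.3, 14.0])
turn_ends = np.array([2.0, 6.0, 9.5, 13.0])
prob, lat = shifts_at_transitions(shifts, turn_ends)
print(f"gaze shift probability = {prob:.2f}, mean latency = {lat.mean():.2f}s")
```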


2021 ◽  
Author(s):  
Anna Izmalkova ◽  
Anastasia Rzheshevskaya

The study explores the effects of graphological and semantic foregrounding on the speech and gaze behavior of subjects with higher and lower impulsivity during textual information construal. The eye movements of sixteen participants were recorded as they read drama texts with interdiscourse switching (semantic foregrounding) and with typeface features distinct from the surrounding text (graphological foregrounding). Discourse modification patterns were analyzed and processed in several steps: specification of participant/object/action/event/perspective modification, parametric annotation of participants' discourse responses, and contrastive analysis of modification parameter activity and parameter synchronized activity. Significant distinctions were found in eye movement parameters (gaze count and initial fixation duration) between subjects with higher and lower impulsivity when reading parts of the text with graphological foregrounding. Impulsive subjects tended to visit these areas more often, and with longer initial fixations, than reflective subjects, which is explained in terms of stimulus-driven attention associated with bottom-up processes. However, these differences in gaze behavior did not result in pronounced distinctions in discourse responses, which were only slightly mediated by impulsivity/reflectivity.


2006 ◽  
Vol 96 (3) ◽  
pp. 1358-1369 ◽  
Author(s):  
Gerben Rotman ◽  
Nikolaus F. Troje ◽  
Roland S. Johansson ◽  
J. Randall Flanagan

We previously showed that, when observers watch an actor performing a predictable block-stacking task, the coordination between the observer's gaze and the actor's hand is similar to the coordination between the actor's gaze and hand. Both the observer and the actor direct gaze to forthcoming grasp and block landing sites and shift their gaze to the next grasp or landing site at around the time the hand contacts the block or the block contacts the landing site. Here we compare observers' gaze behavior in a block manipulation task when the observers did and when they did not know, in advance, which of two blocks the actor would pick up first. In both cases, observers managed to fixate the target ahead of the actor's hand and showed proactive gaze behavior. However, these target fixations occurred later, relative to the actor's movement, when observers did not know the target block in advance. In perceptual tests, in which observers watched animations of the actor reaching partway to the target and had to guess which block was the target, we found that the time at which observers were able to correctly do so was very similar to the time at which they would make saccades to the target block. Overall, our results indicate that observers use gaze in a fashion that is appropriate for hand movement planning and control. This in turn suggests that they implement representations of the manual actions required in the task and representations that direct task-specific eye movements.


Author(s):  
Inssaf El Guabassi ◽  
Zakaria Bousalem ◽  
Mohammed Al Achhab ◽  
Ismail Jellouli ◽  
Badr Eddine EL Mohajir

A learner's learning style is a key principle and core value of adaptive learning systems (ALS). Moreover, understanding an individual learner's learning style is a precondition for providing the best resource adaptation services. However, the majority of ALS that consider learning styles detect them using questionnaires, a method with various disadvantages: for example, it is unsuitable for some kinds of respondents, time-consuming to complete, and may be misunderstood by respondents. In the present paper, we propose an approach for automatically detecting learning styles in ALS based on eye-tracking technology, which captures some of the most informative characteristics of gaze behavior. The experimental results showed a strong relationship between the Felder-Silverman learning style and the eye movements recorded while learning.
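A minimal sketch of how such automatic detection might look, assuming scikit-learn and invented aggregate gaze features (the paper does not specify this exact pipeline): a classifier mapping per-learner eye-movement statistics to one Felder-Silverman dimension.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
# Invented aggregate gaze features per learner: mean fixation duration,
# fixation count, mean saccade amplitude, proportion of time on text vs. figures.
X = rng.random((40, 4))
# Invented labels for one Felder-Silverman dimension, e.g. visual (0) vs. verbal (1).
y = rng.integers(0, 2, 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # chance level here: the data are random
```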


2016 ◽  
Vol 113 (29) ◽  
pp. 8332-8337 ◽  
Author(s):  
David Hoppe ◽  
Constantin A. Rothkopf

During active behavior humans redirect their gaze several times every second within the visual environment. Where we look within static images is highly efficient, as quantified by computational models of human gaze shifts in visual search and face recognition tasks. However, when we shift gaze is mostly unknown, despite its fundamental importance for survival in a dynamic world. It has been suggested that during naturalistic visuomotor behavior gaze deployment is coordinated with task-relevant events, often predictive of future events, and studies in sportsmen suggest that the timing of eye movements is learned. Here we establish that humans efficiently learn to adjust the timing of eye movements in response to environmental regularities when monitoring locations in the visual scene to detect probabilistically occurring events. To detect the events, humans adopt strategies that can be understood through a computational model that includes perceptual and acting uncertainties, a minimal processing time, and, crucially, the intrinsic costs of gaze behavior. Thus, subjects traded off event detection rate against the behavioral costs of carrying out eye movements. Remarkably, based on this rational bounded-actor model, the time course of learning the gaze strategies is fully explained by an optimal Bayesian learner with humans' characteristic uncertainty in time estimation, the well-known scalar law of biological timing. Taken together, these findings establish that the human visual system is highly efficient at learning temporal regularities in the environment and that it can use these regularities to control the timing of eye movements to detect behaviorally relevant events.
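The trade-off the model describes can be sketched as follows (our simplified illustration, not the authors' model): choose a planned look time that maximizes the probability of catching the event, given scalar timing noise whose standard deviation grows with the interval, minus a fixed cost per eye movement. All constants below are invented.

```python
import numpy as np

rng = np.random.default_rng(4)
WEBER = 0.15          # scalar law of timing: SD grows in proportion to the interval
COST_PER_LOOK = 0.05  # intrinsic behavioral cost of carrying out the eye movement
EVENT_TIME = 2.0      # seconds; when the monitored event tends to occur
DWELL = 0.3           # seconds the gaze stays at the location once it arrives

def detection_prob(planned_look_time):
    """Probability that the event falls inside the dwell window when the
    planned look time is perturbed by scalar timing noise."""
    actual = rng.normal(planned_look_time, WEBER * planned_look_time, 10_000)
    return np.mean((actual <= EVENT_TIME) & (EVENT_TIME <= actual + DWELL))

candidates = np.linspace(0.5, 3.0, 26)
utilities = [detection_prob(t) - COST_PER_LOOK for t in candidates]
best = candidates[int(np.argmax(utilities))]
# Looking at all is worthwhile only when the best detection gain exceeds the cost.
print(f"best planned look time: {best:.2f}s, utility {max(utilities):.2f}")
```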

