Eye Gaze Behavior at Turn Transition: How Aphasic Patients Process Speakers' Turns during Video Observation

2016 · Vol 28 (10) · pp. 1613–1624
Author(s):
Basil C. Preisig
Noëmi Eggenberger
Giuseppe Zito
Tim Vanbellingen
Rahel Schumacher
...  

The human turn-taking system regulates the smooth and precise exchange of speaking turns during face-to-face interaction. Recent studies investigated the processing of ongoing turns during conversation by measuring the eye movements of noninvolved observers. The findings suggest that humans shift their gaze to the upcoming speaker in anticipation of the start of the next turn. Moreover, there is evidence that the ability to detect turn transitions in a timely manner relies mainly on the lexico-syntactic content of the conversation. Consequently, patients with aphasia, who often experience deficits in both semantic and syntactic processing, might have difficulty detecting turn transitions and shifting their gaze in time. To test this assumption, we presented video vignettes of natural conversations to aphasic patients and healthy controls while their eye movements were measured. The frequency and latency of event-related gaze shifts, relative to the end of the current turn in the videos, were compared between the two groups. Our results suggest that, compared with healthy controls, aphasic patients are less likely to shift their gaze at turn transitions but do not show significantly increased gaze shift latencies. In healthy controls, but not in aphasic patients, the probability of a gaze shift at turn transition increased when the current turn had higher lexico-syntactic complexity. Furthermore, results from voxel-based lesion-symptom mapping indicate that the association between lexico-syntactic complexity and gaze shift latency in aphasic patients is predicted by brain lesions located in the posterior branch of the left arcuate fasciculus. Higher lexico-syntactic processing demands thus seem to reduce gaze shift probability in aphasic patients, which may translate into missed opportunities for patients to place their contributions during everyday conversation.
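As a rough illustration of this kind of event-related analysis, a gaze shift can be scored against the end of the current turn. This is a minimal sketch; the function names and the one-second event window are assumptions for illustration, not parameters reported in the study.

```python
def gaze_shift_latency(shift_times_ms, turn_end_ms, window_ms=1000):
    """Return the latency (ms) of the first gaze shift inside the event
    window around the turn end, or None if no shift occurred there
    (a missed transition, which lowers the shift probability)."""
    for t in shift_times_ms:
        if turn_end_ms - window_ms <= t <= turn_end_ms + window_ms:
            return t - turn_end_ms  # negative latency = anticipatory shift
    return None
```

Averaging these latencies over trials, and counting the `None` cases, yields the per-group shift probability and latency measures the abstract compares.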

2018 · Vol 71 (9) · pp. 1860–1872
Author(s):
Stephen RH Langton
Alex H McIntyre
Peter JB Hancock
Helmut Leder

Research has established that perceived eye gaze produces a concomitant shift of a viewer's spatial attention in the direction of that gaze. The two experiments reported here investigate the extent to which the nature of the eye movement made by the gazer contributes to this orienting effect. On each trial, participants made a speeded response to a target that appeared either in a location toward which a centrally presented face had just gazed (a cued target) or in a location that was not the recipient of a gaze (an uncued target). The gaze cues consisted of either fast saccadic eye movements or slower smooth-pursuit movements. Cued targets were responded to faster than uncued targets, and this gaze-cued orienting effect was equivalent for both types of gaze shift, whether the gazes were unpredictive of target location (Experiment 1) or counterpredictive of it (Experiment 2). The results offer no support for the hypothesis that motion speed modulates gaze-cued orienting. They do suggest, however, that motion of the eyes per se, regardless of the type of movement, may be sufficient to trigger an orienting effect.


Sensors · 2021 · Vol 21 (15) · pp. 5178
Author(s):
Sangbong Yoo
Seongmin Jeong
Seokyeon Kim
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, the eye-movement data and the saliency map are shown to analysts either as separate views or as merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore difficult to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye-movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues representing visual attention are analyzed to reveal which saliency features are prominent for the visual-stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to evaluate how embedding saliency features within the visualization helps us understand the visual attention of an observer.
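One basic building block for pairing gaze data with saliency features, sketched here under assumed names (this is not the paper's implementation), is sampling the saliency value under each gaze point:

```python
import numpy as np

def saliency_at_gaze(saliency_map, gaze_xy):
    """Sample the saliency value under each gaze point.

    saliency_map: 2-D array (rows = y, cols = x) of saliency values.
    gaze_xy: iterable of (x, y) gaze coordinates in pixels.
    Coordinates are clipped to the map so off-screen samples still
    return a value instead of raising."""
    h, w = saliency_map.shape
    values = []
    for x, y in gaze_xy:
        xi = int(np.clip(x, 0, w - 1))
        yi = int(np.clip(y, 0, h - 1))
        values.append(float(saliency_map[yi, xi]))
    return values
```

The resulting per-fixation saliency values can then drive the visual clues (e.g. color or size encodings) instead of drawing raw gaze traces over the saliency map.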


Author(s):
Ding Ding
Mark A Neerincx
Willem-Paul Brinkman

Abstract Virtual cognitions (VCs) are a stream of simulated thoughts people hear while immersed in a virtual environment, e.g. a simulated inner voice presented as a voice-over. As previous studies have shown, they can enhance people's self-efficacy and knowledge about, for example, social interactions. Ownership and plausibility of these VCs are regarded as important for their effect, so enhancing both might be beneficial. A potential strategy for achieving this is synchronizing the VCs with people's eye fixations, using eye-tracking technology embedded in a head-mounted display. This paper therefore tests this idea in the context of a pre-therapy for spider and snake phobia, examining the ability to guide people's eye fixations. An experiment with 24 participants was conducted using a within-subjects design. Each participant was exposed to two conditions: one in which the VCs were adapted to the participant's eye gaze, and a control condition in which they were not. The findings of a Bayesian analysis suggest that credibly more ownership was reported, and more eye-gaze shift behaviour was observed, in the eye-gaze-adapted condition than in the control condition. Compared with the alternative of no or negative mediation, the findings also lend some credibility to the hypothesis that ownership, at least partly, positively mediates the effect that eye-gaze-adapted VCs have on eye-gaze shift behaviour. Only weak support was found for plausibility as a mediator. These findings help improve insight into how VCs affect people.


2021 · Vol 15 (1)
Author(s):
Yuko Ishizaki
Takahiro Higuchi
Yoshitoki Yanagimoto
Hodaka Kobayashi
Atsushi Noritake
...  

Abstract Background Children with autism spectrum disorder (ASD) may experience difficulty adapting to daily life in preschool or school settings and are likely to develop psychosomatic symptoms. For a better understanding of the difficulties experienced daily by preschool children and adolescents with ASD, this study investigated differences in eye-gaze behavior in the classroom environment between children with ASD and children with typical development (TD). Methods The study evaluated 30 children with ASD and 49 children with TD. Participants were presented with images of a human face and a classroom scene. Eye tracking with an iView X system was used to measure and compare, between the two groups, the time spent gazing at specific regions of the visual stimuli. Results Compared with preschool children with TD, preschool children with ASD spent less time gazing at the eyes of the human face and at the object the teacher pointed to in the classroom image. Preschool children with TD who had no classroom experience still tended to look at the object the teacher pointed to in the classroom image. Conclusion Children with ASD did not look at the eyes in the facial image or at the pointed-to object in the classroom image, which may indicate difficulty analyzing situations, understanding instruction in a classroom, or acting appropriately in a group. This suggests that the gaze behavior of children with ASD contributes to social maladaptation and psychosomatic symptoms. A therapeutic approach that focuses on joint attention is desirable for improving the ability of children with ASD to adapt to their social environment.


Stroke · 2015 · Vol 46 (suppl_1)
Author(s):
John-Ross Rizzo
Todd Hudson
Briana Kowal
Michal Wiseman
Preeti Raghavan

Introduction: Visual abnormalities and manual motor control have been studied extensively after stroke, but oculomotor control post-stroke has not been well characterized. Recent studies have revealed that in visually guided reaches, arm movements are planned during eye-movement execution, which may contribute to increased task complexity. In fact, in healthy controls during visually guided reaches, the onset of the eye movement is delayed, its velocity is reduced, and endpoint errors are larger relative to isolated eye movements. Our objective in this experiment was to examine the temporal properties of eye-movement execution in stroke patients with no diagnosed visual impairment. The goal is to improve understanding of oculomotor control after stroke relative to normal function and, ultimately, of its coordination with manual motor control during joint eye and hand movements. We hypothesized that stroke patients would show abnormal initiation (onset latency) of saccades in an eye-movement task compared with healthy controls. Methods: We measured the kinematics of eye movements during point-to-point saccades: each trial began with a static fixation point, and the stimulus was a target flashed on a computer monitor. Eye position was recorded objectively with a video-based eye tracker at a sampling frequency of 2000 Hz (SR Research EyeLink). Ten stroke subjects, more than 4 months from injury and with no diagnosed visual impairment, and 10 healthy controls each completed 432 saccades in a serial fashion. Results: Stroke patients had significantly faster onset latencies than healthy controls during saccades (99.5 ms vs. 245.2 ms, p = 0.00058). Conclusion: A better understanding of the variations in oculomotor control post-stroke, which may go unnoticed during clinical assessment, may clarify how eye control synchronizes with arm or manual motor control. This knowledge could help tailor rehabilitative strategies to amplify motor recovery.
As next steps, we will perform objective eye and hand recordings during visually guided reaches post-stroke to better understand the harmonization, or lack thereof, after neurologic insult.
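Saccade onset latency is commonly extracted from such recordings with a velocity-threshold criterion. The sketch below assumes horizontal gaze position in degrees and a 30 deg/s threshold; these names and parameters are illustrative defaults, not the authors' EyeLink event-detection settings.

```python
import numpy as np

def saccade_onset_latency(gaze_x_deg, t_ms, target_onset_ms, vel_thresh=30.0):
    """Return the latency (ms) from target onset to the first sample after
    onset whose angular velocity exceeds vel_thresh (deg/s), or None if no
    saccade is detected."""
    # np.gradient gives deg/ms for millisecond timestamps; scale to deg/s.
    velocity = np.abs(np.gradient(gaze_x_deg, t_ms)) * 1000.0
    for v, t in zip(velocity, t_ms):
        if t >= target_onset_ms and v > vel_thresh:
            return t - target_onset_ms
    return None
```

At the study's 2000 Hz sampling rate the timestamps would be spaced 0.5 ms apart; the function only assumes they are monotonically increasing.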


2020 · Vol 1 (1)
Author(s):
William Matchin
Emily Wood

Abstract Matchin and Hickok (2020) proposed that the left posterior inferior frontal gyrus (PIFG) and the left posterior temporal lobe (PTL) both play a role in syntactic processing, broadly construed, attributing distinct functions to these regions with respect to production and perception. Consistent with this hypothesis, functional dissociations between these regions have been demonstrated with respect to lesion-symptom mapping in aphasia. However, neuroimaging studies of syntactic comprehension typically show similar activations in these regions. To identify whether these regions show distinct activation patterns with respect to syntactic perception and production, we performed an fMRI study contrasting the subvocal articulation and perception of structured jabberwocky phrases (syntactic), sequences of real words (lexical), and sequences of pseudowords (phonological). We defined two sets of language-selective regions of interest (ROIs) in individual subjects for the PIFG and the PTL using the contrasts [syntactic > lexical] and [syntactic > phonological]. We found robust significant interactions of comprehension and production between these two regions at the syntactic level, for both sets of language-selective ROIs. This suggests a core difference in the function of these regions with respect to production and perception, consistent with the lesion literature.


2021 · Vol 4 (1) · pp. 71–95
Author(s):
Juha Lång
Hana Vrzakova
Lauri Mehtätalo

One of the main rules of subtitling states that subtitles should be formatted and timed so that viewers have enough time to read and understand the text while still following the picture. In this paper we examine the factors that influence the time viewers spend looking at subtitles, concentrating on the lexical and structural properties of the subtitles. The participant group (N = 14) watched a television documentary with Russian narration and Finnish subtitles (the participants' native language) while their eye movements were tracked. Using a linear mixed-effects model, we identified significant effects of subtitle duration and character count on the time participants spent looking at the subtitles. The model also revealed significant inter-individual differences, even though the participant group was seemingly homogeneous. The findings underline the complexity of subtitled audiovisual material as a stimulus of cognitive processing. We provide a starting point for more comprehensive modelling of the factors involved in gaze behaviour when watching subtitled content.
Lay summary: Subtitles have become a popular way of watching foreign series and films, even in countries that have traditionally used dubbing. Because subtitles are visible to the viewer for only a short, limited time, they should be composed so that they are easy to read and leave the viewer time to also follow the image. Nevertheless, the factors that affect the time it takes to read a subtitle are not well known. We wanted to find out what makes people who are watching subtitled television shows spend more time gazing at the subtitles. To answer this question, we recorded the eye movements of 14 participants while they watched a short, subtitled television documentary. We created a statistical model of gaze behavior from the eye-movement data and found that both the length of the subtitle and the time the subtitle is visible are separate contributing factors. We also discovered large differences between individual viewers. Our conclusion is that people process subtitled content in very different ways, although there are some common tendencies. Our model can be seen as a solid starting point for comprehensive modelling of the gaze behavior of people watching subtitled audiovisual material.
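The paper fits a linear mixed-effects model; as a self-contained illustration of the same design, the sketch below fits dwell time on subtitle duration and character count with one intercept per participant (a fixed-effects approximation of a random-intercept model). All names and data are hypothetical.

```python
import numpy as np

def fit_subtitle_gaze_model(duration_s, char_count, participant_id, dwell_s):
    """Least-squares fit of dwell time on subtitle duration and character
    count, with a separate intercept per participant to capture the
    inter-individual differences the study reports."""
    duration_s = np.asarray(duration_s, dtype=float)
    char_count = np.asarray(char_count, dtype=float)
    dwell_s = np.asarray(dwell_s, dtype=float)
    pids = sorted(set(participant_id))
    # Design matrix: one dummy column per participant, then the two
    # fixed-effect predictors.
    dummies = np.array([[1.0 if p == q else 0.0 for q in pids]
                        for p in participant_id])
    X = np.column_stack([dummies, duration_s, char_count])
    beta, *_ = np.linalg.lstsq(X, dwell_s, rcond=None)
    return {"intercepts": dict(zip(pids, beta[:len(pids)])),
            "duration": float(beta[-2]),
            "char_count": float(beta[-1])}
```

A true mixed model would instead shrink the per-participant intercepts toward a common mean, but the separation of the two fixed effects from individual baselines is the same idea.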


Author(s):
Gavindya Jayawardena
Sampath Jayarathna

Eye-tracking experiments involve areas of interest (AOIs) for the analysis of eye-gaze data. While tools exist to delineate AOIs and extract eye-movement data, they may require users to manually draw AOI boundaries on the eye-tracking stimuli or to place markers that define the AOIs. This paper introduces two novel techniques to dynamically filter eye-movement data from AOIs for the analysis of eye metrics at multiple levels of granularity. The authors incorporate pre-trained object detectors and object-instance segmentation models for offline detection of dynamic AOIs in video streams, and present an implementation and evaluation of these models to find the best one to integrate into a real-time eye-movement analysis pipeline. They filter gaze data that falls within the polygonal boundaries of detected dynamic AOIs and apply an object detector to find bounding boxes in a public dataset. The results indicate that the dynamic AOIs generated by object detectors capture 60% of eye movements, while those from object-instance segmentation models capture 30%.
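Filtering gaze samples against a detected AOI polygon reduces to a point-in-polygon test. This is a minimal sketch with assumed names (the paper's actual pipeline is not shown), using the standard ray-casting algorithm:

```python
def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) inside the polygon given as a list of
    (x, y) vertices? Casts a ray to the left and counts edge crossings."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal line at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def filter_gaze_by_aoi(gaze_points, aoi_polygon):
    """Keep only the gaze samples that fall inside the AOI polygon, e.g.
    one produced by an instance-segmentation mask for a video frame."""
    return [(x, y) for x, y in gaze_points
            if point_in_polygon(x, y, aoi_polygon)]
```

For dynamic AOIs, the same filter would simply be applied per frame against that frame's detected polygon; the capture percentages reported above correspond to the fraction of samples surviving this filter.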


2021
Author(s):
Ivy F. Tso
Cynthia Z Burton
Carly A Lasagna
Saige Rutherford
Beier Yao
...  

Bipolar disorder (BD) is associated with a range of social-cognitive deficits. This study investigated the functioning of the mentalizing brain system in BD, probed by an eye-gaze perception task during fMRI. Compared with healthy controls (n = 21), BD participants (n = 14) showed reduced preferential activation for self-directed gaze discrimination in the medial prefrontal cortex (mPFC) and temporo-parietal junction (TPJ), which was associated with poorer cognitive and social functioning. Aberrant functions of the mentalizing system should be further investigated as markers of social dysfunction and as treatment targets.

