Communicating attention

1995 ◽  
Vol 3 (2) ◽  
pp. 199-223 ◽  
Author(s):  
Boris M. Velichkovsky

The results of two experiments, in which participants solved constructive tasks of the puzzle type, are reported. The tasks were solved by two partners who shared the same visual environment but whose knowledge of the situation and ability to change it to reach a solution differed. One of the partners — the "expert" — knew the solution in detail but had no means of acting on this information. The second partner — the "novice" — could act to achieve the goal but knew very little about the solution. The partners were free to communicate verbally. In one third of the trials of the first experiment, in addition to verbal communication, the eye fixations of the expert were projected onto the working space of the novice. In another condition the expert could use a mouse to show the novice relevant parts of the task configuration. Both methods of facilitating the 'joint attention' state of the partners improved their performance. The nature of the dialogues as well as the parameters of the eye movements changed. In the second experiment the direction of the gaze-position data transfer was reversed, from the novice to the expert. This also led to a significant increase in the efficiency of the distributed problem solving.

2013 ◽  
Vol 37 (2) ◽  
pp. 131-136 ◽  
Author(s):  
Atsushi Senju ◽  
Angélina Vernetti ◽  
Yukiko Kikuchi ◽  
Hironori Akechi ◽  
Toshikazu Hasegawa ◽  
...  

The current study investigated the role of cultural norms in the development of face scanning. British and Japanese adults' eye movements were recorded while they observed avatar faces that moved their mouths and then shifted their eyes toward or away from the participants. British participants fixated more on the mouth, whereas Japanese participants fixated mainly on the eyes. Moreover, the eye fixations of British participants were less affected by the avatar's gaze shift than those of Japanese participants, who shifted their fixation in the corresponding direction of the avatar's gaze. The results are consistent with Western cultural norms that value the maintenance of eye contact and with Eastern cultural norms that require flexible use of eye contact and gaze aversion.


Author(s):  
Myeong-Ho Sohn ◽  
Scott A. Douglass ◽  
Mon-Chu Chen ◽  
John R. Anderson

We have studied the performance of subjects as they acquired skill in the Georgia Tech Aegis Simulation Program (GT-ASP), with a particular focus on their eye movements. Our task analysis showed that the GT-ASP breaks down into the selection of unit tasks and the execution of those unit tasks. We focused on the Identification unit task. Our results showed that most of the practice benefit in Identification came from increased efficiency during the cognitive process, in which people make inferences and decisions on the basis of the currently available information. We also analyzed eye fixations while people performed this unit task. Participants showed different fixation patterns depending on which portion of the unit task was being executed. Fluency in dynamic, complex problem solving appears to be achieved through efficiency in cognitive as well as perceptual processes.


Vision ◽  
2021 ◽  
Vol 5 (3) ◽  
pp. 39
Author(s):  
Julie Royo ◽  
Fabrice Arcizet ◽  
Patrick Cavanagh ◽  
Pierre Pouget

We introduce a blind spot method to create image changes contingent on eye movements. One challenge of eye movement research is triggering display changes contingent on gaze: the eye-tracking system must capture an image of the eye, detect and track the pupil and corneal reflections to estimate gaze position, and then transfer these data to the computer that updates the display. Each of these steps introduces delays that are often difficult to predict. To avoid these issues, we describe a simple blind spot method that generates gaze-contingent display manipulations without any eye-tracking system or display controls.
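The logic of the blind spot approach can be sketched as follows: a stimulus placed inside the observer's blind spot is invisible while fixation is maintained, so a change made there only becomes visible once the eyes move — no tracker needed. The geometry below is a minimal illustration using textbook approximations for the optic disc location (roughly 15 degrees temporal and 1.5 degrees below the horizontal meridian); the exact values and function names are assumptions, not the authors' implementation.

```python
import math

# Approximate blind spot geometry (illustrative values, not the paper's):
# the optic disc lies roughly 15 deg temporal to fixation and about
# 1.5 deg below the horizontal meridian, with a radius near 2.5 deg.
BLIND_SPOT_ECC_X = 15.0   # deg, temporal offset from fixation
BLIND_SPOT_ECC_Y = -1.5   # deg, below the horizontal meridian
BLIND_SPOT_RADIUS = 2.5   # deg

def blind_spot_center(fix_x_deg, fix_y_deg, eye="right"):
    """Blind spot center in screen coordinates (deg), given the fixation point.
    The temporal side flips between the left and right eye."""
    sign = 1.0 if eye == "right" else -1.0
    return (fix_x_deg + sign * BLIND_SPOT_ECC_X,
            fix_y_deg + BLIND_SPOT_ECC_Y)

def stimulus_hidden(stim_x, stim_y, fix_x, fix_y, eye="right"):
    """True if a stimulus at (stim_x, stim_y) falls inside the blind spot,
    i.e. a change made there is invisible while fixation is maintained."""
    cx, cy = blind_spot_center(fix_x, fix_y, eye)
    return math.hypot(stim_x - cx, stim_y - cy) <= BLIND_SPOT_RADIUS
```

With monocular viewing, an experimenter can position a to-be-changed image region with `stimulus_hidden` and infer an eye movement simply from when the observer reports the change, rather than from tracker output.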


2021 ◽  
Author(s):  
Ying Wang ◽  
Marc M. van Wanrooij ◽  
Rowena Emaus ◽  
Jorik Nonneke ◽  
Michael X Cohen ◽  
...  

Background
Individuals with Parkinson disease can experience freezing of gait: a sudden, brief episode of an inability to move their feet despite the intention to walk. Since turning is the most sensitive condition for provoking freezing-of-gait episodes, and the eyes typically lead turning, we hypothesized that disturbances in saccadic eye movements are related to freezing-of-gait episodes.
Objectives
This study explores the relationship between freezing-of-gait episodes and saccadic eye movements for gaze shift and gaze stabilization during turning.
Methods
We analyzed 277 freezing-of-gait episodes provoked in 17 individuals with Parkinson disease during 180-degree turns in alternating directions, performed at self-selected and rapid speeds. Eye movements acquired from electrooculography signals were characterized by the average gaze position, the amplitude of gaze shifts, and the speed of gaze stabilization. We analyzed these variables before and during freezing-of-gait episodes occurring at different phase angles of a turn.
Results
Significant off-track changes of gaze position were observed almost a full 180-degree-turn duration before freezing-of-gait episodes. In addition, the speed of gaze stabilization decreased significantly during freezing-of-gait episodes.
Conclusions
We argue that off-track changes of gaze position could be a predictor of freezing-of-gait episodes, reflecting either a continued failure in movement-error correction or insufficient preparation for eye-to-foot coordination during turning. The decline in the speed of gaze stabilization during freezing-of-gait episodes is notable given that body turning slows or stops; we argue that this could be evidence for a healthy compensatory system in individuals with freezing of gait.
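The three summary variables named in the Methods can be computed from a horizontal EOG trace in many ways; the sketch below shows one plausible scheme, assuming a simple velocity-threshold saccade detector. The threshold, the metric definitions, and the function name are illustrative assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def gaze_metrics(eog_deg, fs):
    """Summarize a horizontal EOG trace (degrees) over one turn window.
    Illustrative definitions (assumed, not the paper's):
      - average gaze position: mean of the trace
      - gaze-shift amplitude: total position change across fast (saccadic) samples
      - stabilization speed: mean absolute velocity outside saccades
    fs is the sampling rate in Hz.
    """
    velocity = np.gradient(eog_deg) * fs            # deg/s
    saccade = np.abs(velocity) > 30.0               # simple velocity threshold
    avg_position = float(np.mean(eog_deg))
    shift_amplitude = (float(np.sum(np.abs(velocity[saccade])) / fs)
                       if saccade.any() else 0.0)
    stab_speed = (float(np.mean(np.abs(velocity[~saccade])))
                  if (~saccade).any() else 0.0)
    return avg_position, shift_amplitude, stab_speed
```

Comparing these metrics in windows aligned to freezing-of-gait onset, as the Results describe, would then reveal the off-track gaze drift before episodes and the reduced stabilization speed during them.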


Author(s):  
Matthijs L. den Besten ◽  
Max Loubser ◽  
Jean-Michel Dalle

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5178
Author(s):  
Sangbong Yoo ◽  
Seongmin Jeong ◽  
Seokyeon Kim ◽  
Yun Jang

Gaze movement and visual stimuli have been utilized to analyze human visual attention intuitively. Gaze behavior studies mainly present statistical analyses of eye movements and human visual attention. In these analyses, the eye movement data and the saliency map are presented to analysts as either separate views or merged views. However, analysts become frustrated when they must memorize all of the separate views, or when the eye movements obscure the saliency map in the merged views. It is therefore not easy to analyze how visual stimuli affect gaze movements, since existing techniques focus excessively on the eye movement data. In this paper, we propose a novel visualization technique for analyzing gaze behavior that uses saliency features as visual clues to express the visual attention of an observer. The visual clues representing visual attention are analyzed to reveal which saliency features are prominent for the visual stimulus analysis. We visualize the gaze data together with the saliency features to interpret visual attention, and we analyze gaze behavior with the proposed visualization to evaluate whether embedding saliency features within the visualization helps us understand the visual attention of an observer.
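The core idea — attaching saliency features to gaze samples as "visual clues" rather than overlaying a full saliency map — can be sketched minimally: sample each feature map at every fixation location. The feature names and sampling scheme below are hypothetical; the paper's actual feature set and encoding are not specified in the abstract.

```python
import numpy as np

def saliency_clues(fixations, feature_maps):
    """For each fixation (x, y in pixels), sample every saliency feature map
    at that location, yielding per-fixation visual clues.

    fixations: iterable of (x, y) pixel coordinates.
    feature_maps: dict mapping a feature name (e.g. "intensity", "color",
    "orientation" -- illustrative names) to a 2D array sized like the stimulus.
    """
    clues = []
    for x, y in fixations:
        clues.append({name: float(fmap[int(y), int(x)])
                      for name, fmap in feature_maps.items()})
    return clues
```

Each fixation marker in the visualization could then be styled (color, glyph, size) by its dominant clue, so the saliency information travels with the gaze data instead of obscuring it.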

