gaze fixations — Recently Published Documents

TOTAL DOCUMENTS: 38 (five years: 19)
H-INDEX: 7 (five years: 1)

Author(s): Vitaliy Babenko, Denis Yavna, Elena Vorobeva, Ekaterina Denisova, Pavel Ermakov, ...

The aim of our study was to analyze gaze fixations during the recognition of facial emotional expressions and to compare them with the spatial distribution of the areas showing the greatest increase in total (nonlocal) luminance contrast. We hypothesized that the most informative areas of the image, which attract more of the observer's attention, are those with the greatest increase in nonlocal contrast. The study involved 100 university students aged 19–21 with normal vision. 490 full-face photographs were used as stimuli. The images displayed faces showing the six basic emotions (Ekman's Big Six) as well as neutral (emotionless) expressions. Observers' eye movements were recorded while they recognized the expressions of the faces shown. Then, using software developed for this purpose, the areas with the highest (max), lowest (min), and intermediate (med) increases in total contrast relative to their surroundings were identified in the stimulus images at different spatial frequencies. Comparing the gaze maps with the maps of the areas with min, med, and max increases in total contrast showed that gaze fixations in facial emotion classification tasks coincide significantly with the areas characterized by the greatest increase in nonlocal contrast. The results indicate that facial image areas with the greatest increase in total contrast, which are preattentively detected by second-order visual mechanisms, can be the prime targets of attention.
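The abstract does not publish the contrast computation, but the idea of a "nonlocal contrast increase" map can be sketched as follows: compute RMS contrast in a center window and in a larger surround, and take the difference. The window sizes, the RMS definition of contrast, and the single spatial scale are illustrative assumptions, not the authors' software.

```python
# Sketch: locating regions where total (nonlocal) luminance contrast most
# exceeds its surround, at one spatial scale. All parameters are assumed.
import numpy as np
from scipy.ndimage import uniform_filter

def nonlocal_contrast_increase(img, center=15, surround=45):
    """img: 2-D float array of luminance values."""
    # RMS contrast inside the centre window
    mean_c = uniform_filter(img, center)
    rms_c = np.sqrt(uniform_filter((img - mean_c) ** 2, center))
    # RMS contrast over the larger surround window
    mean_s = uniform_filter(img, surround)
    rms_s = np.sqrt(uniform_filter((img - mean_s) ** 2, surround))
    # How much the local contrast exceeds its surround
    return rms_c - rms_s

face_image = np.random.rand(256, 256)  # stand-in for a grayscale face photo
increase = nonlocal_contrast_increase(face_image)
max_region = np.unravel_index(np.argmax(increase), increase.shape)
```

In the study's terms, the max, min, and med regions would be the extrema and mid-range of such a map, computed separately per spatial-frequency band.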


2021, Vol. 5 (12), pp. 77
Author(s): Mirjam de Haas, Paul Vogt, Emiel Krahmer

In this paper, we examine to what degree 3- to 4-year-old children engage with a task and with a social robot during a second-language tutoring lesson. We specifically investigated whether children's task engagement and robot engagement were influenced by three different types of feedback from the robot: adult-like feedback, peer-like feedback, and no feedback. Additionally, we investigated the relation between children's eye-gaze fixations and their task engagement and robot engagement. Fifty-eight Dutch children participated in an English counting task with a social robot and physical blocks. We found that, overall, children in the three conditions showed similar task engagement and robot engagement; within each condition, however, there were large individual differences. Additionally, regression analyses revealed a relation between children's eye-gaze direction and engagement. Our findings show that although eye gaze plays a significant role in measuring engagement and can be used to model children's task engagement and robot engagement, it does not capture the full concept: engagement comprises more than eye gaze alone.
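As a rough illustration of the regression step only (not the authors' analysis), one could regress coded engagement scores on the proportions of gaze directed at the task and at the robot; the data, variable names, and effect sizes below are synthetic.

```python
# Illustrative only: regressing coded task-engagement scores on gaze
# proportions, with synthetic data standing in for the annotations.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 58                                       # matches the sample size reported
gaze_task = rng.uniform(0, 1, n)             # proportion of gaze on the blocks
gaze_robot = rng.uniform(0, 1 - gaze_task)   # proportion of gaze on the robot
task_engagement = 2 + 3 * gaze_task + rng.normal(0, 0.5, n)  # invented relation

X = sm.add_constant(np.column_stack([gaze_task, gaze_robot]))
model = sm.OLS(task_engagement, X).fit()
print(model.summary())  # coefficients quantify the gaze-engagement relation
```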


2021
Author(s): Daniel K Bjornn, Julie Van, Brock Kirwan

Pattern separation and pattern completion are generally studied in humans using mnemonic discrimination tasks such as the Mnemonic Similarity Task (MST), in which participants identify similar lures and repeated items in a series of images. Failures to correctly discriminate lures are thought to reflect a failure of pattern separation and a propensity toward pattern completion. Recent research has challenged this perspective, suggesting that poor encoding, rather than pattern completion, accounts for false alarm responses to similar lures. In two experiments, participants completed a continuous recognition version of the MST while eye movement data (Experiments 1 and 2) and fMRI data (Experiment 2) were collected. While we replicated the finding that fixation counts at study predicted accuracy on lure trials, we found that target-lure similarity was a much stronger predictor of lure-trial accuracy across both experiments. Lastly, we found that fMRI activation changes in the hippocampus were significantly correlated with the number of fixations at study for correct, but not incorrect, mnemonic discrimination judgments when controlling for target-lure similarity. Our findings indicate that while eye movements during encoding predict subsequent hippocampal activation changes, mnemonic discrimination performance is better described by pattern separation and pattern completion processes influenced by target-lure similarity than by poor encoding alone.
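A minimal sketch of the kind of trial-level comparison described, with synthetic data: a logistic regression of lure accuracy on standardized fixation count and target-lure similarity, so that coefficient magnitudes are directly comparable. Variable names and parameter values are assumptions, not the authors' pipeline.

```python
# Does target-lure similarity out-predict study fixation count for
# lure-trial accuracy? Synthetic data for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_trials = 400
fix_count = rng.poisson(8, n_trials)        # fixations at study
similarity = rng.uniform(0, 1, n_trials)    # 1 = lure nearly identical to target
logit = 0.1 * fix_count - 3.0 * similarity + 1.0   # invented true relation
correct = rng.random(n_trials) < 1 / (1 + np.exp(-logit))

# Standardize so coefficient magnitudes are comparable across predictors
Z = np.column_stack([(fix_count - fix_count.mean()) / fix_count.std(),
                     (similarity - similarity.mean()) / similarity.std()])
fit = sm.Logit(correct.astype(float), sm.add_constant(Z)).fit(disp=False)
print(fit.params)  # a larger |coef| for similarity mirrors the reported result
```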


Author(s): Shiyan Yang, Brook Shiferaw, Trey Roady, Jonny Kuo, Michael G. Lenné

Head pose has been proposed as a surrogate for eye movement in predicting the areas of interest (AOIs) to which drivers allocate their attention. However, head pose may dissociate from AOIs in glance behavior involving zero or subtle head movements, commonly known as the "lizard" glance pattern; the "owl" glance pattern, in contrast, describes glance behavior accompanied by larger head movements. It remains unclear which glance pattern is prevalent during driver cell phone distraction and what metrics are appropriate for detecting such distraction. To address this gap, we analyzed the gaze direction and head pose of 36 participants who completed an email-sorting task on a cell phone while driving a Tesla in Autopilot mode on a test track. A dispersion-threshold algorithm identified drivers' gaze fixations, which were then synchronized with head movements. The results showed that when using a cell phone either near the lap or behind the steering wheel, participants exhibited a dominant lizard-type glance pattern with minimal shifts in head position. Consequently, head pose alone may not provide sufficient information for cell phone distraction detection, and gaze metrics should be incorporated to enhance this application.
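The dispersion-threshold (I-DT) family of algorithms used here is simple to state: grow a time window until it spans a minimum duration, accept it as a fixation if the summed x- and y-dispersion stays under a threshold, and extend it while that holds. A minimal version follows; the threshold values are illustrative, since the abstract does not give the parameters used.

```python
# A minimal dispersion-threshold (I-DT) fixation detector.
# max_dispersion (deg) and min_duration (s) are assumed values.
import numpy as np

def idt_fixations(x, y, t, max_dispersion=1.0, min_duration=0.1):
    """x, y: gaze coordinates (numpy arrays, deg); t: timestamps (s).
    Returns (start_idx, end_idx) index pairs for detected fixations."""
    fixations, i, n = [], 0, len(t)
    while i < n:
        j = i
        # Grow the window until it spans at least min_duration
        while j < n and t[j] - t[i] < min_duration:
            j += 1
        if j >= n:
            break
        disp = (x[i:j+1].max() - x[i:j+1].min()) + (y[i:j+1].max() - y[i:j+1].min())
        if disp <= max_dispersion:
            # Extend the window while dispersion stays under threshold
            while j + 1 < n:
                xs, ys = x[i:j+2], y[i:j+2]
                if (xs.max() - xs.min()) + (ys.max() - ys.min()) > max_dispersion:
                    break
                j += 1
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1
    return fixations
```

Each detected fixation can then be timestamp-matched against the head-pose stream to classify a glance as lizard-type (gaze moves, head nearly still) or owl-type.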


2021, Vol. 11 (1)
Author(s): Jouh Yeong Chew, Mitsuru Kawamoto, Takashi Okuma, Eiichi Yoshida, Norihiko Kato

This study proposes a Human Machine Interface (HMI) system with adaptive visual stimuli to facilitate teleoperation of industrial vehicles such as forklifts. The proposed system estimates the context/work state during teleoperation and presents the optimal visual stimuli on the HMI display. This adaptability is supported by behavioral models developed from behavioral data of conventional, manned forklift operation. The system consists of two models, a gaze attention model and a work state transition model, defined by the gaze fixations and the operation patterns of operators, respectively. In short, the proposed system estimates and shows the optimal visual stimuli on the HMI display based on the temporal operation pattern. The usability of the teleoperation system is evaluated by comparing the perceived workload elicited by different types of HMI. The results suggest that the adaptive attention-based HMI outperforms the non-adaptive HMI, with consistently lower perceived workload reported across different categories of forklift operators.
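As a toy illustration only, a work state transition model of this kind can be thought of as a state machine keyed on recent operations, with each estimated state mapped to the stimulus the HMI should present. The states, operations, and mappings below are invented for the sketch, not taken from the paper.

```python
# Toy sketch: infer the forklift work state from the operation stream and
# pick which visual stimulus the HMI should show. Everything here is assumed.
WORK_STATES = ["approach", "lift", "transport", "deposit"]
TRANSITIONS = {
    ("approach", "fork_raise"): "lift",
    ("lift", "drive_forward"): "transport",
    ("transport", "fork_lower"): "deposit",
    ("deposit", "drive_reverse"): "approach",
}
STIMULUS_FOR_STATE = {
    "approach": "pallet-position overlay",
    "lift": "fork-height indicator",
    "transport": "path and obstacle overlay",
    "deposit": "rack-alignment guide",
}

def step(state, operation):
    # Stay in the current state unless the operation triggers a transition
    return TRANSITIONS.get((state, operation), state)

state = "approach"
for op in ["fork_raise", "drive_forward", "fork_lower"]:
    state = step(state, op)
    print(op, "->", state, "| show:", STIMULUS_FOR_STATE[state])
```

The paper's models are fitted from behavioral data rather than hand-coded, but the estimate-state-then-select-stimulus loop is the same shape.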


Author(s): Giselle Valério Teixeira da Silva, Marina Carvalho de Moraes Barros, Juliana do Carmo Azevedo Soares, Lucas Pereira Carlini, Tatiany Marcondes Heiderich, ...

Objective: The study aimed to analyze pediatricians' gaze fixations during the decision process regarding the presence or absence of pain in pictures of newborn infants. Study Design: Experimental study involving 38 pediatricians (92% female, 34.6 ± 9.0 years, 22 neonatologists) who evaluated 20 pictures (two pictures of each newborn: one at rest and one during a painful procedure), presented in random order for each participant. The Tobii TX300 equipment tracked eye movements in four areas of interest (AOIs) in each picture: mouth, eyes, forehead, and nasolabial furrow. Pediatricians rated pain intensity on a verbal analogue scale from 0 to 10 (0 = no pain; 10 = maximum pain). The number of pictures in which pediatricians fixed their gaze, the number of gaze fixations, and the total and average times of gaze fixation were compared among the AOIs by analysis of variance (ANOVA). The visual-tracking parameters of the picture evaluations were also compared by ANOVA according to the pediatricians' perception of pain: moderate/severe (score 6–10), mild (score 3–5), and absent (score 0–2). The association between the total time of gaze fixation in the AOIs and pain perception was assessed by logistic regression. Results: Across the 20 newborn pictures, the mean number of gaze fixations was greater in the mouth, eyes, and forehead than in the nasolabial furrow. The average total time of gaze fixation was also greater in the mouth and forehead than in the nasolabial furrow. Controlling for the time of gaze fixation in the AOIs, each additional second of gaze fixation on the mouth (odds ratio [OR]: 1.26; 95% confidence interval [CI]: 1.08–1.46) and forehead (OR: 1.16; 95% CI: 1.02–1.33) was associated with an increased chance of perceiving moderate/severe pain in the neonatal facial picture. Conclusion: When asked whether pain is present in pictures of newborn infants' faces, pediatricians fix their gaze preferentially on the mouth. Longer gaze fixation on the mouth and forehead is associated with an increased perception that moderate/severe pain is present.
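To make the reported odds ratios concrete: in a logistic regression of perceived moderate/severe pain on per-AOI fixation times, exp(coefficient) is the odds ratio per additional second of fixation. The sketch below uses synthetic data, with coefficients chosen to roughly echo, not reproduce, the reported ORs.

```python
# How an odds ratio like "1.26 per extra second on the mouth" falls out of
# a logistic regression. Synthetic data; AOI names follow the abstract.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 760                                # 38 pediatricians x 20 pictures
t_mouth = rng.exponential(1.5, n)      # seconds fixating the mouth
t_forehead = rng.exponential(1.0, n)   # seconds fixating the forehead
logit = -1.5 + 0.23 * t_mouth + 0.15 * t_forehead   # invented true relation
pain = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([t_mouth, t_forehead]))
fit = sm.Logit(pain, X).fit(disp=False)
print(np.exp(fit.params[1:]))  # exp(coef) = odds ratio per additional second
```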


2021
Author(s): Emily Ruth Weichart, Matthew Galdo, Vladimir Sloutsky, Brandon Turner

Two fundamental difficulties in learning are deciding (1) what information is relevant and (2) when to use it. To overcome these difficulties, humans continuously choose which dimensions of information to selectively attend to and monitor their relevance to the current goal. Although previous theories have specified how observers learn to attend to relevant dimensions over time, those theories have remained largely silent about how attention should be allocated within a trial, which dimensions of information should be sampled, and how the temporal ordering of information sampling influences learning. Here, we use the Adaptive Attention Representation Model (AARM) to demonstrate that a common set of mechanisms can specify (1) how the distribution of attention is updated between trials over the course of learning, and (2) how attention dynamically shifts among dimensions within a trial. We validate our proposed set of mechanisms by comparing AARM's predictions to observed behavior across five case studies, which collectively encompass different theoretical aspects of selective attention. Importantly, we use both eye-tracking and choice response data to provide a stringent test of how attention and decision processes dynamically interact: how does attention to selected stimulus dimensions give rise to decision dynamics, and in turn, how do decision dynamics influence our continuous choices, via gaze fixations, about which dimensions to attend to?
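The abstract does not reproduce AARM's equations, so the sketch below shows only the generic shape of such a mechanism: attention weights over stimulus dimensions, nudged by prediction error after each trial and renormalized. The update rule and learning rate are assumptions for illustration, not the model's published form.

```python
# Generic trial-to-trial attention reweighting driven by prediction error,
# the kind of mechanism models like AARM formalize. All parameters assumed.
import numpy as np

def update_attention(weights, stimulus, error, lr=0.1):
    """Shift attention toward dimensions implicated in the last outcome.
    weights: attention per stimulus dimension; stimulus: dimension saliences;
    error: signed prediction error on the last choice."""
    weights = weights + lr * error * stimulus   # per-dimension credit assignment
    weights = np.clip(weights, 1e-6, None)      # keep weights positive
    return weights / weights.sum()              # keep attention a distribution

w = np.ones(3) / 3   # start with uniform attention over three dimensions
w = update_attention(w, stimulus=np.array([0.9, 0.1, 0.0]), error=0.8)
print(w)  # attention shifts toward the first, most implicated dimension
```

Within a trial, the same weights would steer which dimension is sampled next, which is where the gaze-fixation data provide the test.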


2021
Author(s): Garry Kong, David Aagten-Murphy, Jessica MV McMaster, Paul M Bays

Our knowledge about objects in our environment reflects an integration of current visual input with information from preceding gaze fixations. Such a mechanism may reduce uncertainty, but requires the visual system to determine which information obtained in different fixations should be combined or kept separate. To investigate the basis of this decision, we conducted three experiments. Participants viewed a stimulus in their peripheral vision, then made a saccade that shifted the object into the opposite hemifield. During the saccade, the object underwent changes of varying magnitude in two feature dimensions (Experiment 1: color and location, Experiments 2 and 3: color and orientation). Participants reported whether they detected any change and estimated one of the post-saccadic features. Integration of pre-saccadic with post-saccadic input was observed as a bias in estimates towards the pre-saccadic feature value. In all experiments, pre-saccadic bias weakened as the magnitude of the transsaccadic change in the estimated feature increased. Changes in the other feature, despite having a similar probability of detection, had no effect on integration. Results were quantitatively captured by an observer model where the decision whether to integrate information from sequential fixations is made independently for each feature and coupled to awareness of a feature change.
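The observer model's core logic can be illustrated in a few lines: if the transsaccadic change in the estimated feature exceeds a detection threshold, keep only the post-saccadic input; otherwise combine the two inputs with reliability weights, which produces the pre-saccadic bias observed. The parameter values below are illustrative, not fitted.

```python
# Sketch of reliability-weighted transsaccadic integration, gated by
# awareness of a feature change. All numbers are illustrative assumptions.
import numpy as np

def transsaccadic_estimate(pre, post, sigma_pre, sigma_post, detect_threshold):
    """pre/post: feature values sampled before and after the saccade;
    sigma_*: noise of each sample; larger sigma means less reliable."""
    if abs(post - pre) > detect_threshold:
        return post                          # change detected: keep streams separate
    w_pre = sigma_post**2 / (sigma_pre**2 + sigma_post**2)
    return w_pre * pre + (1 - w_pre) * post  # reliability-weighted integration

# Small transsaccadic change: the estimate is biased toward the pre-saccadic value
print(transsaccadic_estimate(pre=10.0, post=12.0, sigma_pre=2.0,
                             sigma_post=1.0, detect_threshold=5.0))
```

The key property matching the data is that the gate is applied per feature: an undetected change in one dimension is integrated regardless of what the other dimension did.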


2021, Vol. 12
Author(s): Elif Canseza Kaplan, Anita E. Wagner, Paolo Toffanin, Deniz Başkent

Earlier studies have shown that musically trained individuals may have an advantage in adverse listening situations compared to non-musicians, especially in speech-on-speech perception; however, the literature provides mostly conflicting results. In the current study, employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception (a sentence recall task), which reveals a post-task response, and online measures of real-time spoken language processing: gaze-tracking and pupillometry. We used stimuli of comparable complexity across both paradigms and tested the same groups of participants. In the sentence recall task, musicians recalled more words correctly than non-musicians. In the eye-tracking experiment, both groups showed reduced fixations to the target and competitor words' images as the level of speech maskers increased. The time course of gaze fixations to the competitor did not differ between groups in the speech-in-quiet condition, but the time-course dynamics did differ between groups when the two-talker masker was added to the target signal. As the level of the two-talker masker increased, musicians showed reduced lexical competition, as indicated by gaze fixations to the competitor. The pupil dilation data showed differences mainly at one target-to-masker ratio, which does not allow us to draw conclusions about potential differences in the use of cognitive resources between groups. Overall, the eye-tracking measure enabled us to observe that musicians may use a different strategy than non-musicians to attain spoken word recognition as the noise level increases. However, further investigation with more fine-grained alignment between the processes captured by online and offline measures is necessary to establish whether musicians differ due to better cognitive control or better sound processing.
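A sketch of the standard visual-world analysis underlying the time-course comparison: bin the gaze samples after word onset and compute the proportion of fixations on the target and on the competitor per bin. The gaze samples, bin size, and trial counts below are synthetic assumptions.

```python
# Fixation-proportion time courses for target vs. competitor AOIs,
# as used in visual-world eye-tracking analyses. Synthetic data.
import numpy as np

def fixation_time_course(aoi_labels, bin_size=50, n_bins=20):
    """aoi_labels: (trials, samples) array of 'target'/'competitor'/'other',
    one sample per ms. Returns per-bin fixation proportions for each AOI."""
    trials, _ = aoi_labels.shape
    props = {}
    for aoi in ("target", "competitor"):
        hits = (aoi_labels == aoi)
        # Reshape the first n_bins * bin_size samples into time bins
        bins = hits[:, :n_bins * bin_size].reshape(trials, n_bins, bin_size)
        props[aoi] = bins.mean(axis=(0, 2))   # mean over trials and samples
    return props

rng = np.random.default_rng(3)
labels = rng.choice(["target", "competitor", "other"], size=(40, 1000),
                    p=[0.4, 0.3, 0.3])
tc = fixation_time_course(labels)
print(tc["target"][:5], tc["competitor"][:5])
```

Group differences like those reported appear as diverging competitor curves between musicians and non-musicians once the masker is added.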

