Dynamic Eye Tracking Based Metrics for Infant Gaze Patterns in the Face-Distractor Competition Paradigm

PLoS ONE ◽  
2014 ◽  
Vol 9 (5) ◽  
pp. e97299 ◽  
Author(s):  
Eero Ahtola ◽  
Susanna Stjerna ◽  
Santeri Yrttiaho ◽  
Charles A. Nelson ◽  
Jukka M. Leppänen ◽  
...  
2020 ◽  
Vol 11 (1) ◽  
Author(s):  
Sofie Vettori ◽  
Stephanie Van der Donck ◽  
Jannes Nys ◽  
Pieter Moors ◽  
Tim Van Wesemael ◽  
...  

Abstract

Background: Scanning faces is important for social interactions. Difficulty with the social use of eye contact constitutes one of the clinical symptoms of autism spectrum disorder (ASD). It has been suggested that individuals with ASD look less at the eyes and more at the mouth than typically developing (TD) individuals, possibly due to gaze aversion or gaze indifference. However, eye-tracking evidence for this hypothesis is mixed. While gaze patterns convey information about overt orienting processes, it is unclear how this is manifested at the neural level and how relative covert attention to the eyes and mouth of faces might be affected in ASD.

Methods: We used frequency-tagging EEG in combination with eye tracking while participants watched fast flickering faces in 1-min stimulation sequences. The upper and lower halves of the faces were presented at 6 Hz and 7.5 Hz, or vice versa, in different stimulation sequences, allowing us to objectively disentangle the neural saliency of the eyes versus the mouth region of a perceived face. We tested 21 boys with ASD (8–12 years old) and 21 TD control boys, matched for age and IQ.

Results: Both groups looked longer at the eyes than the mouth, without any group difference in relative fixation duration to these features. TD boys looked significantly more at the nose, while the ASD boys looked more outside the face. EEG neural saliency data partly followed this pattern: neural responses to the upper or lower face half did not differ between groups, but in the TD group, neural responses to the lower face half were larger than responses to the upper half. Face exploration dynamics showed that TD individuals mostly maintained fixations within the same facial region, whereas individuals with ASD switched more often between the face parts.

Limitations: Replication in large and independent samples may be needed to validate these exploratory results.

Conclusions: Combined eye tracking and frequency-tagged neural responses show no support for the excess mouth/diminished eye gaze hypothesis in ASD. The more exploratory face-scanning style observed in ASD might be related to an increased feature-based face-processing style.
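As a concrete illustration of the frequency-tagging analysis described above, the sketch below estimates the response at each tagging frequency as the spectral amplitude relative to neighboring frequency bins, a common SNR measure in this literature. The sampling rate, random channel data, and neighborhood size are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def tagged_snr(eeg, fs, tag_hz, n_neighbors=10):
    """Spectral amplitude at a tagging frequency, divided by the mean
    amplitude of the surrounding frequency bins (a simple SNR)."""
    n = len(eeg)
    amplitude = np.abs(np.fft.rfft(eeg)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    target = int(np.argmin(np.abs(freqs - tag_hz)))
    # Bins on both sides of the target, excluding the target itself.
    neighbors = np.r_[target - n_neighbors:target,
                      target + 1:target + n_neighbors + 1]
    return amplitude[target] / amplitude[neighbors].mean()

# Hypothetical 1-min sequence sampled at 512 Hz; 6 Hz and 7.5 Hz tag the
# two face halves. Random data stands in for a recorded EEG channel.
fs = 512
eeg = np.random.randn(60 * fs)
print(tagged_snr(eeg, fs, 6.0), tagged_snr(eeg, fs, 7.5))
```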


Author(s):  
Nikita Gupta ◽  
Hannah White ◽  
Skylar Trott ◽  
Jeffrey H Spiegel

Abstract

Background: Human interaction begins with the visual evaluation of others, and this often centers on the face. Objective measurement of this evaluation gives clues to social perception.

Objectives: The objective was to use eye-tracking technology to evaluate whether there are scanpath differences when observers view faces of men, women, and transgender women pre- and post-facial feminization surgery (FFS), including during tasks assessing femininity, attractiveness, and likability.

Methods: Undergraduate psychology students were prospectively recruited as observers at a single institution. Using eye-tracking technology, they were presented with frontal images of prototypical male, prototypical female, and pre- and post-FFS face photos in a random order, and then with prompting to assess femininity, attractiveness, and likability.

Results: Twenty-seven observers performed the tasks. Participants focused their attention more on the central triangle of post-operative and prototypical female images and on the forehead of pre-operative and prototypical male images. Higher femininity ratings were associated with longer proportional fixations to the central triangle and lower proportional fixations to the forehead.

Conclusions: This preliminary study implies that the scanpath for viewing a post-FFS face is closer to that for viewing a prototypical female face than a prototypical male face, based on differences in viewing the forehead and brow versus the central triangle.
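The proportional-fixation measure reported here can be made concrete with a small sketch: sum the fixation durations that fall inside an area of interest (AOI) and divide by total fixation time. The rectangular AOI bounds and fixation records below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float         # pixel coordinates on the stimulus image
    y: float
    duration: float  # seconds

# Hypothetical rectangular AOIs: (x_min, y_min, x_max, y_max).
AOIS = {
    "forehead":         (150, 40, 450, 180),
    "central_triangle": (200, 180, 400, 420),
}

def proportional_fixation(fixations, aoi):
    """Share of total fixation time spent inside one AOI."""
    x0, y0, x1, y1 = aoi
    inside = sum(f.duration for f in fixations
                 if x0 <= f.x <= x1 and y0 <= f.y <= y1)
    total = sum(f.duration for f in fixations)
    return inside / total if total else 0.0

fixations = [Fixation(300, 250, 0.42), Fixation(310, 90, 0.18)]
print(proportional_fixation(fixations, AOIS["central_triangle"]))
```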


2020 ◽  
Vol 57 (12) ◽  
pp. 1392-1401
Author(s):  
Mark P. Pressler ◽  
Emily L. Geisler ◽  
Rami R. Hallac ◽  
James R. Seaward ◽  
Alex A. Kane

Introduction and Objectives: Surgical treatment for trigonocephaly aims to eliminate a stigmatizing deformity, yet the severity that captures unwanted attention is unknown. Surgeons intervene at different points of severity, eliciting controversy. This study used eye tracking to investigate when deformity is perceived.

Material and Methods: Three-dimensional photogrammetric images of a normal child and a child with trigonocephaly were mathematically deformed, in 10% increments, to create a spectrum of 11 images. These images were shown to participants using an eye tracker. Participants’ gaze patterns were analyzed, and participants were asked if each image looked “normal” or “abnormal.”

Results: Sixty-six graduate students were recruited. Average dwell time toward pathologic areas of interest (AOIs) increased proportionally, from 0.77 ± 0.33 seconds at 0% deformity to 1.08 ± 0.75 seconds at 100% deformity (P < .0001). A majority of participants did not agree that an image looked “abnormal” until 90% deformity, from any angle.

Conclusion: Eye tracking can be used as a proxy for the attention threshold toward orbitofrontal deformity. The amount of attention toward orbitofrontal AOIs increased proportionally with severity. Participants did not generally agree that there was “abnormality” until deformity was severe. This study supports the assertion that surgical intervention may be best reserved for more severe deformity.
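The 10%-increment spectrum lends itself to a simple illustration: linearly interpolating between an undeformed and a fully deformed shape. The study worked with 3D photogrammetric images; the placeholder vertex arrays below stand in for the idea only, not the authors' actual morphing procedure.

```python
import numpy as np

# Placeholder vertex arrays standing in for the normal and fully
# deformed 3-D photogrammetric models.
normal = np.random.rand(5000, 3)
deformed = normal * 1.1

# 0%, 10%, ..., 100% deformity: a linear blend between the two shapes,
# giving the 11-image spectrum described in the abstract.
spectrum = [(1 - t) * normal + t * deformed
            for t in np.linspace(0.0, 1.0, 11)]
```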


Heart Rhythm ◽  
2021 ◽  
Vol 18 (8) ◽  
pp. S356
Author(s):  
Heather Marie Giacone ◽  
Anne M. Dubin ◽  
Scott Ceresnak ◽  
Henry Chubb ◽  
William Rowland Goodyer ◽  
...  

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while we monitored their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to say whether or not (i.e., a Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.
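One way to make the accuracy-versus-confusion distinction concrete is a confusion matrix over the five response options, where errors land off the diagonal. The trial data below are made-up examples, not the study's results.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness"]
idx = {e: i for i, e in enumerate(EMOTIONS)}

# Hypothetical (shown, answered) pairs from the 5-choice task.
shown    = ["anger", "fear", "fear", "sadness", "happiness"]
answered = ["anger", "fear", "disgust", "sadness", "happiness"]

# Rows: emotion displayed; columns: emotion reported.
confusion = np.zeros((5, 5), dtype=int)
for s, a in zip(shown, answered):
    confusion[idx[s], idx[a]] += 1

accuracy = np.trace(confusion) / confusion.sum()  # diagonal = correct
```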


PLoS ONE ◽  
2014 ◽  
Vol 9 (4) ◽  
pp. e93914 ◽  
Author(s):  
Jonathon R. Shasteen ◽  
Noah J. Sasson ◽  
Amy E. Pinkham

2021 ◽  
Author(s):  
Jacobus Frederick Viljoen

Over the last decade, eye-tracking technology has provided researchers with specific tools to study the process of reading (language and music) empirically. Most of these studies have focused on the “Eye-Hand Span” phenomenon (the ability to read ahead of the point of playing). However, little research investigates the cognitive implications of specific aspects of musical notation when performed in real time. This research aimed to observe the fixation patterns of sight-readers in order to investigate the cognitive underpinnings of key and time signatures in music scores. This research project is a quantitative study using a quasi-experimental research design. Tobii eye-tracking equipment and software were used to record the eye movements of 11 expert and 7 amateur keyboard sight-readers. Two key aspects of music notation, key and time signatures, were selected as the main focus of the study. To investigate these aspects, the eighteen research participants were provided with seventeen sight-reading examples for one hand (low complexity) and two hands (high complexity), composed specifically by the researcher. Several examples contained one or more unexpected aspects (accidentals or changes of time signature) to test their effect on fixation count and duration. Two variables (fixation count and fixation duration) were used to analyze fixation patterns on the selected aspects of the scores. Three main results emerged from the data analysis: 1) expert sight-readers performed with much greater accuracy than amateurs in both tests; 2) expert sight-readers exhibited a higher fixation count on entire scores in complex examples; 3) both expert and amateur sight-readers fixate more and for longer on certain notational aspects, such as key and time signatures, than on others, such as deviations or individual notes. This selective focusing of attention suggests that both expert and amateur sight-readers cognitively process music scores in a hierarchical order. In conclusion, key and time signatures appear to require more and longer fixations from both groups of readers than other aspects of the score. This supports previous research suggesting that sound musical knowledge may play a positive role in performers’ sight-reading skills, thereby contributing to more successful sight-reading performances.
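The two dependent variables (fixation count and fixation duration) invite a simple group comparison. The sketch below contrasts hypothetical expert and amateur dwell times with a Welch t-test; both the data and the choice of test are illustrative, not the thesis's actual analysis.

```python
from scipy.stats import ttest_ind

# Hypothetical per-participant total dwell times (s) on key and time
# signatures; values are invented for illustration.
experts = [1.8, 2.1, 2.4, 1.9, 2.2]
amateurs = [1.1, 1.4, 0.9, 1.3, 1.2]

# Welch's t-test (unequal variances) across the two groups.
t_stat, p_value = ttest_ind(experts, amateurs, equal_var=False)
print(t_stat, p_value)
```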


Perception ◽  
2018 ◽  
Vol 48 (2) ◽  
pp. 162-174 ◽  
Author(s):  
Nicolas Davidenko ◽  
Hema Kopalle ◽  
Bruce Bridgeman

There is a consistent left-gaze bias when observers fixate upright faces, but it is unknown how this bias manifests in rotated faces, where the two eyes appear at different heights on the face. In two eye-tracking experiments, we measured participants’ first and second fixations, while they judged the expressions of upright and rotated faces. We hypothesized that rotated faces might elicit a bias to fixate the upper eye. Our results strongly confirmed this hypothesis, with the upper eye bias completely dominating the left-gaze bias in ±45° faces in Experiment 1, and across a range of face orientations (±11.25°, ±22.5°, ±33.75°, ±45°, and ±90°) in Experiment 2. In addition, rotated faces elicited more overall eye-directed fixations than upright faces. We consider potential mechanisms of the upper eye bias in rotated faces and discuss some implications for research in social cognition.
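The “upper eye” in a rotated face follows directly from the geometry: rotating the two eye positions about the face center leaves one of them higher. The toy sketch below makes that explicit; coordinates and angles are arbitrary examples, not the experiments' stimuli.

```python
import numpy as np

def rotate(point, theta_deg, center=(0.0, 0.0)):
    """Rotate a 2-D point about a center by theta degrees."""
    th = np.deg2rad(theta_deg)
    rot = np.array([[np.cos(th), -np.sin(th)],
                    [np.sin(th),  np.cos(th)]])
    return rot @ (np.asarray(point) - center) + center

# Upright eye positions (y increases upward in this toy frame).
left_eye, right_eye = (-30.0, 40.0), (30.0, 40.0)
for theta in (0, 45, -45):
    l, r = rotate(left_eye, theta), rotate(right_eye, theta)
    upper = "left" if l[1] > r[1] else ("right" if r[1] > l[1] else "neither")
    print(f"{theta:+}: upper eye = {upper}")
```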


2020 ◽  
Vol 22 (2) ◽  
pp. 80-85
Author(s):  
Pauline P. Huynh ◽  
Masaru Ishii ◽  
Michelle Juarez ◽  
David Liao ◽  
Halley M. Darrach ◽  
...  

2020 ◽  
pp. 073563312097861
Author(s):  
Marko Pejić ◽  
Goran Savić ◽  
Milan Segedinac

This study proposes a software system for determining gaze patterns in on-screen testing. The system applies machine learning techniques to eye-movement data obtained from an eye-tracking device to categorize students according to their gaze behavior pattern while solving an on-screen test. These patterns are determined by converting eye-movement coordinates into a sequence of regions of interest. The proposed software system extracts features from the sequence and performs clustering that groups students by their gaze pattern. To determine gaze patterns, the system contains components for communicating with an eye-tracking device, collecting and preprocessing students’ gaze data, and visualizing data using different presentation methods. This study presents a methodology for determining gaze patterns and the implementation details of the proposed software. The approach was evaluated by determining the gaze patterns of 51 undergraduate students who took a general knowledge test containing 20 questions. The study aims to provide a software infrastructure that can use students’ gaze patterns as an additional indicator of their reading behaviors and their attention or processing difficulty, among other factors.
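The pipeline as described (coordinates to ROI sequence, then features, then clustering) can be sketched compactly. The ROI layout, the two feature types, and the choice of k-means with k = 3 below are illustrative assumptions, not the system's actual configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical screen regions: (x_min, y_min, x_max, y_max) in pixels.
ROIS = {"question":   (0,   0, 800, 200),
        "answers":    (0, 200, 800, 500),
        "navigation": (0, 500, 800, 600)}

def to_roi_sequence(gaze_points):
    """Map raw (x, y) gaze samples to a sequence of ROI labels."""
    seq = []
    for x, y in gaze_points:
        for name, (x0, y0, x1, y1) in ROIS.items():
            if x0 <= x < x1 and y0 <= y < y1:
                seq.append(name)
                break
    return seq

def features(seq):
    """Fraction of samples per ROI plus the rate of switches between ROIs."""
    fracs = [seq.count(name) / len(seq) for name in ROIS]
    switches = sum(a != b for a, b in zip(seq, seq[1:])) / max(len(seq) - 1, 1)
    return fracs + [switches]

# Random gaze data stands in for 51 students' recordings.
students = [np.random.rand(500, 2) * [800, 600] for _ in range(51)]
X = np.array([features(to_roi_sequence(s)) for s in students])

# Group students by gaze pattern; k = 3 is an arbitrary example.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```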

