Are you looking or looking away? Visual exploration and avoidance of disgust- and fear-stimuli: An eye-tracking study.

Emotion ◽  
2021 ◽  
Author(s):  
Jakob Fink-Lamotte ◽  
Frederike Svensson ◽  
Julian Schmitz ◽  
Cornelia Exner

Author(s):  
Federico Cassioli ◽  
Laura Angioletti ◽  
Michela Balconi

Human–computer interaction (HCI) is particularly interesting because full-immersive technology may be approached differently by users, depending on the complexity of the interaction, users' personality traits, and the inclination of their motivational systems. This study therefore investigated the relationship between psychological factors and attention towards specific tech-interactions in a smart home system (SHS). The relation between personal psychological traits and eye-tracking metrics was investigated through self-report measures [locus of control (LoC), user experience (UX), behavioral inhibition system (BIS), and behavioral activation system (BAS)] and a wearable, wireless, near-infrared-illumination-based eye-tracking system applied to an Italian sample (n = 19). Participants were asked to activate and interact with five tech-interaction areas of differing complexity (entrance, kitchen, living room, bathroom, and bedroom) in the SHS while their eye-gaze behavior was recorded. Data showed significant differences between a simpler interaction (entrance) and a more complex one (living room) in terms of the number of fixations. Moreover, a slower time to first fixation was found in a multifaceted interaction (bathroom) compared with simpler ones (kitchen and living room). Additionally, in two interaction conditions (living room and bathroom), negative correlations were found between external LoC and fixation count, and between BAS reward-responsiveness scores and fixation duration. The findings point to a two-way process in which both the complexity of the tech-interaction and the participants' personality traits shape the user's visual exploration behavior. This research contributes to understanding user responsiveness, adding first insights that may help create more human-centered technology.
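As a rough illustration of the metrics named above, here is a minimal Python sketch (hypothetical data layout and field names, not the authors' pipeline) that computes per-area fixation count, mean fixation duration, and time to first fixation, and correlates one metric with a self-report trait score:

```python
# Illustrative sketch: the eye-tracking metrics named in the abstract,
# computed per interaction area, then correlated with a trait score.
# Data structures are assumptions, not the study's actual format.
from dataclasses import dataclass
from scipy.stats import spearmanr

@dataclass
class Fixation:
    onset_ms: float      # fixation start, relative to trial onset
    duration_ms: float   # fixation duration
    aoi: str             # interaction area, e.g. "entrance", "living_room"

def aoi_metrics(fixations: list[Fixation], aoi: str) -> dict:
    """Per-AOI fixation count, mean duration, and time to first fixation."""
    hits = sorted((f for f in fixations if f.aoi == aoi),
                  key=lambda f: f.onset_ms)
    if not hits:
        return {"count": 0, "mean_dur_ms": None, "ttff_ms": None}
    return {
        "count": len(hits),
        "mean_dur_ms": sum(f.duration_ms for f in hits) / len(hits),
        "ttff_ms": hits[0].onset_ms,  # time to first fixation on this AOI
    }

def trait_metric_correlation(trait_scores: list[float], metric: list[float]):
    """Rank correlation across participants, e.g. external LoC vs. fixation
    count; a negative rho would match the pattern reported above."""
    rho, p = spearmanr(trait_scores, metric)
    return rho, p
```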


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while we monitored their gaze behavior with eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to say whether (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.
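For readers unfamiliar with action units, a small sketch using textbook Ekman-style AU prototypes (illustrative codings only, not the AU annotations produced by the models in this study) makes the "several AUs in common" point concrete:

```python
# Illustrative sketch: overlap between textbook AU prototypes for the five
# emotions tested. Shared AUs show why gazing at an ambiguous facial action
# can lead a decoder to confuse two emotions.
from itertools import combinations

AU_PROTOTYPES = {  # assumed Ekman-style codings, not the study's data
    "anger":     {4, 5, 7, 23},
    "disgust":   {9, 15, 16},
    "fear":      {1, 2, 4, 5, 20, 26},
    "happiness": {6, 12},
    "sadness":   {1, 4, 15},
}

for (emo_a, aus_a), (emo_b, aus_b) in combinations(AU_PROTOTYPES.items(), 2):
    shared = aus_a & aus_b
    if shared:
        print(f"{emo_a} / {emo_b}: shared AUs {sorted(shared)}")
# e.g. fear and anger share AU4 and AU5 (brow lowerer, upper-lid raiser);
# disgust and sadness share AU15 (lip-corner depressor).
```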


2020 ◽  
pp. 1-10
Author(s):  
Bruno Gepner ◽  
Anaïs Godde ◽  
Aurore Charrier ◽  
Nicolas Carvalho ◽  
Carole Tardif

Facial movements of others during verbal and social interaction are often too rapid for many children and adults with autism spectrum disorder (ASD) to face and/or process in time, which could contribute to their face-to-face interaction peculiarities. Here we measured the effect of reducing the speed of a speaker's facial dynamics on the visual exploration of the face by children with ASD. Twenty-three children with ASD and 29 typically developing control children matched for chronological age passively viewed a video of a speaker telling a story at various velocities: real-time speed and two slowed-down speeds. The visual scene was divided into four areas of interest (AOIs): face, mouth, eyes, and outside the face. With an eye-tracking system, we measured the percentage of total fixation duration per AOI and the number and mean duration of the visual fixations made on each AOI. In children with ASD, the mean duration of visual fixations on the mouth region, which correlated with their verbal level, increased at the slowed-down velocities compared with real time, paralleling a result also found in the control children. These findings strengthen the case for the therapeutic potential of slowness in enhancing verbal and language abilities in children with ASD.
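A minimal sketch of the three AOI measures described above, assuming a hypothetical list of (AOI, duration) fixation records per child and playback speed (not the authors' code):

```python
# Illustrative sketch: percentage of total fixation duration per AOI,
# fixation count, and mean fixation duration, for one child in one
# speed condition. Input format is an assumption.
from collections import defaultdict

AOIS = ("face", "mouth", "eyes", "outside")

def aoi_summary(fixations: list[tuple[str, float]]) -> dict:
    """fixations: (aoi, duration_ms) tuples for one viewing condition."""
    total = sum(d for _, d in fixations) or 1.0  # guard against empty input
    durs = defaultdict(list)
    for aoi, d in fixations:
        durs[aoi].append(d)
    return {
        aoi: {
            "pct_total_dur": 100.0 * sum(durs[aoi]) / total,
            "n_fixations": len(durs[aoi]),
            "mean_dur_ms": sum(durs[aoi]) / len(durs[aoi]) if durs[aoi] else 0.0,
        }
        for aoi in AOIS
    }

# Comparing the "mouth" entry at real-time vs. slowed-down playback would
# expose the effect reported above: longer mean fixation durations on the
# mouth at slower speeds.
```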


Author(s):  
Abner Cardoso da Silva ◽  
Cesar A. Sierra-Franco ◽  
Greis Francy M. Silva-Calpa ◽  
Felipe Carvalho ◽  
Alberto Barbosa Raposo

2020 ◽  
Author(s):  
R. Shayna Rosenbaum ◽  
Julia G. Halilova ◽  
Sabrina Agnihotri ◽  
Maria C. D'Angelo ◽  
Gordon Winocur ◽  
...  

How well do we know our city? It turns out, much more poorly than we might imagine. We used declarative memory and eye-tracking techniques to examine people's ability to detect modifications of landmarks in Toronto locales with which they have had extensive experience. Participants were poor at identifying which scenes contained altered landmarks, whether the modification was to the landmarks' relative size, internal features, or surrounding context. To determine whether an indirect measure would prove more sensitive, we tracked eye movements during viewing. Changes in overall visual exploration, but not in fixations on the specific regions of change, were related to participants' explicit endorsement of scenes as modified. These results support the contention that very familiar landmarks are strongly integrated within the spatial context in which they were first experienced, so that any changes that are consciously detected are at a global or coarse, but not local or fine-grained, level.
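The indirect-measure logic described above can be pictured with a small sketch (hypothetical trial fields, not the study's analysis code) contrasting overall exploration with dwell on the changed region:

```python
# Illustrative sketch: compare overall visual exploration (fixation count)
# and dwell time inside the region of change, split by whether the scene
# was endorsed as modified. Field names are assumptions.
from statistics import mean

def summarize(trials: list[dict]) -> None:
    """trials: dicts with keys 'endorsed_modified' (bool),
    'n_fixations' (int), 'dwell_change_ms' (float)."""
    for endorsed in (True, False):
        sub = [t for t in trials if t["endorsed_modified"] is endorsed]
        if not sub:
            continue
        print(
            f"endorsed={endorsed}: "
            f"mean fixations={mean(t['n_fixations'] for t in sub):.1f}, "
            f"mean dwell on change={mean(t['dwell_change_ms'] for t in sub):.0f} ms"
        )

# The pattern reported above corresponds to a difference in overall fixation
# counts between endorsed and non-endorsed scenes, with no matching
# difference in dwell on the changed region itself.
```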


2020 ◽  
pp. 014272372096681
Author(s):  
David López Pérez ◽  
Przemysław Tomalski ◽  
Alicja Radkowska ◽  
Haiko Ballieux ◽  
Derek G. Moore

Efficient visual exploration in infancy is essential for cognitive and language development. It allows infants to participate in social interactions by attending to faces and learning about objects of interest. Visual scanning of scenes depends on a number of factors, and early differences in efficiency likely contribute to differences in learning and language development in subsequent years. Predicting language development in diverse samples is particularly challenging, as multiple additional sources of variability affect infant performance. In this study, we tested how the complexity of visual scanning in the presence or absence of a face at 6 to 7 months of age relates to language development at 2 years of age in a multiethnic and predominantly bilingual sample from diverse socioeconomic backgrounds. We used Recurrence Quantification Analysis (RQA) to measure the temporal and spatial distribution of fixations recurring in the same area of a visual scene. We found that, in the absence of a face, the temporal distribution of re-fixations on selected objects of interest (but not all) significantly predicted both receptive and expressive language scores, explaining 16% to 20% of the variance. In addition, a lower rate of re-fixations in the presence of a face predicted higher receptive language scores, suggesting a larger vocabulary in infants who effectively disengage from faces. Altogether, our results suggest that dynamic measures quantifying the complexity of visual scanning can reliably and robustly predict language development in highly diverse samples, and that selective attending to objects predicts language independently of attention to faces. As the eye-tracking and language assessments were carried out in early intervention centres, our study also demonstrates the utility of mobile eye-tracking setups for early detection of risk in attention and language development.
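The core recurrence computation behind fixation-based RQA can be sketched in a few lines (a simplified version under the common definition that two fixations recur when they fall within a spatial radius of each other; the radius and units are assumptions, not the authors' settings):

```python
# Illustrative sketch: recurrence rate of a fixation sequence, the basic
# RQA quantity from which temporal and spatial recurrence measures derive.
import math

def recurrence_rate(fixations: list[tuple[float, float]],
                    radius: float) -> float:
    """fixations: (x, y) positions in chronological order; radius in the
    same units (e.g. pixels or degrees of visual angle -- an assumption).
    Returns the proportion of fixation pairs that recur."""
    n = len(fixations)
    if n < 2:
        return 0.0
    recurring = sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if math.dist(fixations[i], fixations[j]) <= radius
    )
    return recurring / (n * (n - 1) / 2)
```

Measures such as how closely in time the recurring pairs cluster (temporal distribution) are then derived from the same pairwise recurrence matrix.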


2006 ◽  
Vol 24 (3) ◽  
pp. 57-66 ◽  
Author(s):  
O. K. Oyekoya ◽  
F. W. M. Stentiford
