Look up to the body: An eye-tracking investigation of 7-months-old infants’ visual exploration of emotional body expressions

2020 ◽  
Vol 60 ◽  
pp. 101473
Author(s):  
Elena Geangu ◽  
Quoc C. Vuong


Author(s):  
Federico Cassioli ◽  
Laura Angioletti ◽  
Michela Balconi

Abstract Human–computer interaction (HCI) is particularly interesting because full-immersive technology may be approached differently by users, depending on the complexity of the interaction, users’ personality traits, and the inclination of their motivational systems. This study therefore investigated the relationship between psychological factors and attention towards specific tech-interactions in a smart home system (SHS). The relation between personal psychological traits and eye-tracking metrics was investigated through self-report measures [locus of control (LoC), user experience (UX), behavioral inhibition system (BIS) and behavioral activation system (BAS)] and a wearable, wireless, near-infrared-illumination-based eye-tracking system applied to an Italian sample (n = 19). Participants were asked to activate and interact with five tech-interaction areas of varying complexity (entrance, kitchen, living room, bathroom, and bedroom) in the SHS while their eye-gaze behavior was recorded. Data showed significant differences between a simpler interaction (entrance) and a more complex one (living room) in terms of number of fixations. Moreover, a slower time to first fixation was found in a multifaceted interaction (bathroom) compared with simpler ones (kitchen and living room). Additionally, in two interaction conditions (living room and bathroom), negative correlations were found between external LoC and fixation count, and between BAS reward responsiveness scores and fixation duration. The findings point to a two-way process in which both the complexity of the tech-interaction and subjects’ personality traits shape the user’s visual exploration behavior. This research contributes to understanding user responsiveness, adding first insights that may help create more human-centered technology.


Psihologija ◽  
2009 ◽  
Vol 42 (3) ◽  
pp. 307-327 ◽  
Author(s):  
Vanja Kovic ◽  
Kim Plunkett ◽  
Gert Westermann

This study presented animate objects under labeling and non-labeling conditions and examined participants' looking patterns across these conditions. Results revealed a surprisingly consistent way in which adults look at pictures of animate objects. The head/eyes of the animals typically attracted a number of fixations, as did some other body parts (e.g., the tail in cats, the udder in cows, and the body in snakes). Furthermore, not only did participants tend to look at similar regions of the pictures of animate objects, but the looking order over these regions was also consistent across participants. However, contrary to the original predictions, these patterns of fixations were similar across the naming and non-naming conditions ('Look at the <target>!', 'Look at the picture!' and 'What's this?', respectively), which led to the conclusion that participants' consistency in processing animate objects did not reflect underlying mental representations evoked by labels, but was instead driven by the structural similarity of animate objects, in particular the presence of a head.


PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245777
Author(s):  
Fanny Poncet ◽  
Robert Soussignan ◽  
Margaux Jaffiol ◽  
Baptiste Gaudelus ◽  
Arnaud Leleu ◽  
...  

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure except that they were instructed to say whether (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.


2020 ◽  
pp. 1-10
Author(s):  
Bruno Gepner ◽  
Anaïs Godde ◽  
Aurore Charrier ◽  
Nicolas Carvalho ◽  
Carole Tardif

Abstract Facial movements of others during verbal and social interaction are often too rapid for numerous children and adults with autism spectrum disorder (ASD) to face and/or process in time, which could contribute to their face-to-face interaction peculiarities. Here we measured the effect of reducing the speed of a speaker's facial dynamics on the visual exploration of the face by children with ASD. Twenty-three children with ASD and 29 typically developing control children matched for chronological age passively viewed a video of a speaker telling a story at various velocities, i.e., a real-time speed and two slowed-down speeds. The visual scene was divided into four areas of interest (AOI): face, mouth, eyes, and outside the face. With an eye-tracking system, we measured the percentage of total fixation duration per AOI and the number and mean duration of the visual fixations made on each AOI. In children with ASD, the mean duration of visual fixations on the mouth region, which correlated with their verbal level, increased at the slowed-down velocities compared with the real-time one, a finding which parallels a result also found in the control children. These findings strengthen the therapeutic potential of slowness for enhancing verbal and language abilities in children with ASD.


Author(s):  
Abner Cardoso da Silva ◽  
Cesar A. Sierra-Franco ◽  
Greis Francy M. Silva-Calpa ◽  
Felipe Carvalho ◽  
Alberto Barbosa Raposo

2020 ◽  
Author(s):  
Tamara Jakovljević ◽  
Milica Janković ◽  
Andrej Savić ◽  
Ivan Soldatović ◽  
Petar Todorović ◽  
...  

Abstract The study investigated the influence of a white background versus 12 background and overlay colours on the reading process in school-age children. Previous research has reported that colours can affect reading skills, as an important factor in the emotional and physiological state of the body, and that reading is one of the most important processes in children's maturation. The aim of the study was to assess developmental differences between second- and third-grade elementary school students and to evaluate differences in electroencephalography (EEG), ocular and electrodermal activities (EDA), and heart rate variability (HRV). In the experiment, the responses of 24 children (12 second- and 12 third-grade students) to the different background and overlay colours were summarized using EEG, eye-tracking, EDA and HRV signals. Our findings showed a decreasing trend with age in the EEG power bands (Alpha, Beta, Delta, Theta), and lower reading durations and eye-tracking measures in younger children compared to older children. HRV parameters were higher across the 12 background and overlay colours among second-grade than third-grade students, which correlates with the level of stress and is also apparent in the EDA measures. The present study showed a calming effect of turquoise and blue background colours on second graders. Considering the other colours separately for each parameter, we found no systematic differences in reading duration, EEG power bands, eye-tracking or EDA measures.


2020 ◽  
Author(s):  
R. Shayna Rosenbaum ◽  
Julia G. Halilova ◽  
Sabrina Agnihotri ◽  
Maria C. D'Angelo ◽  
Gordon Winocur ◽  
...  

How well do we know our city? It turns out, much more poorly than we might imagine. We used declarative memory and eye-tracking techniques to examine people’s ability to detect modifications of landmarks in Toronto locales with which they have had extensive experience. Participants were poor at identifying which scenes contained altered landmarks, whether the modification was to the landmarks’ relative size, internal features, or surrounding context. To determine whether an indirect measure would prove more sensitive, we tracked eye movements during viewing. Changes in overall visual exploration, but not to specific regions of change, were related to participants’ explicit endorsement of scenes as modified. These results support the contention that very familiar landmarks are strongly integrated within the spatial context in which they were first experienced, so that any changes that are consciously detected are at a global or coarse, but not local or fine-grained, level.


Emotion ◽  
2021 ◽  
Author(s):  
Jakob Fink-Lamotte ◽  
Frederike Svensson ◽  
Julian Schmitz ◽  
Cornelia Exner

2020 ◽  
pp. 014272372096681
Author(s):  
David López Pérez ◽  
Przemysław Tomalski ◽  
Alicja Radkowska ◽  
Haiko Ballieux ◽  
Derek G. Moore

Efficient visual exploration in infancy is essential for cognitive and language development. It allows infants to participate in social interactions by attending to faces and learning about objects of interest. Visual scanning of scenes depends on a number of factors, and early differences in efficiency are likely contributing to differences in learning and language development during subsequent years. Predicting language development in diverse samples is particularly challenging, as additional multiple sources of variability affect infant performance. In this study, we tested how the complexity of visual scanning in the presence or absence of a face at 6 to 7 months of age is related to language development at 2 years of age in a multiethnic and predominantly bilingual sample from diverse socioeconomic backgrounds. We used Recurrence Quantification Analysis to measure the temporal and spatial distribution of fixations recurring in the same area of a visual scene. We found that in the absence of a face the temporal distribution of re-fixations on selected objects of interest (but not all) significantly predicted both receptive and expressive language scores, explaining 16% to 20% of the variance. Also, a lower rate of re-fixations recurring in the presence of a face predicted higher receptive language scores, suggesting a larger vocabulary in infants who effectively disengage from faces. Altogether, our results suggest that dynamic measures, which quantify the complexity of visual scanning, can reliably and robustly predict language development in highly diverse samples, and that selective attending to objects predicts language independently of attention to faces. As eye-tracking and language assessments were carried out in early intervention centres, our study demonstrates the utility of mobile eye-tracking setups for early detection of risk in attention and language development.
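The core quantity behind the recurrence analysis described above can be illustrated with a minimal sketch. This is not the authors' implementation; the function name, the pixel radius, and the toy coordinates are illustrative assumptions. The idea is simply that two fixations "recur" when they land within a fixed radius of each other on the scene, and the recurrence rate is the share of fixation pairs that do.

```python
import numpy as np

def recurrence_rate(fixations, radius):
    """Fraction of distinct fixation pairs that land within `radius`
    of each other, i.e. re-fixations on the same region of the scene."""
    pts = np.asarray(fixations, dtype=float)   # shape (n, 2): x, y in pixels
    n = len(pts)
    # Pairwise Euclidean distances between all fixations
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    # Recurrent pairs: distinct fixations closer than the radius
    rec = (d <= radius) & ~np.eye(n, dtype=bool)
    # Proportion of ordered pairs that are recurrent
    return rec.sum() / (n * (n - 1))

# Toy example: two tight fixation clusters -> one recurrent pair per cluster
rr = recurrence_rate([(0, 0), (1, 0), (100, 100), (101, 100)], radius=5)
```

In this toy sequence only the two within-cluster pairs recur, giving a rate of 1/3. A higher rate indicates more returning of gaze to already-visited regions; the study's finding was that a lower recurrence rate in the presence of a face predicted higher receptive language scores.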

