Efficiency of scanning and attention to faces in infancy independently predict language development in a multiethnic and bilingual sample of 2-year-olds

2020
pp. 014272372096681
Author(s):
David López Pérez
Przemysław Tomalski
Alicja Radkowska
Haiko Ballieux
Derek G. Moore

Efficient visual exploration in infancy is essential for cognitive and language development. It allows infants to participate in social interactions by attending to faces and learning about objects of interest. Visual scanning of scenes depends on a number of factors, and early differences in scanning efficiency likely contribute to differences in learning and language development in subsequent years. Predicting language development in diverse samples is particularly challenging, as multiple additional sources of variability affect infant performance. In this study, we tested how the complexity of visual scanning in the presence or absence of a face at 6 to 7 months of age is related to language development at 2 years of age in a multiethnic and predominantly bilingual sample from diverse socioeconomic backgrounds. We used Recurrence Quantification Analysis (RQA) to measure the temporal and spatial distribution of fixations recurring in the same area of a visual scene. We found that in the absence of a face the temporal distribution of re-fixations on selected (but not all) objects of interest significantly predicted both receptive and expressive language scores, explaining 16% to 20% of the variance. Moreover, a lower rate of re-fixations in the presence of a face predicted higher receptive language scores, suggesting a larger vocabulary in infants who effectively disengage from faces. Altogether, our results suggest that dynamic measures quantifying the complexity of visual scanning can reliably and robustly predict language development in highly diverse samples, and that selective attending to objects predicts language independently of attention to faces. As eye-tracking and language assessments were carried out in early intervention centres, our study also demonstrates the utility of mobile eye-tracking setups for early detection of risk in attention and language development.
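To make the RQA measure concrete: two fixations recur when they land in the same area of the scene, and the recurrence rate is the share of fixation pairs that do so. A minimal sketch in Python, where the 50-pixel radius and the sample coordinates are illustrative assumptions rather than the study's parameters:

```python
# Minimal sketch of a fixation-based recurrence measure: two fixations
# "recur" if they land within `radius` pixels of each other. The radius
# and the sample data are illustrative assumptions, not the study's values.
import numpy as np

def recurrence_rate(fixations, radius=50.0):
    """Percentage of fixation pairs (i < j) that re-visit the same area."""
    d = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    recur = np.triu(d <= radius, k=1)        # pairs above the diagonal only
    n = len(fixations)
    return 100.0 * recur.sum() / (n * (n - 1) / 2)

# Eight fixations as (x, y) screen coordinates in pixels.
fix = np.array([[100, 120], [105, 118], [400, 300], [102, 125],
                [398, 295], [250, 200], [255, 205], [99, 121]], float)
print(f"Recurrence rate: {recurrence_rate(fix):.1f}%")
```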

1989
Vol 54 (1)
pp. 101-105
Author(s):
J. Bruce Tomblin
Cynthia M. Shonrock
James C. Hardy

The extent to which the Minnesota Child Development Inventory (MCDI) could be used to estimate levels of language development in 2-year-old children was examined. Fifty-seven children between 23 and 28 months were given the Sequenced Inventory of Communication Development (SICD), and at the same time a parent completed the MCDI. In addition, the mean length of utterance (MLU) was obtained for each child from a spontaneous speech sample. The MCDI Expressive Language scale was found to be a strong predictor of both the SICD Expressive scale and MLU. The MCDI Comprehension-Conceptual scale, presumably a receptive language measure, was moderately correlated with the SICD Receptive scale; however, it was also strongly correlated with the expressive measures. These results demonstrated that the Expressive Language scale of the MCDI is a valid predictor of expressive language for 2-year-old children. The MCDI Comprehension-Conceptual scale appeared to assess both receptive and expressive language, thus complicating its interpretation.
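For reference, MLU is the average number of units per utterance in a transcribed sample. A toy sketch, counting words rather than the morphemes used in clinical practice:

```python
# Toy sketch of mean length of utterance (MLU) from a transcript, counted
# in words; clinical MLU is usually counted in morphemes, which requires
# morphological analysis beyond this example.
def mlu_in_words(utterances):
    counts = [len(u.split()) for u in utterances if u.strip()]
    return sum(counts) / len(counts)

sample = ["want cookie", "mommy go car", "no", "doggie eat it"]
print(f"MLU (words): {mlu_in_words(sample):.2f}")  # (2 + 3 + 1 + 3) / 4 = 2.25
```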


Author(s):
Federico Cassioli
Laura Angioletti
Michela Balconi

Abstract Human–computer interaction (HCI) is particularly interesting because fully immersive technology may be approached differently by users, depending on the complexity of the interaction, users' personality traits, and the inclination of their motivational systems. This study therefore investigated the relationship between psychological factors and attention towards specific tech-interactions in a smart home system (SHS). The relation between personal psychological traits and eye-tracking metrics was investigated through self-report measures [locus of control (LoC), user experience (UX), behavioral inhibition system (BIS), and behavioral activation system (BAS)] and a wearable, wireless, near-infrared-illumination-based eye-tracking system, applied to an Italian sample (n = 19). Participants were asked to activate and interact with five tech-interaction areas of varying complexity (entrance, kitchen, living room, bathroom, and bedroom) in the SHS while their eye-gaze behavior was recorded. The data showed significant differences between a simpler interaction (entrance) and a more complex one (living room) in terms of number of fixations. Moreover, a slower time to first fixation was found for a multifaceted interaction (bathroom) compared with simpler ones (kitchen and living room). Additionally, in two interaction conditions (living room and bathroom), negative correlations were found between external LoC and fixation count, and between BAS reward responsiveness scores and fixation duration. The findings point to a two-way process in which both the complexity of the tech-interaction and the user's personality traits substantially shape visual exploration behavior. This research contributes to understanding user responsiveness, adding initial insights that may help create more human-centered technology.
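Two of the reported metrics, fixation count and time to first fixation, are straightforward to derive from a stream of fixation records. A sketch under an assumed record layout (timestamp plus area label), with invented values:

```python
# Sketch of two of the reported gaze metrics, fixation count and time to
# first fixation (TTFF) per interaction area; the record layout
# (timestamp_ms, area) and the values are assumptions for illustration.
from collections import Counter

fixations = [(120, "entrance"), (480, "kitchen"), (650, "kitchen"),
             (900, "living room"), (1400, "living room"), (1750, "bathroom")]

fixation_count = Counter(area for _, area in fixations)

ttff = {}
for t, area in fixations:      # records are in chronological order
    ttff.setdefault(area, t)   # keep only the earliest timestamp per area

print(fixation_count["living room"])  # 2 fixations
print(ttff["bathroom"])               # first fixated at 1750 ms
```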


PLoS ONE
2021
Vol 16 (1)
pp. e0245777
Author(s):
Fanny Poncet
Robert Soussignan
Margaux Jaffiol
Baptiste Gaudelus
Arnaud Leleu
...

Recognizing facial expressions of emotions is a fundamental ability for adaptation to the social environment. To date, it remains unclear whether the spatial distribution of eye movements predicts accurate recognition or, on the contrary, confusion in the recognition of facial emotions. In the present study, we asked participants to recognize facial emotions while monitoring their gaze behavior using eye-tracking technology. In Experiment 1a, 40 participants (20 women) performed a classic facial emotion recognition task with a 5-choice procedure (anger, disgust, fear, happiness, sadness). In Experiment 1b, a second group of 40 participants (20 women) was exposed to the same materials and procedure, except that they were instructed to say whether or not (i.e., Yes/No response) the face expressed a specific emotion (e.g., anger), with the five emotion categories tested in distinct blocks. In Experiment 2, two groups of 32 participants performed the same task as in Experiment 1a while exposed to partial facial expressions composed of action units (AUs) present or absent in some parts of the face (top, middle, or bottom). The coding of the AUs produced by the models showed complex facial configurations for most emotional expressions, with several AUs in common. Eye-tracking data indicated that relevant facial actions were actively gazed at by the decoders during both accurate recognition and errors. False recognition was mainly associated with the additional visual exploration of less relevant facial actions in regions containing ambiguous AUs or AUs relevant to other emotional expressions. Finally, the recognition of facial emotions from partial expressions showed that no single facial action was necessary to effectively communicate an emotional state. Rather, the recognition of facial emotions relied on the integration of a complex set of facial cues.
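As an illustration of how fixations might be binned into the top, middle, and bottom face parts used for the partial expressions, the sketch below accumulates dwell time per vertical band; the band boundaries and data are assumptions, not the authors' coding scheme:

```python
# Illustrative sketch of assigning fixations to the three face parts used
# for the partial expressions (top, middle, bottom) and accumulating dwell
# time per part; the pixel band boundaries and data are assumptions, not
# the authors' coding scheme.
def face_part(y, face_top=100, face_bottom=400):
    """Split the face vertically into three equal bands."""
    band = (face_bottom - face_top) / 3
    if y < face_top + band:
        return "top"        # brow/eye region
    if y < face_top + 2 * band:
        return "middle"     # nose/cheek region
    return "bottom"         # mouth/chin region

# Fixations as (y coordinate in px, duration in ms).
fixations = [(150, 220), (180, 310), (350, 180), (160, 250)]
dwell = {}
for y, dur in fixations:
    part = face_part(y)
    dwell[part] = dwell.get(part, 0) + dur
print(dwell)  # {'top': 780, 'bottom': 180}
```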


2016
Vol 44 (2)
pp. 329-345
Author(s):
Willem M. Mak
Elena Tribushinina
Julia Lomako
Natalia Gagarina
Ekaterina Abrosova
...

Abstract Production studies show that both Russian-speaking children with specific language impairment (SLI) and bilingual children for whom Russian is a non-dominant language have difficulty distinguishing between the near-synonymous connectives i 'and' and a 'and/but'. I is the preferred connective when reference is maintained, whereas a is normally used for reference shift. We report an eye-tracking experiment comparing connective processing by Russian-speaking monolinguals with typical language development (TLD) with that of Russian–Dutch bilinguals and Russian-speaking monolinguals with SLI (age 5–6). The results demonstrate that the processing profiles of monolinguals with TLD and bilinguals are similar: both groups use connective semantics immediately to predict further discourse. In contrast, children with SLI show no sensitivity to these semantic differences. Despite similar production profiles, bilinguals and monolinguals with SLI clearly differ in connective processing. We discuss the implications of these results for the possible causes of the errors in the two populations.
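The anticipatory-looking measure implied here is commonly computed as the proportion of gaze samples on the predicted referent within a window after connective onset. A rough sketch under that assumption, with an invented window and gaze record:

```python
# Rough sketch of an anticipatory-looking measure: the proportion of gaze
# samples on the predicted referent within a time window after connective
# onset. The window size and gaze samples are invented for illustration.
def proportion_on_target(samples, onset_ms, window_ms=1000):
    """samples: (timestamp_ms, aoi) pairs; aoi is 'target' or a distractor."""
    in_window = [aoi for t, aoi in samples
                 if onset_ms <= t < onset_ms + window_ms]
    return in_window.count("target") / len(in_window) if in_window else 0.0

gaze = [(900, "distractor"), (1050, "target"), (1200, "target"),
        (1450, "target"), (1900, "distractor"), (2100, "target")]
print(proportion_on_target(gaze, onset_ms=1000))  # 3 of 4 samples -> 0.75
```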


2020
pp. 1-10
Author(s):
Bruno Gepner
Anaïs Godde
Aurore Charrier
Nicolas Carvalho
Carole Tardif

Abstract Facial movements of others during verbal and social interaction are often too rapid for many children and adults with autism spectrum disorder (ASD) to face and/or process in time, which could contribute to their face-to-face interaction peculiarities. Here we measured the effect of reducing the speed of a speaker's facial dynamics on the visual exploration of the face by children with ASD. Twenty-three children with ASD and 29 typically developing control children matched for chronological age passively viewed a video of a speaker telling a story at various velocities, i.e., real-time speed and two slowed-down speeds. The visual scene was divided into four areas of interest (AOI): face, mouth, eyes, and outside the face. With an eye-tracking system, we measured the percentage of total fixation duration per AOI and the number and mean duration of the visual fixations made on each AOI. In children with ASD, the mean duration of visual fixations on the mouth region, which correlated with their verbal level, increased at the slowed-down velocities compared with the real-time one, paralleling a result also found in the control children. These findings strengthen the therapeutic potential of slowness for enhancing verbal and language abilities in children with ASD.
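The per-AOI measures named here (percentage of total fixation duration, number of fixations, mean fixation duration) reduce to simple aggregation over fixation records. A minimal sketch with invented durations:

```python
# Minimal sketch of the per-AOI measures named above: percentage of total
# fixation duration, number of fixations, and mean fixation duration.
# The example fixations and durations are invented for illustration.
AOIS = ("face", "mouth", "eyes", "outside")

# Fixations as (aoi, duration in ms).
fixations = [("eyes", 300), ("mouth", 450), ("mouth", 500),
             ("outside", 150), ("eyes", 250)]

total = sum(dur for _, dur in fixations)
for aoi in AOIS:
    durs = [dur for a, dur in fixations if a == aoi]
    if durs:
        print(f"{aoi}: {100 * sum(durs) / total:.1f}% of dwell, "
              f"n={len(durs)}, mean={sum(durs) / len(durs):.0f} ms")
```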


Author(s):
Abner Cardoso da Silva
Cesar A. Sierra-Franco
Greis Francy M. Silva-Calpa
Felipe Carvalho
Alberto Barbosa Raposo
