Proactive or reactive? Neural oscillatory insight into the leader-follower dynamics of early infant-caregiver interaction

2021 ◽  
Author(s):  
Emily Phillips ◽  
Louise Goupil ◽  
Ira Marriott Haresign ◽  
Emma Bruce-Gardyne ◽  
Florian-Andrei Csolsim ◽  
...  

We know that infants’ ability to coordinate attention with others towards the end of the first year is fundamental to language acquisition and social cognition (Carpenter et al., 1998). Yet, we understand little about the neural and cognitive mechanisms driving infant attention in shared interaction: do infants play a proactive role in creating episodes of joint attention? Recording EEG from 12-month-old infants whilst they engaged in table-top play with their caregiver, we examined the ostensive signals and neural activity preceding and following infant- vs. adult-led joint attention. Contrary to traditional theories of socio-communicative development (Tomasello et al., 2007), infant-led joint attention episodes appeared largely reactive: before the initiation, they were associated neither with increased theta power, a neural marker of endogenously driven attention, nor with ostensive signals. Infants were, however, sensitive to whether their initiations were responded to. When caregivers joined their attentional focus, infants showed increased alpha suppression, a pattern of neural activity associated with predictive processing. Our results suggest that at 10-12 months, infants are not yet proactive in creating joint attention. They do, however, anticipate behavioural contingency, a potentially foundational mechanism for the emergence of intentional communication (Smith & Breazeal, 2007).
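As a minimal, hypothetical sketch of the spectral measures the abstract refers to (the band limits and the synthetic signal below are assumptions for illustration, not the authors' pipeline), band power in a theta or alpha range can be estimated from a single EEG trace with Welch's method:

```python
import numpy as np
from scipy.signal import welch

def bandpower(eeg, fs, lo, hi):
    """Mean power spectral density of a 1-D EEG trace within [lo, hi] Hz."""
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-s windows
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

# Synthetic 10-s trace at 500 Hz: white noise plus a 5 Hz (theta-range) rhythm.
fs = 500
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

theta = bandpower(eeg, fs, 4, 7)  # assumed infant theta band
alpha = bandpower(eeg, fs, 6, 9)  # assumed infant alpha band
print(theta > alpha)  # the injected 5 Hz rhythm dominates the theta band
```

In practice such band-power estimates would be computed per trial and channel and contrasted between conditions (e.g. before infant- vs. adult-led initiations); this sketch only shows the core spectral step.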

1997 ◽  
Vol 2 (3) ◽  
pp. 216-225 ◽  
Author(s):  
Luigia Camaioni

The emergence of intentional gestural communication around the end of the first year of life is widely recognized as a basic milestone in the infant's communicative development. Two types of comparison are carried out in this paper. The first comparison concerns the gestural communication of human infants and of our nearest primate relatives, the apes, and especially the well-studied chimpanzees. The second comparison considers a special case of gestural communication, namely children with autism, who fail to develop some important forms of communication, language, and social interaction that normal infants develop in the first 2 years of life. In seeking to explain the patterns of similarities and differences derived from these two comparisons, the possible role of several developmental processes will be considered and evaluated: social sensitivity, sensitivity to eye contact and gaze, understanding of agency, and understanding of subjectivity.


2002 ◽  
Vol 29 (1) ◽  
pp. 23-48 ◽  
Author(s):  
MARIA LEGERSTEE ◽  
JEAN VARGHESE ◽  
YOLANDA VAN BEEK

The effects of maternal interactive styles on the production of referential communication were assessed in four groups of infants whose chronological ages ranged between 0;6 and 1;8. Two groups of infants with Down syndrome (DS), one (n = 11) with a mean mental age (MA) of 0;8.6, and the other (n = 11) of 1;4.5, were matched on MA with two groups (n = 10 each) of typically developing infants. Infants were seen bi-monthly, for 8 months, with mothers, same-aged peers, and mothers of the peers. Results showed that High MA non-Down syndrome (ND) infants produced more words, and High MA DS infants produced more gestures when playing with mothers than peers. Mothers exhibited more attentional maintaining behaviours than peers, in particular to High MA infants, but they redirected the attentional focus of Low MA infants more. Sequential loglinear analyses revealed interesting contingencies between the interactive strategies of mothers and the referential communicative behaviours of their infants. Whereas maintaining attention increased the likelihood that children would produce gestures and words, redirecting attention decreased it. However, redirecting attention was followed by maintaining attention. Thus, mothers redirect the attentional focus in order to promote joint attention and referential communication. Furthermore, words and gestures of the children also promote joint attention in mothers. This highlights the reciprocal nature of these dynamic communicative interactions.


2001 ◽  
Vol 25 (2) ◽  
pp. 176-183 ◽  
Author(s):  
Jennifer L. de la Ossa ◽  
Mary Gauvain

This paper reports on the role of joint attentional processes in the development of children’s skill at using pictorial plans to construct objects. Efforts to establish joint attentional focus between mother and child were identified, and the nature and extent of maternal assistance and child involvement during planning were examined. Sixteen 4- to 5-year-old and sixteen 6- to 7-year-old children and their mothers participated in three problem-solving sessions (i.e., child-only pre-test and post-test, and mother-child interaction) that involved constructing a toy from multiple pieces using a pictorial, step-by-step plan. Older children were more planful than younger children during all the planning sessions. Mothers planning with younger children assumed greater responsibility for establishing joint attentional episodes than mothers planning with older children. Results indicate that mothers tailor their guidance on joint planning tasks in relation to developmental needs, and that an important aspect of these efforts is the establishment and maintenance of joint attention.


2005 ◽  
Vol 29 (3) ◽  
pp. 259-263 ◽  
Author(s):  
Michael Morales ◽  
Peter Mundy ◽  
Mary Crowson ◽  
A. Rebecca Neal ◽  
Christine Delgado

2002 ◽  
Vol 14 (6) ◽  
pp. 913-921 ◽  
Author(s):  
Stacey M. Schaefer ◽  
Daren C. Jackson ◽  
Richard J. Davidson ◽  
Geoffrey K. Aguirre ◽  
Daniel Y. Kimberg ◽  
...  

Lesion and neuroimaging studies suggest the amygdala is important in the perception and production of negative emotion; however, the effects of emotion regulation on the amygdalar response to negative stimuli remain unknown. Using event-related fMRI, we tested the hypothesis that voluntary modulation of negative emotion is associated with changes in neural activity within the amygdala. Negative and neutral pictures were presented with instructions to either “maintain” the emotional response or “passively view” the picture without regulating the emotion. Each picture presentation was followed by a delay, after which subjects indicated how they currently felt via a response keypad. Consistent with previous reports, greater signal change was observed in the amygdala during the presentation of negative compared to neutral pictures. No significant effect of instruction was found during the picture presentation component of the trial. However, a prolonged increase in signal change was observed in the amygdala when subjects maintained the negative emotional response during the delay following negative picture offset. This increase in amygdalar signal due to the active maintenance of negative emotion was significantly correlated with subjects' self-reported dispositional levels of negative affect. These results suggest that consciously evoked cognitive mechanisms that alter the emotional response of the subject operate, at least in part, by altering the degree of neural activity within the amygdala.


2021 ◽  
Vol 15 ◽  
Author(s):  
Omar Eldardeer ◽  
Jonas Gonzalez-Billandon ◽  
Lukas Grasse ◽  
Matthew Tata ◽  
Francesco Rea

One of the fundamental prerequisites for effective collaboration between interactive partners is the mutual sharing of attentional focus on the same perceptual events, referred to as joint attention. Its defining elements have been widely pinpointed in the psychological, cognitive, and social sciences, and the field of human-robot interaction has likewise identified joint attention as a fundamental prerequisite for proficient human-robot collaboration. However, joint attention between robots and human partners is often encoded in predefined robot behaviours that do not fully address the dynamics of interactive scenarios. We provide autonomous attentional behaviour for robotics based on multi-sensory perception that robustly relocates the focus of attention onto the same targets the human partner attends. Further, we investigated how such joint attention between a human and a robot partner improved with a new biologically inspired, memory-based attention component. We assessed the model with the humanoid robot iCub performing a joint task with a human partner in a real-world unstructured scenario. The model showed robust performance in capturing the stimulus, making a localisation decision in the right time frame, and then executing the right action. We then compared the attention performance of the robot against human performance when stimulated from the same source across different modalities (audio-visual and audio-only). The comparison showed that the model behaves with temporal dynamics compatible with those of humans, providing an effective solution for memory-based joint attention in real-world unstructured environments. Further, we analysed localisation performance (reaction time and accuracy): the robot performed better in the audio-visual condition than in the audio-only condition. The robot's performance in the audio-visual condition was relatively comparable with that of the human participants, whereas it was less efficient in audio-only localisation. After a detailed analysis of the internal components of the architecture, we conclude that the differences in performance are due to ego-noise, which significantly degrades audio-only localisation.
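One generic way to see why an audio-visual condition can outperform audio-only localisation is precision-weighted cue combination. The sketch below is not the iCub architecture described above; it is a standard Gaussian fusion illustration with invented azimuth values, showing that adding a sharper visual estimate to a noisy auditory one yields a more precise fused estimate:

```python
import numpy as np

def fuse(estimates):
    """Precision-weighted fusion of Gaussian location estimates.

    Each estimate is a (azimuth_deg, sigma_deg) pair; the fused mean
    weights each azimuth by its inverse variance (its precision).
    """
    precisions = np.array([1.0 / s**2 for _, s in estimates])
    azimuths = np.array([a for a, _ in estimates])
    fused_az = (precisions * azimuths).sum() / precisions.sum()
    fused_sigma = np.sqrt(1.0 / precisions.sum())
    return fused_az, fused_sigma

audio_only = (12.0, 8.0)  # hypothetical noisy azimuth estimate from sound alone
visual = (9.0, 2.0)       # hypothetical sharper estimate once the target is seen

av_az, av_sigma = fuse([audio_only, visual])
print(av_sigma < audio_only[1])  # fused uncertainty is below the audio-only one
```

The fused standard deviation is always no larger than the smallest input standard deviation, which mirrors the qualitative audio-visual advantage reported in the abstract.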


2021 ◽  
Vol 118 (12) ◽  
pp. e2021474118 ◽  
Author(s):  
Cameron T. Ellis ◽  
Lena J. Skalaban ◽  
Tristan S. Yates ◽  
Nicholas B. Turk-Browne

Young infants learn about the world by overtly shifting their attention to perceptually salient events. In adults, attention recruits several brain regions spanning the frontal and parietal lobes. However, it is unclear whether these regions are sufficiently mature in infancy to support attention and, more generally, how infant attention is supported by the brain. We used event-related functional magnetic resonance imaging (fMRI) in 24 sessions from 20 awake behaving infants 3 mo to 12 mo old while they performed a child-friendly attentional cuing task. A target was presented to either the left or right of the infant’s fixation, and offline gaze coding was used to measure the latency with which they saccaded to the target. To manipulate attention, a brief cue was presented before the target in three conditions: on the same side as the upcoming target (valid), on the other side (invalid), or on both sides (neutral). All infants were faster to look at the target on valid versus invalid trials, with valid faster than neutral and invalid slower than neutral, indicating that the cues effectively captured attention. We then compared the fMRI activity evoked by these trial types. Regions of adult attention networks activated more strongly for invalid than valid trials, particularly frontal regions. Neither behavioral nor neural effects varied by infant age within the first year, suggesting that these regions may function early in development to support the orienting of attention. Together, this furthers our mechanistic understanding of how the infant brain controls the allocation of attention.
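The behavioural prediction in this cueing paradigm (valid trials fastest, invalid slowest, neutral in between) reduces to a simple comparison of mean saccade latencies per condition. The latencies below are invented solely to illustrate the analysis, not data from the study:

```python
import numpy as np

# Hypothetical saccade latencies (ms) per cue condition.
latencies = {
    "valid":   np.array([310, 295, 330, 305]),
    "neutral": np.array([340, 325, 355, 335]),
    "invalid": np.array([385, 370, 400, 380]),
}

# Mean latency per condition, then the classic validity effect:
# the extra time needed to reorient after a misleading cue.
means = {cond: rts.mean() for cond, rts in latencies.items()}
cueing_effect = means["invalid"] - means["valid"]

print(means["valid"] < means["neutral"] < means["invalid"])
print(f"validity effect: {cueing_effect:.0f} ms")
```

In a full analysis the per-infant means would feed a repeated-measures test across conditions; this sketch shows only the ordering check that defines effective attentional capture.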


2020 ◽  
Author(s):  
C. T. Ellis ◽  
L. J. Skalaban ◽  
T. S. Yates ◽  
N. B. Turk-Browne

Young infants learn about the world by overtly shifting their attention to perceptually salient events. In adults, attention recruits several brain regions spanning the frontal and parietal lobes. However, these regions are thought to have a protracted maturation and so it is unclear whether they are recruited in infancy and, more generally, how infant attention is supported by the brain. We used event-related fMRI with 24 awake behaving infants 3–12 months old while they performed a child-friendly attentional cuing task. A target was presented to either the left or right of the infant’s fixation and eye-tracking was used to measure the latency with which they saccaded to the target. To manipulate attention, a brief cue was presented before the target in three conditions: on the same side as the upcoming target (valid), on the other side (invalid), or on both sides (neutral). All infants were faster to look at the target on valid versus invalid trials, with valid faster than neutral and invalid slower than neutral, indicating that the cues effectively captured attention. We then compared the fMRI activity evoked by these trial types. Regions of adult attention networks activated more strongly for invalid than valid trials, particularly frontal regions such as anterior cingulate cortex. Neither behavioral nor neural effects varied by infant age within the first year, suggesting that these regions may function early in development to support the reorienting of attention. Together, this furthers our mechanistic understanding of how the infant brain controls the allocation of attention.


2019 ◽  
Author(s):  
Kyveli Kompatsiari ◽  
Francesca Ciardo ◽  
Davide De Tommaso ◽  
Agnieszka Wykowska

The present study aimed at investigating how eye contact established by a humanoid robot affects engagement in human-robot interaction (HRI). To this end, we combined explicit subjective evaluations with implicit measures, i.e. reaction times and eye tracking. More specifically, we employed a gaze cueing paradigm in an HRI protocol involving the iCub robot. Critically, before moving its gaze, iCub either established eye contact with the user or not. We investigated participants’ patterns of fixations on the robot’s face, joint attention, and subjective ratings of engagement as a function of eye contact or no eye contact. We found that eye contact affected implicit measures of engagement: fixation times on the robot’s face were longer during eye contact, and joint attention was elicited only after the robot established eye contact. In contrast, explicit measures of engagement with the robot did not vary across conditions. Our results highlight the value of combining explicit with implicit measures in an HRI protocol in order to unveil the underlying human cognitive mechanisms that might be at play during the interactions. These mechanisms could be crucial for establishing an effective and engaging HRI, and could potentially provide the robotics community with guidelines for better robot design.

