Neural Systems, Gaze Following, and the Development of Joint Attention

2019 ◽  
Author(s):  
Silvia Spadacenta ◽  
Peter W. Dicke ◽  
Peter Thier

Abstract: The ability to extract the direction of the other’s gaze allows us to shift our attention to an object of interest to the other and to establish joint attention. By mapping one’s own expectations, desires and intentions onto the object of joint attention, humans develop a Theory of (the other’s) Mind (TOM), a functional sequence possibly disrupted in autism. Although old world monkeys probably do not possess a TOM, they follow the other’s gaze and establish joint attention. Gaze following in both humans and old world monkeys fulfills Fodor’s criteria of a domain-specific function and is orchestrated by very similar cortical architectures, strongly suggesting homology. New world monkeys, a primate suborder that split from the old world monkey line about 35 million years ago, also have complex social structures. One member of this group, the common marmoset (Callithrix jacchus), has received increasing interest as a potential model in studies of normal and disturbed human social cognition. Marmosets are known to follow human head gaze. However, the question is whether they use gaze following to establish joint attention with conspecifics. Here we show that this is indeed the case. In a free choice task, head-restrained marmosets prefer objects gazed at by a conspecific and, moreover, exhibit considerably shorter choice reaction times for the same objects. These findings support the assumption of an evolutionarily old domain-specific faculty shared within the primate order and underline the potential value of marmosets in studies of normal and disturbed joint attention.

Highlights:
- Common marmosets follow the head gaze of conspecifics in order to establish joint attention.
- Brief exposures to head gaze are sufficient to reallocate an animal’s attention.
- The tendency to follow the other’s gaze competes with the attraction exerted by the conspecific’s face.


2016 ◽  
Author(s):  
Nathan Caruana ◽  
Genevieve McArthur ◽  
Alexandra Woolgar ◽  
Jon Brock

The successful navigation of social interactions depends on a range of cognitive faculties – including the ability to achieve joint attention with others to share information and experiences. We investigated the influence that intention monitoring processes have on gaze-following response times during joint attention. We employed a virtual reality task in which 16 healthy adults engaged in a collaborative game with a virtual partner to locate a target in a visual array. In the Search task, the virtual partner was programmed to engage in non-communicative gaze shifts in search of the target, establish eye contact, and then display a communicative gaze shift to guide the participant to the target. In the NoSearch task, the virtual partner simply established eye contact and then made a single communicative gaze shift towards the target (i.e., there were no non-communicative gaze shifts in search of the target). Thus, only the Search task required participants to monitor their partner’s communicative intent before responding to joint attention bids. We found that gaze following was significantly slower in the Search task than in the NoSearch task. However, the same effect on response times was not observed when participants completed non-social control versions of the Search and NoSearch tasks, in which the avatar’s gaze was replaced by arrow cues. These data demonstrate that the intention monitoring processes involved in differentiating communicative and non-communicative gaze shifts during the Search task had a measurable influence on subsequent joint attention behaviour. The empirical and methodological implications of these findings for the fields of autism and social neuroscience are discussed.


Author(s):  
Lucas Battich ◽  
Isabelle Garzorz ◽  
Basil Wahn ◽  
Ophelia Deroy

Abstract: Humans coordinate their focus of attention with others, either by gaze following or by prior agreement. Though the effects of joint attention on perceptual and cognitive processing tend to be examined in purely visual environments, they should also show in multisensory settings. According to a prevalent hypothesis, joint attention enhances visual information encoding and processing, over and above individual attention. If two individuals jointly attend to the visual components of an audiovisual event, this should affect the weighting of visual information during multisensory integration. We tested this prediction in this preregistered study, using the well-documented sound-induced flash illusions, in which the integration of an incongruent number of visual flashes and auditory beeps results in a single flash being seen as two (fission illusion) or two flashes as one (fusion illusion). Participants were asked to count flashes either alone or together, and were expected to be less prone to both fission and fusion illusions when they jointly attended to the visual targets. However, the illusions were as frequent when people attended to the flashes alone as when they did so with someone else, even though they responded faster during joint attention. Our results reveal the limitations of the theory that joint attention enhances visual processing, as it does not affect temporal audiovisual integration.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Silvia Spadacenta ◽  
Peter W. Dicke ◽  
Peter Thier

Abstract: The ability to extract the direction of the other’s gaze allows us to shift our attention to an object of interest to the other and to establish joint attention. By mapping one’s own intentions onto the object of joint attention, humans develop a Theory of (the other’s) Mind (TOM), a functional sequence possibly disrupted in autism. Gaze following in both humans and old world monkeys is orchestrated by very similar cortical architectures, strongly suggesting homology. New world monkeys, a primate suborder that split from the old world monkey line about 35 million years ago, also have complex social structures, and one member of this group, the common marmoset (Callithrix jacchus), is known to follow human head gaze. However, the question is whether they use gaze following to establish joint attention with conspecifics. Here we show that this is indeed the case. In a free choice task, head-restrained marmosets prefer objects gazed at by a conspecific and, moreover, exhibit considerably shorter choice reaction times for the same objects. These findings support the assumption of an evolutionarily old domain-specific faculty shared within the primate order and underline the potential value of marmosets in studies of normal and disturbed joint attention.


Autism ◽  
2021 ◽  
pp. 136236132110619
Author(s):  
Emilia Thorup ◽  
Pär Nyström ◽  
Sven Bölte ◽  
Terje Falck-Ytter

Children with autism spectrum disorder (ASD) display difficulties with response to joint attention in natural settings but often perform comparably to typically developing (TD) children in experimental studies of gaze following. Previous work comparing infants at elevated likelihood for ASD versus TD infants has manipulated aspects of the gaze-cueing stimulus (e.g. eyes only versus head and eyes together), but the role of the peripheral object being attended to is not known. In this study of infants at elevated likelihood of ASD (N = 97) and TD infants (N = 29), we manipulated whether or not a target object was present in the cued area. Performance was assessed at 10, 14, and 18 months, and diagnostic assessment was conducted at age 3 years. The results showed that although infants with later ASD followed gaze to the same extent as TD infants in all conditions, they displayed faster latencies back to the model’s face when (and only when) a peripheral object was absent. These subtle atypicalities in gaze behavior directly after gaze following may implicate a different appreciation of the communicative situation in infants with later ASD, despite their ostensibly typical gaze following ability.

Lay abstract: During the first year of life, infants start to align their attention with that of other people. This ability is called joint attention and facilitates social learning and language development. Although children with autism spectrum disorder (ASD) are known to engage less in joint attention compared to other children, several experimental studies have shown that they follow others’ gaze (a requirement for visual joint attention) to the same extent as other children. In this study, infants’ eye movements were measured at age 10, 14, and 18 months while they watched another person look in a certain direction. A target object was either present or absent in the direction of the other person’s gaze. Some of the infants were at elevated likelihood of ASD, due to having an older autistic sibling. At age 3 years, infants were assessed for a diagnosis of ASD. Results showed that infants who met diagnostic criteria at 3 years followed gaze to the same extent as other infants. However, they then looked back at the model faster than typically developing infants when no target object was present. When a target object was present, there was no difference between groups. These results may be in line with the view that, directly after gaze following, infants with later ASD are less influenced by other people’s gaze when processing the common attentional focus. The study adds to our understanding of both the similarities and differences in looking behaviors between infants who later receive an ASD diagnosis and other infants.


2006 ◽  
Vol 14 (1) ◽  
pp. 53-82 ◽  
Author(s):  
Marianne Gullberg ◽  
Kenneth Holmqvist

This study investigates whether addressees visually attend to speakers’ gestures in interaction and whether this attention is modulated by changes in social setting and display size. We compare a live face-to-face setting to two video conditions. In all conditions, the face dominates as a fixation target and only a minority of gestures draw fixations. The social and size parameters affect gaze mainly when combined, and in the direction opposite to that predicted, with fewer gestures fixated on video than live. Gestural holds and speakers’ gaze at their own gestures reliably attract addressees’ fixations in all conditions. The attraction exerted by holds is unaffected by changes in the social and size parameters, suggesting a bottom-up response, whereas speaker-fixated gestures draw significantly less attention in both video conditions, suggesting a social effect for overt gaze following and visual joint attention. The study provides and validates a video-based paradigm enabling further experimental yet ecologically valid explorations of cross-modal information processing.


1998 ◽  
Vol 21 ◽  
pp. 654 ◽  
Author(s):  
Pamela R. Rollins ◽  
Virginia Marchman ◽  
Jyutika Mehta


1997 ◽  
Vol 111 (3) ◽  
pp. 286-293 ◽  
Author(s):  
Nathan J. Emery ◽  
Erika N. Lorincz ◽  
David I. Perrett ◽  
Michael W. Oram ◽  
Christopher I. Baker
