Detecting communicative intent in a computerised test of joint attention

PeerJ ◽  
2017 ◽  
Vol 5 ◽  
pp. e2899 ◽  
Author(s):  
Nathan Caruana ◽  
Genevieve McArthur ◽  
Alexandra Woolgar ◽  
Jon Brock

The successful navigation of social interactions depends on a range of cognitive faculties—including the ability to achieve joint attention with others to share information and experiences. We investigated the influence that intention monitoring processes have on gaze-following response times during joint attention. We employed a virtual reality task in which 16 healthy adults engaged in a collaborative game with a virtual partner to locate a target in a visual array. In the Search task, the virtual partner was programmed to engage in non-communicative gaze shifts in search of the target, establish eye contact, and then display a communicative gaze shift to guide the participant to the target. In the NoSearch task, the virtual partner simply established eye contact and then made a single communicative gaze shift towards the target (i.e., there were no non-communicative gaze shifts in search of the target). Thus, only the Search task required participants to monitor their partner’s communicative intent before responding to joint attention bids. We found that gaze following was significantly slower in the Search task than the NoSearch task. However, the same effect on response times was not observed when participants completed non-social control versions of the Search and NoSearch tasks, in which the avatar’s gaze was replaced by arrow cues. These data demonstrate that the intention monitoring processes involved in differentiating communicative and non-communicative gaze shifts during the Search task had a measurable influence on subsequent joint attention behaviour. The empirical and methodological implications of these findings for the fields of autism and social neuroscience will be discussed.


2019 ◽  
Vol 72 (8) ◽  
pp. 2068-2083 ◽  
Author(s):  
Nathan Caruana ◽  
Kiley Seymour ◽  
Jon Brock ◽  
Robyn Langdon

This study investigated social cognition in schizophrenia using a virtual reality paradigm to capture the dynamic processes of evaluating and responding to eye gaze as an intentional communicative cue. A total of 21 patients with schizophrenia and 21 age-, gender-, and IQ-matched healthy controls completed an interactive computer game with an on-screen avatar that participants believed was controlled by an off-screen partner. On social trials, participants were required to achieve joint attention by correctly interpreting and responding to gaze cues. Participants also completed non-social trials in which they responded to an arrow cue within the same task context. While patients and controls took equivalent time to process communicative intent from gaze shifts, patients made significantly more errors than controls when responding to the directional information conveyed by gaze, but not arrow, cues. Despite no differences in response times to gaze cues between groups, patients were significantly slower than controls when responding to arrow cues. This is the opposite pattern of results previously observed in autistic adults using the same task and suggests that, despite general impairments in attention orienting or oculomotor control, patients with schizophrenia demonstrate a facilitation effect when responding to communicative gaze cues. Findings indicate a hyper-responsivity to gaze cues of communicative intent in schizophrenia. The possible effects of self-referential biases when evaluating gaze direction are discussed, as are clinical implications.


2019 ◽  
Vol 286 (1896) ◽  
pp. 20182746 ◽  
Author(s):  
Mitsuhiko Ishikawa ◽  
Shoji Itakura

According to the natural pedagogy theory, infant gaze following is based on an understanding of the communicative intent of specific ostensive cues. However, it has remained unclear how eye contact affects this understanding and why it induces gaze following behaviour. In this study, we examined infant arousal in different gaze following contexts and whether arousal levels during eye contact predict gaze following. Twenty-five infants, ages 9–10 months, participated in this study. They watched a video of an actress gazing towards one of two objects and then either looking directly into the camera to make eye contact or not showing any communicative intent. We found that eye contact led to an elevation in the infants' heart rates (HRs) and that HR during eye contact was predictive of later gaze following. Furthermore, increases in HR predicted gaze following whether or not the gaze shift was accompanied by communicative cues. These findings suggest that infant gaze following behaviour is associated with both communicative cues and physiological arousal.


2008 ◽  
Vol 100 (4) ◽  
pp. 1848-1867 ◽  
Author(s):  
Sigrid M. C. I. van Wetter ◽  
A. John van Opstal

Perisaccadic mislocalization is maximal in the direction of the saccade and varies systematically with the target-saccade onset delay. We have recently shown that under head-fixed conditions perisaccadic errors do not follow the quantitative predictions of current visuomotor models that explain these mislocalizations in terms of spatial updating. These models all assume sluggish eye-movement feedback and therefore predict that errors should vary systematically with the amplitude and kinematics of the intervening saccade. Instead, we reported that errors depend only weakly on the saccade amplitude. An alternative explanation for the data is that around the saccade the perceived target location undergoes a uniform transient shift in the saccade direction, but that the oculomotor feedback is, on average, accurate. This “visual shift” hypothesis predicts that errors will also remain insensitive to kinematic variability within much larger head-free gaze shifts. Here we test this prediction by presenting a brief visual probe near the onset of gaze saccades between 40 and 70° amplitude. According to models with inaccurate gaze-motor feedback, the expected perisaccadic errors for such gaze shifts should be as large as 30° and depend heavily on the kinematics of the gaze shift. In contrast, we found that the actual peak errors were similar to those reported for much smaller saccadic eye movements, i.e., on average about 10°, and that neither gaze-shift amplitude nor kinematics plays a systematic role. Our data further corroborate the visual origin of perisaccadic mislocalization under open-loop conditions and strengthen the idea that efferent feedback signals in the gaze-control system are fast and accurate.


2007 ◽  
Vol 97 (2) ◽  
pp. 1149-1162 ◽  
Author(s):  
Mario Prsa ◽  
Henrietta L. Galiana

Models of combined eye-head gaze shifts all aim to realistically simulate behaviorally observed movement dynamics. One of the most problematic features of such models is their inability to determine when a saccadic gaze shift should be initiated and when it should be ended. This is commonly referred to as the switching mechanism mediated by omni-directional pause neurons (OPNs) in the brain stem. Proposed switching strategies implemented in existing gaze control models all rely on a sensory error between instantaneous gaze position and the spatial target. Accordingly, gaze saccades are initiated after presentation of an eccentric visual target and subsequently terminated when an internal estimate of gaze position becomes nearly equal to that of the target. Based on behavioral observations, we demonstrate that such a switching mechanism is insufficient and is unable to explain certain types of movements. We propose an improved hypothesis for how the OPNs control gaze shifts based on a visual-vestibular interaction of signals known to be carried on anatomical projections to the OPN area. The approach is justified by the analysis of recorded gaze shifts interrupted by a head brake in animal subjects and is demonstrated by implementing the switching mechanism in an anatomically based gaze control model. Simulated performance reveals that a weighted sum of three signals: gaze motor error, head velocity, and eye velocity, hypothesized as inputs to OPNs, successfully reproduces diverse behaviorally observed eye-head movements that no other existing model can account for.
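The proposed switching mechanism can be illustrated with a minimal sketch of the hypothesized OPN input: a weighted sum of gaze motor error, head velocity, and eye velocity that silences the OPNs while the gaze shift is ongoing. The weights and threshold below are arbitrary placeholders for illustration, not values from the paper:

```python
# Sketch of the hypothesized OPN switching signal: while the combined
# drive exceeds a threshold, the OPNs pause and the saccadic gaze shift
# proceeds; when it falls below, the OPNs resume firing and the shift ends.
# Weights and threshold are illustrative placeholders only.

def opn_drive(gaze_error, head_velocity, eye_velocity,
              w_err=1.0, w_head=0.5, w_eye=0.5):
    """Weighted sum of the three signals hypothesized as OPN inputs."""
    return (w_err * abs(gaze_error)
            + w_head * abs(head_velocity)
            + w_eye * abs(eye_velocity))

def opns_active(drive, threshold=1.0):
    """OPNs fire (gaze shift ends) once the drive drops below threshold."""
    return drive < threshold
```

Because the drive includes head and eye velocity as well as gaze motor error, a gaze shift interrupted by a head brake would not terminate purely on the basis of a near-zero position error, which is the behavioural observation the authors use to motivate this scheme over error-only switching.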


1995 ◽  
Vol 73 (4) ◽  
pp. 1632-1652 ◽  
Author(s):  
J. O. Phillips ◽  
L. Ling ◽  
A. F. Fuchs ◽  
C. Siebold ◽  
J. J. Plorde

1. We studied horizontal eye and head movements in three monkeys that were trained to direct their gaze (eye position in space) toward jumping targets while their heads were both fixed and free to rotate about a vertical axis. We considered all gaze movements that traveled > or = 80% of the distance to the new visual target. 2. The relative contributions and metrics of eye and head movements to the gaze shift varied considerably from animal to animal and even within animals. Head movements could be initiated early or late and could be large or small. The eye movements of some monkeys showed a consistent decrease in velocity as the head accelerated, whereas others did not. Although all gaze shifts were hypometric, they were more hypometric in some monkeys than in others. Nevertheless, certain features of the gaze shift were identifiable in all monkeys. To identify those we analyzed gaze, eye in head position, and head position, and their velocities at three points in time during the gaze shift: 1) when the eye had completed its initial rotation toward the target, 2) when the initial gaze shift had landed, and 3) when the head movement was finished. 3. For small gaze shifts (< 20 degrees) the initial gaze movement consisted entirely of an eye movement because the head did not move. As gaze shifts became larger, the eye movement contribution saturated at approximately 30 degrees and the head movement contributed increasingly to the initial gaze movement. For the largest gaze shifts, the eye usually began counterrolling or remained stable in the orbit before gaze landed. During the interval between eye and gaze end, the head alone carried gaze to completion. Finally, when the head movement landed, it was almost aimed at the target and the eye had returned to within 10 +/- 7 degrees, mean +/- SD, of straight ahead. Between the end of the gaze shift and the end of the head movement, gaze remained stable in space or a small correction saccade occurred. 4. Gaze movements < 20 degrees landed accurately on target whether the head was fixed or free. For larger target movements, both head-free and head-fixed gaze shifts became increasingly hypometric. Head-free gaze shifts were more accurate, on average, but also more variable. This suggests that gaze is controlled in a different way with the head free. For target amplitudes < 60 degrees, head position was hypometric but the error was rather constant at approximately 10 degrees. (ABSTRACT TRUNCATED AT 400 WORDS)


2021 ◽  
Author(s):  
Kyveli Kompatsiari ◽  
Francesco Bossi ◽  
Agnieszka Wykowska

Eye contact established by a human partner has been shown to affect various cognitive processes of the receiver. However, little is known about humans’ responses to eye contact established by a humanoid robot. Here, we aimed at examining humans’ oscillatory brain response to eye contact with a humanoid robot. Eye contact (or lack thereof) was embedded in a gaze cueing task and preceded the phase of gaze-related attentional orienting. In addition to examining the effect of eye contact on the recipient, we also tested its impact on gaze cueing effects. Results showed that participants rated eye contact as more engaging and responded with higher desynchronization of alpha-band activity in left fronto-central and central electrode clusters when the robot established eye contact with them, compared to the no-eye-contact condition. However, eye contact did not modulate gaze cueing effects. The results are interpreted in terms of the functional roles of central alpha rhythms (potentially also interpretable as mu rhythm), including joint attention and engagement in social interaction.

