Irrelevant Robot Signals in a Categorization Task Induce Cognitive Conflict in Performance, Eye-trajectories, the N2 ERP-EEG component and Frontal Theta Oscillations

2021
Author(s): Jairo Pérez-Osorio, Abdulaziz Abubshait, Agnieszka Wykowska

Understanding others’ nonverbal behavior is essential for social interaction, as it allows us, among other things, to infer mental states. While gaze communication, a well-established nonverbal social behavior, has proven important for inferring others’ mental states, little is known about the effects of irrelevant gaze signals on cognitive conflict markers in collaborative settings. Here, participants completed a categorization task in which they categorized objects based on their color while observing images of a robot. On each trial, participants observed the robot iCub grasping an object from a table and offering it to them to simulate a handover. Once the robot “moved” the object forward, participants were asked to categorize the object according to its color. Before participants were allowed to respond, the robot made a lateral head/gaze shift. The gaze shifts were either congruent or incongruent with the object’s color. We expected that incongruent head cues would induce more errors (Study 1), be associated with more curvature in eye-tracking trajectories (Study 2), and elicit larger amplitudes in electrophysiological markers of cognitive conflict (Study 3). Results of the three studies show more oculomotor interference as measured in error rates (Study 1), larger curvature of eye-tracking trajectories (Study 2), and higher amplitudes of the N2 event-related potential (ERP) of the EEG signal, as well as higher event-related spectral perturbation (ERSP) amplitudes (Study 3), for incongruent compared with congruent trials. Our findings reveal that behavioral, ocular, and electrophysiological markers can index the influence of irrelevant signals during goal-oriented tasks.
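The curvature measure in Study 2 is not specified in detail here; one common operationalization is the maximum perpendicular deviation of each trajectory from the straight line between its start and end points. The sketch below illustrates that measure and a simple congruency contrast on it, with invented sample data (the authors' actual pipeline may differ):

```python
import numpy as np

def max_deviation(xy):
    """Maximum perpendicular deviation of a 2-D trajectory (N x 2 array)
    from the straight line joining its first and last samples."""
    start, end = xy[0], xy[-1]
    line = end - start
    norm = np.linalg.norm(line)
    if norm == 0:
        return 0.0
    # Perpendicular distance of every sample from the start-end line (2-D cross product).
    rel = xy - start
    cross = line[0] * rel[:, 1] - line[1] * rel[:, 0]
    return np.max(np.abs(cross)) / norm

# Illustrative trial: gaze samples drifting toward the distractor before curving back.
trial = np.array([[0.0, 0.0], [1.0, 0.4], [2.0, 0.9], [3.0, 0.5], [4.0, 0.0]])
print(f"max deviation: {max_deviation(trial):.2f} (arbitrary screen units)")

# A congruency effect would then be the mean curvature difference.
congruent = [0.3, 0.2, 0.4]      # hypothetical per-trial deviations
incongruent = [0.8, 0.9, 0.7]
print(f"curvature effect: {np.mean(incongruent) - np.mean(congruent):.2f}")
```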


2019
Author(s): T. Stephani, K. Kirk Driller, O. Dimigen, W. Sommer

Eye contact is a salient social cue that is assumed to influence early brain processes involved in face perception. The N170 component of the event-related potential (ERP) has frequently been reported to be larger for faces with a gaze averted from rather than directed at the observer. In most studies, however, this effect has been investigated in comparatively artificial, passive settings in which participants were instructed to keep their gaze fixed while observing occasional gaze changes in stimulus faces. It is therefore unclear whether similar mechanisms are in place during naturalistic gaze interactions involving the continuous interplay of directed and averted gaze between the communication partners. To fill this gap, we compared passive viewing of gaze-change sequences with an active condition in which participants’ own gaze continuously interacted with the gaze of a stimulus face, while ERPs were recorded and gaze was monitored with eye tracking. In addition, we investigated the relevance of emotional facial expressions for gaze processing. For both passive viewing and active interaction, N170 amplitudes were larger when the gaze of stimulus faces was averted rather than directed at the participants. Furthermore, eye contact decreased P300 amplitudes in both conditions. Emotional facial expression influenced N170 amplitudes but neither elicited an early posterior negativity nor interacted with gaze direction. We conclude that comparable mechanisms of gaze perception are in place in gaze interaction and in passive viewing, encouraging the further study of the eye contact effect in naturalistic settings.
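For readers unfamiliar with ERP amplitude comparisons of this kind, the sketch below shows one generic way to quantify an N170 gaze effect: the mean amplitude in a post-stimulus latency window, contrasted between averted- and direct-gaze epochs. The electrode index, window, sampling rate, and data are placeholders, not the authors' parameters:

```python
import numpy as np

def mean_window_amplitude(epochs, sfreq, tmin, win_start, win_end, channel):
    """Mean amplitude in a latency window for one channel.
    epochs: array of shape (n_trials, n_channels, n_samples), epochs start at tmin (s)."""
    i0 = int(round((win_start - tmin) * sfreq))
    i1 = int(round((win_end - tmin) * sfreq))
    return epochs[:, channel, i0:i1].mean(axis=1)

rng = np.random.default_rng(0)
sfreq, tmin = 500.0, -0.2                      # 500 Hz, epochs start 200 ms pre-stimulus
averted = rng.normal(0, 1, (40, 64, 400))      # simulated epochs: 40 trials, 64 channels
direct  = rng.normal(0, 1, (40, 64, 400))

# N170: mean amplitude 150-190 ms at an assumed occipito-temporal channel (index 42).
n170_averted = mean_window_amplitude(averted, sfreq, tmin, 0.15, 0.19, channel=42)
n170_direct  = mean_window_amplitude(direct,  sfreq, tmin, 0.15, 0.19, channel=42)
print("gaze effect (averted - direct):", n170_averted.mean() - n170_direct.mean())
```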


2020
Vol 10 (1), pp. 24
Author(s): Merve Bulut, Burak Erdeniz

Sex categorization from faces is a crucial ability for humans and non-human primates in various social and cognitive processes. In the current study, we performed two eye-tracking experiments to examine participants’ gaze behavior during a sex categorization task in which they categorized face pictures of their own race (Caucasian), another race (Asian), and another species (chimpanzee). In Experiment 1, we presented the faces in an upright position to 16 participants and found strong other-race and other-species effects. In Experiment 2, the same faces were shown to 24 naïve participants in an upside-down (inverted) position; the other-species effect remained intact, but the other-race effect disappeared. Moreover, eye-tracking analysis revealed that in the upright position, the eye region was the first and most widely viewed area for all face categories. During upside-down viewing, however, participants’ attention was directed more towards the eye region of own-race and own-species faces, whereas the nose received more attention in other-race and other-species faces. Overall, the results suggest that other-race faces were processed less holistically than own-race faces, which could affect both participants’ behavioral performance and their gaze behavior during sex categorization. Finally, the gaze data suggest that participants’ gaze shifts from the eye to the nose region as racial and species-based familiarity decreases.
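The eye-versus-nose comparison rests on area-of-interest (AOI) dwell times. A minimal sketch of that computation follows, with hypothetical AOI rectangles and fixation records rather than the study's actual coordinates:

```python
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float        # screen coordinates in pixels
    y: float
    duration: float # milliseconds

# Hypothetical rectangular AOIs: (x_min, y_min, x_max, y_max).
AOIS = {"eyes": (300, 200, 500, 260), "nose": (360, 260, 440, 330)}

def dwell_proportions(fixations):
    """Proportion of total fixation time falling inside each AOI."""
    total = sum(f.duration for f in fixations) or 1.0
    props = {}
    for name, (x0, y0, x1, y1) in AOIS.items():
        inside = sum(f.duration for f in fixations
                     if x0 <= f.x <= x1 and y0 <= f.y <= y1)
        props[name] = inside / total
    return props

trial = [Fixation(410, 230, 240), Fixation(400, 300, 180), Fixation(100, 100, 80)]
print(dwell_proportions(trial))   # {'eyes': 0.48, 'nose': 0.36}
```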


2018
Author(s): Foyzul Rahman, Sabrina Javed, Ian Apperly, Peter Hansen, Carol Holland, ...

Age-related decline in Theory of Mind (ToM) may be due to waning executive control, which is necessary for resolving conflict when reasoning about others’ mental states. We assessed how older adults (OA; n=50) versus younger adults (YA; n=50) were affected by three theoretically relevant sources of conflict within ToM: competing Self-Other perspectives, competing cued locations, and outcome knowledge. We examined which best accounted for age-related difficulty with ToM. Our data show unexpected similarity between age groups when representing a belief incongruent with one’s own. Individual differences in attention and motor response speed best explained the degree of conflict experienced through conflicting Self-Other perspectives. However, OAs were disproportionately affected by managing conflict between cued locations. Age and spatial working memory were most relevant for predicting the magnitude of conflict elicited by conflicting cued locations. We suggest that previous studies may have underestimated OAs’ ToM proficiency by including unnecessary conflict in ToM tasks.
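The individual-differences claims imply regressing each participant's conflict cost (for example, incongruent minus congruent response time) on predictors such as age, attention, motor speed, and spatial working memory. The following is a generic least-squares sketch with simulated data; the variable names and effect sizes are placeholders, not the study's measures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100                                   # e.g., 50 older + 50 younger adults
age = rng.integers(18, 80, n).astype(float)
attention = rng.normal(0, 1, n)
motor_speed = rng.normal(0, 1, n)
spatial_wm = rng.normal(0, 1, n)

# Simulated conflict cost (ms): larger when attention and motor speed are poorer.
conflict_cost = 80 - 15 * attention - 10 * motor_speed + rng.normal(0, 20, n)

# Ordinary least squares: conflict_cost ~ intercept + predictors.
X = np.column_stack([np.ones(n), age, attention, motor_speed, spatial_wm])
beta, *_ = np.linalg.lstsq(X, conflict_cost, rcond=None)
for name, b in zip(["intercept", "age", "attention", "motor_speed", "spatial_wm"], beta):
    print(f"{name:>12}: {b:6.1f}")
```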


Author(s): Piercarlo Dondi, Marco Porta, Angelo Donvito, Giovanni Volpe

Interactive and immersive technologies can significantly enhance the experience of museums and exhibits. Several studies have shown that multimedia installations can attract visitors, presenting cultural and scientific information in an appealing way. In this article, we present our workflow for achieving gaze-based interaction with artwork imagery. We designed both a tool for creating interactive “gaze-aware” images and an eye-tracking application for interacting with those images by gaze. Users can display different pictures, perform pan and zoom operations, and search for regions of interest with associated multimedia content (text, image, audio, or video). Besides being an assistive technology for motor-impaired people (like most gaze-based interaction applications), our solution can also be a valid alternative to the touch-screen panels commonly found in museums, in accordance with the new safety guidelines imposed by the COVID-19 pandemic. Experiments carried out with a panel of volunteer testers have shown that the tool is usable, effective, and easy to learn.
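Gaze-only interfaces of this kind typically trigger actions by dwell time, i.e., the gaze resting on a region for a threshold duration. The sketch below shows a simplified dwell-selection loop under that assumption; it is illustrative and not the authors' implementation:

```python
DWELL_THRESHOLD = 1.0   # seconds of continuous gaze needed to activate a region

class DwellSelector:
    """Fires a region's callback once the gaze has dwelt on it long enough."""
    def __init__(self, regions):
        self.regions = regions            # {name: (x0, y0, x1, y1, callback)}
        self.current, self.enter_time = None, None

    def update(self, x, y, t):
        hit = next((name for name, (x0, y0, x1, y1, _) in self.regions.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit != self.current:                       # gaze moved to a new region
            self.current, self.enter_time = hit, t
        elif hit is not None and t - self.enter_time >= DWELL_THRESHOLD:
            self.regions[hit][4]()                    # activate the multimedia content
            self.enter_time = float("inf")            # avoid re-triggering until gaze leaves

regions = {"detail": (100, 100, 300, 300, lambda: print("show audio commentary"))}
selector = DwellSelector(regions)
for t in [0.0, 0.4, 0.8, 1.2]:                        # simulated gaze stream inside the region
    selector.update(150, 150, t)
```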


2021
Vol 11 (1)
Author(s): Takao Fukui, Mrinmoy Chakrabarty, Misako Sano, Ari Tanaka, Mayuko Suzuki, ...

Eye movements toward sequentially presented face images with or without gaze cues were recorded to investigate whether individuals with autism spectrum disorder (ASD), in comparison with their typically developing (TD) peers, could prospectively perform the task according to gaze cues. Line-drawn face images were sequentially presented for one second each on a laptop display, with the face images shifting from side to side and up and down. In the gaze-cue condition, the gaze of the face image was directed to the position where the next face would be presented. Although the participants with ASD looked less at the eye area of the face image than their TD peers, they performed smooth gaze shifts toward the gaze cue of the face image that were comparable to those of the TD group in the gaze-cue condition. This appropriate gaze shifting in the ASD group was more evident in the second half of trials than in the first half, as revealed by the mean proportion of fixation time in the eye area relative to valid gaze data in the early phase (during face image presentation) and by the time to first fixation on the eye area. These results suggest that individuals with ASD may benefit from even a short period of trial experience, increasingly making use of the gaze cue.
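Two of the reported measures, the proportion of fixation time in the eye area and the time to first fixation on it, can be computed from a list of fixations as sketched below; the AOI bounds and trial data are invented for illustration:

```python
# Fixations as (x, y, onset_ms, duration_ms); eye-area AOI as a rectangle.
EYE_AOI = (280, 180, 520, 250)

def in_aoi(x, y, aoi):
    x0, y0, x1, y1 = aoi
    return x0 <= x <= x1 and y0 <= y <= y1

def eye_area_metrics(fixations):
    """Proportion of fixation time in the eye AOI and latency of the first eye-AOI fixation."""
    eye_time = sum(d for x, y, on, d in fixations if in_aoi(x, y, EYE_AOI))
    total_time = sum(d for *_, d in fixations) or 1
    first = next((on for x, y, on, d in fixations if in_aoi(x, y, EYE_AOI)), None)
    return {"prop_eye_time": eye_time / total_time,
            "time_to_first_fixation_ms": first}

trial = [(400, 400, 120, 200), (350, 200, 340, 260), (300, 220, 620, 180)]
print(eye_area_metrics(trial))
# {'prop_eye_time': 0.6875, 'time_to_first_fixation_ms': 340}
```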


2008
Vol 100 (4), pp. 1848-1867
Author(s): Sigrid M. C. I. van Wetter, A. John van Opstal

Visual stimuli flashed briefly around the time of a saccade are systematically mislocalized. Such perisaccadic mislocalization is maximal in the direction of the saccade and varies systematically with the target-saccade onset delay. We have recently shown that under head-fixed conditions perisaccadic errors do not follow the quantitative predictions of current visuomotor models that explain these mislocalizations in terms of spatial updating. These models all assume sluggish eye-movement feedback and therefore predict that errors should vary systematically with the amplitude and kinematics of the intervening saccade. Instead, we reported that errors depend only weakly on the saccade amplitude. An alternative explanation for the data is that around the saccade the perceived target location undergoes a uniform transient shift in the saccade direction, but that the oculomotor feedback is, on average, accurate. This “visual shift” hypothesis predicts that errors will also remain insensitive to kinematic variability within much larger head-free gaze shifts. Here we test this prediction by presenting a brief visual probe near the onset of gaze saccades between 40 and 70° in amplitude. According to models with inaccurate gaze-motor feedback, the expected perisaccadic errors for such gaze shifts should be as large as 30° and depend heavily on the kinematics of the gaze shift. In contrast, we found that the actual peak errors were similar to those reported for much smaller saccadic eye movements, i.e., on average about 10°, and that neither gaze-shift amplitude nor kinematics played a systematic role. Our data further corroborate the visual origin of perisaccadic mislocalization under open-loop conditions and strengthen the idea that efferent feedback signals in the gaze-control system are fast and accurate.
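The contrast between the two hypotheses can be summarized numerically: sluggish-feedback models predict peak errors that scale with gaze-shift amplitude, whereas the visual-shift hypothesis predicts a roughly constant error of about 10°. The toy sketch below encodes just those two predictions; the scaling constant is an illustrative assumption, not a fitted model parameter:

```python
# Two predictions for the peak perisaccadic localization error (deg) as a function of
# gaze-shift amplitude (deg), following the abstract's description. The 0.43 scaling
# factor is chosen only so that a 70 deg gaze shift yields an error of roughly 30 deg.

def peak_error_sluggish_feedback(amplitude_deg, unaccounted_fraction=0.43):
    """Error proportional to the part of the gaze shift not yet reflected in feedback."""
    return unaccounted_fraction * amplitude_deg

def peak_error_visual_shift(amplitude_deg, uniform_shift_deg=10.0):
    """Uniform transient shift in the saccade direction, independent of amplitude."""
    return uniform_shift_deg

for amp in (40, 55, 70):
    print(f"{amp:3d} deg gaze shift -> sluggish feedback: "
          f"{peak_error_sluggish_feedback(amp):4.1f} deg, "
          f"visual shift: {peak_error_visual_shift(amp):4.1f} deg")
```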


2017
Vol 17 (3), pp. 257-266
Author(s): Azam Majooni, Mona Masood, Amir Akhavan

This research investigates the effect of layout on viewers’ comprehension and cognitive load in information graphics. The term ‘layout’ refers to the arrangement and organization of the visual and textual elements in a graphical design. The experiment conducted in this study is based on two stories, each presented with two different layouts. During the experiment, eye-tracking devices were used to collect gaze data, including eye-movement data and pupil-diameter fluctuations. When the layouts were modified, the contents of each story were narrated using identical visual and textual elements. The analysis of the eye-tracking data provides quantitative evidence of how the change of layout in each story affected participants’ comprehension and the variation in their cognitive load. In conclusion, comprehension was higher, and the imposed cognitive load lower, for the zigzag form of the layout.
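Pupil diameter is usually baseline-corrected per trial before being compared across conditions as a cognitive-load index. A minimal sketch of that step with simulated traces follows; the condition names, sampling rate, and preprocessing choices are assumptions, not the study's pipeline:

```python
import numpy as np

def baseline_corrected_pupil(trace, sfreq, baseline_s=0.5):
    """Subtract the mean of the pre-stimulus baseline from a pupil-diameter trace (mm)."""
    n_base = int(baseline_s * sfreq)
    return trace - trace[:n_base].mean()

rng = np.random.default_rng(2)
sfreq = 60.0                                   # 60 Hz eye tracker
t = np.arange(0, 10, 1 / sfreq)

# Simulated trials: the hypothetical 'column' layout evokes a larger dilation than 'zigzag'.
zigzag = 3.0 + 0.10 * (t > 0.5) + rng.normal(0, 0.02, t.size)
column = 3.0 + 0.25 * (t > 0.5) + rng.normal(0, 0.02, t.size)

for name, trace in [("zigzag", zigzag), ("column", column)]:
    corrected = baseline_corrected_pupil(trace, sfreq)
    print(f"{name}: mean task-evoked dilation = {corrected[int(0.5 * sfreq):].mean():.3f} mm")
```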


1995
Vol 73 (4), pp. 1632-1652
Author(s): J. O. Phillips, L. Ling, A. F. Fuchs, C. Siebold, J. J. Plorde

1. We studied horizontal eye and head movements in three monkeys that were trained to direct their gaze (eye position in space) toward jumping targets while their heads were both fixed and free to rotate about a vertical axis. We considered all gaze movements that traveled ≥ 80% of the distance to the new visual target. 2. The relative contributions and metrics of eye and head movements to the gaze shift varied considerably from animal to animal and even within animals. Head movements could be initiated early or late and could be large or small. The eye movements of some monkeys showed a consistent decrease in velocity as the head accelerated, whereas others did not. Although all gaze shifts were hypometric, they were more hypometric in some monkeys than in others. Nevertheless, certain features of the gaze shift were identifiable in all monkeys. To identify those we analyzed gaze, eye in head position, and head position, and their velocities at three points in time during the gaze shift: 1) when the eye had completed its initial rotation toward the target, 2) when the initial gaze shift had landed, and 3) when the head movement was finished. 3. For small gaze shifts (<20°) the initial gaze movement consisted entirely of an eye movement because the head did not move. As gaze shifts became larger, the eye movement contribution saturated at approximately 30° and the head movement contributed increasingly to the initial gaze movement. For the largest gaze shifts, the eye usually began counterrolling or remained stable in the orbit before gaze landed. During the interval between eye and gaze end, the head alone carried gaze to completion. Finally, when the head movement landed, it was almost aimed at the target and the eye had returned to within 10 ± 7° (mean ± SD) of straight ahead. Between the end of the gaze shift and the end of the head movement, gaze remained stable in space or a small correction saccade occurred. 4. Gaze movements <20° landed accurately on target whether the head was fixed or free. For larger target movements, both head-free and head-fixed gaze shifts became increasingly hypometric. Head-free gaze shifts were more accurate, on average, but also more variable. This suggests that gaze is controlled in a different way with the head free. For target amplitudes <60°, head position was hypometric but the error was rather constant at approximately 10°. (ABSTRACT TRUNCATED AT 400 WORDS)
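The decomposition used throughout this analysis is that gaze (eye-in-space) equals eye-in-head plus head-in-space, so the eye and head contributions to a gaze shift can be read off the respective displacements between gaze onset and gaze end. A small sketch of that bookkeeping with made-up samples:

```python
import numpy as np

def gaze_shift_contributions(eye_in_head, head_in_space, onset_idx, end_idx):
    """Eye and head contributions (deg) to the gaze shift between two sample indices.
    gaze (eye-in-space) = eye-in-head + head-in-space."""
    gaze = np.asarray(eye_in_head) + np.asarray(head_in_space)
    return {"gaze_amplitude": float(gaze[end_idx] - gaze[onset_idx]),
            "eye_contribution": float(eye_in_head[end_idx] - eye_in_head[onset_idx]),
            "head_contribution": float(head_in_space[end_idx] - head_in_space[onset_idx])}

# Made-up 60 deg gaze shift: the eye contribution saturates near 30 deg and the head
# carries the rest (cf. the saturation described in the abstract).
eye_in_head   = np.array([0., 15., 28., 30., 25., 12.])
head_in_space = np.array([0.,  2., 10., 22., 33., 48.])
print(gaze_shift_contributions(eye_in_head, head_in_space, onset_idx=0, end_idx=3))
# {'gaze_amplitude': 52.0, 'eye_contribution': 30.0, 'head_contribution': 22.0}
```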


2018
Vol 71 (9), pp. 1860-1872
Author(s): Stephen RH Langton, Alex H McIntyre, Peter JB Hancock, Helmut Leder

Research has established that a perceived eye gaze produces a concomitant shift in a viewer’s spatial attention in the direction of that gaze. The two experiments reported here investigate the extent to which the nature of the eye movement made by the gazer contributes to this orienting effect. On each trial in these experiments, participants were asked to make a speeded response to a target that could appear in a location toward which a centrally presented face had just gazed (a cued target) or in a location that was not the recipient of a gaze (an uncued target). The gaze cues consisted of either fast saccadic eye movements or slower smooth pursuit movements. Cued targets were responded to faster than uncued targets, and this gaze-cued orienting effect was found to be equivalent for each type of gaze shift both when the gazes were unpredictive of target location (Experiment 1) and when they were counterpredictive of target location (Experiment 2). The results offer no support for the hypothesis that motion speed modulates gaze-cued orienting. However, they do suggest that motion of the eyes per se, regardless of the type of movement, may be sufficient to trigger an orienting effect.
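The gaze-cued orienting effect reported here is simply the mean response-time advantage for cued over uncued targets, computed separately for saccadic and smooth-pursuit cues. A compact sketch with invented trial data:

```python
from statistics import mean

# Each trial: (cue_type, target_cued, response_time_ms) -- invented example data.
trials = [
    ("saccade", True, 312), ("saccade", False, 341), ("saccade", True, 305),
    ("saccade", False, 338), ("pursuit", True, 318), ("pursuit", False, 349),
    ("pursuit", True, 322), ("pursuit", False, 344),
]

def cueing_effect(trials, cue_type):
    """Mean RT(uncued) - mean RT(cued): positive values indicate orienting to the gaze."""
    cued   = [rt for c, cued_flag, rt in trials if c == cue_type and cued_flag]
    uncued = [rt for c, cued_flag, rt in trials if c == cue_type and not cued_flag]
    return mean(uncued) - mean(cued)

for cue in ("saccade", "pursuit"):
    print(f"{cue}: cueing effect = {cueing_effect(trials, cue):.1f} ms")
```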

