Error, rather than its probability, elicits specific electrocortical signatures: a combined EEG-immersive virtual reality study of action observation

2018 ◽  
Vol 120 (3) ◽  
pp. 1107-1118 ◽  
Author(s):  
Rachele Pezzetta ◽  
Valentina Nicolardi ◽  
Emmanuele Tidoni ◽  
Salvatore Maria Aglioti

Detecting errors in one’s own actions, and in the actions of others, is a crucial ability for adaptable and flexible behavior. Studies show that specific EEG signatures underpin the monitoring of observed erroneous actions (error-related negativity, error positivity, mid-frontal theta oscillations). However, the majority of studies on action observation used sequences of trials in which erroneous actions were less frequent than correct actions. It was therefore not possible to disentangle whether the activation of the performance-monitoring system was due to an error, as a violation of the intended goal, or to a surprise/novelty effect associated with a rare and unexpected event. Combining EEG and immersive virtual reality (an IVR-CAVE system), we recorded the neural signal of 25 young adults who observed, from a first-person perspective, simple reach-to-grasp actions performed by an avatar aiming for a glass. Importantly, the proportion of erroneous actions was higher than that of correct actions. Results showed that the observation of erroneous actions elicited the typical electrocortical signatures of error monitoring, indicating that the violation of the action goal is still perceived as a salient event. The observation of correct actions elicited stronger alpha suppression, confirming the role of the alpha-frequency band in the general orienting response to novel and infrequent stimuli. Our data provide novel evidence that an observed goal error (the action slip) triggers the activity of the performance-monitoring system even when erroneous actions, which are typically relevant events, occur more often than correct actions and thus are not salient because of their rarity. NEW & NOTEWORTHY Activation of the performance-monitoring system (PMS) is typically investigated when errors in a sequence are comparatively rare. However, whether the PMS is activated by errors per se or by their infrequency is not known. Combining EEG and virtual reality techniques, we found that observing frequent (70%) action errors performed by avatars elicits electrocortical error signatures, suggesting that deviation from the prediction of how learned actions should correctly unfold, rather than its frequency, is coded in the PMS.

Perception ◽  
2018 ◽  
Vol 47 (5) ◽  
pp. 477-491 ◽  
Author(s):  
Barbara Caola ◽  
Martina Montalti ◽  
Alessandro Zanini ◽  
Antony Leadbetter ◽  
Matteo Martini

Classically, body ownership illusions are triggered by cross-modal synchronous stimulations and hampered by multisensory inconsistencies. Nonetheless, the boundaries of such illusions have been proven to be highly plastic. In this immersive virtual reality study, we explored whether it is possible to induce a sense of body ownership over a virtual body part during visuomotor inconsistencies, with or without the aid of concomitant visuo-tactile stimulations. From a first-person perspective, participants watched a virtual tube moving or an avatar’s arm moving, with or without concomitant synchronous visuo-tactile stimulations on their hand. Three different virtual arm/tube speeds were also investigated, while all participants kept their real arms still. The subjective reports show that synchronous visuo-tactile stimulations effectively counteract the effect of visuomotor inconsistencies but that, at slow arm movement speeds, a feeling of body ownership might be successfully induced even without concomitant multisensory correspondences. Possible therapeutic implications of these findings are discussed.


2013 ◽  
Vol 04 (supp01) ◽  
pp. 1340004 ◽  
Author(s):  
AKIRA KAGEYAMA ◽  
NOBUAKI OHNO ◽  
SHINTARO KAWAHARA ◽  
KAZUO KASHIYAMA ◽  
HIROAKI OHTANI

VFIVE is a scientific visualization application for CAVE-type immersive virtual reality (VR) systems. Its source code is freely available, and VFIVE is used as a research tool in various VR systems. It also lays the groundwork for the development of new visualization software for CAVEs. In this paper, we examine five CAVE systems at four different institutions in Japan and summarize the applications of VFIVE in each system. Special emphasis is placed on scientific and technical achievements made possible by VFIVE.


2016 ◽  
Vol 116 (6) ◽  
pp. 2656-2662 ◽  
Author(s):  
M. Fusaro ◽  
G. Tieri ◽  
S. M. Aglioti

Studies have explored behavioral and neural responses to the observation of pain in others. However, much less is known about how taking a physical perspective influences reactivity to the observation of others' pain and pleasure. To explore this issue we devised a novel paradigm in which 24 healthy participants immersed in a virtual reality scenario observed a virtual needle penetrating (pain), a virtual caress (pleasure), or a virtual ball touching (neutral) the hand of an avatar seen from a first-person (1PP) or a third-person (3PP) perspective. Subjective ratings and physiological responses [skin conductance response (SCR) and heart rate (HR)] were collected in each trial. All participants reported strong feelings of ownership of the virtual hand only in 1PP. Subjective measures also showed that pain and pleasure were experienced as more salient than the neutral condition. SCR analysis demonstrated higher reactivity in 1PP than in 3PP. Importantly, vicarious pain induced stronger responses than the other conditions in both perspectives. HR analysis revealed equally lower activity during pain and pleasure relative to neutral. SCR may reflect the egocentric perspective, and HR may merely index general arousal. The results suggest that behavioral and physiological indexes of reactivity to seeing others' pain and pleasure were qualitatively similar in 1PP and 3PP. Our paradigm indicates that virtual reality can be used to study vicarious sensations of pain and pleasure without delivering any stimulus to participants' real bodies, and to explore behavioral and physiological reactivity when participants observe pain and pleasure from ego- and allocentric perspectives.


Author(s):  
Amanda J. Haskins ◽  
Jeff Mentch ◽  
Thomas L. Botch ◽  
Caroline E. Robertson

Abstract Vision is an active process. Humans actively sample their sensory environment via saccades, head turns, and body movements. Yet, little is known about active visual processing in real-world environments. Here, we exploited recent advances in immersive virtual reality (VR) and in-headset eye-tracking to show that active viewing conditions impact how humans process complex, real-world scenes. Specifically, we used quantitative, model-based analyses to compare which visual features participants prioritize over others while encoding a novel environment in two experimental conditions: active and passive. In the active condition, participants used head-mounted VR displays to explore 360° scenes from a first-person perspective via self-directed motion (saccades and head turns). In the passive condition, 360° scenes were passively displayed to participants within the VR headset while they were head-restricted. Our results show that signatures of top-down attentional guidance increase in active viewing conditions: active viewers disproportionately allocate their attention to semantically relevant scene features, as compared with passive viewers. We also observed increased signatures of exploratory behavior in eye movements, such as quicker, more entropic fixations during active as compared with passive viewing conditions. These results have broad implications for studies of visual cognition, suggesting that active viewing influences every aspect of gaze behavior – from the way we move our eyes to what we choose to attend to – as we construct a sense of place in a real-world environment. Significance Statement Eye-tracking in immersive virtual reality offers an unprecedented opportunity to study human gaze behavior under naturalistic viewing conditions without sacrificing experimental control. Here, we advanced this new technique to show how humans deploy attention as they encode a diverse set of 360°, real-world scenes, actively explored from a first-person perspective using head turns and saccades. Our results build on classic studies in psychology, showing that active, as compared with passive, viewing conditions fundamentally alter perceptual processing. Specifically, active viewing conditions increase information-seeking behavior in humans, producing faster, more entropic fixations, which are disproportionately deployed to scene areas that are rich in semantic meaning. In addition, our results offer key benchmark measurements of gaze behavior in 360°, naturalistic environments.


2021 ◽  
Vol 2 ◽  
Author(s):  
Yoshihiro Itaguchi

While studies have increasingly used virtual hands and objects in virtual environments to investigate various processes of psychological phenomena, conflicting findings have been reported even at the most basic level of perception and action. To reconcile this situation, the present study aimed 1) to assess biases in size perception of a virtual hand using a strict psychophysical method and 2) to provide firm and conclusive evidence of the kinematic characteristics of reach-to-grasp movements with various virtual effectors (whole hand or fingertips only, with or without tactile feedback of a target object). Experiments were conducted using a consumer immersive virtual reality device. In a size judgment task, participants judged whether a presented virtual hand or an everyday object was larger than the remembered size. The results showed the same amplitude of underestimation (approximately 5%) for the virtual hand and the object, and no influence of object location, visuo-proprioceptive congruency, or short-term experience of controlling the virtual hand. Furthermore, there was a moderate positive correlation between actual hand size and perception bias. Analyses of reach-to-grasp movements revealed longer movement times and larger maximum grip aperture (MGA) for a virtual, as opposed to a physical, environment, but the MGA did not change when grasping was performed without tactile feedback. The MGA appeared earlier in the time course of grasping movements in all virtual reality conditions, regardless of the type of virtual effector. These findings confirm and corroborate previous evidence and may contribute to the field of virtual hand interfaces for interactions with virtual worlds.


2020 ◽  
Author(s):  
Heather Iriye ◽  
Peggy L. St. Jacques

We typically experience the world from a first-person perspective (1PP) but can sometimes experience events from a third-person perspective (3PP) much as an observer might see us. Little is known about how visual perspective influences the formation of memories for events. We developed an immersive virtual reality paradigm to examine how visual perspective during encoding influences memories. Across two studies, participants explored immersive virtual environments from first-person and third-person avatar perspectives while wearing an Oculus Rift headset. Memory was tested immediately (Study One and Study Two) and following a one-week delay (Study Two). We assessed the accuracy of visual memory using cued recall questions and spatial memory by asking participants to draw maps of the layout of each environment (Study One and Study Two). Additional phenomenological ratings were included to assess visual perspective during remembering (Study Two). There were no differences in the accuracy of visual information across the two studies, but 3PP experiences increased spatial memory accuracy compared to 1PP experiences. Our results also demonstrate that 3PP experiences create 3PP memories, as reflected by an increase in subjective ratings of observer-like perspectives during remembering. In sum, visual perspective during memory formation influences the accuracy of spatial but not visual information, and the vantage point of memories during remembering.


2020 ◽  
Author(s):  
Manabu Yoshimura ◽  
Hiroshi Kurumadani ◽  
Junya Hirata ◽  
Hiroshi Osaka ◽  
Katsutoshi Senoo ◽  
...  

Abstract Background Regular body-powered prosthesis (bp-prosthesis) training often facilitates skill acquisition through repeated practice but requires adequate time and motivation. Auxiliary tools, such as indirect training, may therefore ease skill acquisition. In this study, we examined the effects of action observation (AO) using virtual reality (VR) as an auxiliary tool. We compared two modalities during AO, VR and a tablet device (Tab), and two perspectives, first- and third-person. This study aimed to examine whether AO training using VR is effective for acquiring bp-prosthetic control skills in the short term. Methods Forty healthy right-handed participants operated a simulated bp-prosthesis with the non-dominant hand. They were divided into five groups with different interventions and displays for AO: first-person perspective on VR (VR1st), third-person perspective on VR (VR3rd), first-person perspective on Tab (Tab1st), third-person perspective on Tab (Tab3rd), and a control group (Con) without AO. Participants in the VR1st, VR3rd, Tab1st, and Tab3rd groups watched a video of experts operating the prosthesis twice, for 10 min each time. We evaluated immersion during video observation using the Visual Analog Scale. Prosthetic control skills were evaluated using the box and block test (BBT) and the bowknot task (BKT). Results In the BBT, no significant between-group enhancements of prosthetic control skills were found. In contrast, the BKT change rates of prosthetic control skills in VR1st and VR3rd were significantly higher than those in Con (p < 0.001). Additionally, immersion scores of VR1st and VR3rd were higher than those of Tab3rd (p < 0.05), and there was a significant negative correlation between immersion and BKT change rate (Spearman’s rs = -0.47, p < 0.01). Conclusions In the BKT (bilateral manual dexterity), VR video viewing led to significantly better short-term prosthetic control acquisition than Con. Moreover, the results suggested that the higher the immersion, the shorter the BKT task execution time. Our findings suggest that VR-based AO training is effective for acquiring bp-prosthetic control in the short term, particularly for the bilateral prosthetic control needed in the daily life of upper limb amputees.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Heather Iriye ◽  
Peggy L. St. Jacques

Abstract We typically experience the world from a first-person perspective (1PP) but can sometimes experience events from a third-person perspective (3PP) much as an observer might see us. Little is known about how visual perspective influences the formation of memories for events. We developed an immersive virtual reality paradigm to examine how visual perspective during encoding influences memories. Across two studies, participants explored immersive virtual environments from first-person and third-person avatar perspectives while wearing an Oculus Rift headset. Memory was tested immediately (Study One and Study Two) and following a one-week delay (Study Two). We assessed the accuracy of visual memory using cued recall questions and spatial memory by asking participants to draw maps of the layout of each environment (Study One and Study Two). Additional phenomenological ratings were included to assess visual perspective during remembering (Study Two). There were no differences in the accuracy of visual information across the two studies, but 3PP experiences were found to increase spatial memory accuracy due to their wider camera field of view when compared to 1PP experiences. Our results also demonstrate that 3PP experiences create 3PP memories, as reflected by an increase in subjective ratings of observer-like perspectives during remembering. In sum, visual perspective during memory formation influences the accuracy of spatial but not visual information, and the vantage point of memories during remembering.

