Cyberpsychological Approach to the Analysis of Multisensory Integration

2019 ◽  
Vol 27 (3) ◽  
pp. 9-21
Author(s):  
A.E. Voiskounsky

The paper relates to the branch of cyberpsychology concerned with risk factors during immersion in a virtual environment. Specialists in the development and operation of virtual reality systems know that immersion in this environment may be accompanied by symptoms similar to the “motion sickness” experienced by passengers of transport vehicles (ships, aircraft, cars). In the paper, these conditions are referred to as cybersickness (or cyberdisease). Three leading theories proposed to explain the causes of cybersickness are discussed: the theory of sensory conflict, the theory of postural instability (the inability to maintain equilibrium), and the evolutionary (aka toxin) theory. A frequent cause of cybersickness symptoms is a conflict between visual signals and signals from the vestibular system. It is shown that such conflicts can be induced within a specially organized experiment (e.g., the illusion of an out-of-body experience) using virtual reality systems. When competing signals (visual, auditory, kinesthetic, tactile, etc.) reach the brain, data gathered with virtual reality systems make it possible to hypothesize the localization of the specific brain area that ensures the integration of multisensory stimuli.


2019 ◽  
Vol 121 (4) ◽  
pp. 1398-1409 ◽  
Author(s):  
Vonne van Polanen ◽  
Robert Tibold ◽  
Atsuo Nuruki ◽  
Marco Davare

Lifting an object requires precise scaling of fingertip forces based on a prediction of object weight. At object contact, a series of tactile and visual events arise that need to be rapidly processed online to fine-tune the planned motor commands for lifting the object. The brain mechanisms underlying multisensory integration serially at transient sensorimotor events, a general feature of actions requiring hand-object interactions, are not yet understood. In this study we tested the relative weighting between haptic and visual signals when they are integrated online into the motor command. We used a new virtual reality setup to desynchronize visual feedback from haptics, which allowed us to probe the relative contribution of haptics and vision in driving participants’ movements when they grasped virtual objects simulated by two force-feedback robots. We found that visual delay changed the profile of fingertip force generation and led participants to perceive objects as heavier than when lifts were performed without visual delay. We further modeled the effect of vision on motor output by manipulating the extent to which delayed visual events could bias the force profile, which allowed us to determine the specific weighting the brain assigns to haptics and vision. Our results show for the first time how visuo-haptic integration is processed at discrete sensorimotor events for controlling object-lifting dynamics and further highlight the organization of multisensory signals online for controlling action and perception. NEW & NOTEWORTHY Dexterous hand movements require rapid integration of information from different senses, in particular touch and vision, at different key time points as movement unfolds. The relative weighting between vision and haptics for object manipulation is unknown. We used object lifting in virtual reality to desynchronize visual and haptic feedback and find out their relative weightings. Our findings shed light on how rapid multisensory integration is processed over a series of discrete sensorimotor control points.
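The abstract does not specify the authors' model in detail; as a rough illustration only, the sketch below (Python, with hypothetical names and parameter values) shows one conventional way to express a haptic-visual weighting: the sensed contact-event time is taken as a weighted average of the haptic and the (delayed) visual event times, and the planned force-rate profile is centred on that estimate.

```python
# Illustrative sketch only (not the authors' model): the sensed contact event
# is a weighted average of haptic and (delayed) visual event times, and the
# planned force-rate profile is centred on that estimate. All names and
# parameter values are hypothetical.
import numpy as np

def combined_event_time(t_haptic, t_visual, w_vision):
    """Weighted estimate of the contact event; w_vision lies in [0, 1]."""
    return (1.0 - w_vision) * t_haptic + w_vision * t_visual

def force_rate_profile(t, t_event, peak_rate=40.0, width=0.05):
    """Toy Gaussian force-rate pulse (N/s) centred on the estimated event."""
    return peak_rate * np.exp(-0.5 * ((t - t_event) / width) ** 2)

t = np.linspace(0.0, 0.6, 601)      # time axis (s)
t_haptic = 0.20                     # haptic contact at 200 ms
t_visual = t_haptic + 0.10          # visual feedback delayed by 100 ms

for w_vision in (0.0, 0.3, 0.6):    # candidate visual weightings
    estimate = combined_event_time(t_haptic, t_visual, w_vision)
    profile = force_rate_profile(t, estimate)
    print(f"w_vision={w_vision:.1f}: force-rate peak at {t[np.argmax(profile)]:.3f} s")
```

In such a scheme, a larger visual weighting shifts the force-rate peak toward the delayed visual event, qualitatively matching the reported change in fingertip force generation under visual delay.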



2006 ◽  
Vol 29 (5) ◽  
pp. 478-479 ◽  
Author(s):  
David Kemmerer ◽  
Rupa Gupta

During an out-of-body experience (OBE), one sees the world and one's own body from an extracorporeal visuospatial perspective. OBEs reflect disturbances in brain systems dedicated to multisensory integration and self-processing. However, they have traditionally been interpreted as providing evidence for a soul that can depart the body after death. This mystical view is consistent with Bering's proposal that psychological immortality is the cognitive default.



2018 ◽  
Vol 5 (4) ◽  
pp. 346-357 ◽  
Author(s):  
Dalena van Heugten-van der Kloet ◽  
Jan Cosgrave ◽  
Joram van Rheede ◽  
Stephen Hicks




2007 ◽  
Vol 357 (18) ◽  
pp. 1829-1833 ◽  
Author(s):  
Dirk De Ridder ◽  
Koen Van Laere ◽  
Patrick Dupont ◽  
Tomas Menovsky ◽  
Paul Van de Heyning


PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e8565
Author(s):  
Sylvie Droit-Volet ◽  
Sophie Monceau ◽  
Michaël Dambrun ◽  
Natalia Martinelli

Using an out-of-body paradigm, the present study provided further empirical evidence for the theory of embodied time by suggesting that the body-self plays a key role in time judgments. Looking through virtual reality glasses, the participants saw the arm of a mannequin instead of their own arm. After watching strokes being applied to the mannequin’s body while tactile strokes were administered to their own bodies, either synchronously or asynchronously, participants had to judge the duration of the interval between two (perceived) touches applied to the mannequin’s body. During the interval, a pleasant stimulation (touch with a soft paintbrush) or an unpleasant one (touch with a pointed knife) was applied to the mannequin. The results showed that the participants felt the perceived tactile stimulations in their own bodies more strongly after the synchronous than the asynchronous stroking condition, a finding consistent with the out-of-body illusion. In addition, the interval duration was judged longer in the synchronous than in the asynchronous condition, and this time distortion increased with the strength of the individual out-of-body experience. Our results therefore highlight the importance of the awareness of the body-self in the processing of time, i.e., the significance of embodied time.



2018 ◽  
Author(s):  
Vonne van Polanen ◽  
Robert Tibold ◽  
Atsuo Nuruki ◽  
Marco Davare




2018 ◽  
Vol 31 (7) ◽  
pp. 689-713 ◽  
Author(s):  
Hudson Diggs Bailey ◽  
Aidan B. Mullaney ◽  
Kyla D. Gibney ◽  
Leslie Dowell Kwakye

We are continually bombarded by information arriving at each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration as measured by the redundant signals effect (RSE) is observable in naturalistic environments using virtual reality and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets which varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. The results of this study have important implications in the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
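The abstract does not detail the statistical analysis; as a hedged illustration, the redundant signals effect is conventionally assessed by comparing audiovisual reaction times against each unisensory condition, and integration is often tested with Miller's race-model inequality, P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t). The Python sketch below uses simulated reaction times and hypothetical function names.

```python
# Illustrative sketch only (not the study's analysis code): test the redundant
# signals effect by comparing the empirical CDF of audiovisual reaction times
# against the race-model bound formed from the two unisensory CDFs.
import numpy as np

def empirical_cdf(rts, t_grid):
    """Proportion of reaction times at or below each time point."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t_grid, side="right") / len(rts)

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Difference between the audiovisual CDF and the race-model bound."""
    bound = np.clip(empirical_cdf(rt_a, t_grid) + empirical_cdf(rt_v, t_grid), 0.0, 1.0)
    return empirical_cdf(rt_av, t_grid) - bound

# Simulated reaction times (ms) standing in for real data.
rng = np.random.default_rng(0)
rt_a  = rng.normal(420, 60, 200)   # auditory-only targets
rt_v  = rng.normal(400, 60, 200)   # visual-only targets
rt_av = rng.normal(350, 50, 200)   # audiovisual targets (faster, as in an RSE)

t_grid = np.linspace(200, 600, 81)
violation = race_model_violation(rt_a, rt_v, rt_av, t_grid)
print(f"max race-model violation: {violation.max():.3f}")
```

Positive values of the violation indicate audiovisual responses faster than any race model could produce, the standard behavioral evidence for multisensory integration.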


