Multisensory Feedback Can Enhance Embodiment Within an Enriched Virtual Walking Scenario

2014 ◽  
Vol 23 (3) ◽  
pp. 253-266 ◽  
Author(s):  
Daniele Leonardis ◽  
Antonio Frisoli ◽  
Michele Barsotti ◽  
Marcello Carrozzino ◽  
Massimo Bergamasco

This study investigates how the sense of embodiment in virtual environments can be enhanced by multisensory feedback related to body movements. In particular, we analyze the effect of combined vestibular and proprioceptive afferent signals on the perceived embodiment within an immersive walking scenario. These feedback signals were applied by means of a motion platform and by tendon vibration of lower limbs, evoking illusory leg movements. Vestibular and proprioceptive feedback were provided congruently with a rich virtual scenario reconstructing a real city, rendered on a head-mounted display (HMD). The sense of embodiment was evaluated through both self-reported questionnaires and physiological measurements in two experimental conditions: with all active sensory feedback (highly embodied condition), and with visual feedback only. Participants' self-reports show that the addition of both vestibular and proprioceptive feedback increases the sense of embodiment and the individual's feeling of presence associated with the walking experience. Furthermore, the embodiment condition significantly increased the measured galvanic skin response and respiration rate. The obtained results suggest that vestibular and proprioceptive feedback can improve the participant's sense of embodiment in the virtual experience.

2018 ◽  
Vol 31 (5) ◽  
pp. 455-480 ◽  
Author(s):  
Rachel Goodman ◽  
Valentin A. Crainic ◽  
Stephen R. Bested ◽  
Darrin O. Wijeyaratnam ◽  
John de Grosbois ◽  
...  

In order to maximize the precise completion of voluntary actions, humans can theoretically utilize both visual and proprioceptive information to plan and amend ongoing limb trajectories. Although vision has been thought to be the more dominant sensory modality, research has shown that sensory feedback may be processed as a function of its relevance and reliability. Likewise, theoretical models of voluntary action have suggested that both vision and proprioception can be used to prepare online trajectory amendments. However, empirical evidence regarding the use of proprioception for online control has come from indirect manipulations of sensory feedback (i.e., without directly perturbing the afferent information; e.g., a visual–proprioceptive mismatch). In order to directly assess the relative contributions of visual and proprioceptive feedback to the online control of voluntary actions, direct perturbations to both vision (i.e., liquid crystal goggles) and proprioception (i.e., tendon vibration) were implemented in two experiments. The first experiment employed the manipulations while participants simply performed a rapid goal-directed movement (30 cm amplitude). Results from this first experiment yielded no significant evidence that proprioceptive feedback contributed to online control processes. The second experiment employed an imperceptible target jump to elicit online trajectory amendments. With or without tendon vibration, participants still corrected for the target jumps. The current study provided further evidence of the importance of vision for online control but little support for the importance of proprioception for online limb–target regulation mechanisms.


2020 ◽  
Vol 4 (s1) ◽  
pp. 97-97
Author(s):  
Robin L Shafer ◽  
Zheng Wang ◽  
Matthew W. Mosconi

OBJECTIVES/GOALS: Sensorimotor integration deficits are common in Autism Spectrum Disorders (ASD). There is evidence for both an over-reliance on visual and proprioceptive feedback during motor control in ASD, suggesting deficits in the ability to modulate sensory feedback processing in order to use the most reliable input. This study aims to test this hypothesis. METHODS/STUDY POPULATION: 40 persons with ASD (ages 10-33 yrs) and 25 age-, sex- and nonverbal IQ-matched controls completed precision gripping tasks under multiple proprioceptive and visual feedback conditions. Participants squeezed a force sensor with their index finger and thumb and tried to match their force output to a target force. Visual feedback of the target force (stationary bar) and their force output (bar that moved up/down with increased/decreased force) were displayed on a computer screen. Visual feedback was presented across low, medium, and high gain levels; the force bar moved a greater distance per change in force at higher gains. Proprioceptive feedback was manipulated using 80Hz tendon vibration at the wrist to create an illusion that the muscle is contracted. Force regularity (approximate entropy; ApEn) was examined. RESULTS/ANTICIPATED RESULTS: We have scored data from 18 participants with ASD and 13 control participants to date. Preliminary results from these participants indicate a Group x Tendon Vibration x Visual Gain interaction for ApEn (F = 1.559, p = 0.023). Individuals with ASD show slight increases in ApEn with 80Hz tendon vibration relative to no tendon vibration in all visual conditions. Controls showed increased ApEn during 80Hz compared to no tendon vibration at low visual gain but decreased ApEn with tendon vibration at high visual gain. These preliminary results indicate that controls shift to using a secondary source of sensory feedback (e.g., proprioception) when the primary source (e.g., vision) is degraded. 
However, persons with ASD do not reweight different sensory feedback processes as feedback inputs are degraded or magnified. DISCUSSION/SIGNIFICANCE OF IMPACT: Our preliminary results reveal that sensorimotor issues in ASD result from deficits in the reweighting of sensory feedback. Namely, persons with ASD fail to dynamically recalibrate feedback processes across visual and proprioceptive systems when feedback conditions change. Our results may aid treatment development for sensorimotor issues in ASD.
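The force-regularity measure used in this study, approximate entropy (ApEn), can be sketched in a few lines. The implementation below follows Pincus's standard definition (template length m, Chebyshev tolerance r); the parameter defaults and the demonstration signals are illustrative assumptions, not values taken from the study.

```python
import math
import random

def approximate_entropy(u, m=2, r=None):
    """Approximate entropy (Pincus) of a 1-D time series.

    Lower values indicate a more regular, predictable signal;
    higher values indicate a more complex, irregular one.
    """
    n = len(u)
    if r is None:
        # A common tolerance choice: 20% of the series' standard deviation.
        mean = sum(u) / n
        sd = (sum((x - mean) ** 2 for x in u) / n) ** 0.5
        r = 0.2 * sd

    def phi(m):
        # All overlapping templates of length m.
        templates = [u[i:i + m] for i in range(n - m + 1)]
        total = 0.0
        for xi in templates:
            # Fraction of templates within tolerance r (self-match included).
            matches = sum(
                1 for xj in templates
                if max(abs(a - b) for a, b in zip(xi, xj)) <= r
            )
            total += math.log(matches / len(templates))
        return total / len(templates)

    return phi(m) - phi(m + 1)

# A regular signal (sine) should score lower than an irregular one (noise).
random.seed(0)
t = [i * 0.05 for i in range(300)]
regular = [math.sin(x) for x in t]
irregular = [random.gauss(0, 1) for _ in range(300)]
print(approximate_entropy(regular) < approximate_entropy(irregular))  # True
```

Because a longer template can only lose matches, phi(m + 1) never exceeds phi(m) when self-matches are counted, so ApEn is non-negative; a force trace that becomes more irregular under tendon vibration would show an ApEn increase, as reported for the ASD group above.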


1999 ◽  
Vol 13 (4) ◽  
pp. 234-244
Author(s):  
Uwe Niederberger ◽  
Wolf-Dieter Gerber

Abstract In two experiments with four and two groups of healthy subjects, a novel motor task, the voluntary abduction of the right big toe, was trained. This task cannot usually be performed without training and is therefore ideal for the study of elementary motor learning. A systematic variation of proprioceptive, tactile, visual, and EMG feedback was used. In addition to peripheral measurements such as the voluntary range of motion and EMG output during training, a three-channel EEG was recorded over Cz, C3, and C4. The movement-related brain potential during distinct periods of the training was analyzed as a central nervous parameter of the ongoing learning process. In experiment I, we randomized four groups of 12 subjects each (group P: proprioceptive feedback; group PT: proprioceptive and tactile feedback; group PTV: proprioceptive, tactile, and visual feedback; group PTEMG: proprioceptive, tactile, and EMG feedback). Best training results were reported from the PTEMG and PTV groups. The movement-preceding cortical activity, in the form of the amplitude of the readiness potential at the time of EMG onset, was greatest in these two groups. Results of experiment II revealed a similar effect, with a greater training success and a higher electrocortical activation under additional EMG feedback compared to proprioceptive feedback alone. Sensory EMG feedback as evaluated by peripheral and central nervous measurements appears to be useful in motor training and neuromuscular re-education.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Doerte Kuhrt ◽  
Natalie R. St. John ◽  
Jacob L. S. Bellmund ◽  
Raphael Kaplan ◽  
Christian F. Doeller

Abstract Advances in virtual reality (VR) technology have greatly benefited spatial navigation research. By presenting space in a controlled manner, changing aspects of the environment one at a time or manipulating the gain from different sensory inputs, the mechanisms underlying spatial behaviour can be investigated. In parallel, a growing body of evidence suggests that the processes involved in spatial navigation extend to non-spatial domains. Here, we leverage VR technology advances to test whether participants can navigate abstract knowledge. We designed a two-dimensional quantity space—presented using a head-mounted display—to test if participants can navigate abstract knowledge using a first-person perspective navigation paradigm. To investigate the effect of physical movement, we divided participants into two groups: one walking and rotating on a motion platform, the other group using a gamepad to move through the abstract space. We found that both groups learned to navigate using a first-person perspective and formed accurate representations of the abstract space. Interestingly, navigation in the quantity space resembled behavioural patterns observed in navigation studies using environments with natural visuospatial cues. Notably, both groups demonstrated similar patterns of learning. Taken together, these results imply that both self-movement and remote exploration can be used to learn the relational mapping between abstract stimuli.


2019 ◽  
Vol 01 (01) ◽  
pp. 24-34 ◽  
Author(s):  
Smys S ◽  
Jennifer S. Raj ◽  
Krishna raj N.

Virtual reality (VR) technology has the potential to make a person experience anything, anytime, anywhere. It can influence the human brain into assuming it is present somewhere it really is not. In this paper, we exploit this capability of VR technology to simulate virtual environments that can support PTSD therapy for people affected by trauma due to accidents, war, sexual abuse, and so on. Several sensors gather the user's movements on a motion platform and replicate them in the virtual environment with the help of a Raspberry Pi board and the Unreal Development Kit. The system's flexible interfaces allow the clinician to modify the virtual environment according to each patient's requirements.


Author(s):  
Luma Tabbaa ◽  
Ryan Searle ◽  
Saber Mirzaee Bafti ◽  
Md Moinul Hossain ◽  
Jittrapol Intarasisrisawat ◽  
...  

The paper introduces a multimodal affective dataset named VREED (VR Eyes: Emotions Dataset) in which emotions were triggered using immersive 360° Video-Based Virtual Environments (360-VEs) delivered via a Virtual Reality (VR) headset. Behavioural (eye tracking) and physiological signals (electrocardiogram (ECG) and galvanic skin response (GSR)) were captured, together with self-reported responses, from healthy participants (n = 34) experiencing 360-VEs (n = 12, 1–3 min each) selected through focus groups and a pilot trial. Statistical analysis confirmed the validity of the selected 360-VEs in eliciting the desired emotions. Preliminary machine learning analysis was carried out, demonstrating performance comparable to the state of the art reported in the affective computing literature for non-immersive modalities. VREED is among the first multimodal VR datasets for emotion recognition using behavioural and physiological signals. VREED is made publicly available on Kaggle. We hope that this contribution encourages other researchers to utilise VREED further to understand emotional responses in VR and ultimately enhance the design of VR experiences in applications where emotional elicitation plays a key role, e.g., healthcare, gaming, and education.


Author(s):  
David E. Kancler ◽  
Laurie L. Quill

This study investigates the effects of ocular dominance when maintenance procedures are presented on a monocular, occluding head-mounted display (HMD). While previous research has not revealed significant effects associated with ocular dominance and the use of a monocular, occluding HMD, most of this research has occurred in the cockpit environment. By nature, this setting involves continually changing (or dynamic) environmental information, such as target location or altitude. By contrast, the aircraft maintenance environment is static; the technician is not required to process dynamic environmental information. As the Air Force studies the feasibility of presenting maintenance procedures on HMDs, research efforts must thoroughly address questions pertaining to the use of these devices, such as the potential effects of ocular dominance. The current study addresses the effect of ocular dominance on performance times, subjective workload ratings, self-reports, and preference rankings. Consistent with previous research, ocular dominance did not have a significant effect on any of the dependent measures. However, order of presentation (dominant eye before non-dominant eye vs. dominant eye after non-dominant eye) did produce some differences in performance times and workload scores. Explanations for these differences are discussed.


2015 ◽  
Vol 21 (2) ◽  
pp. 122-126 ◽  
Author(s):  
Ravena Santos Raulino ◽  
Fernanda Meira de Aguiar ◽  
Núbia Carelli Pereira de Avelar ◽  
Isabela Gomes Costa ◽  
Jacqueline da Silva Soares ◽  
...  

INTRODUCTION AND OBJECTIVE: The aim of this study was to investigate whether the addition of vibration during interval training would raise oxygen consumption (VO2) to the extent necessary for weight management, and to evaluate the influence of the intensity of the vibratory stimulus for prescribing the exercise program in question. METHODS: VO2, measured breath by breath, was evaluated at rest and during the four experimental conditions to determine energy expenditure, metabolic equivalent (MET), respiratory exchange ratio (RER), % kcal from fat, and rate of fat oxidation. Eight young sedentary females (age 22±1 years, height 163.88±7.62 cm, body mass 58.35±10.96 kg, VO2 max 32.75±3.55 mL O2·kg-1·min-1) performed interval training (duration = 13.3 min) for the upper and lower limbs, both with vibration (35 Hz, 40 Hz, or 45 Hz, all at 2 mm amplitude) and without vibration. The experimental conditions were randomized and balanced at an interval of 48 hours. RESULTS: The addition of vibration to exercise at 45 Hz and 2 mm resulted in an additional increase of 17.77±12.38% in VO2 compared with exercise without vibration. However, this increase did not change the fat oxidation rate (p = 0.42), because the exercise intensity (29.1±3.3% VO2max, 2.7 MET) was classified as mild for young subjects. CONCLUSION: Despite the influence of vibration on VO2 during exercise, the increase was insufficient to reduce body weight and did not reach the minimum exercise prescription recommendation for weight management for the studied population.
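The MET figures quoted in this abstract follow the standard resting-metabolic convention (1 MET = 3.5 mL O2·kg-1·min-1). A minimal sketch of the conversions, assuming the usual approximation of ~5 kcal per litre of O2 for gross energy expenditure (the study's exact computation is not specified in the abstract):

```python
def met_from_vo2(vo2_ml_kg_min):
    """Convert relative VO2 (mL O2 per kg per min) to METs.

    1 MET is conventionally defined as 3.5 mL O2/kg/min at rest.
    """
    return vo2_ml_kg_min / 3.5

def kcal_per_min(vo2_ml_kg_min, body_mass_kg):
    """Rough gross energy expenditure, using ~5 kcal per litre of O2."""
    vo2_l_min = vo2_ml_kg_min * body_mass_kg / 1000.0
    return vo2_l_min * 5.0

# The abstract reports an exercise intensity of ~2.7 MET,
# i.e. a VO2 of about 2.7 * 3.5 = 9.45 mL/kg/min.
print(round(met_from_vo2(9.45), 1))          # 2.7
# Hypothetical subject at the group's mean body mass of 58.35 kg:
print(round(kcal_per_min(9.45, 58.35), 2))   # 2.76 kcal/min
```

At roughly 2.8 kcal/min over a 13.3-min session, the total expenditure is on the order of 35-40 kcal, which illustrates why the abstract concludes the stimulus falls short of weight-management prescriptions.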


2015 ◽  
Vol 114 (4) ◽  
pp. 2220-2229 ◽  
Author(s):  
Devin C. Roden-Reynolds ◽  
Megan H. Walker ◽  
Camille R. Wasserman ◽  
Jesse C. Dean

Active control of the mediolateral location of the feet is an important component of a stable bipedal walking pattern, although the roles of sensory feedback in this process are unclear. In the present experiments, we tested whether hip abductor proprioception influenced the control of mediolateral gait motion. Participants performed a series of quiet standing and treadmill walking trials. In some trials, 80-Hz vibration was applied intermittently over the right gluteus medius (GM) to evoke artificial proprioceptive feedback. During walking, the GM was vibrated during either right leg stance (to elicit a perception that the pelvis was closer mediolaterally to the stance foot) or swing (to elicit a perception that the swing leg was more adducted). Vibration during quiet standing evoked leftward sway in most participants (13 of 16), as expected from its predicted perceptual effects. Across the 13 participants sensitive to vibration, stance phase vibration caused the contralateral leg to be placed significantly closer to the midline (by ∼2 mm) at the end of the ongoing step. In contrast, swing phase vibration caused the vibrated leg to be placed significantly farther mediolaterally from the midline (by ∼2 mm), whereas the pelvis was held closer to the stance foot (by ∼1 mm). The estimated mediolateral margin of stability was thus decreased by stance phase vibration but increased by swing phase vibration. Although the observed effects of vibration were small, they were consistent with humans monitoring hip proprioceptive feedback while walking to maintain stable mediolateral gait motion.
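The mediolateral margin of stability estimated in this study is conventionally computed from the extrapolated centre of mass in Hof's inverted-pendulum formulation. A minimal sketch of that computation; the numeric inputs below are illustrative assumptions, not values from the study.

```python
import math

def margin_of_stability(bos_edge_m, com_pos_m, com_vel_m_s,
                        leg_length_m, g=9.81):
    """Mediolateral margin of stability (Hof's formulation).

    MoS = lateral base-of-support boundary minus the extrapolated
    centre of mass, XcoM = CoM position + CoM velocity / omega0,
    where omega0 = sqrt(g / leg_length) is the inverted-pendulum
    eigenfrequency. Positive values mean the XcoM lies inside the
    base of support (mediolaterally stable).
    """
    omega0 = math.sqrt(g / leg_length_m)
    xcom = com_pos_m + com_vel_m_s / omega0
    return bos_edge_m - xcom

# Illustrative (hypothetical) values: lateral foot edge 12 cm from the
# CoM, CoM drifting laterally at 0.10 m/s, leg length 0.9 m.
print(round(margin_of_stability(0.12, 0.0, 0.10, 0.9), 3))
```

The formulation makes the directions of the reported effects concrete: placing the contralateral foot closer to the midline shrinks the base-of-support term (smaller margin, as with stance-phase vibration), while placing the swing foot farther laterally enlarges it (larger margin, as with swing-phase vibration).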


2015 ◽  
Vol 113 (6) ◽  
pp. 1772-1783 ◽  
Author(s):  
Julien Bacqué-Cazenave ◽  
Bryce Chung ◽  
David W. Cofer ◽  
Daniel Cattaert ◽  
Donald H. Edwards

Neuromechanical simulation was used to determine whether proposed thoracic circuit mechanisms for the control of leg elevation and depression in crayfish could account for the responses of an experimental hybrid neuromechanical preparation when the proprioceptive feedback loop was open and closed. The hybrid neuromechanical preparation consisted of a computational model of the fifth crayfish leg driven in real time by the experimentally recorded activity of the levator and depressor (Lev/Dep) nerves of an in vitro preparation of the crayfish thoracic nerve cord. Up and down movements of the model leg evoked by motor nerve activity released and stretched the model coxobasal chordotonal organ (CBCO); variations in the CBCO length were used to drive identical variations in the length of the live CBCO in the in vitro preparation. CBCO afferent responses provided proprioceptive feedback to affect the thoracic motor output. Experiments performed with this hybrid neuromechanical preparation were simulated with a neuromechanical model in which a computational circuit model represented the relevant thoracic circuitry. Model simulations were able to reproduce the hybrid neuromechanical experimental results to show that proposed circuit mechanisms with sensory feedback could account for resistance reflexes displayed in the quiescent state and for reflex reversal and spontaneous Lev/Dep bursting seen in the active state.

